MODE FOR INVENTION A variety of modifications may be made to the present invention and there are various embodiments of the present invention, examples of which will now be provided with reference to drawings and described in detail. However, the present invention is not limited thereto, although the exemplary embodiments can be construed as including all modifications, equivalents, or substitutes in a technical concept and a technical scope of the present invention. The similar reference numerals refer to the same or similar functions in various aspects. In the drawings, the shapes and dimensions of elements may be exaggerated for clarity. In the following detailed description of the present invention, references are made to the accompanying drawings that show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to implement the present disclosure. It should be understood that various embodiments of the present disclosure, although different, are not necessarily mutually exclusive. For example, specific features, structures, and characteristics described herein, in connection with one embodiment, may be implemented within other embodiments without departing from the spirit and scope of the present disclosure. In addition, it should be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to what the claims claim. Terms used in the specification, ‘first’, ‘second’, etc. can be used to describe various components, but the components are not to be construed as being limited to the terms. The terms are only used to differentiate one component from other components. For example, the ‘first’ component may be named the ‘second’ component without departing from the scope of the present invention, and the ‘second’ component may also be similarly named the ‘first’ component. The term ‘and/or’ includes a combination of a plurality of items or any one of a plurality of terms. It will be understood that when an element is simply referred to as being ‘connected to’ or ‘coupled to’ another element without being ‘directly connected to’ or ‘directly coupled to’ another element in the present description, it may be ‘directly connected to’ or ‘directly coupled to’ another element or be connected to or coupled to another element, having the other element intervening therebetween. In contrast, it should be understood that when an element is referred to as being “directly coupled” or “directly connected” to another element, there are no intervening elements present. Furthermore, constitutional parts shown in the embodiments of the present invention are independently shown so as to represent characteristic functions different from each other. Thus, it does not mean that each constitutional part is constituted in a constitutional unit of separated hardware or software. In other words, each constitutional part includes each of enumerated constitutional parts for convenience. 
Thus, at least two constitutional parts of each constitutional part may be combined to form one constitutional part or one constitutional part may be divided into a plurality of constitutional parts to perform each function. The embodiment where each constitutional part is combined and the embodiment where one constitutional part is divided are also included in the scope of the present invention, if not departing from the essence of the present invention. The terms used in the present specification are merely used to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the expression of the plural, unless it has a clearly different meaning in the context. In the present specification, it is to be understood that terms such as “including”, “having”, etc. are intended to indicate the existence of the features, numbers, steps, actions, elements, parts, or combinations thereof disclosed in the specification, and are not intended to preclude the possibility that one or more other features, numbers, steps, actions, elements, parts, or combinations thereof may exist or may be added. In other words, when a specific element is referred to as being “included”, elements other than the corresponding element are not excluded, but additional elements may be included in embodiments of the present invention or the scope of the present invention. In addition, some of constituents may not be indispensable constituents performing essential functions of the present invention but be selective constituents improving only performance thereof. The present invention may be implemented by including only the indispensable constitutional parts for implementing the essence of the present invention except the constituents used in improving performance. The structure including only the indispensable constituents except the selective constituents used in improving only performance is also included in the scope of the present invention. Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In describing exemplary embodiments of the present invention, well-known functions or constructions will not be described in detail since they may unnecessarily obscure the understanding of the present invention. The same constituent elements in the drawings are denoted by the same reference numerals, and a repeated description of the same elements will be omitted. Hereinafter, an image may mean a picture configuring a video, or may mean the video itself. For example, “encoding or decoding or both of an image” may mean “encoding or decoding or both of a moving picture”, and may mean “encoding or decoding or both of one image among images of a moving picture.” Hereinafter, terms “moving picture” and “video” may be used as the same meaning and be replaced with each other. Hereinafter, a target image may be an encoding target image which is a target of encoding and/or a decoding target image which is a target of decoding. Also, a target image may be an input image inputted to an encoding apparatus, and an input image inputted to a decoding apparatus. Here, a target image may have the same meaning with the current image. Hereinafter, terms “image”, “picture, “frame” and “screen” may be used as the same meaning and be replaced with each other. Hereinafter, a target block may be an encoding target block which is a target of encoding and/or a decoding target block which is a target of decoding. 
Also, a target block may be the current block which is a target of current encoding and/or decoding. For example, terms “target block” and “current block” may be used as the same meaning and be replaced with each other. Hereinafter, terms “block” and “unit” may be used as the same meaning and be replaced with each other. Or a “block” may represent a specific unit. Hereinafter, terms “region” and “segment” may be replaced with each other. Hereinafter, a specific signal may be a signal representing a specific block. For example, an original signal may be a signal representing a target block. A prediction signal may be a signal representing a prediction block. A residual signal may be a signal representing a residual block. In embodiments, each of specific information, data, flag, index, element and attribute, etc. may have a value. A value of information, data, flag, index, element and attribute equal to “0” may represent a logical false or the first predefined value. In other words, a value “0”, a false, a logical false and the first predefined value may be replaced with each other. A value of information, data, flag, index, element and attribute equal to “1” may represent a logical true or the second predefined value. In other words, a value “1”, a true, a logical true and the second predefined value may be replaced with each other. When a variable i or j is used for representing a column, a row or an index, a value of i may be an integer equal to or greater than 0, or equal to or greater than 1. That is, the column, the row, the index, etc. may be counted from 0 or may be counted from 1. DESCRIPTION OF TERMS Encoder: means an apparatus performing encoding. That is, it means an encoding apparatus. Decoder: means an apparatus performing decoding. That is, it means a decoding apparatus. Block: is an M×N array of samples. Herein, M and N may mean positive integers, and the block may mean a sample array of a two-dimensional form. The block may refer to a unit. A current block may mean an encoding target block that becomes a target when encoding, or a decoding target block that becomes a target when decoding. In addition, the current block may be at least one of a coding block, a prediction block, a residual block, and a transform block. Sample: is a basic unit constituting a block. It may be expressed as a value from 0 to 2^Bd−1 according to a bit depth (Bd). In the present invention, the sample may be used as a meaning of a pixel. That is, a sample, a pel, and a pixel may have the same meaning as each other. Unit: may refer to an encoding and decoding unit. When encoding and decoding an image, the unit may be a region generated by partitioning a single image. In addition, the unit may mean a subdivided unit when a single image is partitioned into subdivided units during encoding or decoding. That is, an image may be partitioned into a plurality of units. When encoding and decoding an image, a predetermined process for each unit may be performed. A single unit may be partitioned into sub-units that have sizes smaller than the size of the unit. Depending on functions, the unit may mean a block, a macroblock, a coding tree unit, a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, a transform block, etc.
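As a rough illustration of the sample definition above, the following Python sketch (a hypothetical helper, not part of the disclosure) clips a sample value to the range from 0 to 2^Bd−1 implied by a bit depth Bd:

def clip_sample(value, bit_depth=8):
    # Legal sample range is [0, 2**bit_depth - 1], e.g. [0, 255] for 8-bit samples.
    max_value = (1 << bit_depth) - 1
    return max(0, min(value, max_value))

# A 10-bit sample must stay within [0, 1023].
print(clip_sample(1100, bit_depth=10))  # prints 1023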
In addition, in order to distinguish a unit from a block, the unit may include a luma component block, a chroma component block associated with the luma component block, and a syntax element of each color component block. The unit may have various sizes and forms, and particularly, the form of the unit may be a two-dimensional geometrical figure such as a square shape, a rectangular shape, a trapezoid shape, a triangular shape, a pentagonal shape, etc. In addition, unit information may include at least one of a unit type indicating the coding unit, the prediction unit, the transform unit, etc., and a unit size, a unit depth, a sequence of encoding and decoding of a unit, etc. Coding Tree Unit: is configured with a single coding tree block of a luma component Y, and two coding tree blocks related to chroma components Cb and Cr. In addition, it may mean that including the blocks and a syntax element of each block. Each coding tree unit may be partitioned by using at least one of a quad-tree partitioning method and a binary-tree partitioning method to configure a lower unit such as coding unit, prediction unit, transform unit, etc. It may be used as a term for designating a sample block that becomes a process unit when encoding/decoding an image as an input image. Coding Tree Block: may be used as a term for designating any one of a Y coding tree block, Cb coding tree block, and Cr coding tree block. Neighbor Block: may mean a block adjacent to a current block. The block adjacent to the current block may mean a block that comes into contact with a boundary of the current block, or a block positioned within a predetermined distance from the current block. The neighbor block may mean a block adjacent to a vertex of the current block. Herein, the block adjacent to the vertex of the current block may mean a block vertically adjacent to a neighbor block that is horizontally adjacent to the current block, or a block horizontally adjacent to a neighbor block that is vertically adjacent to the current block. Reconstructed Neighbor block: may mean a neighbor block adjacent to a current block and which has been already spatially/temporally encoded or decoded. Herein, the reconstructed neighbor block may mean a reconstructed neighbor unit. A reconstructed spatial neighbor block may be a block within a current picture and which has been already reconstructed through encoding or decoding or both. A reconstructed temporal neighbor block is a block at a corresponding position as the current block of the current picture within a reference image, or a neighbor block thereof. Unit Depth: may mean a partitioned degree of a unit. In a tree structure, the highest node (Root Node) may correspond to the first unit which is not partitioned. Also, the highest node may have the least depth value. In this case, the highest node may have a depth of level 0. A node having a depth of level 1 may represent a unit generated by partitioning once the first unit. A node having a depth of level 2 may represent a unit generated by partitioning twice the first unit. A node having a depth of level n may represent a unit generated by partitioning n-times the first unit. A Leaf Node may be the lowest node and a node which cannot be partitioned further. A depth of a Leaf Node may be the maximum level. For example, a predefined value of the maximum level may be 3. A depth of a root node may be the lowest and a depth of a leaf node may be the deepest. 
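To make the depth/size relationship concrete, here is a minimal sketch, assuming (as in the example above) that every partitioning step halves the width and height of a unit; the function name is illustrative only:

def unit_size_at_depth(root_size=64, depth=0):
    # Each level of partitioning halves both dimensions, so size = root size >> depth.
    return root_size >> depth

# Depth 0 -> 64, depth 1 -> 32, depth 2 -> 16, depth 3 -> 8 (the example maximum level of 3).
for depth in range(4):
    print(depth, unit_size_at_depth(64, depth))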
In addition, when a unit is expressed as a tree structure, a level in which a unit is present may mean a unit depth. Bitstream: may mean a bitstream including encoding image information. Parameter Set: corresponds to header information among a configuration within a bitstream. At least one of a video parameter set, a sequence parameter set, a picture parameter set, and an adaptation parameter set may be included in a parameter set. In addition, a parameter set may include a slice header, and tile header information. Parsing: may mean determination of a value of a syntax element by performing entropy decoding, or may mean the entropy decoding itself. Symbol: may mean at least one of a syntax element, a coding parameter, and a transform coefficient value of an encoding/decoding target unit. In addition, the symbol may mean an entropy encoding target or an entropy decoding result. Prediction Mode: may be information indicating a mode encoded/decoded with intra prediction or a mode encoded/decoded with inter prediction. Prediction Unit: may mean a basic unit when performing prediction such as inter-prediction, intra-prediction, inter-compensation, intra-compensation, and motion compensation. A single prediction unit may be partitioned into a plurality of partitions having a smaller size, or may be partitioned into a plurality of lower prediction units. A plurality of partitions may be a basic unit in performing prediction or compensation. A partition which is generated by dividing a prediction unit may also be a prediction unit. Prediction Unit Partition: may mean a form obtained by partitioning a prediction unit. Transform Unit: may mean a basic unit when performing encoding/decoding such as transform, inverse-transform, quantization, dequantization, transform coefficient encoding/decoding of a residual signal. A single transform unit may be partitioned into a plurality of lower-level transform units having a smaller size. Here, transformation/inverse-transformation may comprise at least one among the first transformation/the first inverse-transformation and the second transformation/the second inverse-transformation. Scaling: may mean a process of multiplying a quantized level by a factor. A transform coefficient may be generated by scaling a quantized level. The scaling also may be referred to as dequantization. Quantization Parameter: may mean a value used when generating a quantized level using a transform coefficient during quantization. The quantization parameter also may mean a value used when generating a transform coefficient by scaling a quantized level during dequantization. The quantization parameter may be a value mapped on a quantization step size. Delta Quantization Parameter: may mean a difference value between a predicted quantization parameter and a quantization parameter of an encoding/decoding target unit. Scan: may mean a method of sequencing coefficients within a unit, a block or a matrix. For example, changing a two-dimensional matrix of coefficients into a one-dimensional matrix may be referred to as scanning, and changing a one-dimensional matrix of coefficients into a two-dimensional matrix may be referred to as scanning or inverse scanning. Transform Coefficient: may mean a coefficient value generated after transform is performed in an encoder. It may mean a coefficient value generated after at least one of entropy decoding and dequantization is performed in a decoder. 
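The following sketch illustrates the scaling (dequantization) and quantization relationships described above in simplified form. The step-size mapping, in which the step roughly doubles for every six quantization parameter values, is an assumption of this sketch rather than a definition from the disclosure; real codecs use integer scaling tables and shifts:

def quantization_step(qp):
    # Assumed mapping: the step size doubles for every increase of 6 in the quantization parameter.
    return 0.625 * (2.0 ** (qp / 6.0))

def quantize(transform_coefficient, qp):
    # Forward quantization: divide by the step size and round to the nearest quantized level.
    return round(transform_coefficient / quantization_step(qp))

def scale(quantized_level, qp):
    # Scaling (dequantization): multiply the quantized level by the step size.
    return quantized_level * quantization_step(qp)

level = quantize(100.0, qp=22)
print(level, scale(level, qp=22))  # the scaled value approximates the original coefficient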
A quantized level obtained by quantizing a transform coefficient or a residual signal, or a quantized transform coefficient level also may fall within the meaning of the transform coefficient. Quantized Level: may mean a value generated by quantizing a transform coefficient or a residual signal in an encoder. Alternatively, the quantized level may mean a value that is a dequantization target to undergo dequantization in a decoder. Similarly, a quantized transform coefficient level that is a result of transform and quantization also may fall within the meaning of the quantized level. Non-zero Transform Coefficient: may mean a transform coefficient having a value other than zero, or a transform coefficient level or a quantized level having a value other than zero. Quantization Matrix: may mean a matrix used in a quantization process or a dequantization process performed to improve subjective or objective image quality. The quantization matrix also may be referred to as a scaling list. Quantization Matrix Coefficient: may mean each element within a quantization matrix. The quantization matrix coefficient also may be referred to as a matrix coefficient. Default Matrix: may mean a predetermined quantization matrix preliminarily defined in an encoder or a decoder. Non-default Matrix: may mean a quantization matrix that is not preliminarily defined in an encoder or a decoder but is signaled by a user. Statistic Value: a statistic value for at least one among a variable, an encoding parameter, a constant value, etc. which has a computable specific value may be one or more among an average value, a weighted average value, a weighted sum value, the minimum value, the maximum value, the most frequent value, a median value, and an interpolated value of the corresponding specific values. FIG. 1 is a block diagram showing a configuration of an encoding apparatus according to an embodiment to which the present invention is applied. An encoding apparatus 100 may be an encoder, a video encoding apparatus, or an image encoding apparatus. A video may include at least one image. The encoding apparatus 100 may sequentially encode at least one image. Referring to FIG. 1, the encoding apparatus 100 may include a motion prediction unit 111, a motion compensation unit 112, an intra-prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, a dequantization unit 160, an inverse-transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190. The encoding apparatus 100 may perform encoding of an input image by using an intra mode or an inter mode or both. In addition, the encoding apparatus 100 may generate a bitstream including encoded information through encoding the input image, and output the generated bitstream. The generated bitstream may be stored in a computer readable recording medium, or may be streamed through a wired/wireless transmission medium. When an intra mode is used as a prediction mode, the switch 115 may be switched to intra mode. Alternatively, when an inter mode is used as a prediction mode, the switch 115 may be switched to inter mode. Herein, the intra mode may mean an intra-prediction mode, and the inter mode may mean an inter-prediction mode. The encoding apparatus 100 may generate a prediction block for an input block of the input image. In addition, the encoding apparatus 100 may encode a residual block using a residual of the input block and the prediction block after the prediction block is generated.
The input image may be called as a current image that is a current encoding target. The input block may be called as a current block that is current encoding target, or as an encoding target block. When a prediction mode is an intra mode, the intra-prediction unit120may use a sample of a block that has been already encoded/decoded and is adjacent to a current block as a reference sample. The intra-prediction unit120may perform spatial prediction for the current block by using a reference sample, or generate prediction samples of an input block by performing spatial prediction. Herein, the intra prediction may mean intra-prediction, When a prediction mode is an inter mode, the motion prediction unit111may retrieve a region that best matches with an input block from a reference image when performing motion prediction, and deduce a motion vector by using the retrieved region. In this case, a search region may be used as the region. The reference image may be stored in the reference picture buffer190. Here, when encoding/decoding for the reference image is performed, it may be stored in the reference picture buffer190. The motion compensation unit112may generate a prediction block by performing motion compensation for the current block using a motion vector. Herein, inter-prediction may mean inter-prediction or motion compensation. When the value of the motion vector is not an integer, the motion prediction unit111and the motion compensation unit112may generate the prediction block by applying an interpolation filter to a partial region of the reference picture. In order to perform inter-picture prediction or motion compensation on a coding unit, it may be determined that which mode among a skip mode, a merge mode, an advanced motion vector prediction (AMVP) mode, and a current picture referring mode is used for motion prediction and motion compensation of a prediction unit included in the corresponding coding unit. Then, inter-picture prediction or motion compensation may be differently performed depending on the determined mode. The subtractor125may generate a residual block by using a residual of an input block and a prediction block. The residual block may be called as a residual signal. The residual signal may mean a difference between an original signal and a prediction signal. In addition, the residual signal may be a signal generated by transforming or quantizing, or transforming and quantizing a difference between the original signal and the prediction signal. The residual block may be a residual signal of a block unit. The transform unit130may generate a transform coefficient by performing transform of a residual block, and output the generated transform coefficient. Herein, the transform coefficient may be a coefficient value generated by performing transform of the residual block. When a transform skip mode is applied, the transform unit130may skip transform of the residual block. A quantized level may be generated by applying quantization to the transform coefficient or to the residual signal. Hereinafter, the quantized level may be also called as a transform coefficient in embodiments. The quantization unit140may generate a quantized level by quantizing the transform coefficient or the residual signal according to a parameter, and output the generated quantized level. Herein, the quantization unit140may quantize the transform coefficient by using a quantization matrix. 
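A minimal sketch of the subtractor described above, assuming NumPy arrays for the blocks (names illustrative):

import numpy as np

def residual_block(input_block, prediction_block):
    # The residual signal is the sample-wise difference between the input block and the prediction block.
    return input_block.astype(np.int32) - prediction_block.astype(np.int32)

input_block = np.array([[120, 121], [119, 118]], dtype=np.uint8)
prediction_block = np.array([[118, 120], [119, 121]], dtype=np.uint8)
print(residual_block(input_block, prediction_block))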
The entropy encoding unit 150 may generate a bitstream by performing entropy encoding according to a probability distribution on values calculated by the quantization unit 140 or on coding parameter values calculated when performing encoding, and output the generated bitstream. The entropy encoding unit 150 may perform entropy encoding of sample information of an image and information for decoding an image. For example, the information for decoding the image may include a syntax element. When entropy encoding is applied, symbols are represented so that a smaller number of bits are assigned to a symbol having a high chance of being generated and a larger number of bits are assigned to a symbol having a low chance of being generated, and thus, the size of the bitstream for symbols to be encoded may be decreased. The entropy encoding unit 150 may use an encoding method for entropy encoding such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), etc. For example, the entropy encoding unit 150 may perform entropy encoding by using a variable length coding/code (VLC) table. In addition, the entropy encoding unit 150 may deduce a binarization method of a target symbol and a probability model of a target symbol/bin, and perform arithmetic coding by using the deduced binarization method and a context model. In order to encode a transform coefficient level (quantized level), the entropy encoding unit 150 may change a two-dimensional block form coefficient into a one-dimensional vector form by using a transform coefficient scanning method. A coding parameter may include information (flag, index, etc.), such as a syntax element, that is encoded in an encoder and signaled to a decoder, and information derived when performing encoding or decoding. The coding parameter may mean information required when encoding or decoding an image.
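As an example of the variable-length principle described above (shorter codes for more probable symbols), the sketch below writes the zero-order exponential Golomb code that the passage mentions; this is a generic illustration of the code itself, not the specific binarization used by the entropy encoding unit 150:

def exp_golomb_encode(value):
    # Zero-order exponential Golomb: emit (len - 1) zero bits, then the binary form of value + 1.
    assert value >= 0
    code = bin(value + 1)[2:]  # binary string of value + 1 without the '0b' prefix
    return "0" * (len(code) - 1) + code

# Small (frequent) values get short codewords: 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'.
for value in range(5):
    print(value, exp_golomb_encode(value))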
For example, at least one value or a combination form of a unit/block size, a unit/block depth, unit/block partition information, unit/block shape, unit/block partition structure, whether to partition of a quad-tree form, whether to partition of a binary-tree form, a partition direction of a binary-tree form (horizontal direction or vertical direction), a partition form of a binary-tree form (symmetric partition or asymmetric partition), a prediction mode (intra prediction or inter prediction), a luma intra-prediction mode/direction, a chroma intra-prediction mode/direction, intra partition information, inter partition information, a coding block partition flag, a prediction block partition flag, a transform block partition flag, a reference sample filtering method, a reference sample filter tab, a reference sample filter coefficient, a prediction block filtering method, a prediction block filter tap, a prediction block filter coefficient, a prediction block boundary filtering method, a prediction block boundary filter tab, a prediction block boundary filter coefficient, an intra-prediction mode, an inter-prediction mode, motion information, a motion vector, a motion vector difference, a reference picture index, a inter-prediction angle, an inter-prediction indicator, a prediction list utilization flag, a reference picture list, a reference picture, a motion vector predictor index, a motion vector predictor candidate, a motion vector candidate list, whether to use a merge mode, a merge index, a merge candidate, a merge candidate list, whether to use a skip mode, an interpolation filter type, an interpolation filter tab, an interpolation filter coefficient, a motion vector size, a presentation accuracy of a motion vector, a transform type, a transform size, information of whether or not a primary (first) transform is used, information of whether or not a secondary transform is used, a primary transform index, a secondary transform index, information of whether or not a residual signal is present, a coded block pattern, a coded block flag (CBF), a quantization parameter, a quantization parameter residue, a quantization matrix, whether to apply an intra loop filter, an intra loop filter coefficient, an intra loop filter tab, an intra loop filter shape/form, whether to apply a deblocking filter, a deblocking filter coefficient, a deblocking filter tab, a deblocking filter strength, a deblocking filter shape/form, whether to apply an adaptive sample offset, an adaptive sample offset value, an adaptive sample offset category, an adaptive sample offset type, whether to apply an adaptive loop filter, an adaptive loop filter coefficient, an adaptive loop filter tab, an adaptive loop filter shape/form, a binarization/inverse-binarization method, a context model determining method, a context model updating method, whether to perform a regular mode, whether to perform a bypass mode, a context bin, a bypass bin, a significant coefficient flag, a last significant coefficient flag, a coded flag for a unit of a coefficient group, a position of the last significant coefficient, a flag for whether a value of a coefficient is larger than 1, a flag for whether a value of a coefficient is larger than 2, a flag for whether a value of a coefficient is larger than 3, information on a remaining coefficient value, a sign information, a reconstructed luma sample, a reconstructed chroma sample, a residual luma sample, a residual chroma sample, a luma transform coefficient, a chroma transform coefficient, a quantized 
luma level, a quantized chroma level, a transform coefficient level scanning method, a size of a motion vector search area at a decoder side, a shape of a motion vector search area at a decoder side, a number of time of a motion vector search at a decoder side, information on a CTU size, information on a minimum block size, information on a maximum block size, information on a maximum block depth, information on a minimum block depth, an image displaying/outputting sequence, slice identification information, a slice type, slice partition information, tile identification information, a tile type, tile partition information, a picture type, a bit depth of an input sample, a bit depth of a reconstruction sample, a bit depth of a residual sample, a bit depth of a transform coefficient, a bit depth of a quantized level, and information on a luma signal or information on a chroma signal may be included in the coding parameter. Herein, signaling the flag or index may mean that a corresponding flag or index is entropy encoded and included in a bitstream by an encoder, and may mean that the corresponding flag or index is entropy decoded from a bitstream by a decoder. When the encoding apparatus100performs encoding through inter-prediction, an encoded current image may be used as a reference image for another image that is processed afterwards. Accordingly, the encoding apparatus100may reconstruct or decode the encoded current image, or store the reconstructed or decoded image as a reference image in reference picture buffer190. A quantized level may be dequantized in the dequantization unit160, or may be inverse-transformed in the inverse-transform unit170. A dequantized or inverse-transformed coefficient or both may be added with a prediction block by the adder175. By adding the dequantized or inverse-transformed coefficient or both with the prediction block, a reconstructed block may be generated. Herein, the dequantized or inverse-transformed coefficient or both may mean a coefficient on which at least one of dequantization and inverse-transform is performed, and may mean a reconstructed residual block. A reconstructed block may pass through the filter unit180. The filter unit180may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to a reconstructed sample, a reconstructed block or a reconstructed image. The filter unit180may be called as an in-loop filter. The deblocking filter may remove block distortion generated in boundaries between blocks. In order to determine whether or not to apply a deblocking filter, whether or not to apply a deblocking filter to a current block may be determined based samples included in several rows or columns which are included in the block. When a deblocking filter is applied to a block, another filter may be applied according to a required deblocking filtering strength. In order to compensate an encoding error, a proper offset value may be added to a sample value by using a sample adaptive offset. The sample adaptive offset may correct an offset of a deblocked image from an original image by a sample unit. A method of partitioning samples of an image into a predetermined number of regions, determining a region to which an offset is applied, and applying the offset to the determined region, or a method of applying an offset in consideration of edge information on each sample may be used. 
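The adder step described above can be sketched as follows, assuming the reconstructed residual has already been dequantized and inverse-transformed (NumPy arrays, illustrative names):

import numpy as np

def reconstruct_block(prediction_block, reconstructed_residual, bit_depth=8):
    # Reconstructed block = prediction + reconstructed residual, clipped to the sample range.
    max_value = (1 << bit_depth) - 1
    block = prediction_block.astype(np.int32) + reconstructed_residual
    return np.clip(block, 0, max_value).astype(np.uint8)

prediction_block = np.array([[118, 120], [119, 121]], dtype=np.uint8)
reconstructed_residual = np.array([[2, 1], [0, -3]], dtype=np.int32)
print(reconstruct_block(prediction_block, reconstructed_residual))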
The adaptive loop filter may perform filtering based on a comparison result of the filtered reconstructed image and the original image. Samples included in an image may be partitioned into predetermined groups, a filter to be applied to each group may be determined, and differential filtering may be performed for each group. Information of whether or not to apply the ALF may be signaled by coding units (CUs), and a form and coefficient of the ALF to be applied to each block may vary. The reconstructed block or the reconstructed image having passed through the filter unit 180 may be stored in the reference picture buffer 190. A reconstructed block processed by the filter unit 180 may be a part of a reference image. That is, a reference image is a reconstructed image composed of reconstructed blocks processed by the filter unit 180. The stored reference image may be used later in inter prediction or motion compensation. FIG. 2 is a block diagram showing a configuration of a decoding apparatus according to an embodiment to which the present invention is applied. A decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus. Referring to FIG. 2, the decoding apparatus 200 may include an entropy decoding unit 210, a dequantization unit 220, an inverse-transform unit 230, an intra-prediction unit 240, a motion compensation unit 250, an adder 225, a filter unit 260, and a reference picture buffer 270. The decoding apparatus 200 may receive a bitstream output from the encoding apparatus 100. The decoding apparatus 200 may receive a bitstream stored in a computer readable recording medium, or may receive a bitstream that is streamed through a wired/wireless transmission medium. The decoding apparatus 200 may decode the bitstream by using an intra mode or an inter mode. In addition, the decoding apparatus 200 may generate a reconstructed image generated through decoding or a decoded image, and output the reconstructed image or decoded image. When a prediction mode used when decoding is an intra mode, a switch may be switched to intra mode. Alternatively, when a prediction mode used when decoding is an inter mode, a switch may be switched to inter mode. The decoding apparatus 200 may obtain a reconstructed residual block by decoding the input bitstream, and generate a prediction block. When the reconstructed residual block and the prediction block are obtained, the decoding apparatus 200 may generate a reconstructed block that becomes a decoding target by adding the reconstructed residual block with the prediction block. The decoding target block may be called a current block. The entropy decoding unit 210 may generate symbols by entropy decoding the bitstream according to a probability distribution. The generated symbols may include a symbol of a quantized level form. Herein, an entropy decoding method may be an inverse process of the entropy encoding method described above. In order to decode a transform coefficient level (quantized level), the entropy decoding unit 210 may change a one-dimensional vector form coefficient into a two-dimensional block form by using a transform coefficient scanning method. A quantized level may be dequantized in the dequantization unit 220, or inverse-transformed in the inverse-transform unit 230. The quantized level may be a result of dequantizing or inverse-transforming or both, and may be generated as a reconstructed residual block. Herein, the dequantization unit 220 may apply a quantization matrix to the quantized level.
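A minimal sketch of the coefficient scanning step mentioned above, placing a one-dimensional list of decoded levels back into a two-dimensional block; the up-right diagonal order used here is one possible scan, chosen only for illustration:

import numpy as np

def diagonal_scan_order(size):
    # One possible up-right diagonal scan: walk each anti-diagonal from bottom-left to top-right.
    order = []
    for d in range(2 * size - 1):
        for x in range(size):
            y = d - x
            if 0 <= y < size:
                order.append((y, x))
    return order

def inverse_scan(levels, size):
    # Place the one-dimensional coefficient levels back into a two-dimensional block.
    block = np.zeros((size, size), dtype=np.int32)
    for level, (y, x) in zip(levels, diagonal_scan_order(size)):
        block[y, x] = level
    return block

print(inverse_scan(list(range(16)), 4))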
When an intra mode is used, the intra-prediction unit240may generate a prediction block by performing, for the current block, spatial prediction that uses a sample value of a block adjacent to a decoding target block and which has been already decoded. When an inter mode is used, the motion compensation unit250may generate a prediction block by performing, for the current block, motion compensation that uses a motion vector and a reference image stored in the reference picture buffer270. The adder225may generate a reconstructed block by adding the reconstructed residual block with the prediction block. The filter unit260may apply at least one of a deblocking filter, a sample adaptive offset, and an adaptive loop filter to the reconstructed block or reconstructed image. The filter unit260may output the reconstructed image. The reconstructed block or reconstructed image may be stored in the reference picture buffer270and used when performing inter-prediction. A reconstructed block processed by the filter unit260may be a part of a reference image. That is, a reference image is a reconstructed image composed of reconstructed blocks processed by the filter unit260. The stored reference image may be used later in inter prediction or motion compensation. FIG.3is a view schematically showing a partition structure of an image when encoding and decoding the image.FIG.3schematically shows an example of partitioning a single unit into a plurality of lower units. In order to efficiently partition an image, when encoding and decoding, a coding unit (CU) may be used. The coding unit may be used as a basic unit when encoding/decoding the image. In addition, the coding unit may be used as a unit for distinguishing an intra prediction mode and an inter prediction mode when encoding/decoding the image. The coding unit may be a basic unit used for prediction, transform, quantization, inverse-transform, dequantization, or an encoding/decoding process of a transform coefficient. Referring toFIG.3, an image300is sequentially partitioned in a largest coding unit (LCU), and a LCU unit is determined as a partition structure. Herein, the LCU may be used in the same meaning as a coding tree unit (CTU). A unit partitioning may mean partitioning a block associated with to the unit. In block partition information, information of a unit depth may be included. Depth information may represent a number of times or a degree or both in which a unit is partitioned. A single unit may be partitioned into a plurality of lower level units hierarchically associated with depth information based on a tree structure. In other words, a unit and a lower level unit generated by partitioning the unit may correspond to a node and a child node of the node, respectively. Each of partitioned lower unit may have depth information. Depth information may be information representing a size of a CU, and may be stored in each CU. Unit depth represents times and/or degrees related to partitioning a unit. Therefore, partitioning information of a lower-level unit may comprise information on a size of the lower-level unit. A partition structure may mean a distribution of a coding unit (CU) within an LCU310. Such a distribution may be determined according to whether or not to partition a single CU into a plurality (positive integer equal to or greater than 2 including 2, 4, 8, 16, etc.) of CUs. 
A horizontal size and a vertical size of the CU generated by partitioning may respectively be half of a horizontal size and a vertical size of the CU before partitioning, or may respectively have sizes smaller than a horizontal size and a vertical size before partitioning according to a number of times of partitioning. The CU may be recursively partitioned into a plurality of CUs. By the recursive partitioning, at least one among a height and a width of a CU after partitioning may decrease comparing with at least one among a height and a width of a CU before partitioning. Partitioning of the CU may be recursively performed until to a predefined depth or predefined size. For example, a depth of an LCU may be 0, and a depth of a smallest coding unit (SCU) may be a predefined maximum depth. Herein, the LCU may be a coding unit having a maximum coding unit size, and the SCU may be a coding unit having a minimum coding unit size as described above. Partitioning is started from the LCU310, a CU depth increases by 1 as a horizontal size or a vertical size or both of the CU decreases by partitioning. For example, for each depth, a CU which is not partitioned may have a size of 2N×2N. Also, in case of a CU which is partitioned, a CU with a size of 2N×2N may be partitioned into four CUs with a size of N×N. A size of N may decrease to half as a depth increase by 1. In addition, information whether or not the CU is partitioned may be represented by using partition information of the CU. The partition information may be 1-bit information. All CUs, except for a SCU, may include partition information. For example, when a value of partition information is the first value, the CU may not be partitioned, when a value of partition information is the second value, the CU may be partitioned. Referring toFIG.3, an LCU having a depth 0 may be a 64×64 block. 0 may be a minimum depth. A SCU having a depth 3 may be an 8×8 block. 3 may be a maximum depth. A CU of a 32×32 block and a 16×16 block may be respectively represented as a depth 1 and a depth 2. For example, when a single coding unit is partitioned into four coding units, a horizontal size and a vertical size of the four partitioned coding units may be a half size of a horizontal and vertical size of the CU before being partitioned. In one embodiment, when a coding unit having a 32×32 size is partitioned into four coding units, each of the four partitioned coding units may have a 16×16 size. When a single coding unit is partitioned into four coding units, it may be called that the coding unit may be partitioned into a quad-tree form. For example, when a single coding unit is partitioned into two coding units, a horizontal or vertical size of the two coding units may be a half of a horizontal or vertical size of the coding unit before being partitioned. For example, when a coding unit having a 32×32 size is partitioned in a vertical direction, each of two partitioned coding units may have a size of 16×32. When a single coding unit is partitioned into two coding units, it may be called that the coding unit is partitioned in a binary-tree form. An LCU320ofFIG.3is an example of an LCU to which both of partitioning of a quad-tree form and partitioning of a binary-tree form are applied. FIG.4is a view showing an intra-prediction process. Arrows from center to outside inFIG.4may represent prediction directions of intra prediction modes. Intra encoding and/or decoding may be performed by using a reference sample of a neighbor block of the current block. 
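As a brief illustration of the quad-tree partitioning of FIG. 3 described above, the recursion can be sketched as follows; the split policy passed in is purely illustrative (an encoder would decide it by rate-distortion search or read it from the signaled partition information):

def partition(x, y, size, min_size, max_depth, should_split, depth=0):
    # A CU either stays whole or is split into four half-size CUs, until the
    # minimum size or maximum depth is reached (e.g. 64x64 at depth 0 down to 8x8 at depth 3).
    if size > min_size and depth < max_depth and should_split(x, y, size, depth):
        half = size // 2
        cus = []
        for dy in (0, half):
            for dx in (0, half):
                cus += partition(x + dx, y + dy, half, min_size, max_depth, should_split, depth + 1)
        return cus
    return [(x, y, size, depth)]

# Illustrative policy: split every CU larger than 16x16.
cus = partition(0, 0, 64, min_size=8, max_depth=3, should_split=lambda x, y, s, d: s > 16)
print(len(cus), cus[:4])  # 16 CUs of size 16x16 at depth 2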
A neighbor block may be a reconstructed neighbor block. For example, intra encoding and/or decoding may be performed by using an encoding parameter or a value of a reference sample included in a reconstructed neighbor block. A prediction block may mean a block generated by performing intra prediction. A prediction block may correspond to at least one among CU, PU and TU. A unit of a prediction block may have a size of one among CU, PU and TU. A prediction block may be a square block having a size of 2×2, 4×4, 16×16, 32×32 or 64×64 etc. or may be a rectangular block having a size of 2×8, 4×8, 2×16, 4×16 and 8×16 etc. Intra prediction may be performed according to intra prediction mode for the current block. The number of intra prediction modes which the current block may have may be a fixed value and may be a value determined differently according to an attribute of a prediction block. For example, an attribute of a prediction block may comprise a size of a prediction block and a shape of a prediction block, etc. The number of intra-prediction modes may be fixed to N regardless of a block size. Or, the number of intra prediction modes may be 3, 5, 9, 17, 34, 35, 36, 65, or 67 etc. Alternatively, the number of intra-prediction modes may vary according to a block size or a color component type or both. For example, the number of intra prediction modes may vary according to whether the color component is a luma signal or a chroma signal. For example, as a block size becomes large, a number of intra-prediction modes may increase. Alternatively, a number of intra-prediction modes of a luma component block may be larger than a number of intra-prediction modes of a chroma component block. An intra-prediction mode may be a non-angular mode or an angular mode. The non-angular mode may be a DC mode or a planar mode, and the angular mode may be a prediction mode having a specific direction or angle. The intra-prediction mode may be expressed by at least one of a mode number, a mode value, a mode numeral, a mode angle, and mode direction. A number of intra-prediction modes may be M, which is larger than 1, including the non-angular and the angular mode. In order to intra-predict a current block, a step of determining whether or not samples included in a reconstructed neighbor block may be used as reference samples of the current block may be performed. When a sample that is not usable as a reference sample of the current block is present, a value obtained by duplicating or performing interpolation on at least one sample value among samples included in the reconstructed neighbor block or both may be used to replace with a non-usable sample value of a sample, thus the replaced sample value is used as a reference sample of the current block. When intra-predicting, a filter may be applied to at least one of a reference sample and a prediction sample based on an intra-prediction mode and a current block size. In case of a planar mode, when generating a prediction block of a current block, according to a position of a prediction target sample within a prediction block, a sample value of the prediction target sample may be generated by using a weighted sum of an upper and left side reference sample of a current sample, and a right upper side and left lower side reference sample of the current block. In addition, in case of a DC mode, when generating a prediction block of a current block, an average value of upper side and left side reference samples of the current block may be used. 
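The DC mode described above (prediction from the average of the upper and left reference samples) can be sketched as follows; real codecs may additionally filter or weight boundary samples, which is omitted here:

import numpy as np

def dc_prediction(top_reference, left_reference):
    # Every sample of the prediction block takes the average of the upper and left reference samples.
    total = int(top_reference.sum()) + int(left_reference.sum())
    count = len(top_reference) + len(left_reference)
    dc = (total + count // 2) // count  # rounded integer average
    return np.full((len(top_reference), len(top_reference)), dc, dtype=np.int32)

top_reference = np.array([100, 102, 104, 106])
left_reference = np.array([98, 99, 101, 103])
print(dc_prediction(top_reference, left_reference))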
In addition, in case of an angular mode, a prediction block may be generated by using an upper side, a left side, a right upper side, and/or a left lower side reference sample of the current block. In order to generate a prediction sample value, interpolation of a real number unit may be performed. An intra-prediction mode of a current block may be entropy encoded/decoded by predicting an intra-prediction mode of a block present adjacent to the current block. When intra-prediction modes of the current block and the neighbor block are identical, information that the intra-prediction modes of the current block and the neighbor block are identical may be signaled by using predetermined flag information. In addition, indicator information of an intra-prediction mode that is identical to the intra-prediction mode of the current block among intra-prediction modes of a plurality of neighbor blocks may be signaled. When intra-prediction modes of the current block and the neighbor block are different, intra-prediction mode information of the current block may be entropy encoded/decoded by performing entropy encoding/decoding based on the intra-prediction mode of the neighbor block. FIG.5is a view illustrating a method of performing intra prediction on a current block according to an embodiment of the present invention. As shown inFIG.5, intra prediction may include an intra-prediction mode inducement step S510, a reference sample configuration step S520and/or an intra-prediction execution step S530. At the intra-prediction mode inducement step S510, the intra-prediction mode of the current block may be induced using at least one of a method of using an intra-prediction mode of a neighbor block, a method of decoding the intra-prediction mode of the current block (e.g., entropy decoding), a method of using an intra-prediction mode of a color component, a method of using an intra-prediction mode using a transform model, a method of using information on a size and/or a shape of the current block and a method of using a predetermined intra prediction mode indicator. In the method of using the intra-prediction mode of the neighbor block, the intra-prediction mode of the current block may be induced by using at least one of the intra-prediction mode of the neighbor block, a combination of one or more intra-prediction modes of the neighbor block, and/or an intra-prediction mode induced by using MPM lists. When using MPM lists, intra prediction mode of the current block may be encoded/decoded using at least one among an MPM list of the current block, an MPM list of the upper-layer block and an MPM list of the neighbor block. An MPM list may be constructed based on at least one among intra prediction mode information of adjacent blocks and frequency of intra prediction modes of adjacent blocks. At the reference sample configuration step S520, a reference sample selection step and/or a reference sample filtering step may be performed such that a reference sample may be configured. At the intra prediction execution step S530, at least one method of non-directionality prediction, directionality prediction, location-information-based prediction and/or prediction between color components is used to perform intra prediction of the current block. At the intra prediction execution step S530, filtering for a prediction sample may be executed. Hereinafter, the intra-prediction mode inducement step S510will be described in detail. 
A neighbor block of the current block may be at least one of lower left, left, upper left, upper, and upper right neighbor blocks of the current block. Among the neighbor blocks, only neighbor blocks that can use the intra-prediction mode may be used. Among the neighbor blocks of the current block, an intra-prediction mode of a neighbor block at a particular position may be induced as the intra-prediction mode of the current block. Alternatively, two or more neighbor blocks are selected, a statistic value of intra-prediction modes of the selected neighbor blocks may be induced as the intra-prediction mode of the current block. The intra-prediction mode may be indicated by at least one of a mode number, a mode value, and a mode angle. In the description, the statistic value may be at least one of a minimum value, a maximum value, an average value, a weighted average value, a mode, and a median value. The neighbor block at the particular position and/or the selected neighbor blocks may be a block(s) at a predefined fixed position. Alternatively, the block(s) may be specified based on information signaled through a bitstream. When using at least two intra-prediction modes, whether the intra-prediction mode has directionality or non-directionality may be considered. For example, among two or more intra-prediction modes, the intra-prediction mode of the current block may be induced using a directional intra-prediction mode. Alternatively, the intra-prediction mode of the current block may be induced using a non-directional intra-prediction mode. When the weighted average value is used as the statistic value, a relatively high weight may be assigned to a particular intra-prediction mode. The particular intra-prediction mode may be at least one of, for example, a vertical mode, a horizontal mode, a diagonal mode, a non-directionality mode. Alternatively, information on the particular intra-prediction mode may be signaled through a bitstream. Respective weights of particular intra-prediction modes may be equal to or different from each other. Alternatively, the weight may be determined based on a size of a neighbor block. For example, a relatively high weight may be assigned to an intra-prediction mode of a relatively large neighbor block. The intra-prediction mode of the current block may be induced using an MPM (Most Probable Mode). When using the MPM, an MPM list may be configured using N intra-prediction modes induced using the intra-prediction mode of the neighbor block. N is a positive integer, and may have a value that differs depending on a size and/or a shape of the current block. Alternatively, information on N may be signaled through a bitstream. Intra-prediction modes that may be included in the MPM list may be intra-prediction modes of lower left, left, upper left, upper and/or upper right neighbor blocks of the current block. Also, the non-directionality mode may be included in the MPM list. The intra-prediction modes may be included in the MPM list in a predetermined order. The predetermined order may be, for example, an order of a mode of a lower left block, a mode of an upper block, a Planar mode, a DC mode, a mode of a lower left block, a mode of an upper right block, and a mode of an upper left block. Alternatively, the predetermined order may be an order of a mode of a left block, a mode of an upper block, a Planar mode, a DC mode, a mode of a lower left block, a mode of an upper right block, and a mode of an upper left block. 
The MPM list may be configured to not include a duplicate mode. When the number of intra-prediction modes included in the MPM list is less than N, an additional intra-prediction mode may be included in the MPM list. The additional intra-prediction mode may be a mode corresponding to +k or −k of a directional intra-prediction mode included in the MPM list. Here, k may be an integer equal to or greater than one. Alternatively, at least one of a horizontal mode, a vertical mode, and a diagonal mode (a 45-degree angle mode, a 135-degree angle mode, and a 225-degree angle mode) may be included in the MPM list. Alternatively, a statistic value of at least one intra-prediction mode of the neighbor block may be used to induce an intra-prediction mode to be included in the MPM list. There may be several MPM lists, and the several MPM lists may be configured by different methods. The intra-prediction mode included in each MPM list may not be duplicated. Information (e.g., flag information) indicating whether the intra-prediction mode of the current block is included in the MPM list may be signaled through a bitstream. When there are N MPM lists, N pieces of flag information may exist. Determining whether the intra-prediction mode of the current block exists in the MPM list may be performed in order for the N MPM lists. Alternatively, information indicating an MPM list including the intra-prediction mode of the current block, among the N MPM lists, may be signaled. When the intra-prediction mode of the current block is included in the MPM list, index information specifying which mode among the modes included in the MPM list is used may be signaled through a bitstream. Alternatively, a mode at a particular position (e.g., the first) of the MPM list may be induced as the intra-prediction mode of the current block. In configuring the MPM list, one MPM list may be configured for a predetermined-size block. When the predetermined-size block is partitioned into several sub-blocks, each of the several sub-blocks may use the configured MPM list. Alternatively, the intra-prediction mode of the current block may be induced using at least one of the intra-prediction mode of the current block induced using the MPM and the intra-prediction mode of the neighbor block. For example, when the intra-prediction mode of the current block induced using the MPM is Pred_mpm, the Pred_mpm may be changed into a predetermined mode by using at least one intra-prediction mode of the neighbor block such that the intra-prediction mode of the current block may be induced. For example, the Pred_mpm may be increased or decreased by N by being compared with the value of the intra-prediction mode of the neighbor block. Here, N may be a predetermined integer, such as +1, +2, +3, 0, −1, −2, −3, etc. Alternatively, when one of the Pred_mpm and a mode of the neighbor block is the non-directionality mode and the other one is the directionality mode, the non-directionality mode may be induced as the intra-prediction mode of the current block, or the directionality mode may be induced as the intra-prediction mode of the current block. In the case where the intra prediction mode of a current block is derived using a Most Probable Mode (MPM) list, for example, at least one of the following MPM lists may be used, or the intra prediction mode of the current block may be entropy-encoded/decoded:
- An MPM list for the current block.
- At least one of the MPM lists for upper-layer blocks of the current block.
- At least one of the MPM lists for neighbor blocks of the current block.
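A minimal sketch of the MPM list construction rules described above (neighbor modes first without duplicates, then +k/−k variants of directional entries, then default modes). The mode numbering (0 = Planar, 1 = DC, 2 to 66 directional) and the list length of 6 are assumptions for illustration, not values fixed by the disclosure:

PLANAR, DC = 0, 1                 # non-directional modes (numbering assumed)
NUM_DIRECTIONAL = 65              # directional modes 2..66 (assumed)

def build_mpm_list(neighbor_modes, n=6):
    mpm = []
    def add(mode):
        # Keep the list free of duplicates and no longer than n entries.
        if mode not in mpm and len(mpm) < n:
            mpm.append(mode)
    for mode in neighbor_modes:               # modes of left/upper/etc. neighbor blocks
        add(mode)
    for mode in list(mpm):                    # +k / -k (here k = 1) variants of directional modes
        if mode > DC:
            add(2 + (mode - 2 + 1) % NUM_DIRECTIONAL)
            add(2 + (mode - 2 - 1) % NUM_DIRECTIONAL)
    for mode in (PLANAR, DC, 50, 18, 34):     # default modes: Planar, DC, vertical, horizontal, diagonal (numbers assumed)
        add(mode)
    return mpm

print(build_mpm_list([50, 50]))       # left and upper neighbors both vertical
print(build_mpm_list([PLANAR, 18]))   # one non-directional, one horizontal neighbor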
Information required to make an MPM list, such as whether the MPM list of the current block is used, whether at least one of the MPM lists of the upper-layer blocks of the current block is used, and whether at least one of the MPM lists of the neighbor blocks of the current block is used, may be entropy-encoded/decoded in at least one of a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), an adaptation parameter set (APS), a slice, a header, a tile, a CTU, a CU, a PU, and a TU. An upper-layer block may be a block having a smaller depth value than the current block. In addition, the upper-layer block may refer to at least one of blocks including the current block among blocks having the smaller depth values. Herein, a depth value may mean a value that is increased by 1 each time a block is divided. For example, the depth value of an undivided CTU may be 0. A neighbor block may be at least one of blocks which are spatially and/or temporally neighboring to the current block. The neighbor blocks may have already been encoded/decoded. In addition, the neighbor block may have a depth (or size) equal to or different from that of the current block. The neighbor block may refer to a block at a predetermined position with respect to the current block. Herein, the predetermined position may be at least one of top left, top, top right, left, and bottom left positions with respect to the current block. Or the predetermined position may be a position within a picture different from a picture to which the current block belongs. A block at the predetermined position may refer to at least one of a collocated block of the current block in the different picture and/or a block neighboring to the collocated block. Or the block at the predetermined position may be a block having the same prediction mode as that of the current block in a specific area of the different picture, corresponding to the current block. The MPM list of the upper-layer block or the neighbor block may refer to an MPM list made based on the upper-layer block or the neighbor block. The intra prediction mode of an encoded/decoded block neighboring to the upper-layer block or the neighbor block may be added to the MPM list of the upper-layer block or the neighbor block. The intra prediction mode of the current block may be derived or entropy-encoded/decoded, using N MPM lists. Herein, N may be 0 or a positive integer. That is, the intra prediction mode of the current block may be derived or entropy-encoded/decoded, using a plurality of MPM lists. In addition, a plurality of MPM lists may refer to multiple MPM lists or multiple lists. N MPM lists for the current block may include at least one of an MPM list of the current block, an MPM list of an upper-layer block, and an MPM list of a neighbor block. Further, the N MPM lists may be generated, using at least one of coding parameters for the current block. The intra prediction mode of the current block may be derived using the made MPM list, or entropy-encoded/decoded. The plurality of MPM lists for the current block may include MPM lists for N upper-layer blocks. Herein, N is 0 or a positive integer. Information such as the number, depths, and/or depth ranges of upper-layer blocks, and/or the differences between the depth of the current block and the depths of the upper-layer blocks may be required in making an MPM list for an upper-layer block. 
The information required to make the MPM list of the upper-layer block may be entropy-encoded/decoded in at least one of a VPS, an SPS, a PPS, an APS, a slice header, a tile header, a CTU, a CU, a PU, and a TU. If a plurality of MPM lists for the current block include MPM lists of upper-layer blocks, the number and/or depth values of the used upper-layer blocks may be derived using information about the size and/or depth of the current block. The plurality of MPM lists for the current block may include MPM lists for N neighbor blocks. The N neighbor blocks may include neighbor blocks at predetermined positions. N may be 0 or a positive integer. Information such as the number N, depth values, sizes, and/or positions of the included neighbor blocks may be required to make an MPM list of a neighbor block. The information required to make an MPM list of a neighbor block may be entropy-encoded/decoded in at least one of a VPS, an SPS, a PPS, an APS, a slice header, a tile header, a CTU, a CU, a PU, and a TU. The number and/or positions of the neighbor blocks may be determined variably according to the size, shape, and/or position of the current block. An MPM list of a neighbor block may be made, if the depth value of the neighbor block is a predetermined value or falls within a predetermined range. The predetermined range may be defined by at least one of a minimum value or a maximum value. Information about the at least one of the minimum value or the maximum value may be entropy-encoded/decoded in the afore-described predetermined unit. An intra prediction mode derived based on at least one of the current block, the upper-layer block, and the neighbor block may be included in one MPM list for the current block. That is, if not a plurality of MPM lists but a single MPM list is used for the current block, the MPM list may be made up of at least one of the intra prediction modes derived based on at least one of the current block, the upper-layer block, and the neighbor block. If N MPM lists for the current block include an MPM list for at least one of upper-layer blocks and neighbor blocks, the order of making the N MPM lists may be determined. Herein, N may be 0 or a positive integer. The order of making the MPM lists may be preset in the encoder and the decoder. Or, the order of making the MPM lists may be determined based on a coding parameter of each corresponding block. Or, the order of making the MPM lists may be determined based on a coding parameter of the current block. Or, information about the order of making the MPM lists may be entropy-encoded/decoded. For example, with an MPM list of the current block used as the first MPM list, a plurality of MPM lists may be made for the current block, using MPM lists for at least K upper-layer blocks arranged in an ascending or descending order of depth values. For example, with an MPM list of the current block used as the first MPM list, a plurality of MPM lists may be made for the current block, using MPM lists for at least one of top left, left, bottom left, top, and top right neighbor blocks in a predetermined order. For example, a plurality of MPM lists may be made for the current block in the order of an MPM list of the current block→MPM lists for K upper-layer blocks in a predetermined order→MPM lists for L neighbor blocks in a predetermined order.
Or, a plurality of MPM lists may be made for the current block in the order of an MPM list of the current block→MPM lists for L neighbor blocks in a predetermined order→MPM lists for K upper-layer blocks in a predetermined order. Herein, each of K and L may be 0 or a positive integer. A following MPM list may not include intra prediction modes of a preceding MPM list in the order. In addition, the variable length code of an indicator for the preceding MPM list may be shorter than the variable length code of an indicator for the following MPM list. In addition, the preceding MPM list may include a smaller number of candidates than the following MPM list. Indicators may be allocated to the MPM lists in the order of making the MPM lists. The redundancy check on the modes included in the MPM lists may be performed in the step of making a plurality of MPM lists. Or, the redundancy check may be performed after all of a plurality of MPM lists are made. Or, the redundancy check may be performed each time an intra prediction mode is included in an MPM list. The predetermined intra prediction modes added to substitute for redundant prediction modes may include at least one of, for example, INTRA_PLANAR, INTRA_DC, horizontal mode, vertical mode, 45-degree mode, 135-degree mode, 225-degree mode, MPM_LIST_K_MODE_X±delta, INTRA_DM, and INTRA_LM. Here, MPM_LIST_K_MODE_X may refer to a predetermined intra prediction mode included in the K-th MPM list. INTRA_DM may refer to an intra prediction mode in which an intra chroma prediction mode is determined to be identical to an intra luma prediction mode. In addition, INTRA_LM may refer to an intra prediction mode in which at least one of a chroma prediction/residual/reconstruction block is generated based on at least one of a luma prediction/residual/reconstruction block. In addition, delta may be a positive integer. If the intra prediction mode of the current block is derived using N MPM lists or the intra prediction mode of the current block is entropy-encoded/decoded, an indicator (MPM flag) indicating whether the intra prediction mode of the current block is included among the intra prediction modes of each of the N MPM lists may be entropy-encoded/decoded, for each of the N MPM lists. In the presence of the same intra prediction mode as the intra prediction mode of the current block among the intra prediction modes included in a specific one of the N MPM lists, index information (an MPM index) indicating the position or number of the intra prediction mode in the specific MPM list may be entropy-encoded. In addition, the same intra prediction mode as the intra prediction mode of the current block among the intra prediction modes included in the specific MPM list may be identified by entropy-decoding the index information. The index information may be entropy-encoded to a fixed-length code or a variable-length code. In addition, the intra prediction mode of the current block may be derived, using the index information. In the absence of the same intra prediction mode as the intra prediction mode of the current block among the intra prediction modes included in the N MPM lists, a remaining intra prediction mode of the current block may be entropy-encoded in the encoder. The remaining intra prediction mode may be used to identify the intra prediction mode of the current block that is not included in at least one of the MPM lists.
Or, the remaining intra prediction mode may be used to identify the intra prediction mode of the current block that is included in none of the MPM lists. An MPM list may include a predetermined number N of candidate modes. Each of the N candidate modes may be represented by using MPM_mode_idxK (K is an integer from 1 to N). An order of padding the candidate modes corresponding to idx1 to idxN into the MPM list may be adaptively determined on the basis of a number of occurrence frequencies of intra modes of neighbor blocks. Herein, the intra mode may mean an intra-prediction mode or an intra-prediction angle. Alternatively, a type or arrangement order or both of the candidate modes may be determined by considering at least one of a position of a neighbor block, an intra-prediction mode of a neighbor block, whether or not an intra-prediction mode is an angular mode, an angle of an intra-prediction mode, and a number of frequencies of an intra-prediction mode. FIG. 6 is a view showing a neighbor block of a current block. A neighbor block of a current block to be encoded/decoded may include at least one of a left side block, a lower left side block, an upper side block, an upper right side block, and an upper left side block of the current block. For example, in FIG. 6, when the current block is P, the left side block may be at least one of blocks L, M, and N. The lower left side block may be the block O. The upper side block may be at least one of blocks G, H, and I. The upper right side block may be the block J. The upper left side block may be at least one of blocks D and E. When a number of candidate blocks of a neighbor block at a specific position is a positive integer N, an intra-prediction mode of a corresponding neighbor block may be determined by a combination of the N candidate blocks. Herein, the combination may mean at least one of statistical values including a maximum value (max), a minimum value (min), a median value (median), and a weight sum (floor calculation of weight sum, ceil calculation of weight sum, or round calculation of weight sum) of intra-prediction modes of the N candidate blocks. Alternatively, when a number of candidate blocks is N, an intra-prediction mode of a neighbor block may be determined on the basis of all or a part of intra-prediction modes of the N candidate blocks. The part of intra-prediction modes may be a candidate block(s) at a position predetermined in an image encoding/decoding apparatus. For example, in FIG. 6, when the current block is P, an intra-prediction mode of a left side block may be determined as an intra-prediction mode of a candidate block L, an intra-prediction mode of a candidate block M, or an intra-prediction mode of a candidate block N. Alternatively, an intra-prediction mode of a left side block may be determined by combining intra-prediction modes of at least two candidate blocks selected from the three candidate blocks L, M, and N. By using intra-prediction modes of neighbor blocks of a current block, an MPM list may be initialized according to a predetermined order. FIG. 7 is a view showing an embodiment of initializing an MPM list of a current block. The initialization may mean adding an intra-prediction mode to the MPM list. FIG. 7(a) is a view showing a case where a current block is a square block. FIGS. 7(b) and 7(c) are views showing cases where a current block is a non-square block.
In the embodiment shown in FIG. 7, an MPM list of a current block may be configured by referencing an intra-prediction mode of seven neighbor blocks (left side 1, left side 2, upper side 1, upper side 2, lower left side, upper right side, and upper left side blocks). In the embodiment shown in FIG. 7, a predetermined order of configuring an MPM list may vary according to a size or form or both of the current block. A predetermined order of initializing an MPM list may be an order of the following blocks: left side 1→upper side 1→upper right side→lower left side→upper left side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→lower left side→upper right side→upper left side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→upper left side→upper right side→lower left side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→upper left side→lower left side→upper right side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→left side 2→upper side 2→upper right side→lower left side→upper left side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→left side 2→upper side 2→lower left side→upper right side→upper left side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→left side 2→upper side 2→upper left side→upper right side→lower left side. Alternatively, the predetermined order may be an order of the following blocks: left side 1→upper side 1→left side 2→upper side 2→upper left side→lower left side→upper right side. Alternatively, the predetermined order may be an order of the following blocks: left side 2→upper side 2→left side 1→upper side 1→upper right side→lower left side→upper left side. Alternatively, the predetermined order may be an order of the following blocks: left side 2→upper side 2→left side 1→upper side 1→lower left side→upper right side→upper left side. Alternatively, the predetermined order may be an order of the following blocks: left side 2→upper side 2→left side 1→upper side 1→upper left side→upper right side→lower left side. Alternatively, the predetermined order may be an order of the following blocks: left side 2→upper side 2→left side 1→upper side 1→upper left side→lower left side→upper right side. According to another embodiment of the present invention, when a current block is a non-square block with a horizontal length longer than a vertical length, an intra-prediction mode of a left side neighbor block of the current block may be preferentially added to an MPM list. For example, intra-prediction modes of neighbor blocks may be added to the MPM list in an order of the following blocks: left side 1→left side 2→upper side 1→upper side 2→upper left side→lower left side→upper right side. When a current block is a non-square block with a vertical length longer than a horizontal length, an intra-prediction mode of an upper side neighbor block of the current block may be preferentially added to an MPM list. For example, intra-prediction modes of neighbor blocks may be added to the MPM list in an order of the following blocks: upper side 1→upper side 2→left side 1→left side 2→upper left side→lower left side→upper right side. In order to initialize an MPM list, an order other than the above example orders may be applied.
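A minimal sketch of the shape-dependent initialization order described above is given below. Which of the example orders is actually chosen for each block shape is a design choice, so the three orders used here (square, horizontally long, and vertically long blocks) are assumptions for illustration only.

    def init_scan_order(width, height):
        """Return an assumed neighbor scan order for initializing the MPM list,
        depending on the shape of the current block."""
        if width > height:
            # Horizontally long non-square block: left-side neighbors first.
            return ['left1', 'left2', 'above1', 'above2',
                    'above_left', 'below_left', 'above_right']
        if height > width:
            # Vertically long non-square block: upper-side neighbors first.
            return ['above1', 'above2', 'left1', 'left2',
                    'above_left', 'below_left', 'above_right']
        # Square block: one of the example orders from the description.
        return ['left1', 'above1', 'left2', 'above2',
                'above_right', 'below_left', 'above_left']

    def init_mpm_list(width, height, neighbor_modes):
        """Add the intra-prediction mode of each available neighbor, in the
        shape-dependent order, to an (initially empty) MPM list."""
        mpm = []
        for pos in init_scan_order(width, height):
            mode = neighbor_modes.get(pos)
            if mode is not None and mode not in mpm:
                mpm.append(mode)
        return mpm

    print(init_mpm_list(16, 4, {'left1': 20, 'above1': 50, 'above_left': 20}))  # [20, 50]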
Initializing an MPM list (or re-arrangement) may be performed in a predetermined block unit. Herein, the block unit may mean a coding block, a prediction block, or a transform block. Alternatively, initializing an MPM list (or re-arrangement) may be performed in a predetermined regional unit configured with a plurality of block units. The predetermined region may be a region defined for updating a number of frequencies of intra-prediction mode. A size or form or both of the region may be predetermined in an image encoding/decoding apparatus, or may be variably determined on the basis of information encoded for specifying the region. Alternatively, initializing an MPM list (or re-arrangement) may be selectively performed on the basis of a predetermined flag. The flag may be signaled from an image encoding apparatus, or may be derived by considering a number of frequencies of an intra-prediction mode of a neighbor block. Alternatively, initializing an MPM list (or re-arrangement) may be selectively performed on the basis of a size or form or both of a current block. For example, initializing an MPM list (or re-arrangement) may be performed when the current block has a specific form. Herein, the specific form may mean a square or non-square form, a horizontally long non-square or vertically long non-square form, or a symmetrical or asymmetrical form. For example, initializing an MPM list (or re-arrangement) may be performed when a size of the current block is equal to a predetermined threshold size, or may be performed when the size of the current block is smaller or greater than a predetermined threshold size. The threshold size may be a value predetermined in an image encoding/decoding apparatus, or may be determined on the basis of signaled information of the threshold size. Alternatively, initializing an MPM list (or re-arrangement) may be performed regardless of a size or form or both of the current block. When initializing an MPM list of a current block by using an intra-prediction mode of a neighbor block, a number of occurrence frequencies of each candidate mode of the MPM list may be initialized. For initializing the number of occurrence frequencies, at least one of methods described below may be used. For example, a number of frequencies of all candidate modes may be initialized by using the same predetermined value. The predetermined value may be a positive integer including zero. Alternatively, a number of frequencies of a candidate mode may be determined on the basis of a block size of a neighbor block. For example, when the neighbor block has a size and form of W×H, a number of frequencies of each candidate mode of an MPM list may be initialized as W*H, W, or H. Alternatively, a number of frequencies of each candidate mode may be differently initialized according to a position of a neighbor block of a current block. For example, a number of frequencies of left side neighbor blocks of the current block may be initialized as an H (height) value of a corresponding neighbor block. Similarly, a number of frequencies of upper side neighbor blocks of the current block may be initialized as a W (width) value of a corresponding neighbor block. Herein, a form of the current block (whether the current block is square or non-square, and in case the current block is non-square, whether the current block has a longer horizontal length or vertical length) may be considered. 
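The initialization of occurrence frequencies described above may be sketched as follows, under the assumption that a left-side neighbor is weighted by its height H and an upper-side neighbor by its width W; a constant value or W*H are equally possible choices mentioned in the description.

    def init_frequency(neighbor):
        """neighbor: dict with keys 'position' ('left' or 'above'), 'width', 'height'.
        Returns an assumed initial occurrence frequency for that neighbor's mode."""
        if neighbor['position'] == 'left':
            return neighbor['height']          # H of the left-side neighbor block
        if neighbor['position'] == 'above':
            return neighbor['width']           # W of the upper-side neighbor block
        return 1                               # other positions: a constant value

    def init_mpm_with_frequency(neighbors):
        """Build (mode, frequency) entries in scan order; duplicates are handled later."""
        entries = []
        for nb in neighbors:
            entries.append({'mode': nb['mode'], 'freq': init_frequency(nb)})
        return entries

    neighbors = [
        {'position': 'left',  'mode': 18, 'width': 4, 'height': 16},
        {'position': 'above', 'mode': 50, 'width': 8, 'height': 4},
    ]
    print(init_mpm_with_frequency(neighbors))
    # [{'mode': 18, 'freq': 16}, {'mode': 50, 'freq': 8}]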
When a current block is a non-square block having a horizontal length longer than a vertical length, an MPM list may be initialized as a horizontal mode. In other words, the horizontal mode may be more preferentially added to the MPM list than other modes. Herein, the horizontal mode may mean the exact horizontal mode, or may mean all or a part of modes including a horizontal mode and modes adjacent to the horizontal mode (horizontal mode±Q). Herein, Q may be a positive integer. When a current block is a non-square block having a vertical length longer than a horizontal length, an MPM list may be initialized as a vertical mode. In other words, the vertical mode may be more preferentially added to the MPM list than other modes. Herein, the vertical mode may mean the exact vertical mode, or may mean all or a part of modes including a vertical mode and modes adjacent to the vertical mode (vertical mode±Q). Herein, Q may be a positive integer. Alternatively, an MPM list may be initialized by using at least one of a PLANAR mode, a DC mode, a horizontal mode, a vertical mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode. In other words, at least one of a PLANAR mode, a DC mode, a horizontal mode, a vertical mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode may be more preferentially added to the MPM list than other modes. A mode candidate set including at least one of a PLANAR mode, a DC mode, a horizontal mode, a vertical mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode may be configured. In addition, an MPM list may be initialized by using the mode candidate set according to at least one of coding parameters such as size, form, etc. of a current block. After initializing an MPM list, a number of occurrence frequencies of each intra mode of neighbor blocks may be checked. For example, a candidate mode stored in the MPM list may be checked in an order from idx1 to idxN, and when an intra mode overlaps, a number of frequencies of the corresponding intra mode may be increased by K. K may be a predetermined positive integer. After initializing a number of occurrence frequencies of candidate modes stored in an MPM list of a current block, a number of frequencies of a corresponding mode may be updated when identical candidate modes are present in the MPM list. For example, whether or not an identical candidate mode is present within an MPM list may be checked in an order from idx1 to idxN. Herein, when a candidate mode that is currently checked is a mode corresponding to idxK (herein, K is a positive integer equal to or smaller than N), a number of frequencies of idxP may be updated by comparing candidate modes from idx1 to idxL (herein, L is a positive integer smaller than K), and adding a number of frequencies of idxK to a number of frequencies of idxP when the candidate mode of idxP (herein, P is a positive integer equal to or smaller than L) is identical to the candidate mode of idxK. Herein, the candidate mode corresponding to idxK may be removed from the MPM list. By performing the above process up to a candidate mode corresponding to idxN, an MPM list in which candidate modes do not overlap may be generated. Herein, a number of frequencies of each candidate mode of the generated MPM list may be identical or may vary.
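The duplicate-merging step described above, in which the frequency of a candidate mode found again at a later index is accumulated into its earlier occurrence and the later entry is removed, can be sketched as below; the (mode, frequency) entry format follows the previous sketch and is an assumption.

    def merge_duplicates(entries):
        """entries: list of {'mode': m, 'freq': f} in idx1..idxN order.
        When a candidate mode occurs again, its frequency is added to the first
        occurrence and the later entry is dropped, so modes no longer overlap."""
        merged = []
        for entry in entries:
            for kept in merged:
                if kept['mode'] == entry['mode']:
                    kept['freq'] += entry['freq']   # accumulate the frequency
                    break
            else:
                merged.append(dict(entry))          # first occurrence of this mode
        return merged

    entries = [{'mode': 18, 'freq': 16}, {'mode': 50, 'freq': 8}, {'mode': 18, 'freq': 4}]
    print(merge_duplicates(entries))
    # [{'mode': 18, 'freq': 20}, {'mode': 50, 'freq': 8}]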
After updating a number of occurrence frequencies of candidate modes within an MPM list of a current block and updating the MPM list so that candidate modes thereof do not overlap, candidate modes within the MPM list may be sorted in descending or ascending order on the basis of a number of occurrence frequencies. As another example of re-arranging candidate modes of an MPM list on the basis of a number of occurrence frequencies, when padding each candidate mode to the MPM list from idx1 to idxN according to a predetermined order that scans neighbor blocks of a current block, whether or not a candidate mode that is currently to be added overlaps with candidate modes stored in the MPM list may be checked, and an order of pre-stored candidate modes may be changed. For example, when adding a candidate mode corresponding to idxK (K is a positive integer equal to or smaller than N) to the MPM list, when the candidate mode overlaps with one of candidate modes from idx1 to idx(K−1) which are pre-stored in the MPM list, the pre-stored overlapping candidate mode may be replaced with a candidate mode corresponding to idx(K−J), and the idxK mode may not be added to the MPM list. Herein, J is a positive integer smaller than K. Updating of a number of occurrence frequencies of an intra-prediction mode of a neighbor block and re-initializing may be performed in at least one of a block unit, a CTU unit, a slice unit, a picture unit, and a group of pictures (GOP) unit. For example, a number of occurrence frequencies of an intra-prediction mode of a neighbor block may be accumulated in a unit of a current block, and a number of frequencies of all intra-prediction modes may be initialized as zero before the following block. Alternatively, a number of occurrence frequencies of an intra-prediction mode of a neighbor block may be accumulated in a unit of N blocks, and a number of frequencies of all intra-prediction modes may be initialized as zero before encoding/decoding the following N blocks after encoding/decoding the N blocks. Alternatively, a number of occurrence frequencies of an intra-prediction mode of a neighbor block may be accumulated in a unit of a CTU, slice, or picture, and a number of frequencies of all intra-prediction modes may be initialized as zero before encoding/decoding the following CTU, slice, or picture after encoding/decoding all blocks within the CTU, slice, or picture. A number of occurrence frequencies of candidate modes stored in an MPM list of a current block may be checked, and an order of the candidate modes stored in the MPM list may be re-arranged in descending or ascending order according to the number of occurrence frequencies. Herein, when a plurality of intra-prediction modes having the same number of occurrence frequencies is present, the existing order may be maintained or changed. Sorting in descending order may be performed on the basis of a number of occurrence frequencies, and when a current block has five neighbor blocks which are left side, lower left side, upper side, upper right side, and upper left side blocks, candidate modes of an MPM list may be re-arranged by using at least one of the methods described below. For example, when all of the five neighbor blocks have an intra-prediction mode different from each other, and a number of frequencies is one, the existing order may be maintained, and re-arrangement of candidate modes of an MPM list may not be performed.
For example, when one intra-prediction mode having a number of occurrence frequencies of two and three intra-prediction modes having a number of occurrence frequencies of one are present, the one intra-prediction mode having a number of occurrence frequencies of two may be assigned to idx1 of an MPM list, and the three intra-prediction modes having a number of occurrence frequencies of one may be respectively assigned to idx2, idx3, idx4 while maintaining the existing order. For example, when two intra-prediction modes having a number of occurrence frequencies of two and one intra-prediction mode having a number of occurrence frequencies of one are present, the two intra-prediction modes having a number of occurrence frequencies of two may be respectively assigned to idx1, idx2 of an MPM list while maintaining the existing order, and the one intra-prediction mode having a number of occurrence frequencies of one may be assigned to idx3. For example, when one intra-prediction mode having a number of occurrence frequencies of three and two intra-prediction modes having a number of occurrence frequencies of one are present, the one intra-prediction mode having a number of occurrence frequencies of three may be assigned to idx1 of an MPM list, and the two intra-prediction modes having a number of occurrence frequencies of one may be respectively assigned to idx2, idx3 while maintaining the existing order. For example, when one intra-prediction mode having a number of occurrence frequencies of three and one intra-prediction mode having a number of occurrence frequencies of two are present, the one intra-prediction mode having a number of occurrence frequencies of three may be assigned to idx1 of an MPM list, and the one intra-prediction mode having a number of occurrence frequencies of two may be assigned to idx2. For example, when one intra-prediction mode having a number of occurrence frequencies of four and one intra-prediction mode having a number of occurrence frequencies of one are present, the one intra-prediction mode having a number of occurrence frequencies of four may be assigned to idx1 of an MPM list, and the one intra-prediction mode having a number of occurrence frequencies of one may be assigned to idx2. For an MPM list that is re-arranged on the basis of a number of occurrence frequencies, a position at which a PLANAR mode and a DC mode, which are non-angular modes, may be included may be determined on the basis of a specific threshold value ‘Th’. Herein, ‘Th’ may be a predetermined positive integer.
‘Th’ may be derived on the basis of a number of frequencies of a candidate mode belonging to the MPM list. For example, ‘Th’ may be determined on the basis of at least one of statistical values including a maximum value, a minimum value, an average value of a number of frequencies of a candidate mode. Among candidate modes of the re-arranged MPM list, at a position next to candidate modes having a number of occurrence frequencies of being equal to or greater than P, a PLANAR mode and a DC mode may be added as a candidate mode. The P may be an integer of 2 or 3 or greater. After the above process, when all of N candidate modes of an MPM list are not added, the candidate mode may be added until the N candidate modes are padded in the list by using at least one of methods described below. For example, when an angular mode is present among candidate modes present in an MPM list, an intra-prediction mode that is ‘corresponding angle mode±1’ may be added from idx1 to idxN according to an order as a candidate mode. For example, among angular modes, at least one of a horizontal mode, a vertical mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode may be added as a candidate mode. When configuring an MPM list, an order from idx1 to idxN of including candidate modes to the MPM list may be adaptively determined according to intra-prediction modes of at least one of left side, lower left side, upper side, upper right side, and upper left side neighbor blocks of a current block. A number K of referenced neighbor blocks may be determined according to at least one a size or form or both of a current block, an image resolution, and a QP value. Herein, K may be a positive integer. Herein, when a horizontal length of the image resolution or a horizontal length of the current block is W, and a vertical length of the image resolution or a vertical length of the current block is H, the size of the image and the current block may be represented as one of values being greater or smaller than W*H, W, H, W+H, W, and H. Herein, W, and H may be a positive integer. Alternatively, the number K may be a fixed value predefined in an image encoding/decoding apparatus. For example, when an image or current block or both have a specific size or greater, K neighbor blocks or more may be referenced. For example, when the image or current block or both have a size equal to or smaller than a specific size, K neighbor blocks or less may be referenced. For example, when the image or current block or both have a specific size or greater, K neighbor blocks or less may be referenced. For example, when the image or current block or both have a specific size or smaller, K neighbor blocks or more may be referenced. For example, when the image or current block or both have a QP value equal to or greater than a specific value, K neighbor blocks or more may be referenced. For example, when the image or current block or both have a QP value equal to or smaller than a specific value, K neighbor blocks or less may be referenced. For example, when the image or current block or both have a QP value equal to or greater than a specific value, K neighbor blocks or less may be referenced. For example, the image or current block or both have a QP value equal to or smaller than a specific value, K neighbor blocks or more may be referenced. 
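The determination of the number K of referenced neighbor blocks described above can be sketched as follows. The thresholds, and whether K grows or shrinks with the block size and the QP value, are illustrative assumptions; the description allows either behavior.

    def num_reference_neighbors(width, height, qp,
                                size_threshold=256, qp_threshold=32,
                                k_small=3, k_large=5):
        """Return an assumed number K of neighbor blocks to reference for the MPM
        list, based on the current block size (W*H) and the QP value."""
        k = k_large if width * height >= size_threshold else k_small
        if qp >= qp_threshold:
            k = max(k - 1, 1)   # assumed: reference fewer neighbors at high QP
        return k

    # Examples (illustrative thresholds only).
    print(num_reference_neighbors(16, 16, qp=22))   # 5
    print(num_reference_neighbors(8, 4, qp=37))     # 2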
When a number of neighbor blocks referenced by a current block is K, positions of the referenced neighbor blocks may be determined by using one of the methods described below according to the predetermined order. For example, a number corresponding to ceil(K/2) or floor(K/2) of left side neighbor blocks and a number corresponding to a floor(K/2) or a ceil(K/2) of upper side blocks may be referenced. For example, all of left side neighbor blocks may be preferentially referenced, and then when a number of referenced neighbor block is smaller than K, upper side neighbor blocks may be referenced until the K blocks are referenced. Alternatively, on the contrary, upper side neighbor blocks may be preferentially referenced, and then when a number of referenced neighbor blocks is smaller than K, left side neighbor blocks may be referenced until the K blocks are referenced. When a number (K) and a position of neighbor blocks for configuring an MPM list are determined, the MPM list of a current block may be variably configured on the basis of an intra-prediction mode of neighbor blocks. Herein, an MPM mode may have N candidate modes, N may be a positive integer, and K may be defined as a positive integer equal to or smaller than N. For example, in order to configure an MPM list of a current block, for an intra-prediction mode of K referenced neighbor blocks, at least one of conditions described below may be used. The conditions are as follows: when all of K neighbor blocks have a PLANAR mode, when all of K neighbor blocks have a DC mode; when K neighbor blocks have a combination of a PLANAR mode and a DC mode; when all of K neighbor blocks have an angular mode; when, among K neighbor blocks, L neighbor blocks are angular modes and K−L neighbor blocks are PLANAR modes or DC modes (herein, L is a positive integer smaller than K); when all of K neighbor blocks are a horizontal angular mode; when all of K neighbor blocks are a vertical angular mode; and when, among K neighbor blocks, L neighbor blocks are a horizontal angular mode and K-L neighbor blocks are a vertical angular mode (herein, L is a positive integer smaller than K). For example, when all of K neighbor blocks referenced for configuring an MPM list of a current block have an angular mode, or when K neighbor blocks have more angular modes than non-angular modes, angular modes may be preferentially added to from idx1 to idxN of the MPM list as a candidate mode. Alternatively, when all or a part of K referenced neighbor blocks have an angular mode, a corresponding angular mode may be preferentially added to an MPM list. Alternatively, when all or a part of K referenced neighbor blocks have non-angular mode, a corresponding non-angular mode may be preferentially added to the MPM list, and may be added at a predefined fixed position. In one embodiment, when the above condition is satisfied, an angular mode may be preferentially added to an MPM list. The added angular mode may be an angular mode of neighbor blocks referenced by the current block. Herein, among angular modes of neighbor blocks, when at least two modes not overlapping are present, adding may be sequentially performed from idx1 according to a predetermined neighbor block reference order as a candidate mode. The predetermined reference order may be, for example, an order of left side block→upper side block→upper left side block→upper right side block→lower left side block. 
In the above embodiment, angular modes of neighbor blocks may be preferentially added to the MPM list, and then, at least one of modes obtained by adding ±1, ±2, ±3, ±4, ±5, ±6, ±7, ±8 . . . to the angular mode may be added to the MPM list, and at least one of a horizontal mode, a vertical mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode may be added to the MPM list. Intra-prediction mode obtained by adding a specific offset to the angular mode of the neighbor block such as ±1, ±2, ±3, ±4, ±5, ±6, ±7, ±8 . . . may be added to the MPM list in an order as in which an absolute value of the offset increases. After adding the intra-prediction mode obtained by adding the specific offset to the angular mode of the neighbor block such as ±1, ±2, ±3, ±4, ±5, ±6, ±7, ±8 . . . , at least one of a horizontal mode, a vertical mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode may be added to the MPM list. In the above embodiment, when a PLANAR mode and a DC mode which are non-angular mode are added to the MPM list, indexes (position) where the PLANAR mode and the DC mode are added may be fixed or vary. For example, the PLANAR mode may be fixedly added to idxA, and the DC mode may be fixedly added to idxB as a candidate mode. Herein, A and B may be a positive integer equal to or smaller than N. Alternatively, when K neighbor blocks have an angular mode, a PLANAR mode may be added to idxC, and a DC mode may be added to idxD as a candidate mode. Herein, C and D may be a positive integer greater than K equal to or smaller than N. In other words, at least one of the horizontal mode, the vertical mode, the 45 degree angle mode, the 135 degree angle mode, and the 225 degree angle mode may be added to the MPM list, and then at least one of the PLANAR mode and the DC mode may be added to the MPM list. For example, when all of K referenced neighbor blocks are a PLANAR mode, the PLANAR mode may be added to idx1 of an MPM list as a candidate mode. For example, when all of K referenced neighbor blocks are a DC mode, the DC mode may be added to idx1 of an MPM list as a candidate mode. For example, when all of K referenced neighbor blocks are a PLANAR or a DC mode, the PLANAR mode may be added to idx1 or idx2 of an MPM list, and the DC mode may be added to idx2 or idx1 of the MPM list as a candidate mode. The intra prediction mode of the current block for which the N MPM lists are used may be entropy-encoded/decoded in the manners of the following embodiments. In the presence of the same intra prediction mode as the intra prediction mode of the current block among the intra prediction modes included in MPM_LIST_1, MPM_LIST_2, . . . , and MPM_LIST_N, the encoder may entropy-encode the indicators MPM_FLAG_1, MPM_FLAG_2, . . . , and MPM_FLAG_N indicating which list among MPM_LIST_1, MPM_LIST_2, . . . , and MPM_LIST_N includes the same intra prediction mode as the intra prediction mode of the current block. If the intra prediction mode of the current block exists in MPM_LIST_1, MPM_FLAG_1 may be the first value, and MPM_FLAG_2, . . . , and MPM_FLAG_N except for MPM_FLAG_1 may be the second value. In this case, the index information about MPM_LIST_1, MPM_IDX_1 may further be entropy-encoded. Or, if the intra prediction mode of the current block exists in MPM_LIST_2, MPM_FLAG_2 may be the first value and MPM_FLAG_1, . . . , and MPM_FLAG_N except for MPM_FLAG_2 may be the second value. 
In this case, the index information about MPM_LIST_2, MPM_IDX_2 may further be entropy-encoded. Or, if the intra prediction mode of the current block exists in MPM_LIST_N, MPM_FLAG_N may be the first value and MPM_FLAG_1, . . . , and MPM_FLAG_(N−1) except for MPM_FLAG_N may be the second value. In this case, index information about MPM_LIST_N, MPM_IDX_N may further be entropy-encoded. In the absence of the same intra prediction mode as the intra prediction mode of the current block among the intra prediction modes included in MPM_LIST_1, MPM_LIST_2, . . . , and MPM_LIST_N, the encoder may entropy-encode MPM_FLAG_1, MPM_FLAG_2, . . . , and MPM_FLAG_N indicating which list among MPM_LIST_1, MPM_LIST_2, . . . , and MPM_LIST_N includes the same intra prediction mode as the intra prediction mode of the current block to the second value. If MPM_FLAG_1, MPM_FLAG_2, . . . , and MPM_FLAG_N are the second value, the remaining intra prediction mode, REM_MODE may further be entropy-encoded. The decoder may entropy-decode the indicators MPM_FLAG_1, MPM_FLAG_2, . . . , and MPM_FLAG_N indicating which list among MPM_LIST_1, MPM_LIST_2, . . . , and MPM_LIST_N includes the same intra prediction mode as the intra prediction mode of the current block. If MPM_FLAG_1 is the first value and MPM_FLAG_2, . . . , and MPM_FLAG_N except for MPM_FLAG_1 are the second value, the intra prediction mode of the current block may exist in MPM_LIST_1. In this case, the intra prediction mode of the current block may be derived by further entropy-decoding the index information about MPM_LIST_1, MPM_IDX_1. Or, if MPM_FLAG_2 is the first value and MPM_FLAG_1, . . . , and MPM_FLAG_N except for MPM_FLAG_2 are the second value, the intra prediction mode of the current block may exist in MPM_LIST_2. In this case, the intra prediction mode of the current block may be derived by further entropy-decoding the index information about MPM_LIST_2, MPM_IDX_2. Or, if MPM_FLAG_N is the first value and MPM_FLAG_1, . . . , and MPM_FLAG_(N−1) except for MPM_FLAG_N are the second value, the intra prediction mode of the current block may exist in MPM_LIST_N. In this case, the intra prediction mode of the current block may be derived by further entropy-decoding the index information about MPM_LIST_N, MPM_IDX_N. Or, if MPM_FLAG_1, MPM_FLAG_2, . . . , and MPM_FLAG_N are the second value, the intra prediction mode of the current block may be derived by further entropy-decoding the remaining intra prediction mode, REM_MODE. Herein, the case where all of MPM_FLAG_1, MPM_FLAG_2, . . . , and MPM_FLAG_N are the first value may not occur. Now, a description will be given of another embodiment for the case where N MPM lists are used for a current block.
TABLE 1

coding_unit( x0, y0, log2CbSize ) {                        Descriptor
  ...
  MPM_FLAG_1[ x0 + i ][ y0 + j ]                           ae(v)
  if( MPM_FLAG_1[ x0 + i ][ y0 + j ] )
    MPM_IDX_1[ x0 + i ][ y0 + j ]                          ae(v)
  else {
    MPM_FLAG_2[ x0 + i ][ y0 + j ]                         ae(v)
    if( MPM_FLAG_2[ x0 + i ][ y0 + j ] )
      MPM_IDX_2[ x0 + i ][ y0 + j ]                        ae(v)
    else {
      ...
      MPM_FLAG_N[ x0 + i ][ y0 + j ]                       ae(v)
      if( MPM_FLAG_N[ x0 + i ][ y0 + j ] )
        MPM_IDX_N[ x0 + i ][ y0 + j ]                      ae(v)
      else
        REM_MODE[ x0 + i ][ y0 + j ]                       ae(v)
    }
  }
  ...
}

As in the example of [Table 1], the encoder may entropy-encode the intra prediction mode of the current block by sequentially checking whether the same intra prediction mode as the intra prediction mode of the current block exists among the intra prediction modes included in each of MPM_LIST_1, MPM_LIST_2, . . . , and MPM_LIST_N according to at least one of the orders of making the plurality of MPM lists. A flag specifying an MPM list such as MPM_FLAG_N or an index indicating one of a plurality of MPM lists may be encoded/decoded. Information indicating whether an MPM-based intra prediction derivation method is used for a current block (or a current slice, a current picture, a current sequence, etc.) may be encoded/decoded. If the information indicates that the MPM-based intra prediction derivation method is used, the index may be encoded/decoded. At least one of the number or types of a plurality of MPM lists may be fixedly predefined in the encoder/decoder, or may be variably determined based on parameters related to the size, depth, shape, position, and so on of the current block/a neighbor block. For example, the number of MPM lists predefined in the encoder/decoder may be 1, 2, 3, or larger. The maximum number of intra prediction modes included in each MPM list may be forced to be equal. The maximum number may be fixedly preset in the encoder/decoder, or may be signaled in a predetermined unit (e.g., a sequence, a picture, a slice, a block, etc.). If the number of intra prediction modes included in a specific MPM list is smaller than the maximum number, a predetermined mode may be added to the specific MPM list. The added mode may be a preset default mode, or an intra prediction mode included in another MPM list. Notably, a mode different from the intra prediction modes included in the specific MPM list may be added. A redundancy check between MPM lists may be omitted. One of the MPM lists may share at least one same intra prediction mode with another MPM list. Now, another embodiment for the case where N MPM lists are used for a current block will be described. The encoder may make a plurality of MPM lists ranging from MPM_LIST_1 to MPM_LIST_N in at least one of the orders of making a plurality of MPM lists. The total number of candidate intra prediction modes in the N MPM lists may be K. N and K may be positive integers. For example, MPM_LIST_combined may be made up of K or fewer candidate intra prediction modes out of the candidate intra prediction modes of the N MPM lists. For example, if the same intra prediction mode as the intra prediction mode of the current block exists in MPM_LIST_combined, an indicator MPM_FLAG_combined indicating whether the same intra prediction mode as the intra prediction mode of the current block exists in MPM_LIST_combined may be entropy-encoded to a first value. Herein, index information about MPM_LIST_combined, MPM_IDX_combined may additionally be entropy-encoded. In the absence of the same intra prediction mode as the intra prediction mode of the current block in MPM_LIST_combined, MPM_FLAG_combined may be entropy-encoded to a second value.
If MPM_FLAG_combined is the second value, the remaining intra prediction mode, REM_MODE may additionally be entropy-encoded. The decoder may entropy-decode the indicator MPM_FLAG_combined indicating whether the same intra prediction mode as the intra prediction mode of the current block exists in MPM_LIST_combined. If MPM_FLAG_combined is the first value, the intra prediction mode of the current block may be derived by further entropy-decoding the index information MPM_IDX_combined. If MPM_FLAG_combined is the second value, the intra prediction mode of the current block may be derived by additionally entropy-decoding the remaining intra prediction mode, REM_MODE. According to the present invention, the intra prediction mode of the current block may be derived by encoding/decoding. Herein, the intra prediction mode of the current block may be entropy-encoded/decoded without using the intra prediction mode of a neighbor block. The intra-prediction mode of the current block may be induced using an intra-prediction mode of another color component. For example, when the current block is a chroma block, an intra-prediction mode of at least one relevant-luma block corresponding to the chroma target block may be used to induce an intra-prediction mode for the chroma block. Here, the relevant-luma block may be determined based on at least one of the position, size, shape, or coding parameter of the chroma block. Alternatively, the relevant-luma block may be determined based on at least one of the size, shape, or coding parameter of the luma block. The relevant-luma block may be determined using a luma block including a sample corresponding to the central position of the chroma block, or using at least two luma blocks respectively including samples corresponding to at least two positions of chroma blocks. The at least two positions may include an upper left sample position and a center sample position. When there are several relevant-luma blocks, a statistic value of intra-prediction modes of at least two relevant-luma blocks may be induced as the intra-prediction mode of the chroma block. Alternatively, an intra-prediction mode of a relatively large relevant-luma block may be induced as the intra-prediction mode of the chroma block. Alternatively, when the size of the luma block corresponding to a predetermined position of the chroma block is equal to or greater than the size of the chroma block, the intra-prediction mode of the chroma block may be induced using the intra-prediction mode of the relevant-luma block. For example, when the current block is partitioned into sub-blocks, the intra-prediction mode of each of the partitioned sub-blocks may be induced using at least one method of inducing the intra-prediction mode of the current block. When deriving an intra-prediction mode, a number of intra-prediction modes used when encoding/decoding may vary according to a form of a current block having a W×H size. Herein, W, and H may be a positive integer. When a number of angular modes used for a square block (when W and H have the same value) is N (herein, N is a positive integer), a number of angular modes for a non-square block (when W and H have different values) may be K smaller than N. Herein, both N and K may be a positive integer. FIG.8is a view showing a number of intra-prediction modes that are usable according to a block form. FIG.9is a view showing an angle of an angular intra-prediction mode. 
FIG. 8 is a view showing various examples of a non-square block included in a CTU having a 128×128 size. In FIG. 8, a small block may mean a 4×4 block. Numbers within the block may mean a Z-scanning order. In FIGS. 8(a) to 8(d), shaded areas may mean a current block. Accordingly, the current block shown in FIGS. 8(a) and 8(b) shows an example of a non-square block where W is greater than H. In addition, the current block shown in FIGS. 8(c) and 8(d) shows an example of a non-square block where W is smaller than H. When a number of angular modes for a square block is N, a number of intra-prediction modes (angular modes) for a non-square block may be determined by using one of the examples described below. As shown in FIGS. 8(a) and 8(b), when the current block is a horizontally long non-square block, modes between the two arrows may be used. Referring to FIG. 9, for a horizontally long non-square block, intra-prediction may be performed by using intra-prediction modes between a 135 degree angle and a 225 degree angle. In other words, intra-prediction may be performed by using a horizontal mode and at least one of modes adjacent to the horizontal mode. In addition, a number of usable intra-prediction modes is limited, and thus encoding/decoding of an intra-prediction mode may be performed on the basis of the limited number of usable intra-prediction modes. FIGS. 8(c) and 8(d) show cases where the current block is a vertically long non-square block. Herein, referring to FIG. 9, intra-prediction modes between a 45 degree angle and a 135 degree angle may be used. In other words, intra-prediction may be performed by using a vertical mode and at least one of modes adjacent to the vertical mode. When two non-angular modes (mode 0 and mode 1) and 65 angular modes (mode 2 to mode 66) are present, in FIGS. 8(a) and 8(b), modes from mode 2 corresponding to a 225 degree angle to mode 34 corresponding to a 135 degree angle may be used. Alternatively, on the basis of mode 18 corresponding to a 180 degree angle, intra-prediction of a current block may be performed and an intra-prediction mode may be encoded/decoded by using modes from mode (18−P) to mode (18+P). Herein, P may be a positive integer, and when angular modes are 65 in total, P may be a positive integer smaller than 16. In addition, in FIGS. 8(c) and 8(d), 33 modes from mode 34 corresponding to a 135 degree angle to mode 66 corresponding to a 45 degree angle may be used. Alternatively, on the basis of mode 50 corresponding to a 90 degree angle, intra-prediction for a current block may be performed and an intra-prediction mode may be encoded/decoded by using modes from mode (50−P) to mode (50+P). Herein, P may be a positive integer, and when angular modes are 65 in total, P may be a positive integer smaller than 16. For example, in FIGS. 8(a) to 8(d), intra-prediction for the current block may be performed and an intra-prediction mode may be encoded/decoded by using M=N/P intra-prediction modes. Herein, P may be P=2^K (herein, K is a positive integer). When an intra-prediction mode identical to an intra-prediction mode of a current block is not present in the derived MPM list, the intra-prediction mode of the current block may be encoded/decoded by using the method described below. In order to encode/decode an intra-prediction mode of a current block, intra-prediction modes that are not included in an MPM list including K candidate modes may be sorted in at least one of descending and ascending orders.
When a total number of intra-prediction modes usable by the current block is N, a number of the sorted intra-prediction modes may be N−K. Herein, N may be a positive integer, and K may be a positive integer equal to or smaller than N. A number L of bits required for encoding/decoding an intra-prediction mode of a current block from the sorted intra-prediction modes may vary according to a block form as below. Herein, L may be a positive integer. A number of intra-prediction modes usable by a square block may be N, and a number of intra-prediction modes usable by a non-square block may be M=N/2. Herein, when a number of bits required for encoding/decoding an intra-prediction mode of a square block is L, a number of bits required for encoding/decoding an intra-prediction mode of a non-square block may be L−1. As another example, when a number of intra-prediction modes usable by a non-square block is M=N/4, a number of bits required for encoding/decoding an intra-prediction mode of a non-square block may be L−2. Accordingly, when a number of intra-prediction modes usable by a non-square block is M=N/2^K, a number of bits required for encoding/decoding an intra-prediction mode of a non-square block may be L−K. Herein, M may be a positive integer, and 2^K may be a positive integer equal to or smaller than N. When a number of angular modes usable by a non-square block is K, an MPM list configured for encoding/decoding an intra-prediction mode may be configured on the basis of intra-prediction modes corresponding to the K angular modes. For example, an intra-prediction mode not corresponding to the K angular modes may not be added to an MPM list. Alternatively, intra-prediction modes corresponding to the K angular modes may be preferentially added to an MPM list, and then intra-prediction modes not corresponding to the K angular modes may be added to the MPM list. Alternatively, intra-prediction modes corresponding to the K angular modes may be preferentially added to an MPM list, and then at least one of a PLANAR mode, a DC mode, a vertical mode, a horizontal mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode may be added to the MPM list. Alternatively, among the intra-prediction modes corresponding to the K angular modes, a maximum number L of intra-prediction modes that may be added to an MPM list may be preset in an encoder/decoder, or may be signaled from the encoder to the decoder. Herein, L may be a positive integer smaller than K. When deriving an intra-prediction mode of a current block, the intra-prediction mode of the current block may be derived by using indicators determining whether or not the intra-prediction mode of the current block matches with at least one predetermined mode. The predetermined intra-prediction mode indicated by the indicators may be at least one of the modes described below.
PLANAR mode
DC mode
At least one of a vertical mode, a horizontal mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode
Mode represented by combining MPM candidate modes
Deriving the intra-prediction mode of the current block by using the indicators may be performed at the timing described below. When an intra-prediction mode of a current block does not match with any of the candidate modes of a plurality of MPM lists, encoding/decoding using the indicator may be performed.
Herein, when a total number of intra-prediction modes usable by a current block is N, and the numbers of candidate modes in the first to the K-th MPM lists are respectively P_1 to P_K, an intra-prediction mode of a current block may be one of L = N − (P_1 + P_2 + . . . + P_K) remaining modes. Herein, N may be a positive integer, P_1 + P_2 + . . . + P_K may be a positive integer equal to or smaller than N, and L may be a positive integer. Herein, in order to derive an intra-prediction mode of a current block from the L remaining modes, indicators indicating whether or not the intra-prediction mode of the current block matches at least one predetermined mode may be encoded/decoded. For example, when the indicator indicating whether or not the mode matches the first predetermined mode has a first value, the intra-prediction mode of the current block may be derived as the first predetermined mode. For example, when all of the indicators indicating whether or not the mode matches up to the T-th predetermined mode have a second value and the indicator indicating whether or not the mode matches the (T+1)-th predetermined mode has a first value, the intra-prediction mode of the current block may be derived as the (T+1)-th predetermined mode. For example, when all of the indicators indicating whether or not the mode matches a predetermined mode have a second value, the intra-prediction mode of the current block may be entropy encoded/decoded. Herein, T may be a positive integer equal to or smaller than N, the first value may be 1, and the second value may be zero. Alternatively, encoding/decoding using the indicators may be performed before configuring an MPM list and before determining whether or not the candidate modes included in the MPM list match the intra-prediction mode of the current block. Herein, when a total number of intra-prediction modes usable by a current block is N, indicators indicating whether or not an intra-prediction mode of a current block matches at least one predetermined mode may be encoded/decoded. For example, when the indicator indicating whether or not the mode matches the first predetermined mode has a first value, the intra-prediction mode of the current block may be derived as the first predetermined mode. For example, when all of the indicators indicating whether or not the mode matches up to the T-th predetermined mode have a second value and the indicator indicating whether or not the mode matches the (T+1)-th predetermined mode has a first value, the intra-prediction mode of the current block may be derived as the (T+1)-th predetermined mode. For example, when all of the indicators indicating whether or not the mode matches a predetermined mode have a second value, the intra-prediction mode of the current block may be entropy encoded/decoded. Herein, T may be a positive integer equal to or smaller than N, the first value may be 1, and the second value may be zero. A specific MPM list may be configured by using at least one of the intra-prediction modes indicated by the indicators. The specific MPM list may include at least one of a PLANAR mode, a DC mode, a vertical mode, a horizontal mode, a 45 degree angle mode, a 135 degree angle mode, and a 225 degree angle mode. The specific MPM list may be configured before configuring a general MPM list. Herein, the general MPM list may mean an MPM list of the above examples.
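A minimal, non-normative sketch of the indicator-based derivation described above is given below; the ordered list of predetermined modes, the example mode numbers, and the two bitstream-reading helpers are assumptions made only for illustration.

PLANAR, DC, HOR, VER = 0, 1, 18, 50              # example mode numbers (67-mode scheme)
PREDETERMINED_MODES = [PLANAR, DC, VER, HOR]     # checked in this example order

def derive_mode_with_indicators(read_flag, decode_remaining_mode):
    # read_flag() returns one indicator; decode_remaining_mode() entropy decodes
    # the mode when every indicator carries the second value (0).
    for mode in PREDETERMINED_MODES:
        if read_flag() == 1:          # first value: the current block uses this mode
            return mode
    return decode_remaining_mode()    # all indicators were 0

A specific MPM list built from such predetermined modes, as described above, can be checked in the same order before the general MPM list is consulted.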
In other words, after configuring the specific MPM list by using at least one of the predetermined intra-prediction modes indicated by the indicators, the general MPM list may be configured by using at least one of the intra-prediction modes except for the intra-prediction modes included in the specific MPM list. The specific MPM list may be configured without using an intra-prediction mode of a neighbor block of the current block. Accordingly, a process or calculation for deriving the intra-prediction mode of the neighbor block is not required, thereby reducing the complexity of the encoder/decoder. Information (a flag) indicating whether or not the specific MPM list is used may be signaled from the encoder to the decoder. In addition, a specific MPM index for indicating an MPM candidate within the specific MPM list may be additionally signaled from the encoder to the decoder. In other words, at least one of the specific MPM index and a general MPM index may be signaled from the encoder to the decoder. When a specific MPM index that is entropy decoded in the decoder indicates an MPM candidate within the specific MPM list, intra-prediction for a current block may be performed by using the intra-prediction mode indicated by the corresponding MPM candidate. The information on whether or not the specific MPM list is used and the specific MPM index may be signaled from the encoder to the decoder in the form of one syntax element. For example, when the one syntax element indicates a value of zero, it may mean that the specific MPM list is not used, and when the one syntax element indicates a value equal to or greater than 1, it may indicate a specific MPM candidate within the specific MPM list. In addition, remaining intra-prediction modes may be configured by using at least one of the intra-prediction modes not included in the specific MPM list and the general MPM list. Information on the configured remaining intra-prediction modes may be signaled from the encoder to the decoder. Intra prediction information may be entropy-encoded/decoded. The intra prediction information may be signaled in at least one of a VPS (video parameter set), an SPS (sequence parameter set), a PPS (picture parameter set), an APS (adaptation parameter set), a slice header, a tile header, a unit of a CTU, a unit of a block, a unit of a CU, a unit of a PU and a unit of a TU. For example, the intra prediction information may comprise at least one among the pieces of information below.
A flag indicating whether an MPM is matched: e.g.) prev_intra_luma_pred_flag
An index indicating a position in an MPM list: e.g.) mpm_idx
Intra luma prediction mode information: e.g.) rem_intra_luma_pred_mode
Intra chroma prediction mode information: e.g.) intra_chroma_pred_mode
An indicator (MPM flag) indicating, for each of N MPM lists, whether the same intra prediction mode as the intra prediction mode of the current block is included among the intra prediction modes of the MPM list, when the intra prediction mode of the current block is derived or entropy-encoded/decoded using the N MPM lists: e.g.) MPM_FLAG_1, MPM_FLAG_2, . . . , MPM_FLAG_N
Index information indicating, when the same intra prediction mode as the intra prediction mode of the current block is included among the intra prediction modes of a specific one of the N MPM lists, the position or sequence of the intra prediction mode in the MPM list: e.g.) MPM_IDX_1, MPM_IDX_2, . . . , MPM_IDX_N
When an MPM (Most Probable Mode) flag is 1, an intra prediction mode of a luma component may be derived from candidate modes including the intra prediction modes of adjacent units that have already been encoded/decoded, by using an MPM index mpm_idx. When the MPM (Most Probable Mode) flag is 0, the intra prediction mode of the luma component may be encoded/decoded by using the intra prediction mode information on the luma component, rem_intra_luma_pred_mode (a sketch of this luma mode derivation is given below). An intra prediction mode of a chroma component may be encoded/decoded by using the intra prediction mode information on the chroma component, intra_chroma_pred_mode, and/or the intra prediction mode of a corresponding luma component block. The intra prediction information may be entropy-encoded/decoded based on at least one of the coding parameters. At least one of the above-described pieces of intra prediction information may not be signaled based on at least one of the size and shape of the block. For example, if the size of the current block is a predetermined size, at least one piece of intra prediction information about the current block may not be signaled, and at least one piece of information about intra prediction corresponding to the size of a previously encoded/decoded upper level block may be used. For example, if the current block is shaped into a rectangle, at least one piece of intra prediction information about the current block may not be signaled, and at least one piece of information about intra prediction corresponding to the size of a previously encoded/decoded upper level block may be used. When at least one of the pieces of intra prediction information is entropy-encoded/decoded, at least one of the following binarization methods may be used.
Truncated Rice binarization method
K-th order Exp_Golomb binarization method
Limited K-th order Exp_Golomb binarization method
Fixed-length binarization method
Unary binarization method
Truncated Unary binarization method
Now, a detailed description will be given of the reference sample construction step S520. In intra prediction of the current block, or of a sub-block having a smaller size and/or shape than the current block, based on the derived intra prediction mode, a reference sample may be constructed for the prediction. The following description is given in the context of the current block, and the current block may mean a sub-block. The reference sample may be constructed using one or more reconstructed samples or sample combinations neighboring the current block. Additionally, filtering may be applied in constructing the reference sample. Herein, the reference sample may be constructed using each reconstructed sample on a plurality of reconstructed sample lines as it is. Or, the reference sample may be constructed after filtering between samples on the same reconstructed sample line. Or, the reference sample may be constructed after filtering between samples on different reconstructed sample lines. The constructed reference sample may be denoted by ref[m, n], and a reconstructed neighbor sample or a sample obtained by filtering the reconstructed neighbor sample may be denoted by rec[m, n]. Herein, m or n may be a predetermined integer value. In the case where the current block is of size W(horizontal)×H(vertical), if the left uppermost sample position of the current block is (0, 0), the relative position of the left uppermost reference sample closest to that sample position may be set to (−1, −1).
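The MPM flag and index semantics described above can be sketched as follows; the helper is only an illustration that assumes 67 intra prediction modes and that the non-MPM modes are indexed in ascending order, not a normative decoding process.

def decode_luma_intra_mode(prev_intra_luma_pred_flag, mpm_idx,
                           rem_intra_luma_pred_mode, mpm_list):
    if prev_intra_luma_pred_flag == 1:
        # The mode is one of the candidates derived from already
        # encoded/decoded adjacent units.
        return mpm_list[mpm_idx]
    # Otherwise the mode is taken from the modes not present in the MPM list,
    # sorted here in ascending order.
    non_mpm = sorted(m for m in range(67) if m not in mpm_list)
    return non_mpm[rem_intra_luma_pred_mode]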
FIG. 10 is an exemplary view depicting neighbor reconstructed sample lines which may be used for intra prediction of a current block. As illustrated in FIG. 10, a reference sample may be constructed using one or more reconstructed sample lines adjacent to the current block. For example, one of the plurality of reconstructed sample lines illustrated in FIG. 10 may be selected, and a reference sample may be constructed using the selected reconstructed sample line. A predetermined one of the plurality of reconstructed sample lines may be fixedly selected as the selected reconstructed sample line. Or, a specific one of the plurality of reconstructed sample lines may be adaptively selected as the selected reconstructed sample line. In this case, an indicator for the selected reconstructed sample line may be signaled. For example, a reference sample may be constructed using one or more of the plurality of reconstructed sample lines illustrated in FIG. 10 in combination. For example, a reference sample may be constructed as a weighted sum (or weighted mean) of one or more reconstructed samples. Weights used for the weighted sum may be assigned based on distances from the current block. Herein, a larger weight may be assigned for a shorter distance to the current block. For example, the following [Equation 1] may be used.

ref[−1, −1] = (rec[−2, −1] + 2*rec[−1, −1] + rec[−1, −2] + 2) >> 2
ref[x, −1] = (rec[x, −2] + 3*rec[x, −1] + 2) >> 2, (x = 0 ~ W+H−1)
ref[−1, y] = (rec[−2, y] + 3*rec[−1, y] + 2) >> 2, (y = 0 ~ W+H−1)   [Equation 1]

Or, a reference sample may be constructed using at least one of the mean value, maximum value, minimum value, median value, and most frequent value of a plurality of reconstructed samples, based on at least one of distances from the current block or intra prediction modes. Or, a reference sample may be constructed based on a change (variation) in the values of a plurality of contiguous reconstructed samples. For example, a reference sample may be constructed based on at least one of whether the difference between the values of two contiguous reconstructed samples is equal to or larger than a threshold, whether the values of the two contiguous reconstructed samples change continuously or non-continuously, and so on. For example, if the difference between rec[−1, −1] and rec[−2, −1] is equal to or larger than a threshold, ref[−1, −1] may be determined to be rec[−1, −1], or a value obtained by applying a weighted mean with a predetermined weight assigned to rec[−1, −1]. For example, if the values of a plurality of contiguous reconstructed samples change by n each time as they come nearer to the current block, the reference sample ref[−1, −1] may be determined to be rec[−1, −1] − n. At least one among the number and positions of reconstructed sample lines and the constructing method used for constructing the reference sample may be determined differently according to whether an upper or left boundary of the current block corresponds to a boundary of at least one among a picture, a slice, a tile and a Coding Tree Block (CTB). For example, in constructing a reference sample using reconstructed sample lines 1 and 2, when the upper boundary of the current block corresponds to a CTB boundary, reconstructed sample line 1 may be used for the upper side and reconstructed sample lines 1 and 2 may be used for the left side.
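The following helper is a direct transcription of [Equation 1] above; it assumes rec[m, n] is available as a mapping of reconstructed neighbor samples indexed by (x, y), with the left uppermost sample of the current W×H block at (0, 0) as defined earlier.

def build_filtered_reference(rec, W, H):
    ref = {}
    ref[(-1, -1)] = (rec[(-2, -1)] + 2 * rec[(-1, -1)] + rec[(-1, -2)] + 2) >> 2
    for x in range(W + H):                     # x = 0 ~ W+H-1
        ref[(x, -1)] = (rec[(x, -2)] + 3 * rec[(x, -1)] + 2) >> 2
    for y in range(W + H):                     # y = 0 ~ W+H-1
        ref[(-1, y)] = (rec[(-2, y)] + 3 * rec[(-1, y)] + 2) >> 2
    return ref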
For example, in constructing a reference sample using reconstructed sample lines 1 to 4, when the upper boundary of the current block corresponds to a CTB boundary, reconstructed sample lines 1 and 2 may be used for the upper side and reconstructed sample lines 1 to 4 may be used for the left side. For example, in constructing a reference sample using reconstructed sample line 2, when the upper boundary of the current block corresponds to a CTB boundary, reconstructed sample line 1 may be used for the upper side and reconstructed sample line 2 may be used for the left side. One or more reference sample lines may be constructed through the above process. The reference sample constructing method for the upper side of the current block may be different from that for the left side. Information indicating that a reference sample has been constructed using at least one method among the above methods may be encoded/decoded. For example, information indicating whether a plurality of reconstructed sample lines are used may be encoded/decoded. If the current block is divided into a plurality of sub-blocks, and each sub-block has an independent intra prediction mode, a reference sample may be constructed for each sub-block. FIG. 11 is a view depicting an embodiment of constructing a reference sample for a sub-block included in a current block. As illustrated in FIG. 11, if the current block is of size 16×16 and its 16 4×4 sub-blocks have independent intra prediction modes, a reference sample for each sub-block may be constructed in at least one of the following methods according to the scanning scheme for predicting a sub-block. For example, a reference sample may be constructed for each sub-block, using N reconstructed sample lines neighboring the current block. In the example illustrated in FIG. 11, N is 1. For example, in the case where a plurality of sub-blocks are predicted in a raster scan order of 1→2→3→ . . . →15→16, a reference sample for a Kth sub-block may be constructed using a sample of at least one of the already encoded/decoded left, top, top right, and bottom left sub-blocks. For example, in the case where a plurality of sub-blocks are predicted in a Z scan order of 1→2→5→6→3→4→7→ . . . →12→15→16, a reference sample for a Kth sub-block may be constructed using a sample of at least one of the already encoded/decoded left, top, top right, and bottom left sub-blocks. For example, in the case where a plurality of sub-blocks are predicted in a zig-zag scan order of 1→2→5→9→6→3→4→ . . . →12→15→16, a reference sample for a Kth sub-block may be constructed using a sample of at least one of the already encoded/decoded left, top, top right, and bottom left sub-blocks. For example, in the case where a plurality of sub-blocks are predicted in a vertical scan order of 1→5→9→13→2→6→ . . . →8→12→16, a reference sample for a Kth sub-block may be constructed using a sample of at least one of the already encoded/decoded left, top, top right, and bottom left sub-blocks. In the case where a plurality of sub-blocks are predicted in a scan order other than the above scan orders, a reference sample for a Kth sub-block may be constructed using a sample of at least one of the already encoded/decoded left, top, top right, and bottom left sub-blocks. In selecting the reference sample, a decision as to the availability of a block including the reference sample and/or padding may be performed. For example, if the block including the reference sample is available, the reference sample may be used.
Meanwhile, if the block including the reference sample is not available, the unavailable reference sample may be replaced with one or more available neighbor reference samples by padding. If the reference sample exists outside at least one of a picture boundary, a tile boundary, a slice boundary, a CTB boundary, and a predetermined boundary, it may be determined that the reference sample is not available. In the case where the current block is encoded by CIP (constrained intra prediction), if the block including the reference sample is encoded/decoded in an inter prediction mode, it may be determined that the reference sample is not available. FIG. 12 is a view depicting a method for replacing an unavailable reconstructed sample, using an available reconstructed sample. If it is determined that a neighbor reconstructed sample is unavailable, the unavailable sample may be replaced using a neighbor available reconstructed sample. For example, as illustrated in FIG. 12, in the presence of available samples and unavailable samples, an unavailable sample may be replaced using one or more available samples. The sample value of an unavailable sample may be replaced with the sample value of an available sample in a predetermined order. An available sample adjacent to an unavailable sample may be used to replace the unavailable sample. In the absence of an adjacent available sample, the first appearing available sample or the closest available sample may be used. The replacement order of unavailable samples may be a left lowermost to right uppermost order. Or the replacement order of unavailable samples may be a right uppermost to left lowermost order. Or the replacement order of unavailable samples may be a left uppermost to right uppermost and/or left lowermost order. Or the replacement order of unavailable samples may be a right uppermost and/or left lowermost to left uppermost order. As illustrated in FIG. 12, unavailable samples may be replaced in an order from a left lowermost sample position 0 to a right uppermost sample. In this case, the values of the first four unavailable samples may be replaced with the value of the first appearing or closest available sample a. The values of the next 13 unavailable samples may be replaced with the value of the last available sample b. Or, an unavailable sample may be replaced using a combination of available samples. For example, the unavailable sample may be replaced using the mean value of the available samples adjacent to both ends of the unavailable sample. For example, in FIG. 12, the first four unavailable samples may be filled with the value of the available sample a, and the next 13 unavailable samples may be filled with the mean value of the available sample b and an available sample c. Or, the 13 unavailable samples may be filled with any value between the values of the available samples b and c. In this case, the unavailable samples may be replaced with different values. For example, as an unavailable sample is nearer to the available sample a, the value of the unavailable sample may be replaced with a value close to the value of the available sample a. Similarly, as an unavailable sample is nearer to the available sample b, the value of the unavailable sample may be replaced with a value close to the value of the available sample b. That is, the value of an unavailable sample may be determined based on the distance from the unavailable sample to the available sample a and/or b.
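A simplified sketch of the padding process of FIG. 12 is shown below: the reference samples are visited in the left lowermost to right uppermost order, and each unavailable sample copies the value of the most recent available sample (or of the first appearing available sample when none has been seen yet). The interpolation variants mentioned above are not shown.

def pad_unavailable_samples(samples, available):
    # samples: reference sample values in scan order
    # available: booleans of the same length marking availability
    first_available = next((v for v, a in zip(samples, available) if a), None)
    if first_available is None:
        return list(samples)              # no available sample at all
    out, last = [], first_available
    for v, a in zip(samples, available):
        if a:
            last = v
        out.append(v if a else last)
    return out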
To replace an unavailable sample, one or more of a plurality of methods including the above methods may be selectively applied. A method for replacing an unavailable sample may be signaled by information included in a bitstream, or a method predetermined by an encoder and a decoder may be used. Or the method for replacing an unavailable sample may be derived by a predetermined scheme. For example, a method for replacing an unavailable sample may be selected based on the difference between the values of the available samples a and b and/or the number of unavailable samples. For example, a method for replacing an unavailable sample may be selected based on a comparison between the difference between the values of two available samples and a threshold and/or a comparison between the number of unavailable samples and a threshold. For example, if the difference between the values of the two available samples is larger than the threshold and/or if the number of unavailable samples is larger than the threshold, the values of the unavailable samples may be replaced with different values. For the constructed one or more reference samples, it may be determined whether to apply filtering according to at least one of the intra prediction mode, size, and shape of the current block. If the filtering is applied, a different filter type may be used according to at least one of the intra prediction mode, size, and shape of the current block. For example, for each of the plurality of reference sample lines, whether filtering is applied and/or the filter type may be determined differently. For example, filtering may be applied to a first neighbor line, whereas filtering may not be applied to a second neighbor line. For example, both a filtered value and a non-filtered value may be used for the reference sample. For example, at least one among a 3-tap filter, a 5-tap filter, and a 7-tap filter may be selected and applied according to at least one of the intra prediction mode, size, and shape of a block. Hereinbelow, the step of performing intra prediction (S530) will be described in detail. Intra prediction may be performed for the current block or a sub-block based on the derived intra prediction mode and reference sample. In the following description, the current block may mean a sub-block. For example, non-directional intra prediction may be performed. The non-directional intra prediction mode may be at least one of the DC mode and the Planar mode. Intra prediction in the DC mode may be performed using the mean value of one or more of the constructed reference samples. Filtering may be applied to one or more prediction samples at the boundary of the current block. The DC-mode intra prediction may be performed adaptively according to at least one of the size and shape of the current block. FIG. 13 is an exemplary view illustrating intra prediction according to shapes of a current block. For example, as illustrated in (a) of FIG. 13, if the current block is shaped into a square, the current block may be predicted using the mean value of reference samples above and to the left of the current block. For example, as illustrated in (b) of FIG. 13, if the current block is shaped into a rectangle, the current block may be predicted using the mean value of reference samples neighboring the longer of the width and height of the current block.
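An illustrative, non-normative version of the shape-adaptive DC prediction of FIG. 13 is given below: a square block averages both the top and left reference samples, while a rectangular block averages only the reference samples along its longer side.

def dc_value(top_ref, left_ref, W, H):
    if W == H:
        # Square block: mean of the W top and H left reference samples.
        return (sum(top_ref[:W]) + sum(left_ref[:H]) + (W + H) // 2) // (W + H)
    if W > H:
        return (sum(top_ref[:W]) + W // 2) // W     # longer (top) side only
    return (sum(left_ref[:H]) + H // 2) // H        # longer (left) side only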
For example, if the size of the current block falls within a predetermined range, predetermined samples may be selected from among the top or left reference samples of the current block, and prediction may be performed using the mean value of the selected samples. Planar-mode intra prediction may be performed by calculating a weighted sum in consideration of distances from the one or more constructed reference samples according to the positions of the target intra prediction samples of the current block. For example, a prediction block may be calculated as a weighted sum of N reference samples dependent on the position (x, y) of a target prediction sample, as sketched below. N may be a positive integer, for example, 4. For example, directional intra prediction may be performed. The directional prediction mode may be at least one of a horizontal mode, a vertical mode, and a mode having a predetermined angle. Horizontal/vertical-mode intra prediction may be performed using one or more reference samples on a horizontal/vertical line at the position of a target intra prediction sample. Intra prediction in a mode having a predetermined angle may be performed using one or more reference samples on a line at the predetermined angle with respect to the position of a target intra prediction sample. Herein, N reference samples may be used. N may be a positive integer such as 2, 3, 4, 5, or 6. Further, for example, prediction may be performed by applying an N-tap filter such as a 2-tap, 3-tap, 4-tap, 5-tap, or 6-tap filter. For example, intra prediction may be performed based on position information. The position information may be encoded/decoded, and a reconstructed sample block at the position may be derived as an intra prediction block for the current block. Or, a block similar to the current block, detected by the decoder, may be derived as an intra prediction block for the current block. For example, intra color component prediction may be performed. For example, intra prediction may be performed for a chroma component using a reconstructed luma component of the current block. Or, intra prediction may be performed for another chroma component Cr using one reconstructed chroma component Cb of the current block. Intra prediction may be performed by using one or more of the afore-described various intra prediction methods in combination. For example, an intra prediction block may be constructed for the current block through a weighted sum of a block predicted using a predetermined non-directional intra prediction mode and a block predicted using a predetermined directional intra prediction mode. Herein, a different weight may be applied according to at least one of the intra prediction mode, block size, shape, and/or sample position of the current block. The above embodiments may be performed in the same method in an encoder and a decoder. The order of applying the above embodiments may be different between the encoder and the decoder, or the order of applying the above embodiments may be the same in the encoder and the decoder. The above embodiments may be performed on each of a luma signal and a chroma signal, or may be identically performed on the luma and chroma signals. A block form to which the above embodiments of the present invention are applied may have a square form or a non-square form.
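A hedged sketch of the Planar prediction mentioned above, expressed as the position-dependent weighted sum of N = 4 reference samples, is shown below; the particular weighting follows a familiar HEVC/VVC-style Planar formulation and is given only as an example, using the ref[] convention of the earlier sketches.

def planar_predict(ref, W, H):
    pred = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Horizontal and vertical interpolations between the left/right and
            # top/bottom reference samples for the target position (x, y).
            hor = (W - 1 - x) * ref[(-1, y)] + (x + 1) * ref[(W, -1)]
            ver = (H - 1 - y) * ref[(x, -1)] + (y + 1) * ref[(-1, H)]
            pred[y][x] = (hor * H + ver * W + W * H) // (2 * W * H)
    return pred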
The above embodiments of the present invention may be applied depending on a size of at least one of a coding block, a prediction block, a transform block, a block, a current block, a coding unit, a prediction unit, a transform unit, a unit, and a current unit. Herein, the size may be defined as a minimum size or a maximum size or both so that the above embodiments are applied, or may be defined as a fixed size to which the above embodiments are applied. In addition, in the above embodiments, a first embodiment may be applied to a first size, and a second embodiment may be applied to a second size. In other words, the above embodiments may be applied in combination depending on a size. In addition, the above embodiments may be applied when a size is equal to or greater than a minimum size and equal to or smaller than a maximum size. In other words, the above embodiments may be applied when a block size is included within a certain range. For example, the above embodiments may be applied when a size of a current block is 8×8 or greater. For example, the above embodiments may be applied when a size of a current block is 4×4 or greater. For example, the above embodiments may be applied when a size of a current block is 16×16 or smaller. For example, the above embodiments may be applied when a size of a current block is equal to or greater than 16×16 and equal to or smaller than 64×64. The above embodiments of the present invention may be applied depending on a temporal layer. In order to identify a temporal layer to which the above embodiments may be applied, an additional identifier may be signaled, and the above embodiments may be applied to a specified temporal layer identified by the corresponding identifier. Herein, the identifier may be defined as the lowest layer or the highest layer or both to which the above embodiments may be applied, or may be defined to indicate a specific layer to which the embodiments are applied. In addition, a fixed temporal layer to which the embodiments are applied may be defined. For example, the above embodiments may be applied when a temporal layer of a current image is the lowest layer. For example, the above embodiments may be applied when a temporal layer identifier of a current image is 1. For example, the above embodiments may be applied when a temporal layer of a current image is the highest layer. A slice type to which the above embodiments of the present invention are applied may be defined, and the above embodiments may be applied depending on the corresponding slice type. In the above-described embodiments, the methods are described based on the flowcharts with a series of steps or units, but the present invention is not limited to the order of the steps, and rather, some steps may be performed simultaneously or in a different order from other steps. In addition, it should be appreciated by one of ordinary skill in the art that the steps in the flowcharts do not exclude each other and that other steps may be added to the flowcharts or some of the steps may be deleted from the flowcharts without influencing the scope of the present invention. The embodiments include various aspects of examples. All possible combinations for the various aspects may not be described, but those skilled in the art will be able to recognize different combinations. Accordingly, the present invention may include all replacements, modifications, and changes within the scope of the claims.
The embodiments of the present invention may be implemented in the form of program instructions, which are executable by various computer components, and recorded in a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, etc., alone or in combination. The program instructions recorded in the computer-readable recording medium may be specially designed and constructed for the present invention, or may be well known to a person of ordinary skill in the computer software technology field. Examples of the computer-readable recording medium include magnetic recording media such as hard disks, floppy disks, and magnetic tapes; optical data storage media such as CD-ROMs or DVD-ROMs; magneto-optical media such as floptical disks; and hardware devices, such as read-only memory (ROM), random-access memory (RAM), flash memory, etc., which are particularly structured to store and implement the program instructions. Examples of the program instructions include not only machine language code produced by a compiler but also high-level language code that may be executed by a computer using an interpreter. The hardware devices may be configured to be operated by one or more software modules or vice versa to conduct the processes according to the present invention. Although the present invention has been described in terms of specific items such as detailed elements as well as the limited embodiments and the drawings, they are only provided to help a more general understanding of the invention, and the present invention is not limited to the above embodiments. It will be appreciated by those skilled in the art to which the present invention pertains that various modifications and changes may be made from the above description. Therefore, the spirit of the present invention shall not be limited to the above-described embodiments, and the entire scope of the appended claims and their equivalents will fall within the scope and spirit of the invention.

INDUSTRIAL APPLICABILITY

The present invention may be used in encoding/decoding an image.
11943476
DETAILED DESCRIPTION OF THE INVENTION

It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the systems and methods of the present invention, as represented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. Various methods described in the present invention are aimed to improve the efficiency of secondary transform signaling or to reduce the decoding latency. At an encoder side, a current block, such as a CU, is first predicted by a prediction operation to generate a predictor. Residuals of the current block are generated according to the predictor. A transform operation, including one or both of primary transform (e.g. DCT-II) and secondary transform, is applied to determine final transform coefficients. A quantization process is then applied to the final transform coefficients before entropy encoding into a video bitstream. The residuals after the primary transform are referred to as temporary transform coefficients or primary transform coefficients, and the temporary transform coefficients are processed by secondary transform to generate the final transform coefficients of the current block. If secondary transform is not applied to the current block, the temporary transform coefficients are assigned as the final transform coefficients of the current block. If primary transform is not applied, the residuals processed by secondary transform are the final transform coefficients of the current block. At a decoder side, a video bitstream is decoded to derive coefficient levels associated with a current block, and the coefficient levels are inverse quantized to generate final transform coefficients. If a secondary transform index associated with the current block is larger than zero, inverse secondary transform is first applied to the final transform coefficients to determine temporary transform coefficients. Inverse primary transform is then applied to the temporary transform coefficients to recover residuals. Only inverse primary transform is applied to the current block to recover the residuals if the secondary transform index associated with the current block is equal to zero and the conditions for applying secondary transform are satisfied (e.g. the width and height of the current transform block are larger than 4). A reconstructed block is then obtained according to the residuals and a corresponding predictor of the current block.

Setting Constraint for Applying Secondary Transform

According to the recent secondary transform signaling design, a video decoder can only decide a secondary transform index, such as an RST index or an LFNST index, after the coefficients of all TBs in one CU are parsed. The video coding standard under development tends to support 64×64 pipeline processing, and the latency issue of secondary transform occurs when processing a CU with a size larger than 64×64 samples. Various embodiments of the present invention set a constraint to handle the latency issue caused by secondary transform signaling. In the recent development, secondary transform is only applied to intra coded blocks, so the current block in the following embodiments is an intra coded block.
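The decoder-side ordering summarized above may be sketched as follows; the inverse quantization and inverse transform routines are placeholders for the codec's actual operations, not functions defined by this description.

def reconstruct_residuals(coeff_levels, st_index,
                          inv_quantize, inv_secondary, inv_primary):
    final_coeffs = inv_quantize(coeff_levels)
    if st_index > 0:
        # A secondary transform index larger than zero: undo the secondary
        # transform first to recover the temporary (primary) coefficients.
        temp_coeffs = inv_secondary(final_coeffs, st_index)
    else:
        temp_coeffs = final_coeffs
    return inv_primary(temp_coeffs)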
However, the current block in the following embodiments may not necessarily be an intra coded block if secondary transform can be enabled for non-intra predicted blocks. To avoid the undesired latency caused by secondary transform signaling, secondary transform cannot be applied to the transform block(s) in a CU when a width or height of the CU is larger than a predefined threshold. In some embodiments, the width or height of the CU is measured as a number of luma samples in the luma CB within the CU. Some examples of the predefined threshold are 16, 32, 64, 128, and 256 luma samples. For example, residuals of a current block are not processed by secondary transform if any of the width or height of the current block is larger than 64 samples, so any block with a size larger than 64×64 is not processed by secondary transform. In one embodiment, the predefined threshold is set according to a maximum TU size or a maximum TB size (i.e. MaxTbSizeY) specified in the video coding standard; for example, the maximum TB size in the video coding standard under development is 64 luma samples. In yet another embodiment, the predefined threshold is adaptively determined according to a maximum TU size or a maximum TB size, which is derived from a value signaled in a Sequence Parameter Set (SPS), Picture Parameter Set (PPS), tile, tile group, or slice level. For example, a maximum TB size (MaxTbSizeY) is set as 1<<sps_max_luma_transform_size_64_flag signaled at the SPS. A secondary transform index for a current block is set to be zero when the current block has a width or height larger than the predefined threshold; this secondary transform index is signaled by the encoder and parsed by the decoder in one embodiment, or this secondary transform index is not signaled by the encoder and is inferred to be zero by the decoder in an alternative embodiment. For example, the video encoder signals a secondary transform index for every intra coded CU to indicate whether secondary transform is applied, and if it is applied, the secondary transform index also indicates which matrix is selected by the encoder. A corresponding video decoder in this embodiment parses a secondary transform index for each intra coded CU to determine whether inverse secondary transform needs to be applied. The video decoder may check the width or height of each intra coded CU against the predefined threshold for bitstream conformance, as the secondary transform index for any intra coded CU has to be zero when the width or height of the intra coded CU is larger than the predefined threshold. In the alternative embodiment, the secondary transform index is not signaled at the encoder side for any intra coded CU having a CU width or height larger than the predefined threshold, and the secondary transform index is inferred as zero at the decoder side. In the above embodiments, the current block is the current CU. The current block may be a luma Coding Block (CB) containing one or more luma TBs. The current block may be a chroma CB containing one or more chroma TBs. The current block may be a luma or chroma TB. The current block may be a TU. In some embodiments, a transform operation or inverse transform operation for one or more TUs in a current CU excludes secondary transform or inverse secondary transform based on a number of TUs in the current CU.
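A hedged sketch of the CU-size constraint described above follows; the threshold of 64 luma samples is only one of the example values listed, and the function is not a normative syntax condition.

def secondary_transform_index_signalled(cu_width, cu_height,
                                        max_tb_size=64, is_intra=True):
    if not is_intra:
        return False                 # secondary transform limited to intra blocks here
    if cu_width > max_tb_size or cu_height > max_tb_size:
        return False                 # index not signaled; inferred to be zero
    return True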
For example, the transform operation excludes secondary transform when the number of TUs in the current CU is larger than one, and the inverse transform operation excludes inverse secondary transform when the number of TUs in the current CU is larger than one. In other words, secondary transform is disabled for a current CU when multiple TUs exist in the current CU (which means the width or the height of the current CU/CB is larger than the maximum TU/TB size). A secondary transform index for a current CU is forced to be zero or inferred as zero when this current CU is split into multiple TUs. For example, when a CU can be processed by secondary transform, a secondary transform index is signaled regardless of whether this CU is further split into multiple TUs. The corresponding decoder parses the secondary transform index for the CU, but forces the secondary transform index to be zero if this CU is split into multiple TUs. In another example, a secondary transform index is not signaled when a CU is split into multiple TUs, and thus the decoder does not parse the secondary transform index and directly infers the secondary transform index as zero. An exception is that an ISP-applied luma CB (in a luma splitting tree, also called a CU) may be divided into multiple luma TBs (in a luma splitting tree, also called TUs) even if the width and height of the luma CB are not larger than the maximum TB size. In this case, secondary transform can be used when multiple TUs exist in a CU. In another embodiment, secondary transform cannot be applied to any CU with a width or height larger than a predefined threshold nor to any CU split into multiple TUs. For example, a secondary transform index for a current CU is still signaled or parsed but is forced to be zero when this current CU is split into multiple TUs, or when a CU width is larger than a maximum TU width and/or a CU height is larger than a maximum TU height. Alternatively, the secondary transform index is not signaled at the encoder side when a current CU is split into multiple TUs or when a CU width or CU height is larger than a predefined threshold, and the secondary transform index for this current CU is inferred as zero at the decoder side. For example, a constraint is set to skip signaling a secondary transform index for a CU split into multiple TUs. A CU is forced to split into multiple TUs when at least one of the following is true: a CU width is larger than a maximum TU width (or maximum TU or TB size), a CU height is larger than a maximum TU height (or maximum TU or TB size), or a CU size is larger than a maximum TU or TB size specified in the standard or at an SPS, PPS, tile, tile group, or slice level. Accordingly, a secondary transform index is not signaled nor parsed for a current CU when the current CU will be split into multiple TUs. The secondary transform index for the current CU is simply inferred to be zero when the current CU will be split into multiple TUs. In some other embodiments of the present invention, a constraint restricts applying secondary transform or inverse secondary transform to only one selected TU within a current CU when a width or height of the current CU is larger than a predefined threshold or when the current CU contains multiple TUs. That is, only the selected TU within the current CU can be processed by secondary transform when the current CU contains multiple TUs.
The transform operation including secondary transform or the inverse transform operation including inverse secondary transform for the selected TU may follow the current design. For example, at an encoder side, a transform operation including only primary transform is applied to all other TUs in the current CU, whereas an auxiliary transform operation including both primary transform and secondary transform is applied to the selected TU. At a decoder side, an inverse transform operation including only inverse primary transform is applied to all other TUs in the current CU, whereas an auxiliary transform operation including both inverse secondary transform and inverse primary transform is applied to the selected TU. Some examples of the predefined threshold are set according to the maximum TU size specified by the video coding standard or adaptively determined at an SPS, PPS, tile, tile group, or slice level. In an embodiment implementing this constraint, the selected TU is a last TU within the current CU according to a decoding order. In comparison to applying secondary transform on one of the other TUs, applying secondary transform on the last TU leads to less latency. Some other embodiments set a constraint to restrict a maximum width, height, or size of an intra or inter CU. For example, in order to apply secondary transform to intra coded CUs, the width or height of each intra coded CU cannot exceed a predefined threshold. The predefined threshold may be 16, 32, 64, 128, or 256 samples. In one embodiment, the predefined threshold is set according to a maximum TU size specified in the corresponding standard, such as 64 luma samples, and in another embodiment, the predefined threshold is adaptively determined according to a maximum TU size specified at an SPS, PPS, tile, tile group, or slice level. By implementing this constraint, each intra coded CU only contains one TU, as the width and height of all the intra CUs are smaller than or equal to the maximum TU size. Any of the foregoing embodiments implemented in a decoder may implicitly decide whether secondary transform is disabled according to a block width, block height, or block area, or this may be explicitly decided by a secondary transform flag signaled at the CU, CTU, slice, tile, tile group, SPS, or PPS level.

Signaling Modification for Secondary Transform

In order to solve the latency issue caused by the conventional design of secondary transform signaling, some embodiments of the present invention modify the current secondary transform signaling design. In some embodiments, the secondary transform syntax, such as the RST index or the LFNST index, is signaled at a TU level instead of at a CU level. For example, the secondary transform index is signaled at the end of a TU according to an embodiment. In another embodiment, the secondary transform index for a TU is signaled after signaling a last significant coefficient at a TB level, and then the syntax elements for this TU, such as the significant flag for each coding group in each TB, are signaled. In other words, the secondary transform syntax at a TU level is signaled before the syntax elements of a next TU in the scanning order. In yet another embodiment, the secondary transform index is signaled before reconstructing the coefficients of each coefficient group. In cases when there are multiple TUs in a current CU, a secondary transform index for the current CU is signaled in at least one of the TUs. For example, the secondary transform index is signaled in a first TU of the current CU.
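One illustrative TB-level parsing order for the variant above, in which the secondary transform syntax follows the last significant coefficient syntax and precedes the per-coding-group syntax, is sketched below; the parse_* helpers are placeholders for a hypothetical bitstream reader, not syntax elements defined here.

def parse_transform_block(bs):
    last_pos = bs.parse_last_significant_position()
    st_index = bs.parse_secondary_transform_index()   # moved up to the TB level
    coeffs = bs.parse_coding_groups(last_pos)         # significant flags, levels, ...
    return st_index, coeffs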
Secondary transform can only be applied to the first TU of a current CU, as secondary transform is not allowed in the following TUs of the current CU, according to one embodiment. In another embodiment, the signaled secondary transform index is shared with all TUs in the current CU. For example, the transform operation or inverse transform operation is applied to the following TUs according to the shared secondary transform index that is signaled in the first TU. In another example, a number of non-zero coefficients in each TU is compared with a threshold, and the TU can only apply secondary transform or inverse secondary transform according to the shared secondary transform index if the number of non-zero coefficients is larger than the threshold. In an alternative embodiment, a secondary transform index is signaled in each of the first N TUs of a current CU, where N is selected from 1 to the total number of TUs in the current CU. In yet another embodiment, the secondary transform index is signaled in a last TU within the current CU because performing secondary transform on the last TU leads to less latency compared to performing secondary transform on any other TU. In some embodiments of secondary transform syntax signaling, after signaling the last significant coefficient at a TB level, a syntax element for secondary transform, such as the secondary transform index, is signaled. The remaining syntax elements for the TB, such as the significant flag for each coding group in the TB, are then signaled. One embodiment of the video encoder signals a secondary transform index at a TB level after signaling the last significant coefficient at the TB level, and then signals the remaining syntax elements for the TB. In one embodiment, secondary transform syntax at a TU level is signaled after signaling the last significant coefficient at a TB level, and the syntax for a TU, such as the significant flag for each coding group in each TB, is then signaled. For example, the coding group contains 4×4 samples. In another embodiment, secondary transform syntax at a CU level is signaled after signaling the last significant coefficient at the TU level, and the syntax for a TU, such as the significant flag for each coding group in the TU, is then signaled. In an embodiment, secondary transform syntax for a current CU, such as an RST index or LFNST index, is signaled in a first available TU in the current CU. In this embodiment, a current CU has a first available TU if both a constraint for secondary transform signaling is satisfied and secondary transform is allowed for the current CU. An example of the constraint for secondary transform signaling depends on a position of a last significant coefficient of a TU. In another example, the constraint for secondary transform signaling only allows signaling secondary transform syntax when a number of non-DC values in the transform coefficients is larger than a predefined number. Some other examples of the constraint for secondary transform signaling will be described in later sections. An example of allowing secondary transform is when the current CU is an intra coded CU. For each of the remaining TUs other than the first available TU within the current CU, secondary transform syntax is not signaled and is inferred to be the same as the secondary transform syntax of the first available TU according to one embodiment. That is, the remaining TUs share the secondary transform syntax with the first available TU.
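The "first available TU" rule described above may be sketched as follows; is_available() stands for the combination of the signaling constraint and the secondary transform enabling conditions, and parse_index() reads the index from the bitstream, both being illustrative placeholders.

def parse_cu_secondary_transform_index(tus, is_available, parse_index):
    for tu in tus:
        if is_available(tu):
            # Signaled only in the first available TU, then shared by the
            # remaining TUs of the current CU.
            return parse_index()
    return 0                          # no available TU: inferred as zero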
In this embodiment, if the constraint for secondary transform signaling is not satisfied in any remaining TU in the current block, secondary transform or inverse secondary transform will not be applied to this remaining TU regardless of the secondary transform syntax of the first available TU in the current block. In one example, if a first available TU cannot be found in a current CU for secondary transform, secondary transform will not be applied to any TU in this current CU. In another embodiment, secondary transform or inverse secondary transform is only applied to the first available TU within a current CU and is not applied to the remaining TUs within the current CU. In some embodiments, the constraint is checked for every TU within a current CU, and each TU satisfying the constraint shares the same secondary transform syntax. For example, secondary transform or inverse secondary transform can be applied to all TUs in a current CU if all the TUs satisfy the constraint for secondary transform signaling. In another embodiment, the constraint is also checked for every TU within a current CU, but secondary transform or inverse secondary transform is only applied to one or more TUs in the current CU if all TUs satisfy the constraint. Secondary transform or inverse secondary transform cannot be applied to TUs in a current CU if any TU in the current block is not available for secondary transform because the constraint for secondary transform signaling is not satisfied.

Secondary Transform Signaling based on Last Significant Coefficient

The constraint for secondary transform signaling mentioned in various previously described embodiments can be set according to one or more last significant coefficients of one or more transform blocks, according to some embodiments of the present invention. Embodiments of the constraint for secondary transform signaling are related to one or more positions of the last significant coefficients of one or more transform blocks. The encoder signals syntax associated with a last significant coefficient position for each transform block, indicating the position of the last significant coefficient in the transform block. The decoder determines the position of the last significant coefficient in each transform block by parsing the syntax associated with the last significant coefficient position for each transform block. For example, the syntax associated with the last significant coefficient position includes last_sig_coeff_x_prefix, last_sig_coeff_y_prefix, last_sig_coeff_x_suffix, and last_sig_coeff_y_suffix. In the conventional RST signaling design, the encoder or decoder checks whether there is any non-zero coefficient within a zero-out region of secondary transform, which means the region where the coefficients after secondary transform or before inverse secondary transform are zero, and skips signaling or parsing the secondary transform index if at least one non-zero coefficient is found in the zero-out region of secondary transform. Embodiments of the present invention simplify this checking process for secondary transform signaling by only checking a TB-level syntax element for each considered TB. FIG. 5 illustrates a 16×16 TU within a 16×16 CU for demonstrating various embodiments of secondary transform signaling according to a last significant coefficient signaled at a TB level. FIG. 5 illustrates a luma (luminance) Transform Block (TB) of the 16×16 TU, where the two chrominance (chroma) TBs of the 16×16 TU are not shown for brevity.
In some embodiments of secondary transform signaling, a secondary transform index, such as the RST index or LFNST index, is adaptively signaled for a current block according to a position of a last significant coefficient in each TB in the current block. For example, a current block is a luma CB containing one or more luma TBs or the current block is a chroma CB containing one or more chroma TBs, and a secondary transform index is conditionally signaled according to one or more positions of the last significant coefficients in the luma or chroma TB(s). For another example, a current block is a CU, and a secondary transform index is adaptively signaled for one or more luma TBs in the current CU according to the positions of the last significant coefficients in the one or more luma TBs, and this secondary transform index is shared by the luma and chroma TBs in the current CU. In another example, a current block is a CU containing one or more luma TBs and one or more chroma TBs, and a secondary transform index is conditionally signaled according to one or more positions of the last significant coefficients in one or both of the luma and chroma TBs. Secondary transform is only applied to one or more luma or chroma TBs. The secondary transform index is assumed to be signaled at a CU level or after parsing all TBs in the current block in the following embodiments; however, these embodiments can also be implemented with the secondary transform index signaled at a TB level (e.g. signaled after parsing the coefficients in the current TB or signaled after parsing the last significant coefficient positions in the current TB) or at a TU level (or after parsing the TBs within the current TU). For example, the secondary transform index for a current CU is signaled at a CU level after all TBs in the current CU. In cases when secondary transform is applied to this 16×16 CU, a 16×48 matrix multiplication is applied to the 16×16 transform block within the 16×16 CU using a selected secondary transform kernel. Each coding group in these embodiments is a 4×4 subblock in the transform block. The first, second, third, and fourth coding groups within a top-left 8×8 region of the transform block are denoted as CG 0, CG 1, CG 2, and CG 3. The corresponding significant flags for CG 0, CG 1, CG 2, and CG 3 are denoted as SigFlagCG0, SigFlagCG1, SigFlagCG2, and SigFlagCG3, respectively. In the video encoder, the 16×16 TU is first processed by primary transform to generate primary transform coefficients, and the 48 primary transform coefficients in the first three coding groups CG 0, CG 1, CG 2 are the input of secondary transform. The 48 primary transform coefficients are multiplied with a selected 16×48 matrix to generate 16 secondary transform coefficients. After applying secondary transform, the coefficients in the first coding group CG 0 are set equal to the generated 16 secondary transform coefficients, while all remaining coefficients in the transform block are set to zero according to one embodiment. In cases when secondary transform is not applied, the second, third and fourth coding groups CG 1, CG 2, CG 3 and/or the remaining region in the transform block may contain non-zero coefficients. The region having all transform coefficients set to zero after secondary transform is referred to as a zero-out region of secondary transform.
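A numerical sketch of the 16×48 secondary transform step described above for the 16×16 TB of FIG. 5 follows; the kernel is an arbitrary placeholder, and CG 1 and CG 2 are assumed here to be the top-right and bottom-left 4×4 subblocks of the top-left 8×8 region.

import numpy as np

def forward_secondary_transform_16x16(primary_coeffs, kernel_16x48):
    # primary_coeffs: 16x16 numpy array of primary transform coefficients
    cg0 = primary_coeffs[0:4, 0:4]
    cg1 = primary_coeffs[0:4, 4:8]
    cg2 = primary_coeffs[4:8, 0:4]
    x = np.concatenate([cg0.ravel(), cg1.ravel(), cg2.ravel()])   # 48 inputs
    y = kernel_16x48 @ x                                          # 16 outputs
    out = np.zeros_like(primary_coeffs, dtype=y.dtype)
    out[0:4, 0:4] = y.reshape(4, 4)   # CG 0 carries the 16 secondary coefficients
    return out                        # everything else is the zero-out region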
Instead of searching for non-zero coefficients within a zero-out region of secondary transform, embodiments of the present invention check a position of a last significant coefficient for each considered transform block. A secondary transform index is adaptively signaled for a current CU according to the position(s) of the last significant coefficient(s) in one or more considered TBs within the current CU. Some examples of the considered TBs are all TBs in the current CU, only luma TBs in the current CU, only chroma TBs in the current CU, only TBs with significant coefficients in the current CU, a predefined subset of TBs in the current block, or all TBs except for those not allowed for secondary transform. The TBs not allowed for secondary transform include any TB with a TB width or TB height less than 4 samples or any TB processed by transform skip. If there is no considered TB in the current block, secondary transform is not applied to any TB in the current block. For example, the video encoder skips signaling the secondary transform index if the position of the last significant coefficient in any considered TB is within a predefined region (e.g. the zero-out region of secondary transform, where all coefficients are set to zero after secondary transform) in this embodiment. The video decoder infers that secondary transform is not applied to a current CU when a position of a last significant coefficient in any considered TB within the current CU is located in the predefined region. In one embodiment, the predefined region includes CG 1, CG 2, or CG 3 of the current transform block. The video decoder thus infers that secondary transform is not applied to a current CU when a position of a last significant coefficient in any considered transform block within the current CU is in CG 1, CG 2, or CG 3, as all coefficients in CG 1, CG 2, and CG 3 are set to zero after secondary transform. In another embodiment, the predefined region includes the entire TB except for a top-left 4×4 subblock, or the predefined region includes those coefficient positions with position indices in a TB larger than 15, assuming the position index starts from 0 and the processing order is diagonal scanning for the whole TB. In another embodiment, the predefined region includes the entire TB except for the first 8 coefficient positions, or the predefined region includes those coefficient positions with position indices in a TB larger than 7, assuming the position index starts from 0 and the processing order is diagonal scanning for the whole TB. In the preferred embodiments of the present invention, according to a position of a last significant coefficient in each considered TB, secondary transform is inferred to be disabled without any syntax signaling, which means a secondary transform index will not be signaled at the encoder and the secondary transform index will not be parsed at the decoder. For example, the video decoder infers a corresponding secondary transform index for a current CU to be zero, without parsing the secondary transform index from the video bitstream, when a position of a last significant coefficient of any considered transform block within the current CU is in a predefined region of secondary transform in the transform block. In one embodiment, coefficients in the second, third, and fourth coding groups CG 1, CG 2, and CG 3 of the top-left 8×8 region are set to zero after secondary transform. In another embodiment, all transform coefficients except for the top-left 4×4 subblock are set to zero after secondary transform.
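A compact sketch of the decoder-side inference just described, assuming 0-based diagonal-scan positions and the example region boundary after scan index 15 (so indices 16 and above form the zero-out region). The function parse_lfnst_idx stands in for the normal bitstream parsing and all names are illustrative.

```python
ZERO_OUT_START = 16   # example: scan indices 0..15 form CG 0, the rest is zeroed out

def infer_or_parse_secondary_transform_index(last_scan_pos_per_tb, parse_lfnst_idx):
    """last_scan_pos_per_tb: last significant scan position of each considered TB."""
    if not last_scan_pos_per_tb:
        return 0   # no considered TB: secondary transform is not applied
    if any(pos >= ZERO_OUT_START for pos in last_scan_pos_per_tb):
        return 0   # last coefficient in the predefined (zero-out) region: index inferred zero
    return parse_lfnst_idx()   # otherwise the index is parsed from the bitstream

# Example: one considered TB whose last significant coefficient is at scan position 20.
assert infer_or_parse_secondary_transform_index([20], lambda: 1) == 0
```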
In another embodiment, when the predefined region refers to a zero-out region of secondary transform, the predefined region varies according to the TB width or TB height. For example, if the TB width is equal to the TB height and the TB width is equal to 4 or 8, RST 8×N where N=16, 48, or 64, is applied to the TB as introduced and then the coefficients after secondary transform are zero if the position index in the TB is larger than 7. In this example, the predefined region includes those coefficient positions with the position indices in a TB larger than 7 assuming the position index ranging from 0 and the processing order is diagonal scanning for the whole TB. In another example, if RST 16×N where N=16, 48, or 64, is applied to the TB as introduced and then the coefficients after secondary transform are zero if the position index in the TB is larger than 15. In this example, the predefined region includes those coefficient positions with the position indices in a TB larger than 15 assuming the position index ranging from 0 and the processing order is diagonal scanning for the whole TB. According to these embodiments, secondary transform is not applied when a position of a last significant coefficient for any considered transform block is in any of second, third, and fourth coding groups CG 1, CG 2, and CG 3 in a top-left 8×8 region of the transform block, when the position of the last significant coefficient of any considered transform block is not in the first coding group CG 0 in the top-left 8×8 region, or when a position of a last significant coefficient of any considered transform block is in the predefined region. In the decoder, after parsing of a last significant coefficient position for each considered transform block in the current CU, a secondary transform index is inferred to be zero when a position of a last significant coefficient for any considered transform block is in a top-left 8×8 region except for a first coding group CG, the entire transform block except for a first coding group CG 0, or the predefined region. The encoder in this embodiment adaptively skips signaling a secondary transform index for a current CU according to a position of a last significant coefficient of each considered TB within the current CU and the predefined position, and the decoder infers inverse secondary transform is disabled for the current CU according to the positions of the last significant coefficients of the considered TBs within the current CU and the predefined position. For example, inverse secondary transform is disabled by inferring a secondary transform index for a current CU to be zero when a position of a last significant coefficient for any considered TB falls in the predefined region such as a coding group other than the first coding group in a TB. The encoder in one embodiment only signals a secondary transform index when all position of last significant coefficients in the considered transform blocks are not in the predefined region. Similarly, the decoder in this embodiment only parses a secondary transform index when all positions of last significant coefficients in the considered transform blocks are not in the predefined region; otherwise the decoder infers inverse secondary transform is disabled for the transform block or the entire CU. The decoder determines the position of the last significant coefficient in a transform block by parsing last significant coefficient position syntax at the TB level. 
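The size-dependent boundary in this embodiment can be captured by a small helper. The width/height test mirrors the RST 8×N versus RST 16×N example above, and 0-based scan indices are assumed; the helper is illustrative, not a normative rule.

```python
def zero_out_start(tb_width, tb_height):
    # 4x4 and 8x8 TBs keep only the first 8 secondary coefficients (RST 8xN);
    # larger TBs keep the first 16 (RST 16xN), per the example above.
    if tb_width == tb_height and tb_width in (4, 8):
        return 8
    return 16

def last_pos_in_zero_out_region(last_scan_pos, tb_width, tb_height):
    return last_scan_pos >= zero_out_start(tb_width, tb_height)

assert last_pos_in_zero_out_region(9, 8, 8) is True
assert last_pos_in_zero_out_region(9, 16, 16) is False
```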
In some of the above embodiments, one or more syntax elements related to residual coding for some predefined coding groups in a transform block do not need to be signaled when secondary transform is applied to the transform block. For example, these syntax elements related to residual coding for some predefined coding group in one or more transform blocks parsed after the current block are always equal to zero when secondary transform is applied to the transform block, therefore, these syntax elements related to residual coding are not signaled in a video bitstream nor parsed from the video bitstream when a secondary transform index is larger than zero. After applying secondary transform, coefficients in some coding groups are all set to zero, which implies some particular syntax elements related to residual coding, such as the significant flag for these coding groups, do not need to be signaled. In one example, coefficients in the second, third, and fourth coding groups CG 1, CG 2, and CG 3 as well as the region outside the top-left 8×8 region are set to zero after secondary transform.FIG.5illustrates an example of a 16×16 transform block within a 16×16 CU. Each 4×4 sub-block in the 16×16 transform block is a coding group. The coding groups within the top-left 8×8 region are denoted as CG 0, CG 1, CG 2, and CG 3, and the corresponding significant flags for these coding groups are denoted as SigFlagCG0, SigFlagCG1, SigFlagCG2, and SigFlagCG3. If secondary transform is applied, a 16×48 matrix is selected in the secondary transform operation to transform the first 48 primary transform coefficients in the top-left 8×8 region of this 16×16 transform block into 16 secondary transform coefficients. Coefficients in the transform block except for the first coding group CG 0 are set to zero after secondary transform according to one embodiment, so the significant flags SigFlagCG1, SigFlagCG2, and SigFlagCG3for CG 1, CG 2, and CG 3, as well as the significant flags for the region outside the top-left region, are not signaled at the encoder side and are inferred to be false at the decoder side according to this embodiment. Secondary Transform Signaling Depend on Comparing Last Significant Coefficient Position with Threshold In some embodiments of the present invention, secondary transform syntax is conditionally signaled in a video bitstream depending on one or more comparison results from one or more Transform Blocks (TBs) within a CU. One comparison includes checking a position of a last significant coefficient in a TB with a predefined position. In some embodiments, secondary transform syntax is conditionally signaled in a video bitstream only depending on comparison results from considered TBs within a current CU. For example, all TBs in the current CU are the considered TBs. In another example, only luma TBs in the current CU are the considered TBs. In another example, only the TBs with significant coefficients in the current CU are the considered TBs. In another example, the considered TBs can be any subset of TBs in the current CU. In another example, in the current CU, the TBs, except for those not allowed for secondary transform, are the considered TBs. For example, a TB is not allowed for secondary transform if a TB width or TB height is smaller than 4, or a TB is not allowed for secondary transform if it is processed by transform skip. 
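The selection of considered TBs at the end of the paragraph above can be expressed as a simple filter; the TransformBlock fields below are illustrative names, not syntax elements of any standard.

```python
from dataclasses import dataclass

@dataclass
class TransformBlock:          # illustrative container, not a normative structure
    width: int
    height: int
    transform_skip: bool

def considered_tbs(tbs):
    # Exclude TBs not allowed for secondary transform: width or height below
    # 4 samples, or coded with transform skip.
    return [tb for tb in tbs
            if tb.width >= 4 and tb.height >= 4 and not tb.transform_skip]

tbs = [TransformBlock(16, 16, False), TransformBlock(2, 8, False), TransformBlock(8, 8, True)]
assert len(considered_tbs(tbs)) == 1
```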
When the comparison results for all considered TBs do not satisfy the signaling condition of secondary transform, secondary transform is inferred as disabled for the current CU and a secondary transform index is not signaled in the video bitstream. When there is no considered TB within the current CU, the secondary transform index is not signaled as secondary transform is disabled for the current CU. An example of setting the signaling condition of secondary transform is when the position of the last significant coefficient for a TB is larger than a predefined position. If the positions of the last significant coefficients for all considered TBs in the current CU are smaller than or equal to the predefined position, the secondary transform index for the current CU is not signaled. The 16×16 CU containing only one 16×16 transform block shown inFIG.5may be used to illustrate some examples of deciding whether a secondary transform index is signaled according to a position of a last significant coefficient in the 16×16 transform block. Assume that this 16×16 transform block is a considered TB in the current 16×16 CU. In the following embodiments, the secondary transform index is signaled at a CU level, or after the signaling of residual coding syntax for all TBs in the current CU, whereas the secondary transform index may be signaled at a TU, TB level, or after signaling the residual coding syntax including positions of the last significant coefficients for one or more TBs in the current CU in some other embodiments and if the secondary transform is signaled at TU, TB level, or after signaling the residual coding syntax including positions of the last significant coefficients for one or more TBs in the current CU, the considered TBs are within the current TU, current TB or the TBs signaling before the current TB. In other embodiments, the secondary transform index for a current CU is signaled after one or more luma TBs in the current CU, or is signaled after a first non-zero TB in the current CU, or is signaled after a first TB in the current CU. The decoder parses last significant coefficient position syntax for each TB of a CU from the video bitstream, and determines the position of the last significant coefficient for each TB based on the parsed the last significant coefficient position syntax. For example, the last significant coefficient position syntax includes last_sig_coeff_x_prefix, last_sig_coeff_y_prefix, last_sig_coeff_x_suffix, and last_sig_coeff_y_suffix. The position of the last significant coefficient is compared with a predefined position, such as (0,0), and the decoder infers the secondary transform index as zero if the position of the last significant coefficient is equal to (0,0). Separate secondary transform indices may be signaled for luma and chroma CB/CU. In this embodiment, for one luma CB and two chroma CBs coded in separate splitting trees, one secondary transform index is conditionally signaled for the luma CB depending on one or more luma TBs in the luma CB and another secondary transform index is conditionally signaled for the chroma CB depending on one or more chroma TBs in the chroma CB. 
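To make the last significant coefficient syntax above concrete, the sketch below derives an (x, y) coordinate from the prefix/suffix pair in the HEVC/VVC style and then applies the (0, 0) comparison. The derivation formula is recalled from those designs and should be treated as an assumption rather than a normative excerpt.

```python
def last_coeff_coordinate(prefix, suffix=0):
    # HEVC/VVC-style derivation (assumed): small prefixes map directly, larger
    # prefixes add a suffix of (prefix >> 1) - 1 bits.
    if prefix <= 3:
        return prefix
    return (1 << ((prefix >> 1) - 1)) * (2 + (prefix & 1)) + suffix

def secondary_transform_index_inferred_zero(x_prefix, x_suffix, y_prefix, y_suffix):
    x = last_coeff_coordinate(x_prefix, x_suffix)
    y = last_coeff_coordinate(y_prefix, y_suffix)
    return (x, y) == (0, 0)   # only a DC coefficient is present in the TB

assert secondary_transform_index_inferred_zero(0, 0, 0, 0) is True
assert secondary_transform_index_inferred_zero(2, 0, 0, 0) is False
```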
In an alternative embodiment, only one secondary transform index is signaled for each CU, and one or both the luma and chroma TBs use the secondary transform index, for example, the secondary transform index is conditionally signaled for one or more luma TBs according to a position of a last significant coefficient in each luma TB or any subset of luma TBs, and the chroma TBs reuse the secondary transform index. In another example, the secondary transform index is conditionally signaled for one or more luma TBs according to positions of last significant coefficients in the luma and chroma TBs. In this embodiment, for a current CU coded in a shared splitting tree, one secondary transform index is conditionally signaled for the current CU, and secondary transform or inverse secondary transform is applied to one or both of luma and chroma TBs according to the secondary transform index. For example, if secondary transform is only applied to the luma TBs, the secondary transform index of the current CU is conditionally signaled depending on the luma and chroma TBs in the current CU. In one embodiment, the secondary transform index is larger than zero if secondary transform is applied to the 16×16 TU within the 16×16 CU, and the secondary transform index is set to zero if secondary transform is not applied to the 16×16 TU. In the following embodiments, the encoder adaptively skips signaling a secondary transform index according to comparison results from the considered TBs in a CU. Similarly, the decoder adaptively infers secondary transform is not applied to the transform block according to comparison results from the considered TBs in a CU. As shown inFIG.5, there are four 4×4 coding groups in a top-left 8×8 region of the 16×16 transform block, including a first coding group CG 0, a second coding group CG 1, a third coding group CG 2, and a fourth coding group CG 3. The corresponding significant flags for CG 0, CG 1, CG 2, and CG 3 are denoted as SigFlagCG0, SigFlagCG1, SigFlagCG2, and SigFlagCG3. If secondary transform is applied to this 16×16 transform block, a 16×48 matrix is used to transform 48 primary transform coefficients of the top-left 8×8 region in the 16×16 transform block into 16 secondary transform coefficients. The 16 secondary transform coefficients replace the primary transform coefficients in the first coding group CG 0 of the top-left 8×8 region in the 16×16 transform block. Coefficients in CG 1 and CG 2 or coefficients in CG 1, CG 2, and CG 3 or the TB except for CG1 (the first 16 secondary transform coefficients) are set to zero after secondary transform. A region with all transform coefficients set to zero after secondary transform is represented as a zero-out region of secondary transform. If there is any significant coefficient located in the zero-out region of secondary transform, secondary transform is inferred to be disabled. The term coefficients or transform coefficients in the description refers to final coefficients delivered to a quantization process at the encoder or final coefficients received after a dequantization process at the decoder. In some embodiments, if the positions of the last significant coefficients in the considered TBs of a current CU are all smaller than or equal to the predefined position in a processing order, the secondary transform index for the current CU is not signaled at the encoder and inverse secondary transform for the current CU is inferred as disabled at the decoder. 
The encoder compares the positions of the last significant coefficients in the considered TBs in a current CU with the predefined position in a processing order. If the positions of the last significant coefficients in the considered TBs are all smaller than or equal to the predefined position in a processing order, the encoder skips signaling the secondary transform index for the current CU, otherwise, the encoder signals the secondary transform index based on other existing conditions. The decoder also compares the positions of the last significant coefficients for the considered TBs of a current CU with the predefined position in a processing order. The decoder parses the secondary transform index if the position of the last significant coefficient in at least one considered transform block is larger than the predefined position; otherwise inverse secondary transform is inferred to be disabled for the transform block. In this embodiment, the processing order may be a diagonal scanning order for a transform block, within each coding group, and/or across all coding groups in a transform block. An example of the processing order for a 16×16 transform block is from a top-left 8×8 region, a bottom-left 8×8 region, a top-right 8×8 region, to a bottom-right 8×8 region, and within each 8×8 region of the 16×16 transform block, the processing order is from a top-left coding group, a bottom-left coding group, a top-right coding group, to a bottom-right coding group, and within each coding group, the processing order is a diagonal scanning order. Another example of the processing order for a 16×16 transform block is from the top-left coefficient to the bottom-left coefficient, as shown inFIG.10. In the above embodiments, an example of the predefined position is a first position in a transform block, which contains the DC value in the transform block, such as position 0. In this embodiment, a secondary transform index is not signaled for a current block as secondary transform cannot be applied if there are only DC values in all considered transform block(s) within the current block (which means the positions of last significant coefficients for all considered TBs are at the first position in a TB). The current block is a CU, CB, or a TU. The secondary transform index is only signaled when a position of the last significant coefficient for at least one of the considered TBs is not equal to the first position in the transform block, which implies there is at least one non-DC value in at least one considered transform block. For example of a CU containing one TB, if a position of a last significant coefficient is at position C as shown inFIG.5, which is within the first coding group CG 0 but larger than the first position in CG 0, the encoder signals a secondary transform index for the 16×16 CU and the decoder parses the secondary transform index from the video bitstream. In another example, if a position of a last significant coefficient is at position D as shown inFIG.5, which is the first position in the first coding group CG 0, the encoder skips signaling a secondary transform index for the 16×16 CU and the decoder infers inverse secondary transform is not applied to the TU within the 16×16 CU. In this example, there is only a DC value in the transform block and applying secondary transform to this transform block will not bring additional coding gain, so secondary transform is disabled and secondary transform syntax is not signaled. 
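The processing order described above for a 16×16 TB (8×8 regions and coding groups visited top-left, bottom-left, top-right, bottom-right, with a diagonal scan inside each 4×4 coding group) can be generated programmatically. The sketch below is one possible realization, with an up-right diagonal assumed inside each coding group; it is not a normative scan table.

```python
def diagonal_scan_4x4():
    # Up-right diagonal scan of a 4x4 coding group, returned as (x, y) pairs.
    return [(x, s - x) for s in range(7) for x in range(4) if 0 <= s - x < 4]

def scan_order_16x16():
    region_offsets = [(0, 0), (0, 8), (8, 0), (8, 8)]   # (x, y): TL, BL, TR, BR 8x8 regions
    cg_offsets = [(0, 0), (0, 4), (4, 0), (4, 4)]       # (x, y): TL, BL, TR, BR coding groups
    return [(rx + cx + x, ry + cy + y)
            for rx, ry in region_offsets
            for cx, cy in cg_offsets
            for x, y in diagonal_scan_4x4()]

order = scan_order_16x16()
assert len(order) == 256 and order[0] == (0, 0) and len(set(order)) == 256
```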
Another embodiment of the predefined position is a fixed position (x,y) in a first coding group of a top-left 8×8 region, where x and y can be integers selected from 0, 1, 2, 3, . . . , up to (a maximum coding group size − 1). For example, the fixed position (x,y) is (0,1), (1,0), or (1,1) in the first coding group CG 0 of a top-left 8×8 region within the transform block. Another example of the predefined position in CG 0 is determined by a fixed scanning order, for example, the first, second, third, fourth, up to the (maximum coding group size − 1)th position in the first coding group CG 0 according to a fixed scanning order. An example of the fixed scanning order is a diagonal scanning order. Some embodiments of the present invention also check whether a number of non-zero coefficients in a first coding group CG 0 of the 16×16 transform block is larger than a predefined number, and the encoder or decoder only signals or parses a secondary transform index when the number of non-zero coefficients in CG 0 is larger than the predefined number. If the number of non-zero coefficients in CG 0 is less than or equal to the predefined number, secondary transform is not applied to the transform block according to this embodiment. The encoder skips signaling a secondary transform index for a current CU when a number of non-zero coefficients in CG 0 of the transform block within the current CU is smaller than or equal to the predefined number. Some examples of the predefined number are 1, 2, 3, and 4. In an embodiment, the encoder signals the secondary transform index if a position of the last significant coefficient is larger than a predefined position, or if the number of non-zero coefficients in a first coding group of a top-left 8×8 region is larger than a predefined number and the position of the last significant coefficient is within the first coding group; otherwise the encoder skips signaling the secondary transform index. Some examples of the predefined position are the 64th position and the 48th position, and an example of the predefined number is 1. In an embodiment of enabling secondary transform for a current CU containing multiple TUs, secondary transform may only be applied when the number of non-zero coefficients in CG 0 of every TU is larger than the predefined number. For example, a secondary transform index is neither signaled nor parsed if each transform block in a current CU contains at most one non-zero coefficient. In some other embodiments, a number of non-DC values in the considered transform blocks within a CU is determined and compared with a predefined number to decide secondary transform signaling; for example, secondary transform is only applied when there is at least one non-DC transformed value in at least one considered transform block within a CU. In this embodiment, a secondary transform index is signaled at the encoder when there is at least one non-DC transformed value in at least one considered transform block within a CU. Similarly, the decoder only parses a secondary transform index when there is at least one non-DC transformed value in at least one considered transform block within a CU. The decoder disables inverse secondary transform for TBs within a current CU by inferring a secondary transform index to be zero, without parsing the secondary transform index, when all TBs in the current CU contain only DC coefficients.
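The combined signaling condition described above (signal the index when the last significant coefficient lies beyond a predefined position, or when it lies inside CG 0 and CG 0 carries more than a predefined number of non-zero coefficients) is summarized in the sketch below. Scan positions are assumed to be 0-based, so the 48th-position example corresponds to index 47, and all thresholds are the examples from the text rather than fixed rules.

```python
def should_signal_secondary_transform_index(last_scan_pos, nonzero_in_cg0,
                                            predefined_pos=47,   # "48th position" example, 0-based
                                            predefined_count=1,  # example predefined number
                                            cg0_size=16):
    in_cg0 = last_scan_pos < cg0_size
    return last_scan_pos > predefined_pos or (nonzero_in_cg0 > predefined_count and in_cg0)

# Last coefficient inside CG 0 but only one non-zero coefficient there: skip signaling.
assert should_signal_secondary_transform_index(last_scan_pos=5, nonzero_in_cg0=1) is False
# Last coefficient inside CG 0 with several non-zero coefficients: signal the index.
assert should_signal_secondary_transform_index(last_scan_pos=5, nonzero_in_cg0=3) is True
```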
In one embodiment, the secondary transform index is signaled or parsed if a position of the last significant coefficient for at least one considered transform block within a CU is larger than a predefined position or if a number of non-DC values in at least one considered transform block within a CU is larger than a predefined number and/or the position of the last significant coefficient is within the first coding group CG 0 of a top-left 8×8 region in each considered transform block within a CU. An example of the predefined number is 0 and some examples of the predefined position are the first position in the transform block. In an embodiment of setting the predefined number to be 0, the number of non-DC values may also be derived from the positions of the last significant coefficients or the last significant coefficient position syntax for the considered TBs, that is the number of non-DC values is larger than the predefined number (e.g. equal to 0) if the position of the last significant coefficient for at least one considered TBs is larger than the first position of the transform block. This embodiment is equivalent to signaling or parsing a secondary transform index when positions of last significant coefficients for at least one considered TBs are larger than a predefined position or when the positions of the last significant coefficients for at least one considered TBs are within a first coding group except for a first position of the first coding group. Exemplary Flowcharts Illustrating Embodiments of Secondary Transform Signaling based on Position ComparisonFIG.6is a flowchart illustrating an exemplary embodiment of a video encoding method of conditionally signaling secondary transform syntax depending on comparison results of last significant coefficient positions implemented in a video encoding system. The video encoding system in this exemplary embodiment processes residuals associated with one or more TBs in a current CU by a transform operation, where the residuals associated with the current CU is partitioned into one or more Transform Units (TUs), and each TU is composed of luma and chroma Transform Blocks (TBs). Secondary transform may be applied to one or both luma and chroma components in this exemplary embodiment. The exemplary embodiment of the video encoding system inFIG.6receives input data associated with a current block in a current video picture in step S602. An example of the current block is a CU, containing one luma TB and two chroma TBs. The input data in step S602includes residuals generated by a prediction operation applied to the current CU, an example of the prediction is intra prediction. The video encoding system determines a transform operation in step S604and applies the transform operation to the one or more TBs in the current block to generate final transform coefficients, for example, the one or more TBs is the luma TB(s) in the current CU. The transform operation includes both primary and secondary transform or the transform operation only includes one of primary transform and secondary transform. In some embodiments, one or more considered TBs are used to determine secondary transform signaling. Some examples of the considered TBs include: all TBs in the current block, only luma TBs in the current block, only chroma TBs in the current block, only TBs with significant coefficients in the current block, a predefined subset of TBs in the current block, and all TBs except for those not allowed for secondary transform. 
For example, a TB is not allowed for secondary transform if a TB width or a TB height is less than 4 samples, or a TB is not allowed for secondary transform if it is processed by transform skip. The video encoding system determines a position of a last significant coefficient of the final transform coefficients for each TB in the current block in step S606, and compares one or more positions of the last significant coefficients for one or more TBs with a predefined position in step S608, for example, the positions of the last significant coefficients for the considered TBs are compared with the predefined position. For example, the predefined position is a first position in the transform block as the first position contains a DC value. In this example, if the position of the last significant coefficient of at least one TB is larger than the first position, it implies there is at least one non-DC coefficient in the TB, and the video encoding system incorporates a secondary transform index for the current CU in a video bitstream. If all the positions of the last significant coefficients for the one or more TBs are equal to the first position, it implies there is only a DC coefficient in each TB in the current block, and the video encoding system skips signaling the secondary transform index. The video encoding system determines a value of the secondary transform index according to the transform operation applied to the current block, for example, the secondary transform index is equal to zero when secondary transform is disabled and the secondary transform index is equal to one or two when secondary transform is applied. The step of determining the secondary transform index may be performed at any step in steps S604, S606, S608, and S610. If the position of the last significant coefficient of at least one TB is larger than the predefined position in step S608, the secondary transform index is signaled in the video bitstream for the current block in step S610, for example, the secondary transform index is signaled at a CU level, a CB level, a TU level, or a TB level. The current block is then encoded according to the final transform coefficients in step S612. If the positions of the last significant coefficients for the one or more TBs are smaller than or equal to the predefined position in step S608, the secondary transform index is not signaled by the video encoding system, and the current block is encoded according to the final transform coefficients in step S612. FIG.7is a flowchart illustrating an exemplary embodiment of a video decoding method of conditionally parsing secondary transform syntax depending on comparison results of last significant coefficient positions implemented in a video decoding system. The video decoding system receives a video bitstream carrying input data of a current block in a current video picture in step S702. The current block is a CU in this exemplary embodiment, for example, the current block contains one or more luma TBs and two chroma TBs. A last significant coefficient position syntax associated with each TB in the current block is parsed from the video bitstream in step S704. The video decoding system determines a position of a last significant coefficient from the parsed last significant coefficient position syntax in step S706and compares one or more positions of the last significant coefficients for one or more TBs with a predefined position in step S708. 
In an embodiment, only considered TBs in the current block are used to determine secondary transform signaling, so only positions of the last significant coefficients for the considered TBs are compared with a predefined position in step S708. Some examples of the considered TBs include: all TBs in the current block, only luma TBs in the current block, only chroma TBs in the current block, only TBs with significant coefficients in the current block, a predefined subset of TBs in the current block, and all TBs except for those not allowed for secondary transform. For example, a TB is not allowed for secondary transform if a TB width or a TB height is less than 4 samples, or a TB is not allowed for secondary transform if it is processed by transform skip. For example, the predefined position is a first location in the current block, and the video decoding system infers inverse secondary transform is not applied if the one or more locations of the last significant coefficients for the one or more TBs are equal to the first location without parsing a corresponding secondary transform index. When the location of the last significant coefficient for a TB falls in the first location, the TB contains only a DC value at the first location and all other transform coefficients in the TB are zero. If the position of the last significant coefficient for at least one TB is larger than the predefined position, the video decoding system parses a secondary transform index from the video bitstream in step S710. The secondary transform index is used to determine an inverse transform operation for one or more TBs in the current block, for example, the inverse transform operation skips inverse secondary transform if the secondary transform index is equal to zero, and the inverse transform operation performs both inverse secondary transform and inverse primary transform if the secondary transform index is equal to one or two. If all the positions of the last significant coefficients for the one or more TBs are smaller than or equal to the predefined position in step S708, the video decoding system determines the inverse transform operation by inferring inverse secondary transform is disabled for the one or more TBs in the current block in step S712. After determining the inverse transform operation, the video decoding system applies the inverse transform operation to the one or more TBs in the current block to recover residuals in step S714and decodes the current block based on the residuals in step S716. For example, inverse secondary transform is only applied to the luma TB(s) in the current block but both luma and chroma TBs are checked to determine whether the secondary transform index needs to be parsed. Exemplary System Block Diagram Embodiments of the previously described video processing method are implemented in video encoders, video decoders, or both the video encoders and decoders. For example, the video processing method is implemented in an entropy coding module in the video encoder or in an entropy decoding module in the video decoder. Alternatively, the video processing method is implemented in a circuit integrated to the entropy coding module in the video encoder or video decoder.FIG.8illustrates an exemplary system block diagram for a Video Encoder800implementing various embodiments of the video processing method. 
A Block Structure Partitioning module 810 in the Video Encoder 800 receives input data of video pictures and determines a block partitioning structure for each video picture to be encoded. Each leaf coding block in the current video picture is predicted by Intra prediction in an Intra Prediction module 812 or Inter prediction in an Inter Prediction module 814 to remove spatial redundancy or temporal redundancy. The Intra Prediction module 812 provides intra predictors for the leaf coding block based on reconstructed video data of the current video picture. The Inter Prediction module 814 performs Motion Estimation (ME) and Motion Compensation (MC) to provide inter predictors for the leaf coding block based on video data from one or more other video pictures. A Switch 816 selects either the Intra Prediction module 812 or the Inter Prediction module 814 to supply the predictor to an Adder 818 to form prediction errors, also called residuals. The residuals in each leaf coding block in the current video picture are divided into one or multiple transform blocks. A Transform (T) module 820 determines a transform operation for one or more transform blocks in a current CU, and the transform operation includes one or both of primary transform and secondary transform. Some embodiments of the present invention check whether a position of a last significant coefficient in each considered transform block is larger than a predefined position, and disable secondary transform for the one or more transform blocks if the positions of the last significant coefficients for all considered transform blocks are less than or equal to the predefined position. In this case, a secondary transform index is not signaled in a video bitstream if the positions of the last significant coefficients for all considered transform blocks are less than or equal to the predefined position. The residuals of each transform block are processed by the Transform (T) module 820 followed by a Quantization (Q) module 822 to generate transform coefficient levels to be encoded by an Entropy Encoder 834. The Entropy Encoder 834 also encodes prediction information and filter information to form a video bitstream. The video bitstream is then packed with side information. The transform coefficient levels of the current transform block are processed by an Inverse Quantization (IQ) module 824 and an Inverse Transform (IT) module 826 to recover the residuals of the current transform block. As shown in FIG. 8, reconstructed video data are recovered by adding back the residuals to the selected predictor at a Reconstruction (REC) module 828. The reconstructed video data may be stored in a Reference Picture Buffer (Ref. Pict. Buffer) 832 and used by the Inter Prediction module 814 for prediction of other pictures. The reconstructed video data from the Reconstruction module 828 may be subject to various impairments due to the encoding processing; consequently, an In-loop Processing Filter 830 is applied to the reconstructed video data before storing in the Reference Picture Buffer 832 to further enhance picture quality. A corresponding Video Decoder 900 for decoding the video bitstream generated by the Video Encoder 800 of FIG. 8 is shown in FIG. 9. The input to the Video Decoder 900 is decoded by an Entropy Decoder 910 to parse and recover the transform coefficient levels of each transform block and other system information. A Block Structure Partitioning module 912 determines a block partitioning structure for each video picture.
The decoding process of the Decoder 900 is similar to the reconstruction loop at the Encoder 800, except that the Decoder 900 only requires motion compensation prediction in the Inter Prediction module 916. Each leaf coding block in the video picture is decoded by either an Intra Prediction module 914 or an Inter Prediction module 916, and a Switch 918 selects an Intra predictor or an Inter predictor according to decoded mode information. The transform coefficient levels associated with each transform block are then recovered by an Inverse Quantization (IQ) module 922 to generate final transform coefficients. An Inverse Transform (IT) module 924 applies an inverse transform operation to the final transform coefficients to recover residuals. The inverse transform operation includes one or both of inverse secondary transform and inverse primary transform. Some embodiments of the present invention determine a position of a last significant coefficient for each considered TB in a current CU by parsing a last significant coefficient position associated with the considered TB, and infer that inverse secondary transform for one or more TBs in the current CU is disabled if the positions of the last significant coefficients for all considered TBs are less than or equal to the predefined threshold. If the position of the last significant coefficient of at least one considered TB is larger than the predefined threshold, the Inverse Transform (IT) module 924 determines an inverse transform operation according to a secondary transform index parsed from the video bitstream. The recovered residuals are reconstructed by adding back the predictor in a Reconstruction (REC) module 920 to produce reconstructed video. The reconstructed video is further processed by an In-loop Processing Filter (Filter) 926 to generate the final decoded video. If a currently decoded video picture is a reference picture, the reconstructed video of the currently decoded video picture is also stored in a Reference Picture Buffer 928 for later pictures in decoding order. Various components of the Video Encoder 800 and Video Decoder 900 in FIG. 8 and FIG. 9 may be implemented by hardware components, one or more processors configured to execute program instructions stored in a memory, or a combination of hardware and processors. For example, a processor executes program instructions to control applying a transform operation or an inverse transform operation. The processor is equipped with a single or multiple processing cores. In some examples, the processor executes program instructions to perform functions in some components in the Encoder 800 and Decoder 900, and the memory electrically coupled with the processor is used to store the program instructions, information corresponding to the reconstructed data, and/or intermediate data during the encoding or decoding process. The memory in some embodiments includes a non-transitory computer readable medium, such as a semiconductor or solid-state memory, a Random Access Memory (RAM), a Read-Only Memory (ROM), a hard disk, an optical disk, or other suitable storage medium. The memory may also be a combination of two or more of the non-transitory computer readable mediums listed above. As shown in FIGS. 8 and 9, the Encoder 800 and Decoder 900 may be implemented in the same electronic device, so various functional components of the Encoder 800 and Decoder 900 may be shared or reused if implemented in the same electronic device.
Any of the embodiments of the present invention may be implemented in the Transform module 820 of the Encoder 800 and/or the Inverse Transform module 924 of the Decoder 900. Alternatively, any of the embodiments may be implemented as a circuit coupled to the Transform module 820 of the Encoder 800 and/or the Inverse Transform module 924 of the Decoder 900, so as to provide the information needed by the Transform module 820 or the Inverse Transform module 924. Embodiments of the video processing methods that adaptively enable secondary transform may be implemented in a circuit integrated into a video compression chip or in program code integrated into video compression software to perform the processing described above. For example, applying a transform operation or an inverse transform operation may be realized in program code to be executed on a computer processor, a Digital Signal Processor (DSP), a microprocessor, or a Field Programmable Gate Array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. Reference throughout this specification to “an embodiment”, “some embodiments”, or similar language means that a particular feature, structure, or characteristic described in connection with the embodiments may be included in at least one embodiment of the present invention. Thus, appearances of the phrases “in an embodiment” or “in some embodiments” in various places throughout this specification are not necessarily all referring to the same embodiment; these embodiments can be implemented individually or in conjunction with one or more other embodiments. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures or operations are not shown or described in detail to avoid obscuring aspects of the invention. The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
11943477
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS The present technology may be described in terms of functional block components and various processing steps. Such functional blocks may be realized by any number of components configured to perform the specified functions and achieve the various results. For example, the present technology may employ various quantizers, transform algorithms, and the like, which may carry out a variety of functions. In addition, the present technology may be practiced in conjunction with any number of electronic systems, such as automotive, aviation, “smart devices,” portables, and consumer electronics, and the systems described are merely exemplary applications for the technology. Further, the present technology may employ any number of conventional prediction techniques, quantization techniques, and transmission and/or storage techniques. Methods and apparatus for transform coefficient encoding and decoding according to various aspects of the present technology may operate in conjunction with any suitable electronic system, such as an imaging system, an audio system, or any other system that compresses data and/or operates on compressed data. Referring toFIGS.1and2, a system100according to various aspects of the present technology may be configured to encode source data, generate a compressed bitstream, decode the compressed bitstream, and generate output data that represents the source data. In an exemplary embodiment, the system100may comprise an encoder105, an interface circuit155, and a decoder110that operate together to compress, transmit, and reconstruct data. An imaging system200may comprise an image sensor201equipped with a readout logic circuit202, a compander205to improve the signal-to-noise ratio of a signal by compressing the range of amplitudes of the signal prior to transmission, and a first line buffer circuit210to generate blocks of data. For example, the image sensor201may generate one or more frames of image data, wherein each frame of image data comprises a plurality of pixel data. The compander205may receive and compress the frame of image data. The compander205may then transmit the compressed frame of image data to the first line buffer circuit210, wherein the first line buffer circuit210divides the frame of image data into a plurality of blocks comprising a subset of pixels values from the plurality of pixel data, wherein each block comprises a plurality of sub-blocks. For example, each block may be described as a 2×2 block comprising 4 sub-blocks, a 4×4 block comprising 16 sub-blocks, etc, where each sub-block represents an individual pixel signal. The first line buffer circuit210may then transmit each block successively to the encoder105for further processing. The encoder105may be configured to receive source data and convert the source data from one format or code to another for the purpose of standardization, speed, and/or compression. According to an exemplary embodiment, the encoder105may comprise a prediction module115, a difference encoder, a transform module120, a quantizer125, an entropy encoding module130, and a matching decoder160. According to an exemplary embodiment, the matching decoder160may be configured to generate data that replicates the internal state and/or the decompressed data at the decoder110. 
The matching decoder160may comprise an inverse quantizer165to perform a dequantization function that is complementary to that performed by the quantizer125and an inverse transform module170to perform an inverse transform algorithm that is complementary to that performed by the transform module120. The matching decoder160may be connected between the quantizer125and the prediction module115. Accordingly, the matching decoder160may operate to dequantize and inversely transform the data from the quantizer125, thus generating a replica of the decompressed data at the decoder110(i.e., replica data, replicated data). The matching decoder160may then transmit the replicated data to the prediction module115. The prediction module115may be configured to generate a predicted block270using already-encoded blocks (i.e., blocks that have been encoded in a previous cycle) and data that has not been encoded. For example, the prediction module115may be configured to use one or more of a current block240, a previous block245, a corner block250, and an upper block255. The current block240may comprise current, original input data (e.g., pixel data) that has not been encoded (compressed) and the previous block245may comprise replicated data from the matching decoder160. In other words, as the first line buffer circuit210transmits blocks of data to the encoder105, a current (in time) block of data is used to form the current block240and replica data (via the matching decoder160) is used to form the previous block245. The corner block250comprises data that has already been encoded and later decoded (i.e., data has undergone difference encoding by the difference encoder215, has been quantized by the quantizer125, and has undergone reconstruction by a matching decoder160). Similarly, the upper block255comprises decoded data that has already been encoded and later decoded (i.e., data has undergone difference encoding by the difference encoder215, has been quantized by the quantizer125, and has undergone reconstruction by the matching decoder160). According to an exemplary embodiment, the prediction module115forms the predicted block270without using data from the decoder110. According to an exemplary embodiment, the prediction module115may receive the not-yet-encoded pixel data in the form of current block240and use the already-encoded pixel data to make predictions and form the predicted block270. For example, the prediction module115may be configured to use the replicated data (forming the previous block245) from the matching decoder160, the corner block250, and the upper block255to form the predicted block270. The encoder105may further comprise a difference encoder215(i.e., a delta encoder) configured to determine a difference between two data samples and transmit the difference, rather than an original data sample to the transform module120. For example, the difference encoder215may be connected to and configured to receive the predicted block270and the current block240, thereby generating a difference between data from the predicted block270and the original, not-yet-encoded pixel data from the current block240. The difference encoder215may comprise any circuit and/or system suitable for calculating a difference, such as a circuit configured to perform a simple arithmetic subtraction, or any other suitable difference operation, such as calculating a ratio, and the like. 
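A minimal numpy sketch of the difference (delta) encoding described above: each predicted value is subtracted from the co-located original, not-yet-encoded value, and only the residual block is forwarded to the transform stage. The block contents are arbitrary example numbers.

```python
import numpy as np

def difference_encode(current_block, predicted_block):
    # Residual = original (not-yet-encoded) values minus predicted values.
    return np.asarray(current_block, dtype=np.int32) - np.asarray(predicted_block, dtype=np.int32)

residual = difference_encode([[12, 13], [14, 15]], [[10, 13], [15, 15]])
print(residual)   # [[ 2  0] [-1  0]]
```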
The transform module 120 may be configured to generate a transformed output by applying a transform algorithm to the predicted block 270, or to the difference block in a case where the encoder 105 comprises the difference encoder 215. According to an exemplary embodiment, the transform module 120 is configured to apply a fast Walsh-Hadamard transform (FWHT) to the predicted block 270 or to the difference block from the difference encoder 215. The transform module 120 may be realized by hardware, software, or a combination thereof. The quantizer 125 maps input values from a set of values to an output value from a finite, smaller set of values. The quantizer 125 may be connected to the transform module 120 and configured to receive the transformed output. The quantizer 125 may then map the transformed output to a quantized output. The quantizer 125 may comprise any circuit and/or system suitable for performing a quantization function. According to an exemplary embodiment, the encoder 105 may further comprise encoded line buffers 260. The encoded line buffers 260 may be connected to an output terminal of the quantizer 125 and configured to use the quantized output to form the upper block 255 and the corner block 250. The encoder 105 may further comprise a reorder module 220 configured to rearrange the quantized outputs and generate an ordered set (e.g., a vector) of quantized outputs, wherein the quantized outputs are reordered in ascending order. The quantized outputs may be referred to as a plurality of quantized coefficients. The reorder module 220 may be connected to an output of the quantizer 125 and receive the quantized outputs. The entropy encoder 130 converts the ordered set (i.e., the plurality of quantized coefficients) into another domain which represents each quantized coefficient with a smaller number of bits. In general, the entropy encoder 130 maps the values of the quantized coefficients to a stream of bits by utilizing their statistical properties. Some transform coefficients may be quantized to zero, and the values of the non-zero coefficients are coded using a coding scheme, such as variable-length coding (e.g., Huffman coding) or arithmetic coding. According to an exemplary embodiment, the entropy encoder 130 may generate a compressed bitstream using a single table of codes. According to an exemplary embodiment, the entropy encoder 130 may represent each quantized coefficient from the plurality of quantized coefficients as a symbol, wherein the symbol comprises a context, a magnitude (i.e., an exponent), and a mantissa. The symbol may further comprise a sign when the quantized coefficient is a non-zero value. The context in this case is a ‘next’ context. The entropy encoder 130 may use the magnitude to define the length of the encoded mantissa to form the ith coefficient: Ci = (−1)^sign * 2^magnitude * mantissa. In the case of a quantized coefficient having a zero value, the entropy encoder 130 represents the number of consecutive zeros as a symbol. Therefore, the symbol for a quantized coefficient with a zero value comprises a context, the total number of zeros (‘zero_count_magnitude’), and the number of zeros of the mantissa (‘zero_count_mantissa’). The context in this case is a ‘next’ context. The run of zero coefficients is described by: Ci, Ci+1, . . . , Ci+n−1 = 0, where n = 2^zero_count_magnitude * zero_count_mantissa. The number of contexts may be selected according to the particular application, and it may be desired to keep the number of contexts as low as possible.
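One plausible reading (an assumption, not a statement of the encoder's exact bit layout) of the symbol formula above is a floating-point-style split: the magnitude is the exponent of the leading bit of |Ci| and the mantissa holds the remaining bits, which is consistent with the magnitude defining the length of the encoded mantissa. Zero-valued coefficients are not decomposed this way; they are covered by the zero-run symbol described above.

```python
def to_symbol(coeff):
    # Decompose a non-zero quantized coefficient into (sign, magnitude, mantissa)
    # such that coeff == (-1)**sign * 2**magnitude * mantissa.
    sign = 1 if coeff < 0 else 0
    magnitude = abs(coeff).bit_length() - 1      # exponent of the leading bit
    mantissa = abs(coeff) / (1 << magnitude)     # value in [1, 2); 'magnitude' fractional bits
    return sign, magnitude, mantissa

def from_symbol(sign, magnitude, mantissa):
    return (-1) ** sign * (1 << magnitude) * mantissa

sign, magnitude, mantissa = to_symbol(-11)       # -11 = -(2**3 * 1.375)
assert (sign, magnitude, mantissa) == (1, 3, 1.375)
assert from_symbol(sign, magnitude, mantissa) == -11
```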
According to an exemplary embodiment, the entropy encoder may comprise four distinct contexts: a first context, a second context, a third context, and a fourth context. The first context may be a starting context and be associated with the first coefficient from the plurality of quantized coefficients. The second context may be associated with a last coefficient from the plurality of quantized coefficients with a magnitude of zero. The third context may be associated with a last coefficient from the plurality of quantized coefficients with a magnitude of one. The fourth context may be a default context. The entropy encoder 130 may then encode the symbol. In order to reduce the number of symbols, the quantized coefficient may be represented using the magnitude and the mantissa, wherein the sign and the mantissa are encoded separately. For example, and referring to FIG. 4, the entropy encoder 130 may comprise a look-up table containing possible symbols (e.g., E1:E8, Z1:EOB, T1:T8, and M1:M8) according to a statistical probability for each possible symbol, a ‘currently-active’ context, which is one of the four distinct contexts, and a ‘next’ context, which is the context that immediately follows the ‘currently-active’ context. The entropy encoder 130 may be configured to encode the symbol using the look-up table. For example, the look-up table may comprise Huffman codes. The entropy encoder 130 may be configured to encode the symbol using a subset of the Huffman codes, wherein the subset is determined based on the ‘currently-active’ context. Accordingly, each quantized coefficient may be encoded using a single look-up table based on the ‘currently-active’ context, the ‘next’ context, and the magnitude. According to an exemplary embodiment, the entropy encoder 130 may further be configured to count the number of zeros and the number of ones in the plurality of quantized coefficients using a run-length encoding (RLE) algorithm. The encoder 105 may further comprise an output buffer 230 configured to temporarily hold data. For example, the output buffer 230 may be connected to and configured to receive a compressed bitstream from the entropy encoder 130. The output buffer 230 may temporarily hold the compressed bitstream before transmitting it to the interface 155. The encoder 105 may further comprise a bandwidth control circuit 235 configured to selectively increase or decrease the bandwidth of the compressed bitstream. For example, the bandwidth control circuit 235 may increase the bandwidth if the desired quality decreases below a set threshold, and may decrease the bandwidth if the bandwidth exceeds the transmission or storage capabilities of the system 100. The interface circuit 155 transmits data from a transmitting device to a receiving device. For example, the interface circuit 155 may be connected to an output terminal of the encoder 105 and/or the entropy encoding module 130 and receive the compressed bitstream. The interface circuit 155 may be further connected to an input terminal of the decoder 110 and configured to transmit the compressed bitstream to the decoder 110 and/or the decoding module 150. According to an exemplary embodiment, the interface circuit 155 comprises a mobile industry processor interface (MIPI), which is a bi-directional signaling protocol used to transmit data between the imaging system 200 and a host processor (not shown), which contains the decoder 110, using a MIPI D-PHY serial bus.
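The way the ‘currently-active’ context selects a subset of codes and each entry hands over a ‘next’ context can be sketched with a tiny table-driven encoder. The contexts, symbols, and bit strings below are invented placeholders and do not reproduce the codes of FIG. 4.

```python
CODE_TABLE = {
    # (active context, symbol): (code bits, next context) -- placeholder values only
    ("START",     "E1"): ("0",   "DEFAULT"),
    ("START",     "Z1"): ("10",  "LAST_ZERO"),
    ("DEFAULT",   "E1"): ("0",   "DEFAULT"),
    ("DEFAULT",   "Z1"): ("110", "LAST_ZERO"),
    ("LAST_ZERO", "E1"): ("10",  "DEFAULT"),
}

def encode_symbols(symbols, context="START"):
    bits = []
    for symbol in symbols:
        code, context = CODE_TABLE[(context, symbol)]   # subset selected by the active context
        bits.append(code)
    return "".join(bits)

assert encode_symbols(["E1", "Z1", "E1"]) == "011010"
```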
The encoder105may further comprise an output buffer230configured to temporarily hold data. For example, the output buffer230may be connected to and configured to receive a compressed bitstream from the entropy encoder130. The output buffer230may temporarily hold the compressed bitstream before transmitting it to the interface155. The encoder105may further comprise a bandwidth control circuit235configured to selectively increase or decrease the bandwidth of the compressed bitstream. For example, the bandwidth control circuit235may increase the bandwidth if the desired quality decreases below a set threshold, and may decrease the bandwidth if the bandwidth exceeds transmission or storage capabilities of the system100. The interface circuit155transmits data from a transmitting device to a receiving device. For example, the interface circuit155may be connected to an output terminal of the encoder105and/or the encoder module130and receive the compressed bitstream. The interface circuit155may be further connected to an input terminal of the decoder110and configured to transmit the compressed bitstream to the decoder110and/or the decoding module150. According to an exemplary embodiment, the interface circuit155comprises a mobile industry processor interface (MIPI) that is a bi-directional signaling protocol to transmit data between the imaging system200and a host processor (not shown), which contains the decoder110, using a MIPI D-PHY serial bus. The decoder110may be configured to perform various complementary processes of the encoder105, such as decoding, dequantization, inverse transform, and reconstruction. For example, the decoder110may comprise a decoding module150to receive the compressed bitstream and decode/decompress the bitstream, a dequantizer145to receive and dequantize the decoded bitstream, an inverse transform module140to receive and perform an inverse transform on the dequantized data, and a reconstruction module135to receive and reconstruct the transformed data and generate output data that represents the source data. The decoder110may be implemented in a host processor or any suitable host device, such as a device for displaying images and/or video. According to an exemplary embodiment, and referring toFIGS.1and3, the system100generates a frame of image data with the image sensor201, divides the frame of image data into a plurality of blocks using the first line buffer circuit210, and encodes the pixel data with the encoder105. The encoder105generates the predicted block245using at least one of the corner block250, the upper block255, or the previous block. The encoder105then performs difference encoding with the difference encoder215by computing a difference of the values in the current block240and the corresponding original, not-yet-encoded pixel data. The encoder105then transforms the difference block using FWHT for each row and each column of the block. The encoder105then quantizes the transformed block (comprising transformed coefficients) using the quantizer125. The encoder105then reorders the quantized coefficients. The quantized coefficient may be represented by a symbol and the encoder105uses the look-up table to code the symbol. The encoded symbol is then transmitted as a compressed bitstream to the output buffer230, wherein the output buffer230transmits the compressed bitstream to the interface155and/or the decoder110. The decoder110then performs complementary functions to construct the output data, which represents the source data. The system100may comprise the interface155to store and/or transmit the compressed bitstream. The system100further decodes the compressed bitstream to generate output data that represents the source data using the decoder110. The decoder110expands the compressed bitstream using the decoding module150, dequantizes the data using the dequantizer145, performs inverse transformation using the inverse transform module140, and reconstructs the data using the reconstruction module135. According to an exemplary embodiment, and referring toFIGS.3,5A-D, the encoder105uses the replica data comprising replica values from the matching decoder160to make predictions. For example, the encoder105may receive the source data, for example the pixel data, wherein the prediction module115receives the source data and performs intra block prediction comprising generating a first predicted value using a previously-predicted sub-block and generating a second predicted value using the previously-predicted sub-block and a replica value. For example, a first predicted value (e.g., sp0) is predicted using previously-predicted values (e.g., sb, sc, sd, se) (FIG.5A). The encoder105then replaces the first predicted value (sp0) with a first replica value (si0), and the encoder105then predicts a second predicted value (e.g., sp1) using the first replica value (si0) and the previously-predicted values (e.g., sa, sb) (FIG.5B).
The encoder105then predicts a third predicted value (e.g., sp2) using previously-predicted values (e.g., sd, se, sf) and the first replica value (si0) (FIG.5C). The encoder105then replaces the second predicted value (sp1) and the third predicted value (sp2) with a second replica value (si1) and a third replica value (si2), respectively. The encoder105then predicts a fourth predicted value (e.g., sp3) using only the replica values (e.g., si0, si1, si2) (FIG.5D). This method reduces the need for multiple prediction directions, as a single direction can be used, and provides a more accurate prediction. The encoder105then performs difference encoding using the difference encoder215. For example, the difference encoder215then receives the values in the predicted block270, as formed above, and the original, not-yet-encoded pixel values. The difference encoder215then computes the difference between the predicted values and the original, not-yet-encoded values and encodes the difference value. Each predicted value in the predicted block is subtracted from an original, not-yet-encoded value having a same coordinate location within the frame of image data. Since the predicted values are more accurate, the encoded difference value is also more accurate than conventional methods. According to an exemplary embodiment, and referring toFIGS.1and6A-6D, the decoder110operates to perform complementary functions, such as decoding, dequantization, inverse transformation, and reconstruction, to generate output data that represents the source data. To reconstruct the data, the decoder110may perform prediction in a similar manner as the encoder105performs prediction. For example, the decoder110, predicts a first decoded value (sp0) using previously-predicted values (sb, sc, sd, se) (FIG.6A). The decoder110may further correct the first decoded value (sp0) according to a transform coefficient to increase the accuracy of the output data. The decoder110then replaces the first predicted value (sp0) with a first decoded pixel data (sd0), and the decoder110then predicts a second predicted value (e.g., sp1) using the first decoded pixel data (sd0) and previously-predicted values (e.g., sa, sb) (FIG.6B). The decoder110then predicts a third predicted value (e.g., sp2) using previously-predicted values (e.g., sd, se, sf) and the first decoded pixel data (sd0) (FIG.6C). The decoder110then replaces the second predicted value (sp1) and the third predicted value (sp2) with a second decoded pixel data (sd1) and a third decoded pixel data (sd2), respectively. The decoder110then predicts a fourth decoded value (e.g., sp3) using only the decoded pixel data (e.g., sd0, sd1, sd2) (FIG.6D). In the foregoing description, the technology has been described with reference to specific exemplary embodiments. The particular implementations shown and described are illustrative of the technology and its best mode and are not intended to otherwise limit the scope of the present technology in any way. Indeed, for the sake of brevity, conventional manufacturing, connection, preparation, and other functional aspects of the method and system may not be described in detail. Furthermore, the connecting lines shown in the various figures are intended to represent exemplary functional relationships and/or steps between the various elements. Many alternative or additional functional relationships or physical connections may be present in a practical system. The technology has been described with reference to specific exemplary embodiments. 
Various modifications and changes, however, may be made without departing from the scope of the present technology. The description and figures are to be regarded in an illustrative manner, rather than a restrictive one and all such modifications are intended to be included within the scope of the present technology. Accordingly, the scope of the technology should be determined by the generic embodiments described and their legal equivalents rather than by merely the specific examples described above. For example, the steps recited in any method or process embodiment may be executed in any order, unless otherwise expressly specified, and are not limited to the explicit order presented in the specific examples. Additionally, the components and/or elements recited in any apparatus embodiment may be assembled or otherwise operationally configured in a variety of permutations to produce substantially the same result as the present technology and are accordingly not limited to the specific configuration recited in the specific examples. Benefits, other advantages and solutions to problems have been described above with regard to particular embodiments. Any benefit, advantage, solution to problems or any element that may cause any particular benefit, advantage or solution to occur or to become more pronounced, however, is not to be construed as a critical, required or essential feature or component. The terms “comprises”, “comprising”, or any variation thereof, are intended to reference a non-exclusive inclusion, such that a process, method, article, composition or apparatus that comprises a list of elements does not include only those elements recited, but may also include other elements not expressly listed or inherent to such process, method, article, composition or apparatus. Other combinations and/or modifications of the above-described structures, arrangements, applications, proportions, elements, materials or components used in the practice of the present technology, in addition to those not specifically recited, may be varied or otherwise particularly adapted to specific environments, manufacturing specifications, design parameters or other operating requirements without departing from the general principles of the same. The present technology has been described above with reference to an exemplary embodiment. However, changes and modifications may be made to the exemplary embodiment without departing from the scope of the present technology. These and other changes or modifications are intended to be included within the scope of the present technology, as expressed in the following claims.
11943478
DETAILED DESCRIPTION Inventive concepts will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of present inventive concepts to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive. Components from one embodiment may be tacitly assumed to be present/used in another embodiment. The following description presents various embodiments of the disclosed subject matter. These embodiments are presented as teaching examples and are not to be construed as limiting the scope of the disclosed subject matter. For example, certain details of the described embodiments may be modified, omitted, or expanded upon without departing from the scope of the described subject matter. FIG.9illustrates an example of an operating environment of an encoder900that may be used to encode bitstreams as described herein. The encoder900receives video from network902and/or from storage904and encodes the video into bitstreams and transmits the encoded video to decoder906via network908. Storage device904may be part of a storage depository of multi-channel audio signals such as a storage repository of a store or a streaming video service, a separate storage component, a component of a mobile device, etc. The decoder906may be part of a device910having a media player912. The device910may be a mobile device, a set-top device, a desktop computer, and the like. In other embodiments, functionality of the decoder910may be distributed across multiple devices. FIG.10is a block diagram illustrating elements of encoder900configured to encode video frames according to some embodiments of inventive concepts. As shown, encoder900may include a network interface circuit1005(also referred to as a network interface) configured to provide communications with other devices/entities/functions/etc. The encoder900may also include a processor circuit1001(also referred to as a processor) coupled to the network interface circuit1005, and a memory circuit1003(also referred to as memory) coupled to the processor circuit. The memory circuit1003may include computer readable program code that when executed by the processor circuit1001causes the processor circuit to perform operations according to embodiments disclosed herein. According to other embodiments, processor circuit1001may be defined to include memory so that a separate memory circuit is not required. As discussed herein, operations of the encoder900may be performed by processor1001and/or network interface1005. For example, processor1001may control network interface1005to transmit communications to decoder906and/or to receive communications through network interface1002from one or more other network nodes/entities/servers such as other encoder nodes, depository servers, etc. Moreover, modules may be stored in memory1003, and these modules may provide instructions so that when instructions of a module are executed by processor1001, processor1001performs respective operations. FIG.11is a block diagram illustrating elements of decoder906configured to decode video frames according to some embodiments of inventive concepts. 
As shown, decoder906may include a network interface circuit1105(also referred to as a network interface) configured to provide communications with other devices/entities/functions/etc. The decoder906may also include a processor circuit1101(also referred to as a processor) coupled to the network interface circuit1105, and a memory circuit1103(also referred to as memory) coupled to the processor circuit. The memory circuit1103may include computer readable program code that when executed by the processor circuit1101causes the processor circuit to perform operations according to embodiments disclosed herein. According to other embodiments, processor circuit1101may be defined to include memory so that a separate memory circuit is not required. As discussed herein, operations of the decoder906may be performed by processor1101and/or network interface1105. For example, processor1101may control network interface1105to receive communications from encoder900. Moreover, modules may be stored in memory1103, and these modules may provide instructions so that when instructions of a module are executed by processor1101, processor1101performs respective operations. One problem that may occur in the current version of VVC is that given a W×H MIP predicted coding block, where W specifies the width of the coding block and H specifies the height of the coding block, W is required to be equal to or less than MaxTbSizeY and H is required to be equal to or less than MaxTbSizeY, where MaxTbSizeY specifies the maximum transform size. In other words, when W is greater than MaxTbSizeY or H is greater than MaxTbSizeY, the current block can NOT be coded as a MIP predicted block. This restriction of MIP prediction impacts the coding efficiency when a video encoder or decoder has a configuration in which the maximum coding block size is greater than the maximum transform size. The inventive concepts described herein allow the MIP predicted block when the current coding block has a width value that is greater than the maximum transform size or the current block has a height value that is greater than the maximum transform size. Thus, a MIP predicted coding block which has multiple transform blocks is allowed. One advantage that may be achieved is enabling the MIP prediction when the current coding block has a width or height that is greater than the maximum transform size. The benefit improves the coding efficiency by using MIP on coding blocks which have a width or height that is greater than the maximum transform size. An example in VVC has been implemented using VTM6.0 as the reference VVC software. As in the third embodiment, compared to the current software configuration, the maximum transform size is configured to be equal to 32 for this case. The results are as follows:

All Intra, over VTM-6.0 (maximum transform size = 32)
Class      Y        U        V        EncT   DecT
Class A1   −0.04%   0.03%    0.00%    101%   102%
Class A2   0.00%    −0.08%   −0.02%   101%   103%
Class B    0.00%    0.02%    0.04%    101%   98%
Class C    0.00%    0.01%    0.00%    101%   100%
Class E    −0.05%   −0.02%   −0.16%   99%    100%
Overall    −0.02%   0.00%    −0.02%   100%   101%
Class D    0.00%    0.00%    −0.01%   100%   101%
Class F    0.02%    −0.02%   −0.06%   101%   101%

Random Access, over VTM-6.0 (maximum transform size = 32)
Class      Y        U        V        EncT   DecT
Class A1   −0.10%   −0.20%   −0.33%   100%   100%
Class A2   0.00%    −0.10%   −0.03%   100%   100%
Class B    −0.01%   −0.12%   0.00%    100%   100%
Class C    −0.2%    −0.06%   −0.01%   100%   101%
Class E
Overall    −0.03%   −0.11%   −0.08%   100%   100%
Class D    0.01%    −0.07%   0.01%    100%   101%
Class F    0.02%    0.13%    −0.06%   100%   98%
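To make the restriction and its relaxation concrete, the following sketch (Python, illustrative only; the helper names are not taken from any specification) checks the current VVC condition and computes the transform-block tiling that the described concept would use, with the MaxTbSizeY = 32 configuration from the tables above and a 64×64 coding block. The block count assumes the coding-block dimensions are multiples of the transform-block dimensions, as they are for the power-of-two sizes used in VVC.

```python
def mip_allowed_current_vvc(w: int, h: int, max_tb_size_y: int) -> bool:
    """Current VVC rule: both dimensions must fit within the maximum transform size."""
    return w <= max_tb_size_y and h <= max_tb_size_y

def transform_block_tiling(w: int, h: int, max_tb_size_y: int):
    """Transform-block size and count when MIP is allowed on larger coding blocks."""
    ntbw = min(w, max_tb_size_y)
    ntbh = min(h, max_tb_size_y)
    return ntbw, ntbh, (w // ntbw) * (h // ntbh)

MaxTbSizeY = 32
print(mip_allowed_current_vvc(64, 64, MaxTbSizeY))   # False: MIP is currently disallowed
print(transform_block_tiling(64, 64, MaxTbSizeY))    # (32, 32, 4): four MIP-predicted transform blocks
```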
In the description that follows, the term “sample” may be interpreted as “sample value”. For example, a sentence “Derive X from the Y samples” may be interpreted as “Derive X from the Y sample values”. Similarly, a sentence “The X samples are derived by Y” may be interpreted as “The X sample values are derived by Y”. The term “MIP INPUT” can be interpreted as “The extracted reduced boundary bdryred, which is used as the input to the matrix multiplication”. The term “MIP OUTPUT” can be interpreted as “The reduced prediction signal predred, which is the output of the matrix multiplication”. In a first embodiment, a method for video encoding or decoding for a current intra predicted block is provided. The method can preferably be applied for a block which is coded by matrix based intra prediction (MIP). The method may derive the size of the current CU as a width value W and height value H by decoding syntax elements in the bitstream. The method may also determine that the current block is an Intra predicted block from decoding elements in the bitstream. The method determines whether the current CU has a syntax element of mipFlag in the bitstream by checking one or several criteria. In other words, the method determines that the current CU has to encode a syntax element of mipFlag into the bitstream or the current CU has to decode a syntax element of mipFlag from the bitstream by checking one or several criteria. If the method identifies that the current CU has a syntax element in the bitstream, it determines that the current block is a MIP predicted block from decoding elements in the bitstream. The method determines a MIP weight matrix to be used for the current block from a matrix look-up table by using the width and height of the current coding block and the MIP prediction mode of the current coding block. The method derives the maximum transform size MaxTbSizeY from decoding elements in the bitstream. The method determines that the current MIP predicted coding block has one transform block or multiple transform blocks by checking:
If W is equal to or less than MaxTbSizeY and H is equal to or less than the MaxTbSizeY, there is one transform block.
Otherwise, there are multiple transform blocks.
When it is determined that there is one transform block, the method may derive the MIP prediction block by using the determined MIP weight matrix and previously decoded elements in the bitstream. When it is determined that there are multiple transform blocks, the method may derive the first MIP prediction block by using the determined MIP weight matrix and previously decoded elements in the bitstream. The method derives the rest of the prediction blocks by using the determined MIP weight matrix and previously decoded elements in the bitstream and decoded elements in one or several previously decoded transform blocks in the current coding block. The method may derive the current block by using the derived one or several MIP prediction blocks. In a second embodiment, when the method determines that the current block is an Intra predicted block, it determines that the current CU has a syntax element of mipFlag in the bitstream. In other words, if the current block is an Intra predicted block, there is always a syntax element of mipFlag in the bitstream. In a third embodiment, when the method determines that the current block is an Intra predicted block, it determines that the current CU has a syntax element of mipFlag in the bitstream by checking the following criteria: The current CU does NOT have a syntax element of mipFlag in the bitstream if:
a. W is greater than (T_whRatio×H), OR
b. H is greater than (T_whRatio×W).
Otherwise, the current CU has a syntax element of mipFlag in the bitstream. Here, T_whRatio specifies a constant parameter, and as an example T_whRatio is equal to 4.
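As a concrete illustration of the third-embodiment signalling criteria, the following sketch (Python, with illustrative function names only) checks whether a mipFlag syntax element is present for a W×H coding unit, using the example value T_whRatio = 4 given above.

```python
T_WH_RATIO = 4  # example value of the constant parameter T_whRatio

def mip_flag_present(w: int, h: int, t_wh_ratio: int = T_WH_RATIO) -> bool:
    """mipFlag is not signalled for overly elongated coding units (third embodiment)."""
    if w > t_wh_ratio * h:      # criterion a
        return False
    if h > t_wh_ratio * w:      # criterion b
        return False
    return True

print(mip_flag_present(64, 8))   # False: the width is more than 4 times the height
print(mip_flag_present(64, 32))  # True: a mipFlag syntax element is coded in the bitstream
```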
In a fourth embodiment, if the method determines that the current block is an Intra predicted block, it determines that the current CU has a syntax element of mipFlag in the bitstream by checking the following criteria: The current CU does NOT have a syntax element of mipFlag in the bitstream if:
c. W is greater than (T_whRatio×H), OR
d. H is greater than (T_whRatio×W), OR
e. W is greater than a pre-determined threshold T, OR
f. H is greater than a pre-determined threshold T.
Otherwise, the current CU has a syntax element of mipFlag in the bitstream. Here, T_whRatio specifies a constant parameter, and as an example T_whRatio is equal to 4. Also, the threshold T is a constant parameter, and as an example the threshold T is equal to 64. In a fifth embodiment, the method described above can be applied in an encoder and/or decoder of a video or image coding system. In other words, a decoder may execute the method described here by all or a subset of the following steps to decode an intra predicted block in a picture from a bitstream:
1. Derive the size of the current CU as a width value W and height value H by decoding syntax elements in the bitstream.
2. Determine that the current block is an Intra predicted block from decoding elements in the bitstream.
3. Determine whether the current block has a supported MIP predicted block size or not:
a. The current block can NOT be predicted as MIP block if:
i. W is greater than (T_whRatio×H), OR
ii. H is greater than (T_whRatio×W), OR
iii. W is greater than a pre-determined threshold T, OR
iv. H is greater than a pre-determined threshold T.
b. Otherwise, the current block can be predicted as MIP block.
4. If the method determines that the current block can be predicted as a MIP block, it determines that the current block is a MIP predicted block from decoding elements in the bitstream.
5. Determine a prediction mode for the current block from decoding elements in the bitstream.
6. Derive a mipSizeId value from the width value W and height value H of the current CU.
7. Determine a matrix vector to use for the current block from a matrix vector look-up table by using the prediction mode and the mipSizeId value as a table index.
8. Derive a maximum transform size MaxTbSizeY from decoding elements in the bitstream.
9. Determine that the current CU has one transform block or has multiple transform blocks by checking the following criteria:
a. If W is equal to or less than the MaxTbSizeY and H is equal to or less than the MaxTbSizeY, there is one transform block, where the transform block has its width nTbW=W and its height nTbH=H.
b. Otherwise, there are multiple transform blocks, where each transform block has its width nTbW=min(W, MaxTbSizeY) and its height nTbH=min(H, MaxTbSizeY).
10. Determine the original boundary sample values for the current block. The original boundary samples are nTbW samples from the nearest neighboring samples above the current transform block and nTbH samples from the nearest neighboring samples to the left of the current transform block.
11. Determine the size of the reduced boundary bdryred by the mipSizeId value of the current block.
12. Determine the dimension size of the reduced prediction signal predred by the mipSizeId value of the current block.
13. Derive the reduced boundary bdryred from the original boundary samples.
14. Derive the reduced prediction signal predredtemp by matrix multiplication of the matrix vector and the reduced boundary bdryred.
15. Derive the reduced prediction signal predred by using sample value clipping on each sample of the predredtemp.
16. Determine whether or not to apply vertical linear interpolation to the reduced prediction signal predred by the width nTbW and the height nTbH of the current transform block.
17. Determine whether or not to apply horizontal linear interpolation to the reduced prediction signal predred by the width nTbW and the height nTbH of the current transform block.
18. If the decision is to apply both vertical and horizontal linear interpolations,
a. By using the width nTbW and the height nTbH of the current transform block, determine which linear interpolation direction to apply first.
b. If the decision is to first apply vertical linear interpolation,
i. Determine the size of the reduced top boundary bdryredlltop for the vertical linear interpolation by the width nTbW and the height nTbH of the current transform block.
ii. Derive the reduced top boundary bdryredlltop from the original top boundary samples.
c. If the decision is to first apply horizontal linear interpolation,
i. Determine the size of the reduced left boundary bdryredllleft for the horizontal linear interpolation by the width nTbW and the height nTbH of the current transform block.
ii. Derive the reduced left boundary bdryredllleft from the original left boundary samples.
19. Derive a first MIP prediction block pred by generating the sample values at the remaining positions by using linear interpolation.
20. If in step 9 it is determined that there are multiple transform blocks in the current CU, repeat steps 10 to 19 to derive a further MIP prediction block for each remaining transform block in the current CU.
21. Decode the current block by using the derived one or several MIP prediction blocks.
In a sixth embodiment, an example of changes to the current VVC draft text (response to Embodiment 3) is provided. The changes (strikethrough and double underline) to the current VVC draft text (ref JVET-O2001-vE) for the MIP process for one embodiment (embodiment 2) of the current invention are as follows:
7.3.8.5 Coding Unit Syntax
coding_unit x0, y0, cbWidth, cbHeight, cqtDepth, treeType, modeType ) {DescriptorchType = treeType = = DUAL_TREE_CHROMA?
1 : 0if( slice_type != I ∥ sps_ibc_enabled_flag ∥ sps_palette_enabled_flag) {if treeType != DUAL_TREE_CHROMA &&!(( ( cbWidth = = 4 && cbHeight = = 4 ) modeType = = MODE_TYPE_INTRA )&& !sps_ibc_enabled_flag ))cu_skip_flag[ x0 ][ y0 ]ae(v)if cu_skip_flag[ x0 ][ y0 ] = = 0 && slice_type != I&& !( cbWidth = = 4 && cbHeight = = 4 ) && modeType = = MODE_TYPE_ALL )pred_mode_flagae(v)if( (( slice_type = = I && cu_skip_flagl x0 ][ y0 ] = =0 )( slice_type != I && ( CuPredMode[ chType ][ x0 ][ y0 ] != MODE_INTRA ∥( cbWidth = = 4 && cbHeight = = 4 && cu_skip_flag[ x0 ][ y0 ] = = 0 ))) ) &&cbWidth <= 64 && cbHeight <= 64 && modeType != MODE_TYPE_INTER &&sps_ibc_enabled_flag && treeType != DUAL_TREE_CHROMA)pred_mode_ibc_flagae(v)if( ((( slice_type = = ∥ ( cbWidth = = 4 && cbHeight = = 4 ) ∥ sps_ibc_enabled_flag) &&CuPredMode[ x0 ][ y0 ] = = MODE_INTRA ) ∥(slice_type != I && !( cbWidth = = 4 && cbHeight = = 4) && ∥ sps_ibc_enabled_flag&& CuPredMode[ x0 ][ y0 ] != MODE_INTRA ) ) && sps_palette_enabled_flag &&cbWidth <= 64 && cbHeight <= 64 && && cu_skip_flagl x0 ][ y0 ] = = 0 &&modeType != MODE_INTER)pred_mode_plt_flagae(v)}if( CuPredMode[ chType ][ x0 ][ y0 ] = = MODE_INTRA ∥CuPredMode[ chType ][ x0 ][ y0 ] = = MODE_PLT) {if( treeType = = SINGLE_TREE ∥ treeType = = DUAL_TREE_LUMA) {if( pred_mode_plt_flag) {if treeType = = DUAL_TREE_LUMA )palette_coding( x0, y0, cbWidth, cbHeight, 0, 1 )else /* SINGLE_TREE */palette_coding( x0, y0, cbWidth, cbHeight, 0, 3 )} else {if( sps_bdpcm_enabled_flag &&cbWidth <= MaxTsSize && cbHeight <= MaxTsSize )intra_bdpcm_flagae(v)if( intra_bdpcm_flag )intra_bdpcm_dir_flagae(v)} else {if( sps_mip_enabled_flag &&Abs( Log2( cbWidth )•Log2( cbHeight )) <= 2 )intra_mip_flag[ x0 ][ y0 ]ae(v)if( intra_mip_flag[ x0 ][ y0 ])intra_mip_mode[ x0 ][ y0 ]ae(v)else {if( sps_mrl_enabled_flag && ( ( y0 % CtbSizeY ) > 0 ))intra_luma_ref_idx[ x0 ][ y0 ]ae(v)if ( sps_isp_enabled_flag && intra_luma_ref_idx[ x0 ][ y0 ] = = 0 &&( cbWidth <= MaxTbSizeY && cbHeight <= MaxTbSizeY ) &&( cbWidth * cbHeight > MinTbSizeY * MinTbSizeY ))intra_subpartitions_mode_flag[ x0 ][ y0 ]ae(v)if( intra_subpartitions_mode_flag[ x0 ][ y0 ] = = 1 )intra_subpartitions_split _flag[ x0 ][ y0 ]ae(v)if( intra_luma_ref_idx[ x0 ][ y0 ] = = 0 )intra_luma_mpm_flag[x0][y0]ae(v)if( intra_luma_mpm_flag[ x0 ][ y0 ] ) {if( intra_luma_ref_idx[ x0 ][ y0 ] = = 0 )intra_luma_not_planar_flag[ x0 ][ y0 ]ae(v)if( intra_luma_not_planar_flag[ x0 ][ y0 ])intra_luma_mpm_idx[ x0 ][ y0 ]ae(v)} elseintra_luma_mpm_remainder[ x0 ][ y0 ]ae(v)}}}}if (treeType = = SINGLE_TREE ∥ treeType = = DUAL_TREE_CHROMA) &&ChromaArrayType != 0 ) {if ( pred_mode_plt_flag && treeType = = DUAL_TREE_CHROMA )palette_coding( x0, y0, cbWidth / SubWidthC, cbHeight / SubHeightC, 1, 2 )else {if( CclmEnabled )cclm_mode_flagae(v)if( cclm_mode_flag )cclm_mode_idxae(v)elseintra_chroma_pred_modeae(v)}}} else if treeType != DUAL_TREE_CHROMA) { /* MODE_INTER or MODE_IBC */if( cu_skip_flag[ x0 ][ y0 ] = = 0 )general_merge_flag[ x0 ][ y0 ]ae(v)if( general_merge_flag[ x0 ][ y0 ] ) {merge_data( x0, y0, cbWidth, cbHeight, chType )} else if ( CuPredMode[ chType ][ x0 ][ y0 ] = = MODE_IBC ) {mvd_coding( x0, y0, 0, 0 )if( MaxNumIbcMergeCand > l )mvp_l0_flag[ x0 ][ y0 ]ae(v)if( sps_amvr_enabled_flag &&( MvdL0[ x0 ][ y0 ][ 0 ] != 0 ∥ MvdL0[ x0 ][ y0 ][ l ] != 0 )) {amvr_precision_idx[ x0 ][ y0 ]ae(v)}} else {if( slice_type = = B )inter_pred_idc[ x0 ][ y0 ]ae(v)if sps_affine_enabled_flag && cbWidth >= 16 && cbHeight >= 16 ) {inter _affine_flag[ x0 ][ y0 ]ae(v)if( sps_affine_type_flag && inter 
_affine_flag[ x0 ][ y0 ] )cu_affine_type_flag[ x0 ][ y0 ]ae(v)}if( sps_smvd_enabled_flag && !mvd_l1_zero_flag &&inter_pred_idc[ x0 ][ y0 ] = = PRED_BI &&!inter_affine_flag[ x0 ][ y0 ] && RefIdxSymL0 > • 1 && RefIdxSymL1 > • 1 )sym_mvd_flag[ x0 ][ y0 ]ae(v)if( inter_pred_idc[ x0 ][ y0 ] != PRED_L1 ) {if NumRefIdxActive[ 0 ] > 1 && !sym_mvd_flag[ x0 ][ y0 ] )ref_idx_l0[ x0 ][ y0 ]ae(v)mvd_coding( x0, y0, 0, 0 )if( MotionModelIdc[ x0 ][ y0 ] > 0 )mvd_coding( x0, y0, 0, 1 )if(MotionModelIdc[ x0 ][ y0 ] > l )mvd_coding( x0, y0, 0, 2 )mvp_l0_flag[ x0 ][ y0 ]ae(v)} else {MvdL0[ x0 ][ y0 ][ 0 ] = 0MvdL0[ x0 ][ y0 ][ 1 ] = 0}if( inter_pred_idc[ x0 ][ y0 ] != PRED_L0 ) {if( NumRefIdxActive 1 ] > 1 && !sym_mvd_flag[ x0 ][ y0 ] )ref_idx_l1[ x0 ][ y0 ]ae(v)if( mvd_l1_zero_flag && inter_pred_idc[x0 ][ y0 ] = = PRED_BI ) {MvdL1[ x0 ][ y0 ][ 0 ] = 0MvdL1[ x0 ][ y0 ][ 1 ] = 0MvdCpL1[ x0 ][ y0 ][ 0 ][ 0 ] = 0MvdCpL1[ x0 ][ y0 ][ 0 ][ 1 ] = 0MvdCpL1[ x0 ][ y0 ][ 1 ][ 0 ] = 0MvdCpL1[ x0 ][ y0 ][ 1 ][ 1 ] = 0MvdCpL1[ x0 ][ y0 ][ 2 ][ 0 ] = 0MvdCpL1[ x0 ][ y0 ][ 2 ][ 1 ] = 0} else {if sym_mvd_flag x0 ][ y0 ]) {MvdL1[ x0 ][ y0 ][ 0 ] = • MvdL0[ x0 ][ y0 ][ 0 ]MvdL1[ x0 ][ y0 ][ 1 ] = • MvdL0[ x0 ][ y0 ][ 1 ]} elsemvd_coding( x0, y0, 1, 0 )if MotionModelIdc[ x0 ][ y0 ] > 0 )mvd_coding( x0, y0, 1, 1 )if(MotionModelIdc x0 ][ y0 ] > 1 )mvd_coding( x0, y0, 1, 2 )mvp_l1_flag[ x0 ][ y0 ]ae(v)}} else {MvdL1[ x0 ][ y0 ][ 0 ] = 0MvdL1[ x0 ][ y0 ][ 1 ] = 0}if( ( sps_amvr_enabled_flag && inter_affine_flag[ x0 ][ y0 ] = = 0 &&( MvdL0[ x0 ][ y0 ][ 0 ] != 0 ∥ MvdL0[ x0 ][ y0 ][ 1 ] != 0 ∥MvdL1[ x0 ][ y0 ][ 0 ] != 0 ∥ MvdL1[ x0 ][ y0 ][ 1 ] != 0)) ∥( sps_affine_amvr_enabled_flag && inter_affine_flag[ x0 ][ y0 ] = = 1 &&( MvdCpL0[ x0 ][ y0 ][ 0 ] [ 0 ] !=0 ∥ MvdCpL0[ x0 ][ y0 ][ 0 ] [ l ] != 0 ∥MvdCpL1[ x0 ][ y0 ][ 0 ][ 0 ] != 0 ∥ MvdCpL1[ x0 ][ y0 ][ 0 ][ 1 ] != 0 ∥MvdCpL0[ x0 ][ y0 ][ 1 ][ 0 ] != 0 ∥ MvdCpL0[ x0 ][ y0 ][ 1 ][ 1 ] != 0 ∥MvdCpL1[ x0 ][ y0 ][ 1 ][ 0 ] != 0 ∥ MvdCpL1[ x0 ][ y0 ][ l ][ 1 ] != 0 ∥MvdCpL0[ x0 ][ y0 ][ 2 ][ 0 ] != 0 ∥ MvdCpL0[ x0 ][ y0 ][ 2 ][ 1 ] != 0 ∥MvdCpL1[ x0 ][ y0 ][ 2 ] [ 0 ] != 0 ∥ MvdCpL1[ x0 ][ y0 ][ 2 ][ 1 ] != 0 ) ) {amvr_flag[ x0 ][ y0 ]ae(v)if( amvr_flag[ x0 ][ y0 ] )amvr_precision_idx[ x0 ][ y0 ]ae(v)}if( sps_bcw_enabled_flag && inter_pred_idc[ x0 ][ y0 ] == PRED_BI &&luma_weight_l0_flag[ref_idx_l0 [ x0 ][ y0 ] ] = = 0 &&luma_weight_l1_flag[ref_idx_l1 [ x0 ][ y0 ] ] = = 0 &&chroma_weight_l0_flag[ ref_idx_l0 [ x0 ][ y0 ] ] = = 0 &&chroma_weight_l1_flag[ ref_idx_l1 [ x0 ][ y0 ] ] = = 0 &&cbWidth * cbHeight >= 256 )bcw_idx[ x0 ][ y0 ]ae(v)}}if( CuPredMode[ chType ][ x0 ][ y0 ] != MODE_INTRA && !pred_mode_plt_flag &&general_merge_flag[ x0 ][ y0 ] = = 0 )cu_cbfif( cu_cbf) {if(CuPredMode[chType][ x0 ][ y0 ] = = MODE_INTER && sps_sbt_enabled_flag&& !ciip flag[ x0 ][ yo ] && !MergeTriangleFlag[ x0 ][ y0 ] ) {if( cbWidth <= MaxSbtSize && cbHeight <= MaxSbtSize ) {allowSbtVerH = cbWidth >= 8allowSbtVerQ = cbWidth >= 16allowSbtHorH = cbHeight >= 8allowSbtHorQ = cbHeight >= 16if( allowSbtVer ∥ allowSbtHorH ∥ allowSbtVerQ ∥ allowSbtHorQ )cu_sbt_flag}if( cu_sbt_flag ) {if( (allowSbtVerH ∥ allowSbtHorH ) && ( allowSbtVerQ ∥ allowSbtHorQ) )cu_sbt_quad_flagif( ( cu_sbt_quad_flag && ( allowSbtVerQ && allowSbtHorQ ) ∥(! cu_sbt_quad_flag && allowSbtVerH && allowSbtHorH ) )cu_sbt_horizontal_flagae(v)cu_sbt_pos_flagae(v)}}LfnstDcOnly = 1LfnstZeroOutSigCoeffFlag = 1transform_tree( x0, y0, cbWidth, cbHeight, treeType )lfhstWidth = (treeType = = DUAL_TREE_CHROMA ) ? 
cbWidth / SubWidthC: cbWidthlfhstHeight = (treeType = = DUAL_TREE_CHROMA) ? cbHeight / SubHeightC: cbHeightif( Min( lfnstWidth, lfhstHeight) >= 4 && sps_lfhst_enabled_flag = = 1 &&CuPredMode[ chType ][ x0 ][ y0 ] = = MODE_INTRA &&IntraSubPartitionsSplitType = = ISP_NO_SPLIT &&( !intra_mip_flag[ x0 ][ y0 ] ∥ Min( lfnstWidth, lfnstHeight ) >= 16 ) &&tu_mts_idx[ x0 ][ y0 ] = = 0 && Max( cbWidth, cbHeight ) <= MaxTbSizeY) {if( LfhstDcOnly = = 0 && LfnstZeroOutSigCoeffFlag = = 1 )lfnst_idx[ x0 ][ y0 ]ae(v)}} The inventive concepts described above allow the MIP predicted coding block which has multiple transform blocks to be used. Coding efficiency may be improved when the maximum transform size is configured to a value that is smaller than the maximum intra coding block size. Turning now toFIG.12, operations of a decoder shall now be described. In block1201, the processing circuitry1101may determine a width and a height of a current block of a bitstream based on syntax elements in the bitstream. In block1203, the processing circuitry1101may determine whether the current block is an intra predicted block. This may be done from decoding elements in the bitstream. In block1205, the processing circuitry1101may, responsive to the current block being an intra predicted block, determine whether the intra predicted block is a matrix based intra prediction (MIP) predicted block. In one embodiment, in determining whether the intra predicted block is a MIP predicted block, the processing circuitry1101may determine whether a syntax element indicating intra predicted block is the MIP predicted block based on at least one criteria. In another embodiment, in determining whether a syntax element indicating intra predicted block is the MIP predicted block, the processing circuitry1101may determine whether a syntax element indicating intra predicted block is the MIP predicted block based on the current block being the intra predicted block. In other embodiments, in determining whether a syntax element indicating intra predicted block is the MIP predicted block, the processing circuitry1101may determine that the syntax element indicating intra predicted block is the MIP predicted block based on the width being less than a first parameter times the height, or the height being less than the first parameter times the width. In a further embodiment, determining that the syntax element indicating intra predicted block is the MIP predicted block is further based on the width being less than a first threshold or the height being less than the first threshold. In block1207, the processing circuitry1101may, responsive to the current block being a MIP predicted block, determine whether the MIP predicted block has one transform block or multiple transform blocks. Turning toFIG.14, in determining whether the MIP predicted block has one transform block or multiple transform blocks, the processing circuitry1101in block1401may derive a maximum transform size from decoding elements in the bitstream. In block1403, the processing circuitry1101may determine whether the width value is less than or equal to a maximum transform size derived from decoding elements in the bitstream and whether the height value is less than or equal to the maximum transform size.
Responsive to the width value being less than or equal to the maximum transform size and the height value being less than or equal to the maximum transform size, the processing circuitry1101may determine in block1405that there is one transform block with a width nTbW equal to the width value and a height nTbH equal to the height value. Responsive to the width value being greater than the maximum transform size or the height value being greater than the maximum transform size, the processing circuitry1101may determine that there are multiple transform blocks, each having a width nTbW equal to a minimum of the width value and the maximum transform size and a height nTbH equal to a minimum of the height value and the maximum transform size. In block1209, the processing circuitry1101may determine a MIP weight matrix to be used to decode the current block based on the width and height of the current block and a MIP prediction mode of the current block. In block1211, the processing circuitry1101may, responsive to determining that the MIP predicted block has one transform block, derive the MIP predicted block based on the MIP weight matrix and previously decoded elements in the bitstream. In block1213, the processing circuitry1101may, responsive to determining that the MIP block has multiple transform blocks, derive a first MIP predicted block based on the MIP weight matrix and previously decoded elements in the bitstream. In block1215, the processing circuitry1101may derive remaining MIP predicted blocks based on the MIP weight matrix and previously decoded elements in the bitstream and decoded elements in at least one decoded transform block of the current block. In block1217, the processing circuitry1101may output the MIP predicted block or the first MIP predicted block and remaining predicted blocks for subsequent processing by the decoder. Turning now toFIG.13, in another embodiment, the processing circuitry1101of the decoder may, in block1301, derive a size of a current coding block of a picture from a bitstream as a width value and a height value based on decoding syntax elements in the bitstream. In block1303, the processing circuitry1101may determine whether the current coding block is an intra predicted block from decoding elements in the bitstream. Responsive to the current coding block being an intra predicted block, the processing circuitry1101may, in block1305, determine whether the current coding block can be predicted as a MIP predicted block size. In block1307, the processing circuitry1101may, responsive to the current coding block being an intra predicted block, determine whether the current coding block is a MIP predicted block from decoding elements in the bitstream. In block1309, the processing circuitry1101may determine the prediction mode for the current coding block and a value of mipSizeId (described above) based on the width value and the height value, wherein the width value and the height value specify the width and height of the transform block as a table index. In block1311, the processing circuitry1101may determine whether the current coding block has one transform block or has multiple transform blocks. In one embodiment illustrated inFIG.14, the processing circuitry1101in block1401may derive a maximum transform size from decoding elements in the bitstream. For example, the maximum transform size may be a parameter in the bitstream.
In block1403, the processing circuitry1101may determine whether the current block has one transform block or has multiple transform blocks by determining whether the width value is less than or equal to a maximum transform size derived from decoding elements in the bitstream and whether the height value is less than or equal to the maximum transform size. Responsive to the width value being less than or equal to the maximum transform size and the height value being less than or equal to the maximum transform size, the processing circuitry1101in block1405may determine that there is one transform block with a width nTbW equal to the width and a height nTbH equal to the height. Responsive to the width value being greater than the maximum transform size or the height value being greater than the maximum transform size, the processing circuitry1101in block1407may determine that there are multiple transform blocks, each having a width nTbW equal to a minimum of the width value and the maximum transform size and a height nTbH equal to a minimum of the height value and the maximum transform size. Returning toFIG.13, in block1313, the processing circuitry1101may determine a matrix vector to use for the current coding block from a matrix vector look-up table by using the prediction mode for the current coding block and the value based on the width value and the height value of the current coding block as a table index. In block1315, the processing circuitry1101may determine original boundary sample values for the current transform (or prediction) block. In one embodiment, the processing circuitry1101may determine the original boundary sample values by determining nTbW samples from the nearest neighboring samples above the current transform block and nTbH samples from the nearest neighboring samples to the left of the current transform block. In block1317, the processing circuitry1101may determine a size of a reduced boundary bdryred by the value based on the width value and the height value of the current coding block. In block1319, the processing circuitry1101may determine a dimension size of a reduced prediction signal predred by the value based on the width value and the height value of the current coding block. In block1321, the processing circuitry1101may derive the reduced boundary bdryred from the original boundary samples. In block1323, the processing circuitry1101may derive a reduced prediction signal predredtemp by matrix multiplication of the matrix vector and the reduced boundary bdryred. In block1325, the processing circuitry1101may derive the reduced prediction signal predred by using sample value clipping on each sample of the predredtemp. In block1327, the processing circuitry1101may determine whether to apply vertical linear interpolation to the reduced prediction signal predred and whether to apply horizontal linear interpolation to the reduced prediction signal predred. In determining whether to apply vertical linear interpolation to the reduced prediction signal predred and whether to apply horizontal linear interpolation to the reduced prediction signal predred, the processing circuitry1101may determine whether to apply the vertical linear interpolation to the reduced prediction signal predred by the width nTbW and the height nTbH of the current transform block and whether to apply the horizontal linear interpolation to the reduced prediction signal predred by the width nTbW and the height nTbH of the current transform block.
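The matrix multiplication and clipping of blocks 1323 and 1325 can be illustrated at the level of array shapes with the short sketch below; the weight-matrix contents, the reduced boundary values, the reduced sizes, and the 10-bit sample range are placeholder assumptions, and only the multiply-then-clip structure, followed by the interpolation decided in blocks 1327 and 1329, is taken from the description above.

```python
import numpy as np

bit_depth = 10                                                      # assumed sample bit depth
bdry_red = np.array([512, 520, 498, 505, 600, 610, 590, 580])       # toy reduced boundary
weight_matrix = np.full((16, bdry_red.size), 1.0 / bdry_red.size)   # toy MIP matrix vector

pred_red_temp = weight_matrix @ bdry_red                             # block 1323: matrix multiplication
pred_red = np.clip(np.rint(pred_red_temp), 0, (1 << bit_depth) - 1)  # block 1325: sample value clipping

# pred_red would subsequently be up-sampled to nTbW x nTbH by horizontal
# and/or vertical linear interpolation, as decided in blocks 1327 and 1329.
print(pred_red.reshape(4, 4))
```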
In block1329, the processing circuitry110may apply interpolation based on the determination of whether to apply vertical linear interpolation to the reduced prediction signal predredand whether to apply horizontal linear interpolation to the reduced prediction signal predred. Turning toFIG.15, responsive to applying vertical linear interpolation first in block1501, the processing circuitry1101may determine the size of the reduced top boundary bdryredlltopfor the vertical linear interpolation by the width nTbW and the height nTbH of the current transform block in block1503and derive the reduced top boundary bdryredlltopfrom the original top boundary samples in block1505. Responsive to the applying horizontal linear interpolation first in block1501, the processing circuitry1101may determine the size of the reduced left boundary bdryredllleftfor the horizontal linear interpolation by the width nTbW and the height nTbH of the current transform block in block1507and derive the reduced left boundary bdryredllleftfrom the original left boundary samples in block1509. Returning toFIG.13, in block1331, the processing circuitry1101may determine one of a size of a reduced top boundary bdryredlltopand a size of the reduced left boundary bdryredllleftbased on interpolation applied. In block1333, the processing circuitry1101may determine one of reduced top boundary bdryredlltopand reduced left boundary bdryredllleftbased on the interpolation applied. In block1335, the processing circuitry1101may derive a MIP prediction block pred by generating the sample values at remaining positions by using linear interpolation. In block1337, the processing circuitry1101may decode the current block by using each of the MIP prediction blocks. Listing of Embodiments: Embodiment 1. A method performed by a processor of a decoder, the method comprising: determining (1201) a width and a height of a current block of a bitstream based on syntax elements in the bitstream;determining (1203) whether the current block is an intra predicted block;responsive to the current block being an intra predicted block, determining (1205) whether the intra predicted block is a matrix based intra prediction, MIP, predicted block;responsive to the current block being a MIP predicted block, determining (1209) a MIP weight matrix to be used to decode the current block based on the width and height of the current block and a MIP prediction mode of the current block;determining (1207) whether the MIP predicted block has one transform block or multiple transform blocks;responsive to determining that the MIP predicted block has one transform block:deriving (1211) the MIP predicted block based on the MIP weight matrix and previously decoded elements in the bitstream; andresponsive to determining that the MIP block has multiple transform blocks:deriving (1213) a first MIP predicted block based on the MIP weight matrix and previously decoded elements in the bitstream; andderiving (1215) remaining MIP predicted blocks based on the MIP weight matrix and previously decoded elements in the bitstream and decoded elements in at least one decoded transform block of the current block; andoutputting (1217) the MIP predicted block or the first MIP predicted block and remaining predicted blocks for subsequent processing by the decoder. Embodiment 2. 
The method of Embodiment 1 wherein determining whether the intra predicted block is a MIP predicted block comprises determining whether a syntax element indicating intra predicted block is the MIP predicted block based on at least one criteria. Embodiment 3. The method of Embodiment 2 wherein determining whether a syntax element indicating intra predicted block is the MIP predicted block comprises determining whether a syntax element indicating intra predicted block is the MIP predicted block based on the current block being the intra predicted block. Embodiment 4. The method of Embodiment 2 wherein determining whether a syntax element indicating intra predicted block is the MIP predicted block comprises determining that the syntax element indicating intra predicted block is the MIP predicted block based on:the width being less that a first parameter times the height; orthe height being less that the first parameter times the width. Embodiment 5. The method of Embodiment 4 wherein determining that the syntax element indicating intra predicted block is the MIP predicted block is further based on:the width being less than a first threshold; orthe height being less than the first threshold. Embodiment 6. The method of any of Embodiments 1˜4 wherein determining (1209) whether the MIP predicted block has one transform block or multiple transform blocks comprises:deriving a maximum transform size from decoding elements in the bitstream; anddetermining the MIP predicted block has one transform block responsive to the width being equal to or less than the maximum transform size and the height being equal to or less than the maximum transform size. Embodiment 7. A method performed by a processor of a decoder, the method comprising:deriving (1301) a size of a current coding block of a picture from a bitstream as a width value and a height value based on decoding syntax elements in the bitstream;determining (1303) whether the current coding block is an intra predicted block from decoding elements in the bitstream;responsive to the current coding block being an intra predicted block, determining (1305) whether the current coding block can be predicted as a MIP predicted block size;responsive to determining that the current coding block can be predicted as a MIP block, determining (1307) whether the current coding block is a MIP predicted block from decoding elements in the bitstream;determining (1313) a matrix vector to use for the current coding block from a matrix vector look-up table by using a prediction mode for the current coding block and a value based on the width value and the height value of the current coding block as a table index;determining (1311) whether the current coding block has one transform block or has multiple transform blocks;for the one transform block or each of the multiple transform blocks:determining (1315) original boundary sample values for the current transform block;determining (1317) a size of a reduced boundary bdryredby the value based on the width value and the height value of the current coding block;determining (1319) a dimension size of a reduced prediction signal predredby the value based on the width value and the height value of the current coding block;deriving (1321) the reduced boundary bdryredfrom the original boundary samples;deriving (1323) a reduced prediction signal predredtempby matrix multiplication of the matrix vector and the reduced boundary bdryred;deriving (1325) the reduced prediction signal predredby using sample value clipping on each sample 
of the predredtemp,determining (1327) whether to apply vertical linear interpolation to the reduced prediction signal predredand whether to apply horizontal linear interpolation to the reduced prediction signal predred;applying (1329) interpolation based on the determination of whether to apply vertical linear interpolation to the reduced prediction signal predredand whether to apply horizontal linear interpolation to the reduced prediction signal predred; anddetermining (1331) one of a size of a reduced top boundary bdryredlltopand a size of the reduced left boundary bdryredllleftbased on interpolation applied;determining (1333) one of reduced top boundary bdryredlltopand reduced left boundary bdryredllleftbased on the interpolation applied; andderiving (1335) a MIP prediction block pred by generating the sample values at remaining positions by using linear interpolation; anddecoding (1337) the current coding block by using each of the MIP prediction blocks. Embodiment 8. The method of Embodiment 7, wherein determining (1313) whether the current block has one transform block or has multiple transform block comprises:deriving (1401) a maximum transform size from decoding elements in the bitstream;determining (1403) whether the width value is less than or equal to a maximum transform size derived from decoding elements in the bitstream and whether the height value is less than or equal to the maximum transform size;responsive to the width value being less than or equal to the maximum transform size and the height value being less than or equal to the maximum transform size, determining (1405) that there is one transform block with a width nTbW equal to the width and a height nTbH equal to the height;responsive to the width value being greater than the maximum transform size and the height value being greater than the maximum transform size, determining (1407) that there are multiple transform blocks, each having a width nTbW equal to a minimum of the width value and the maximum transform size and a height nTbH equal to a minimum of the height value and the maximum transform size. Embodiment 9. The method of Embodiment 8 wherein determining (1315) the original boundary sample values comprises determining nTbW samples from nearest neighboring samples to above of the current transform block and nTbH samples from the nearest neighboring samples to left of the current transform block. Embodiment 10. The method of any of Embodiments 8-9 wherein determining (1327) whether to apply the vertical linear interpolation to the reduced prediction signal predredand whether to apply the horizontal linear interpolation to the reduced prediction signal predredcomprises determining whether to apply the vertical linear interpolation to the reduced prediction signal predredby the width nTbW and the height nTbH of the current transform block and whether to apply the horizontal linear interpolation to the reduced prediction signal predredby the width nTbW and the height nTbH of the current transform block. Embodiment 11. 
The method of Embodiment 10, further comprising:responsive to the decision being to apply both vertical and horizontal linear interpolations, determining (1501) which linear interpolation direction to apply first;responsive to the decision being to first apply vertical linear interpolation:determining (1503) the size of the reduced top boundary bdryredlltopfor the vertical linear interpolation by the width nTbW and the height nTbH of the current transform block; andderiving (1505) the reduced top boundary bdryredlltopfrom the original top boundary samples.responsive to the decision being to first apply horizontal linear interpolation:determining (1507) the size of the reduced left boundary bdryredllleftfor the horizontal linear interpolation by the width nTbW and the height nTbH of the current transform block; andderiving (1509) the reduced left boundary bdryredllleftfrom the original left boundary samples. Embodiment 12. The method of any of Embodiments 7 to 11 further comprising determining (1309) the prediction mode for the current coding block and the value based on the width value and the height value as a table index. Embodiment 13. A decoder for a communication network, the decoder (906) comprising:a processor (1101); andmemory (1103) coupled with the processor, wherein the memory comprises instructions that when executed by the processor cause the processor to perform operations according to any of Embodiments 1-12. Embodiment 14. A computer program comprising computer-executable instructions configured to cause a device to perform the method according to any one of Embodiments 1-12, when the computer-executable instructions are executed on a processor (1101) comprised in the device. Embodiment 15. A computer program product comprising a computer-readable storage medium (1103), the computer-readable storage medium having computer-executable instructions configured to cause a device to perform the method according to any one of Embodiments 1-12. when the computer-executable instructions are executed on a processor (1101) comprised in the device. Embodiment 16. An apparatus comprising:at least one processor (1101);memory (1103) communicatively coupled to the processor, said memory comprising instructions executable by the processor, which cause the processor to perform operations comprising operations according to any of Embodiments 1-12. 17. 
A decoder adapted to perform operations comprising:determining (1201) a width and a height of a current block of a bitstream based on syntax elements in the bitstream;determining (1203) whether the current block is an intra predicted block;responsive to the current block being an intra predicted block, determining (1205) whether the intra predicted block is a matrix based intra prediction, MIP, predicted block;responsive to the current block being a MIP predicted block, determining (1209) a MIP weight matrix to be used to decode the current block based on the width and height of the current block and a MIP prediction mode of the current block;determining (1207) whether the MIP predicted block has one transform block or multiple transform blocks;responsive to determining that the MIP predicted block has one transform block:deriving (1211) the MIP predicted block based on the MIP weight matrix and previously decoded elements in the bitstream; andresponsive to determining that the MIP block has multiple transform blocks:deriving (1213) a first MIP predicted block based on the MIP weight matrix and previously decoded elements in the bitstream;deriving (1215) remaining MIP predicted blocks based on the MIP weight matrix and previously decoded elements in the bitstream and decoded elements in at least one decoded transform block of the current block; andoutputting (1217) the MIP predicted block or the first MIP predicted block and remaining predicted blocks for subsequent processing by the decoder. Embodiment 18. The decoder of Embodiment 17 wherein in determining whether the intra predicted block is a MIP predicted block, the decoder is adapted to perform operations comprising determining whether a syntax element indicating intra predicted block is the MIP predicted block based on at least one criteria. Embodiment 19. The decoder of Embodiment 18 wherein in determining whether a syntax element indicating intra predicted block is the MIP predicted block, the decoder is adapted to perform operations comprising determining whether a syntax element indicating intra predicted block is the MIP predicted block based on the current block being the intra predicted block. Embodiment 20. The decoder of Embodiment 18 wherein in determining whether a syntax element indicating intra predicted block is the MIP predicted block, the decoder is adapted to perform operations comprising determining that the syntax element indicating intra predicted block is the MIP predicted block based on:the width being less that a first parameter times the height; orthe height being less that the first parameter times the width. Embodiment 21. The decoder of Embodiment 20 wherein determining that the syntax element indicating intra predicted block is the MIP predicted block is further based on:the width being less than a first threshold; orthe height being less than the first threshold. Embodiment 22. The decoder of any of Embodiments 1-20 wherein in determining (1209) whether the MIP predicted block has one transform block or multiple transform blocks, the decoder is adapted to perform operations comprising:deriving a maximum transform size from decoding elements in the bitstream; anddetermining the MIP predicted block has one transform block responsive to the width being equal to or less than the maximum transform size and the height being equal to or less than the maximum transform size. Embodiment 23. 
Embodiment 23. A decoder adapted to perform operations comprising:
deriving (1301) a size of a current coding block of a picture from a bitstream as a width value and a height value based on decoding syntax elements in the bitstream;
determining (1303) whether the current coding block is an intra predicted block from decoding elements in the bitstream;
responsive to the current coding block being an intra predicted block, determining (1305) whether the current coding block can be predicted as a MIP predicted block size;
responsive to determining that the current coding block can be predicted as a MIP block, determining (1307) whether the current coding block is a MIP predicted block from decoding elements in the bitstream;
determining (1313) a matrix vector to use for the current coding block from a matrix vector look-up table by using a prediction mode for the current coding block and a value based on the width value and the height value of the current coding block as a table index;
determining (1311) whether the current coding block has one transform block or has multiple transform blocks;
for the one transform block or each of the multiple transform blocks:
determining (1315) original boundary sample values for the current transform block;
determining (1317) a size of a reduced boundary bdry_red by the value based on the width value and the height value of the current coding block;
determining (1319) a dimension size of a reduced prediction signal pred_red by the value based on the width value and the height value of the current coding block;
deriving (1321) the reduced boundary bdry_red from the original boundary samples;
deriving (1323) a reduced prediction signal pred_red_temp by matrix multiplication of the matrix vector and the reduced boundary bdry_red;
deriving (1325) the reduced prediction signal pred_red by using sample value clipping on each sample of the pred_red_temp;
determining (1327) whether to apply vertical linear interpolation to the reduced prediction signal pred_red and whether to apply horizontal linear interpolation to the reduced prediction signal pred_red;
applying (1329) interpolation based on the determination of whether to apply vertical linear interpolation to the reduced prediction signal pred_red and whether to apply horizontal linear interpolation to the reduced prediction signal pred_red;
determining (1331) one of a size of a reduced top boundary bdry_redII_top and a size of the reduced left boundary bdry_redII_left based on the interpolation applied;
determining (1333) one of the reduced top boundary bdry_redII_top and the reduced left boundary bdry_redII_left based on the interpolation applied; and
deriving (1335) a MIP prediction block pred by generating the sample values at remaining positions by using linear interpolation; and
decoding (1337) the current block by using each of the MIP prediction blocks.
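The chain of steps in Embodiment 23 (reduced boundary, matrix multiplication, clipping, then linear interpolation up to the transform block size) can be sketched end to end as below. This is a toy illustration with a random stand-in for the MIP weight matrix, not the matrices or the exact derivation of any specification; the sizes, names and the interpolation order are assumptions.

```python
import numpy as np

def mip_sketch(top, left, weight_matrix, n_tbw, n_tbh, red_size=4, pred_red_dim=4, bit_depth=8):
    # 1) reduced boundary bdry_red from the original top and left samples
    def reduce_line(line, size):
        g = len(line) // size
        return [int(np.mean(line[k * g:(k + 1) * g])) for k in range(size)]
    bdry_red = np.array(reduce_line(top, red_size) + reduce_line(left, red_size))

    # 2) reduced prediction: matrix multiplication followed by sample value clipping
    pred_red_temp = weight_matrix @ bdry_red
    pred_red = np.clip(pred_red_temp, 0, (1 << bit_depth) - 1)
    pred_red = pred_red.reshape(pred_red_dim, pred_red_dim)

    # 3) upsample to nTbW x nTbH by linear interpolation (horizontal, then
    #    vertical here; a real codec picks the order from the block shape)
    xs = np.linspace(0, pred_red_dim - 1, n_tbw)
    ys = np.linspace(0, pred_red_dim - 1, n_tbh)
    horiz = np.array([np.interp(xs, np.arange(pred_red_dim), row) for row in pred_red])
    full = np.array([np.interp(ys, np.arange(pred_red_dim), col) for col in horiz.T]).T
    return np.rint(full).astype(int)

rng = np.random.default_rng(0)
top = list(rng.integers(0, 255, 8))
left = list(rng.integers(0, 255, 8))
W = rng.uniform(0, 0.2, (16, 8))          # toy stand-in for a MIP weight matrix
print(mip_sketch(top, left, W, n_tbw=8, n_tbh=8).shape)  # (8, 8)
```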
Embodiment 24. The decoder of Embodiment 23, wherein determining (1313) whether the current block has one transform block or has multiple transform blocks comprises:
deriving (1401) a maximum transform size from decoding elements in the bitstream;
determining (1403) whether the width value is less than or equal to a maximum transform size derived from decoding elements in the bitstream and whether the height value is less than or equal to the maximum transform size;
responsive to the width value being less than or equal to the maximum transform size and the height value being less than or equal to the maximum transform size, determining (1405) that there is one transform block with a width nTbW equal to the width and a height nTbH equal to the height; and
responsive to the width value being greater than the maximum transform size and the height value being greater than the maximum transform size, determining (1407) that there are multiple transform blocks, each having a width nTbW equal to a minimum of the width value and the maximum transform size and a height nTbH equal to a minimum of the height value and the maximum transform size.

Embodiment 25. The decoder of Embodiment 24, wherein in determining (1315) the original boundary sample values, the decoder is adapted to perform operations comprising determining nTbW samples from the nearest neighboring samples above the current transform block and nTbH samples from the nearest neighboring samples to the left of the current transform block.

Embodiment 26. The decoder of any of Embodiments 23-25, wherein in determining (1327) whether to apply the vertical linear interpolation to the reduced prediction signal pred_red and whether to apply the horizontal linear interpolation to the reduced prediction signal pred_red, the decoder is adapted to perform operations comprising determining whether to apply the vertical linear interpolation to the reduced prediction signal pred_red by the width nTbW and the height nTbH of the current transform block and whether to apply the horizontal linear interpolation to the reduced prediction signal pred_red by the width nTbW and the height nTbH of the current transform block.

Embodiment 27. The decoder of Embodiment 26, wherein the decoder is adapted to perform further operations comprising:
responsive to the decision being to apply both vertical and horizontal linear interpolations, determining (1501) which linear interpolation direction to apply first;
responsive to the decision being to first apply vertical linear interpolation:
determining (1503) the size of the reduced top boundary bdry_redII_top for the vertical linear interpolation by the width nTbW and the height nTbH of the current transform block; and
deriving (1505) the reduced top boundary bdry_redII_top from the original top boundary samples; and
responsive to the decision being to first apply horizontal linear interpolation:
determining (1507) the size of the reduced left boundary bdry_redII_left for the horizontal linear interpolation by the width nTbW and the height nTbH of the current transform block; and
deriving (1509) the reduced left boundary bdry_redII_left from the original left boundary samples.

Embodiment 28. The decoder of any of Embodiments 23 to 27, wherein the decoder is adapted to perform further operations comprising determining (1309) the prediction mode for the current coding block and the value based on the width value and the height value as a table index.
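Embodiments 12 and 28 use a value based on the width and height, together with the prediction mode, as a table index. One plausible size-class mapping, patterned after common MIP size classes but stated here purely as an assumption, is:

```python
# Sketch: map the block size to a size class that, together with the MIP
# prediction mode, indexes a matrix-vector look-up table. The thresholds below
# (4x4 / one side of 4 or 8x8 / larger) are an illustrative assumption, not the
# claimed mapping.
def size_class(width, height):
    if width == 4 and height == 4:
        return 0
    if width == 4 or height == 4 or (width == 8 and height == 8):
        return 1
    return 2

def matrix_index(mode, width, height):
    # the pair (size class, prediction mode) acts as the table index
    return (size_class(width, height), mode)

print(matrix_index(5, 8, 8))    # (1, 5)
print(matrix_index(2, 16, 32))  # (2, 2)
```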
Abbreviations

HEVC: High Efficiency Video Coding
JVET: Joint Video Exploratory Team
VVC: Versatile Video Coding
ITU-T: International Telecommunications Union-Telecommunication Standardization Sector
MPEG: Moving Picture Experts Group
CU: Coding Unit
MIP: Matrix based Intra Prediction

REFERENCES

JVET-O2001-vE: Versatile Video Coding (Draft 6); B. Bross, J. Chen, S. Liu

Additional explanation is provided below. Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description. Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure. The term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein. In the above description of various embodiments of present inventive concepts, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of present inventive concepts.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which present inventive concepts belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. When an element is referred to as being “connected”, “coupled”, “responsive”, or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected”, “directly coupled”, “directly responsive”, or variants thereof to another element, there are no intervening elements present. Like numbers refer to like elements throughout. Furthermore, “coupled”, “connected”, “responsive”, or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of present inventive concepts. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification. As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but does not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. Furthermore, as used herein, the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia,” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation. Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. 
These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s). These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of present inventive concepts may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module” or variants thereof. It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated, and/or blocks/operations may be omitted without departing from the scope of inventive concepts. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows. Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present inventive concepts. All such variations and modifications are intended to be included herein within the scope of present inventive concepts. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the examples of embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of present inventive concepts. Thus, to the maximum extent allowed by law, the scope of present inventive concepts are to be determined by the broadest permissible interpretation of the present disclosure including the examples of embodiments and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
11943479
MODE FOR DISCLOSURE Some embodiments of the present disclosure are described in detail with reference to the accompanying drawings. A detailed description to be disclosed along with the accompanying drawings are intended to describe some embodiments of the present disclosure and are not intended to describe a sole embodiment of the present disclosure. The following detailed description includes more details in order to provide full understanding of the present disclosure. However, those skilled in the art will understand that the present disclosure may be implemented without such more details. In some cases, in order to avoid that the concept of the present disclosure becomes vague, known structures and devices are omitted or may be shown in a block diagram form based on the core functions of each structure and device. Although most terms used in the present disclosure have been selected from general ones widely used in the art, some terms have been arbitrarily selected by the applicant and their meanings are explained in detail in the following description as needed. Thus, the present disclosure should be understood with the intended meanings of the terms rather than their simple names or meanings. Specific terms used in the following description have been provided to help understanding of the present disclosure, and the use of such specific terms may be changed in various forms without departing from the technical sprit of the present disclosure. For example, signals, data, samples, pictures, frames, blocks and the like may be appropriately replaced and interpreted in each coding process. In the present description, a “processing unit” refers to a unit in which an encoding/decoding process such as prediction, transform and/or quantization is performed. Further, the processing unit may be interpreted into the meaning including a unit for a luma component and a unit for a chroma component. For example, the processing unit may correspond to a block, a coding unit (CU), a prediction unit (PU) or a transform unit (TU). In addition, the processing unit may be interpreted into a unit for a luma component or a unit for a chroma component. For example, the processing unit may correspond to a coding tree block (CTB), a coding block (CB), a PU or a transform block (TB) for the luma component. Further, the processing unit may correspond to a CTB, a CB, a PU or a TB for the chroma component. Moreover, the processing unit is not limited thereto and may be interpreted into the meaning including a unit for the luma component and a unit for the chroma component. In addition, the processing unit is not necessarily limited to a square block and may be configured as a polygonal shape having three or more vertexes. Furthermore, in the present description, a pixel is called a sample. In addition, using a sample may mean using a pixel value or the like. FIG.1shows an example of a video coding system as an embodiment to which the present disclosure is applied. The video coding system may include a source device10and a receive device20. The source device10can transmit encoded video/image information or data to the receive device20in the form of a file or streaming through a digital storage medium or a network. The source device10may include a video source11, an encoding apparatus12, and a transmitter13. The receive device20may include a receiver, a decoding apparatus22and a renderer23. 
The encoding apparatus12may be called a video/image encoding apparatus and the decoding apparatus20may be called a video/image decoding apparatus. The transmitter13may be included in the encoding apparatus12. The receiver21may be included in the decoding apparatus22. The renderer23may include a display and the display may be configured as a separate device or an external component. The video source can acquire a video/image through video/image capturing, combining or generating process. The video source may include a video/image capture device and/or a video/image generation device. The video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured videos/images, and the like. The video/image generation device may include, for example, a computer, a tablet, a smartphone, and the like and (electronically) generate a video/image. For example, a virtual video/image can be generated through a computer or the like and, in this case, a video/image capture process may be replaced with a related data generation process. The encoding apparatus12can encode an input video/image. The encoding apparatus12can perform a series of procedures such as prediction, transform and quantization for compression and coding efficiency. Encoded data (encoded video/image information) can be output in the form of a bitstream. The transmitter13can transmit encoded video/image information or data output in the form of a bitstream to the receiver of the receive device in the form of a file or streaming through a digital storage medium or a network. The digital storage medium may include various storage media such as a USB, an SD, a CD, a DVD, Blueray, an HDD, and an SSD. The transmitter13may include an element for generating a media file through a predetermined file format and an element for transmission through a broadcast/communication network. The receiver21can extract a bitstream and transmit the bitstream to the decoding apparatus22. The decoding apparatus22can decode a video/image by performing a series of procedures such as inverse quantization, inverse transform and prediction corresponding to operation of the encoding apparatus12. The renderer23can render the decoded video/image. The rendered video/image can be display through a display. FIG.2is a schematic block diagram of an encoding apparatus which encodes a video/image signal as an embodiment to which the present disclosure is applied. The encoding apparatus100may correspond to the encoding apparatus12ofFIG.1. An image partitioning unit110can divide an input image (or a picture or a frame) input to the encoding apparatus100into one or more processing units. For example, the processing unit may be called a coding unit (CU). In this case, the coding unit can be recursively segmented from a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree binary-tree (QTBT) structure. For example, a single coding unit can be segmented into a plurality of coding units with a deeper depth based on the quad-tree structure and/or the binary tree structure. In this case, the quad-tree structure may be applied first and then the binary tree structure may be applied. Alternatively, the binary tree structure may be applied first. A coding procedure according to the present disclosure can be performed based on a final coding unit that is no longer segmented. 
In this case, a largest coding unit may be directly used as the final coding unit or the coding unit may be recursively segmented into coding units with a deeper depth and a coding unit having an optimal size may be used as the final coding unit as necessary based on coding efficiency according to image characteristics. Here, the coding procedure may include procedures such as prediction, transform and reconstruction which will be described later. Alternatively, the processing unit may further include a prediction unit (PU) or a transform unit (TU). In this case, the prediction unit and the transform unit can be segmented or partitioned from the aforementioned final coding unit. The prediction unit may be a unit of sample prediction and the transform unit may be a unit of deriving a transform coefficient and/or a unit of deriving a residual signal from a transform coefficient. A unit may be interchangeably used with the term “block” or “area”. Generally, an M×N block represents a set of samples or transform coefficients in M columns and N rows. A sample can generally represent a pixel or a pixel value and may represent only a pixel/pixel value of a luma component or only a pixel/pixel value of a chroma component. The sample can be used as a term corresponding to a picture (image), a pixel or a pel. The encoding apparatus100may generate a residual signal (a residual block or a residual sample array) by subtracting a predicted signal (a predicted block or a predicted sample array) output from an inter-prediction unit180or an intra-prediction unit185from an input video signal (an original block or an original sample array), and the generated residual signal is transmitted to the transform unit120. In this case, a unit which subtracts the predicted signal (predicted block or predicted sample array) from the input video signal (original block or original sample array) in the encoder100may be called a subtractor115, as shown. A predictor can perform prediction on a processing target block (hereinafter referred to as a current block) and generate a predicted block including predicted samples with respect to the current block. The predictor can determine whether intra-prediction or inter-prediction is applied to the current block or units of CU. The predictor can generate various types of information about prediction, such as prediction mode information, and transmit the information to an entropy encoding unit190as described later in description of each prediction mode. Information about prediction can be encoded in the entropy encoding unit190and output in the form of a bitstream. The intra-prediction unit185can predict a current block with reference to samples in a current picture. Referred samples may neighbor the current block or may be separated therefrom according to a prediction mode. In intra-prediction, prediction modes may include a plurality of nondirectional modes and a plurality of directional modes. The nondirectional modes may include a DC mode and a planar mode, for example. The directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes according to a degree of minuteness of prediction direction. However, this is exemplary and a number of directional prediction modes equal to or greater than 65 or equal to or less than 33 may be used according to settings. The intra-prediction unit185may determine a prediction mode to be applied to the current block using a prediction mode applied to neighbor blocks. 
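As a small numerical illustration of the subtraction described above (the subtractor forms the residual block by subtracting the predicted sample array from the original sample array):

```python
import numpy as np

# Sketch: the subtractor forms a residual block by subtracting the predicted
# sample array from the original sample array; the encoder then transforms
# and quantizes this residual.
original = np.array([[120, 121], [119, 118]])
predicted = np.array([[118, 120], [119, 117]])   # e.g. from intra or inter prediction
residual = original - predicted
print(residual)   # [[2 1]
                  #  [0 1]]
```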
The inter-prediction unit180can derive a predicted block with respect to the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. Here, to reduce the quantity of motion information transmitted in an inter-prediction mode, motion information can be predicted in units of block, subblock or sample based on correlation of motion information between a neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter-prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter-prediction, neighboring blocks may include a spatial neighboring block present in a current picture and a temporal neighboring block present in a reference picture. The reference picture including the reference block may be the same as or different from the reference picture including the temporal neighboring block. The temporal neighboring block may be called a collocated reference block or a collocated CU (colCU) and the reference picture including the temporal neighboring block may be called a collocated picture (colPic). For example, the inter-prediction unit180may form a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive a motion vector and/or a reference picture index of the current block. Inter-prediction can be performed based on various prediction modes, and in the case of a skip mode and a merge mode, the inter-prediction unit180can use motion information of a neighboring block as motion information of the current block. In the case of the skip mode, a residual signal may not be transmitted differently from the merge mode. In the case of a motion vector prediction (MVP) mode, the motion vector of the current block can be indicated by using a motion vector of a neighboring block as a motion vector predictor and signaling a motion vector difference. A predicted signal generated through the inter-prediction unit180or the intra-prediction unit185can be used to generate a reconstructed signal or a residual signal. The transform unit120can generate transform coefficients by applying a transform technique to a residual signal. For example, the transform technique may include at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loeve Transform), GBT (Graph-Based Transform) and CNT (Conditionally Non-linear Transform). Here, GBT refers to transform obtained from a graph representing information on relationship between pixels. CNT refers to transform obtained based on a predicted signal generated using all previously reconstructed pixels. Further, the transform process may be applied to square pixel blocks having the same size or applied to non-square blocks having variable sizes. A quantization unit130may quantize transform coefficients and transmit the quantized transform coefficients to the entropy encoding unit190, and the entropy encoding unit190may encode a quantized signal (information about the quantized transform coefficients) and output the encoded signal as a bitstream. The information about the quantized transform coefficients may be called residual information. 
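The distinction drawn above between the merge/skip modes (motion information reused from a neighboring block) and the MVP mode (motion vector predictor plus signaled motion vector difference) can be sketched as follows; the data structures are illustrative only.

```python
# Sketch: motion information handling for merge/skip vs. MVP mode.
# In merge mode the current block copies a candidate's motion vector and
# reference picture index; in MVP mode the decoder adds a signaled motion
# vector difference (MVD) to the selected motion vector predictor.
def merge_motion(candidate_list, merge_idx):
    mv, ref_idx = candidate_list[merge_idx]
    return mv, ref_idx

def mvp_motion(mvp_list, mvp_idx, mvd, ref_idx):
    px, py = mvp_list[mvp_idx]
    dx, dy = mvd
    return (px + dx, py + dy), ref_idx

candidates = [((4, -2), 0), ((0, 0), 1)]
print(merge_motion(candidates, 0))                  # ((4, -2), 0)
print(mvp_motion([(4, -2), (0, 0)], 0, (1, 3), 0))  # ((5, 1), 0)
```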
The quantization unit130may rearrange the quantized transform coefficients in the form of a block into the form of a one-dimensional vector based on a coefficient scanning order and generate information about the quantized transform coefficients based on the quantized transform coefficients in the form of a one-dimensional vector. The entropy encoding unit190can execute various encoding methods such as exponential Golomb, CAVLC (context-adaptive variable length coding) and CABAC (context-adaptive binary arithmetic coding), for example. The entropy encoding unit190may encode information necessary for video/image reconstruction (e.g., values of syntax elements and the like) along with or separately from the quantized transform coefficients. Encoded information (e.g., video/image information) may be transmitted or stored in the form of a bitstream in network abstraction layer (NAL) unit. The bitstream may be transmitted through a network or stored in a digital storage medium. Here, the network may include a broadcast network and/or a communication network and the digital storage medium may include various storage media such as a USB, an SD, a CD, a DVD, Blueray, an HDD and an SSD. A transmitter (not shown) which transmits the signal output from the entropy encoding unit190and/or a storage (not shown) which stores the signal may be configured as internal/external elements of the encoding apparatus100, and the transmitter may be a component of the entropy encoding unit190. The quantized transform coefficients output from the quantization unit130can be used to generate a predicted signal. For example, a residual signal can be reconstructed by applying inverse quantization and inverse transform to the quantized transform coefficients through an inverse quantization unit140and an inverse transform unit150in the loop. An adder155can add the reconstructed residual signal to the predicted signal output from the inter-prediction unit180or the intra-prediction unit185such that a reconstructed signal (reconstructed picture, reconstructed block or reconstructed sample array) can be generated. When there is no residual with respect to a processing target block as in a case in which the skip mode is applied, a predicted block can be used as a reconstructed block. The adder155may also be called a reconstruction unit or a reconstructed block generator. The generated reconstructed signal can be used for intra-prediction of the next processing target block in the current picture or used for inter-prediction of the next picture through filtering which will be described later. A filtering unit160can improve subjective/objective picture quality by applying filtering to the reconstructed signal. For example, the filtering unit160can generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture and transmit the modified reconstructed picture to a decoded picture buffer170. The various filtering methods may include, for example, deblocking filtering, sample adaptive offset (SAO), adaptive loop filtering (ALF), and bilateral filtering. The filtering unit160can generate various types of information about filtering and transmit the information to the entropy encoding unit190as will be described later in description of each filtering method. Information about filtering may be encoded in the entropy encoding unit190and output in the form of a bitstream. 
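A minimal sketch of two operations described above, quantizing transform coefficients and rearranging the two-dimensional block into a one-dimensional vector along a scanning order, is given below; the simple rounding quantizer and the diagonal scan are illustrative assumptions rather than the exact scaling and scan of any particular standard.

```python
import numpy as np

def quantize(coeffs, q_step):
    # simple rounding quantizer: level = round(coefficient / step)
    return np.rint(coeffs / q_step).astype(int)

def diagonal_scan(block):
    # rearrange an N x N block of levels into a 1-D vector along anti-diagonals
    n = block.shape[0]
    order = []
    for s in range(2 * n - 1):
        for y in range(n):
            x = s - y
            if 0 <= x < n:
                order.append(block[y, x])
    return order

coeffs = np.array([[40.0, -9.0], [6.0, 1.0]])
levels = quantize(coeffs, q_step=4.0)
print(levels)                  # [[10 -2] [ 2  0]]
print(diagonal_scan(levels))   # [10, -2, 2, 0]
```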
The modified reconstructed picture transmitted to the decoded picture buffer170can be used as a reference picture in the inter-prediction unit180. Accordingly, the encoding apparatus can avoid mismatch between the encoding apparatus100and the decoding apparatus and improve encoding efficiency when inter-prediction is applied. The decoded picture buffer170can store the modified reconstructed picture such that the modified reconstructed picture is used as a reference picture in the inter-prediction unit180. FIG.3is a schematic block diagram of a decoding apparatus which performs decoding of a video signal as an embodiment to which the present disclosure is applied. The decoding apparatus200ofFIG.3corresponds to the decoding apparatus22ofFIG.1. Referring toFIG.3, the decoding apparatus200may include an entropy decoding unit210, an inverse quantization unit220, an inverse transform unit230, an adder235, a filtering unit240, a decoded picture buffer (DPB)250, an inter-prediction unit260, and an intra-prediction unit265. The inter-prediction unit260and the intra-prediction unit265may be collectively called a predictor. That is, the predictor can include the inter-prediction unit180and the intra-prediction unit185. The inverse quantization unit220and the inverse transform unit230may be collectively called a residual processor. That is, the residual processor can include the inverse quantization unit220and the inverse transform unit230. The aforementioned entropy decoding unit210, inverse quantization unit220, inverse transform unit230, adder235, filtering unit240, inter-prediction unit260and intra-prediction unit265may be configured as a single hardware component (e.g., a decoder or a processor) according to an embodiment. Further, the decoded picture buffer250may be configured as a single hardware component (e.g., a memory or a digital storage medium) according to an embodiment. When a bitstream including video/image information is input, the decoding apparatus200can reconstruct an image through a process corresponding to the process of processing the video/image information in the encoding apparatus100ofFIG.2. For example, the decoding apparatus200can perform decoding using a processing unit applied in the encoding apparatus100. Accordingly, a processing unit of decoding may be a coding unit, for example, and the coding unit can be segmented from a coding tree unit or a largest coding unit according to a quad tree structure and/or a binary tree structure. In addition, a reconstructed video signal decoded and output by the decoding apparatus200can be reproduced through a reproduction apparatus. The decoding apparatus200can receive a signal output from the encoding apparatus100ofFIG.2in the form of a bitstream, and the received signal can be decoded through the entropy decoding unit210. For example, the entropy decoding unit210can parse the bitstream to derive information (e.g., video/image information) necessary for image reconstruction (or picture reconstruction). For example, the entropy decoding unit210can decode information in the bitstream based on a coding method such as exponential Golomb, CAVLC or CABAC and output syntax element values necessary for image reconstruction and quantized values of transform coefficients with respect to residual. 
More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using decoding target syntax element information and decoding information of neighboring and decoding target blocks or information on symbols/bins decoded in a previous stage, predicts bin generation probability according to the determined context model and performs arithmetic decoding of bins to generate a symbol corresponding to each syntax element value. Here, the CABAC entropy decoding method can update the context model using information on symbols/bins decoded for the next symbol/bin context model after the context model is determined. Information about prediction among the information decoded in the entropy decoding unit210can be provided to the predictor (inter-prediction unit260and the intra-prediction unit265) and residual values on which entropy decoding has been performed in the entropy decoding unit210, that is, quantized transform coefficients, and related parameter information can be input to the inverse quantization unit220. Further, information about filtering among the information decoded in the entropy decoding unit210can be provided to the filtering unit240. Meanwhile, a receiver (not shown) which receives a signal output from the encoding apparatus100may be additionally configured as an internal/external element of the decoding apparatus200or the receiver may be a component of the entropy decoding unit210. The inverse quantization unit220can inversely quantize the quantized transform coefficients to output transform coefficients. The inverse quantization unit220can rearrange the quantized transform coefficients in the form of a two-dimensional block. In this case, rearrangement can be performed based on the coefficient scanning order in the encoding apparatus100. The inverse quantization unit220can perform inverse quantization on the quantized transform coefficients using a quantization parameter (e.g., quantization step size information) and acquire transform coefficients. The inverse transform unit230inversely transforms the transform coefficients to obtain a residual signal (residual block or residual sample array). The predictor can perform prediction on a current block and generate a predicted block including predicted samples with respect to the current block. The predictor can determine whether intra-prediction or inter-prediction is applied to the current block based on the information about prediction output from the entropy decoding unit210and determine a specific intra/inter-prediction mode. The intra-prediction unit265can predict the current block with reference to samples in a current picture. The referred samples may neighbor the current block or may be separated from the current block according to a prediction mode. In intra-prediction, prediction modes may include a plurality of nondirectional modes and a plurality of directional modes. The intra-prediction265may determine a prediction mode applied to the current block using a prediction mode applied to neighboring blocks. The inter-prediction unit260can derive a predicted block with respect to the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. 
Here, to reduce the amount of motion information transmitted in the inter-prediction mode, the motion information can be predicted in units of block, subblock or sample based on correlation of the motion information between a neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter-prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter-prediction, neighboring blocks may include a spatial neighboring block present in a current picture and a temporal neighboring block present in a reference picture. For example, the inter-prediction unit260may form a motion information candidate list based on neighboring blocks and derive the motion vector and/or the reference picture index of the current block based on received candidate selection information. Inter-prediction can be performed based on various prediction modes and the information about prediction may include information indicating the inter-prediction mode for the current block. The adder235can generate a reconstructed signal (reconstructed picture, reconstructed block or reconstructed sample array) by adding the obtained residual signal to the predicted signal (predicted block or predicted sample array) output from the inter-prediction unit260or the intra-prediction unit265. When there is no residual with respect to the processing target block as in a case in which the skip mode is applied, the predicted block may be used as a reconstructed block. The adder235may also be called a reconstruction unit or a reconstructed block generator. The generated reconstructed signal can be used for intra-prediction of the next processing target block in the current picture or used for inter-prediction of the next picture through filtering which will be described later. The filtering unit240can improve subjective/objective picture quality by applying filtering to the reconstructed signal. For example, the filtering unit240can generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture and transmit the modified reconstructed picture to a decoded picture buffer250. The various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filtering, and bilateral filtering. The modified reconstructed picture transmitted to the decoded picture buffer250can be used as a reference picture by the inter-prediction unit260. In the present description, embodiments described in the filtering unit160, the inter-prediction unit180and the intra-prediction unit185of the encoding apparatus100can be applied to the filtering unit240, the inter-prediction unit260and the intra-prediction unit265of the decoding apparatus equally or in a corresponding manner. FIG.4is a configuration diagram of a content streaming system as an embodiment to which the present disclosure is applied. The content streaming system to which the present disclosure is applied may include an encoding server410, a streaming server420, a web server430, a media storage440, a user equipment450, and multimedia input devices460. The encoding server410serves to compress content input from multimedia input devices such as a smartphone, a camera and a camcorder into digital data to generate a bitstream and transmit the bitstream to the streaming server420. 
As another example, when the multimedia input devices460such as a smartphone, a camera and a camcorder directly generate bit streams, the encoding server410may be omitted. The bitstream may be generated by an encoding method or a bitstream generation method to which the present disclosure is applied and the streaming server420can temporarily store the bitstream in the process of transmitting or receiving the bitstream. The streaming server420transmits multimedia data to the user equipment450based on a user request through the web server430and the web server430serves as a medium that informs a user of services. When the user sends a request for a desired service to the web server430, the web server430delivers the request to the streaming server420and the streaming server420transmits multimedia data to the user. Here, the content streaming system may include an additional control server, and in this case, the control server serves to control commands/responses between devices in the content streaming system. The streaming server420may receive content from the media storage440and/or the encoding server410. For example, when content is received from the encoding server410, the streaming server420can receive the content in real time. In this case, the streaming server420may store bit streams for a predetermined time in order to provide a smooth streaming service. Examples of the user equipment450may include a cellular phone, a smartphone, a laptop computer, a digital broadcast terminal, a PDA (personal digital assistant), a PMP (portable multimedia player), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, a smart glass and an HMD (head mounted display)), a digital TV, a desktop computer, a digital signage, etc. Each server in the content streaming system may be operated as a distributed server, and in this case, data received by each server can be processed in a distributed manner. FIG.5shows embodiments to which the present disclosure is applicable,FIG.5Ais a diagram for describing a block segmentation structure according to QT (Quad Tree),FIG.5Bis a diagram for describing a block segmentation structure according to BT (Binary Tree),FIG.5Cis a diagram for describing a block segmentation structure according to TT (Ternary Tree),FIG.5Dis a diagram for describing a block segmentation structure according to AT (Asymmetric Tree). In video coding, a single block can be segmented based on QT. Further, a single subblock segmented according to QT can be further recursively segmented using QT. A leaf block that is no longer segmented according to QT can be segmented using at least one of BT, TT and AT. BT may have two types of segmentation: horizontal BT (2N×N, 2N×N); and vertical BT (N×2N, N×2N). TT may have two types of segmentation: horizontal TT (2N×1/2N, 2N×N, 2N×1/2N); and vertical TT (1/2N×2N, N×2N, 1/2N×2N). AT may have four types of segmentation: horizontal-up AT (2N×1/2N, 2N×3/2N); horizontal-down AT (2N×3/2N, 2N×1/2N); vertical-left AT (1/2N×2N, 3/2N×2N); and vertical-right AT (3/2N×2N, 1/2N×2N). Each type of BT, TT and AT can be further recursively segmented using BT, TT and AT. FIG.5Ashows an example of QT segmentation. A block A can be segmented into four subblocks A0, A1, A2and A3according to QT. The subblock A1can be further segmented into four subblocks B0, B1, B2and B3according to QT. FIG.5Bshows an example of BT segmentation. 
The block B3 that is no longer segmented according to QT can be segmented into vertical BT (C0 and C1) or horizontal BT (D0 and D1). Each subblock such as the block C0 can be further recursively segmented into horizontal BT (E0 and E1) or vertical BT (F0 and F1). FIG. 5C shows an example of TT segmentation. The block B3 that is no longer segmented according to QT can be segmented into vertical TT (C0, C1 and C2) or horizontal TT (D0, D1 and D2). Each subblock such as the block C1 can be further recursively segmented into horizontal TT (E0, E1 and E2) or vertical TT (F0, F1 and F2). FIG. 5D shows an example of AT segmentation. The block B3 that is no longer segmented according to QT can be segmented into vertical AT (C0 and C1) or horizontal AT (D0 and D1). Each subblock such as the block C1 can be further recursively segmented into horizontal AT (E0 and E1) or vertical AT (F0 and F1). Meanwhile, BT, TT and AT segmentation may be used in a combined manner. For example, a subblock segmented according to BT may be segmented according to TT or AT. Further, a subblock segmented according to TT may be segmented according to BT or AT. A subblock segmented according to AT may be segmented according to BT or TT. For example, each subblock may be segmented into vertical BT after horizontal BT segmentation, or each subblock may be segmented into horizontal BT after vertical BT segmentation. In this case, finally segmented shapes are identical although segmentation orders are different. Further, when a block is segmented, a block search order can be defined in various manners. In general, search is performed from left to right and top to bottom, and block search may mean the order of determining whether each segmented subblock will be additionally segmented, an encoding order of subblocks when the subblocks are no longer segmented, or a search order when a subblock refers to information of other neighboring blocks. Transform may be performed on processing units (or transform blocks) segmented according to the segmentation structures as shown in FIGS. 5A to 5D, and particularly, segmentation may be performed in a row direction and a column direction and a transform matrix may be applied. According to an embodiment of the present disclosure, different transform types may be used according to the length of a processing unit (or transform block) in the row direction or column direction. Transform is applied to residual blocks in order to decorrelate the residual blocks as much as possible, concentrate coefficients on a low frequency and generate a zero tail at the end of a block. A transform part in JEM software includes two principal functions (core transform and secondary transform). Core transform is composed of discrete cosine transform (DCT) and discrete sine transform (DST) transform families applied to all rows and columns of a residual block. Thereafter, secondary transform may be additionally applied to a top left corner of the output of core transform. Similarly, inverse transform may be applied in the order of inverse secondary transform and inverse core transform. First, inverse secondary transform can be applied to a top left corner of a coefficient block. Then, inverse core transform is applied to rows and columns of the output of inverse secondary transform. Core transform or inverse core transform may be referred to as primary transform or inverse primary transform.
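The separable core transform applied to all rows and columns of a residual block, followed by a secondary transform on the top-left corner of its output, can be written compactly in matrix form. The sketch below uses an orthonormal DCT-II construction and an arbitrary orthogonal 4×4 rotation as a stand-in for the secondary transform; both are assumptions for illustration.

```python
import numpy as np

def dct2_matrix(n):
    # orthonormal DCT-II basis: rows are the basis functions T_i(j)
    m = np.array([[np.cos(np.pi * i * (2 * j + 1) / (2 * n)) for j in range(n)] for i in range(n)])
    m *= np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def core_transform(residual, row_t, col_t):
    # separable 2-D transform: column transform applied to the columns and
    # row transform applied to the rows of the residual block
    return col_t @ residual @ row_t.T

rng = np.random.default_rng(1)
residual = rng.integers(-32, 32, (8, 8)).astype(float)
T = dct2_matrix(8)
coeffs = core_transform(residual, T, T)

# secondary transform sketch: applied only to the top-left 4x4 corner of the
# core-transform output (an arbitrary orthogonal rotation as a stand-in)
R = np.linalg.qr(rng.normal(size=(4, 4)))[0]
coeffs[:4, :4] = R @ coeffs[:4, :4]
print(coeffs.shape)  # (8, 8)
```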
FIGS. 6 and 7 show embodiments to which the present disclosure is applied. FIG. 6 is a schematic block diagram of a transform and quantization unit 120/130, and an inverse quantization and inverse transform unit 140/150, in the encoding apparatus 100, and FIG. 7 is a schematic block diagram of an inverse quantization and inverse transform unit 220/230 in the decoding apparatus 200. Referring to FIG. 6, the transform and quantization unit 120/130 may include a primary transform unit 121, a secondary transform unit 122 and a quantization unit 130. The inverse quantization and inverse transform unit 140/150 may include an inverse quantization unit 140, an inverse secondary transform unit 151 and an inverse primary transform unit 152. Referring to FIG. 7, the inverse quantization and inverse transform unit 220/230 may include an inverse quantization unit 220, an inverse secondary transform unit 231 and an inverse primary transform unit 232. In the present disclosure, transform may be performed through a plurality of stages. For example, two stages of primary transform and secondary transform may be applied as shown in FIG. 6, or more than two transform stages may be used according to algorithms. Here, primary transform may be referred to as core transform. The primary transform unit 121 can apply primary transform to a residual signal. Here, primary transform may be predefined as a table in an encoder and/or a decoder. The secondary transform unit 122 can apply secondary transform to a primarily transformed signal. Here, secondary transform may be predefined as a table in the encoder and/or the decoder. In an embodiment, non-separable secondary transform (NSST) may be conditionally applied as secondary transform. For example, NSST is applied only to intra-prediction blocks and may have a transform set applicable per prediction mode group. Here, a prediction mode group can be set based on symmetry with respect to a prediction direction. For example, prediction mode 52 and prediction mode 16 are symmetrical based on prediction mode 34 (diagonal direction), and thus one group can be generated and the same transform set can be applied thereto. Here, when transform for prediction mode 52 is applied, input data is transposed and then transform is applied because the transform set of prediction mode 52 is the same as that of prediction mode 16. In the case of the planar mode and the DC mode, there is no symmetry with respect to directions and thus they have respective transform sets, and a corresponding transform set may be composed of two transforms. Each transform set may be composed of three transforms for the remaining directional modes. The quantization unit 130 can perform quantization on a secondarily transformed signal. The inverse quantization and inverse transform unit 140/150 performs the reverse of the aforementioned procedure, and redundant description is omitted. FIG. 7 is a schematic block diagram of the inverse quantization and inverse transform unit 220/230 in the decoding apparatus 200. Referring to FIG. 7, the inverse quantization and inverse transform unit 220/230 may include the inverse quantization unit 220, the inverse secondary transform unit 231 and the inverse primary transform unit 232. The inverse quantization unit 220 obtains transform coefficients from an entropy-decoded signal using quantization step size information. The inverse secondary transform unit 231 performs inverse secondary transform on the transform coefficients. Here, inverse secondary transform refers to inverse transform of the secondary transform described in FIG. 6.
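The prediction-mode symmetry described above (for example, mode 52 sharing the transform set of mode 16 about the diagonal mode 34, with the input transposed) suggests a selection rule of the following shape; the set numbering and the transpose convention are assumptions for illustration.

```python
# Sketch: pick a secondary-transform set using the symmetry of directional
# modes about the diagonal mode 34, transposing the input for the mirrored
# modes. Planar (0) and DC (1) get their own sets.
DIAGONAL = 34

def nsst_set(intra_mode):
    if intra_mode in (0, 1):              # planar / DC: no directional symmetry
        return intra_mode, False
    if intra_mode <= DIAGONAL:
        return intra_mode, False          # use the set of the mode itself
    # modes beyond the diagonal reuse the set of the mirrored mode, with transpose
    mirrored = 2 * DIAGONAL - intra_mode
    return mirrored, True

print(nsst_set(16))   # (16, False)
print(nsst_set(52))   # (16, True)  -- same set as mode 16, input transposed
```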
The inverse primary transform unit 232 performs inverse primary transform on the inversely secondarily transformed signal (or block) and obtains a residual signal. Here, inverse primary transform refers to inverse transform of the primary transform described in FIG. 6. In addition to DCT-2 and the 4×4 DST-7 applied in HEVC, adaptive multiple transform or explicit multiple transform (AMT or EMT) is used for residual coding for inter- and intra-coded blocks. A plurality of transforms selected from the DCT/DST families is used in addition to the transforms in HEVC. The transform matrices newly introduced in JEM are DST-7, DCT-8, DST-1, and DCT-5. The following Table 1 shows the basis functions of the selected DST/DCT.

TABLE 1
Transform Type: Basis function T_i(j), i, j = 0, 1, . . ., N − 1
DCT-II: T_i(j) = ω_0 · √(2/N) · cos( π·i·(2j+1) / (2N) ), where ω_0 = √(2/N) if i = 0 and ω_0 = 1 if i ≠ 0
DCT-V: T_i(j) = ω_0 · ω_1 · √(2/(2N−1)) · cos( 2π·i·j / (2N−1) ), where ω_0 = √(2/N) if i = 0 (1 if i ≠ 0) and ω_1 = √(2/N) if j = 0 (1 if j ≠ 0)
DCT-VIII: T_i(j) = √(4/(2N+1)) · cos( π·(2i+1)·(2j+1) / (4N+2) )
DST-I: T_i(j) = √(2/(N+1)) · sin( π·(i+1)·(j+1) / (N+1) )
DST-VII: T_i(j) = √(4/(2N+1)) · sin( π·(2i+1)·(j+1) / (2N+1) )

EMT can be applied to CUs having a width and height equal to or less than 64, and whether EMT is applied can be controlled by a CU-level flag. When the CU-level flag is 0, DCT-2 is applied to CUs in order to encode the residual. Two additional flags are signaled in order to identify the horizontal and vertical transforms to be used for a luma coding block in a CU to which EMT is applied. As in HEVC, the residual of a block can be coded in a transform skip mode in JEM. For intra-residual coding, a mode-dependent transform candidate selection process is used due to the different residual statistics of different intra-prediction modes. Three transform subsets are defined as shown in the following Table 2, and a transform subset is selected based on an intra-prediction mode as shown in Table 3.

TABLE 2
Transform Set: Transform Candidates
0: DST-VII, DCT-VIII
1: DST-VII, DST-I
2: DST-VII, DCT-VIII

Along with the subset concept, a transform subset is initially confirmed based on Table 2 by using the intra-prediction mode of a CU having a CU-level EMT_CU_flag of 1. Thereafter, for each of the horizontal (EMT_TU_horizontal_flag) and vertical (EMT_TU_vertical_flag) transforms, one of the two transform candidates in the confirmed transform subset is selected based on explicit signaling using flags according to Table 3.
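Before turning to Table 3, the basis functions of Table 1 can be instantiated directly; the short check below builds N-point DST-VII and DCT-VIII matrices from those formulas and reports how close their rows are to orthonormal. It is a verification sketch, not codec code.

```python
import numpy as np

def dst7(n):
    # Table 1: T_i(j) = sqrt(4/(2N+1)) * sin(pi*(2i+1)*(j+1)/(2N+1))
    return np.array([[np.sqrt(4.0 / (2 * n + 1)) * np.sin(np.pi * (2 * i + 1) * (j + 1) / (2 * n + 1))
                      for j in range(n)] for i in range(n)])

def dct8(n):
    # Table 1: T_i(j) = sqrt(4/(2N+1)) * cos(pi*(2i+1)*(2j+1)/(4N+2))
    return np.array([[np.sqrt(4.0 / (2 * n + 1)) * np.cos(np.pi * (2 * i + 1) * (2 * j + 1) / (4 * n + 2))
                      for j in range(n)] for i in range(n)])

for name, T in (("DST-VII", dst7(8)), ("DCT-VIII", dct8(8))):
    err = np.max(np.abs(T @ T.T - np.eye(8)))
    print(name, "max deviation from orthonormality:", float(err))
```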
TABLE 3

Intra Mode:  0  1  2  3  4  5  6  7  8  9 10 11 12 13
V:           2  1  0  1  0  1  0  1  0  1  0  1  0  1
H:           2  1  0  1  0  1  0  1  0  1  0  1  0  1

Intra Mode: 14 15 16 17 18 19 20 21 22 23 24 25 26 27
V:           0  0  0  0  0  0  0  0  0  1  0  1  0  1
H:           2  2  2  2  2  2  2  2  2  1  0  1  0  1

Intra Mode: 28 29 30 31 32 33 34 35 36 37 38 39 40 41
V:           0  1  0  1  0  1  0  1  0  1  0  1  0  1
H:           0  1  0  1  0  1  0  1  0  1  0  1  0  1

Intra Mode: 42 43 44 45 46 47 48 49 50 51 52 53 54 55
V:           0  1  0  1  2  2  2  2  2  2  2  2  2  1
H:           0  1  0  1  0  0  0  0  0  0  0  0  0  1

Intra Mode: 56 57 58 59 60 61 62 63 64 65 66
V:           0  1  0  1  0  1  0  1  0  1  0
H:           0  1  0  1  0  1  0  1  0  1  0

TABLE 4
(For each transform configuration group, configurations 0 to 3 give the horizontal (row) transform and the vertical (column) transform; the 35 intra prediction mode and 67 intra prediction mode entries list the modes mapped to the group.)

Group 0 (G0):
Configuration 0: horizontal DST7, vertical DST7
Configuration 1: horizontal DCT5, vertical DST7
Configuration 2: horizontal DST7, vertical DCT5
Configuration 3: horizontal DCT5, vertical DCT5
35 intra prediction modes: 0
67 intra prediction modes: 0

Group 1 (G1):
Configuration 0: horizontal DST7, vertical DST7
Configuration 1: horizontal DST1, vertical DST7
Configuration 2: horizontal DST7, vertical DST1
Configuration 3: horizontal DST1, vertical DST1
35 intra prediction modes: 1, 3, 5, 7, 13, 15, 17, 19, 21, 23, 29, 31, 33
67 intra prediction modes: 1, 3, 5, 7, 9, 11, 13, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 55, 57, 59, 61, 63, 65

Group 2 (G2):
Configuration 0: horizontal DST7, vertical DST7
Configuration 1: horizontal DCT8, vertical DST7
Configuration 2: horizontal DST7, vertical DCT8
Configuration 3: horizontal DCT8, vertical DCT8
35 intra prediction modes: 2, 4, 6, 14, 16, 18, 20, 22, 30, 32, 34
67 intra prediction modes: 2, 4, 6, 8, 10, 12, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 56, 58, 60, 64, 66

Group 3 (G3):
Configuration 0: horizontal DST7, vertical DST7
Configuration 1: horizontal DCT5, vertical DST7
Configuration 2: horizontal DST7, vertical DCT8
Configuration 3: horizontal DCT5, vertical DCT8
35 intra prediction modes: 8, 9, 10, 11, 12 (neighboring angles to horizontal directions)
67 intra prediction modes: 14, 15, 16, 17, 18, 19, 20, 21, 22 (neighboring angles to horizontal directions)

Group 4 (G4):
Configuration 0: horizontal DST7, vertical DST7
Configuration 1: horizontal DCT8, vertical DST7
Configuration 2: horizontal DST7, vertical DCT5
Configuration 3: horizontal DCT8, vertical DCT5
35 intra prediction modes: 24, 25, 26, 27, 28 (neighboring angles to vertical directions)
67 intra prediction modes: 46, 47, 48, 49, 50, 51, 52, 53, 54 (neighboring angles to vertical directions)

Group 5 (G5):
Configuration 0: horizontal DCT8, vertical DCT8
Configuration 1: horizontal DST7, vertical DCT8
Configuration 2: horizontal DCT8, vertical DST7
Configuration 3: horizontal DST7, vertical DST7
Prediction modes: inter prediction

Table 4 shows a transform configuration group to which adaptive multiple transform (AMT) is applied as an embodiment to which the present disclosure is applied. Referring to Table 4, transform configuration groups are determined based on a prediction mode and the number of groups may be 6 (G0 to G5). In addition, G0 to G4 correspond to a case in which intra-prediction is applied and G5 represents transform combinations (or transform set or transform combination set) applied to a residual block generated according to inter-prediction. One transform combination may be composed of horizontal transform (or row transform) applied to rows of a corresponding 2D block and vertical transform (or column transform) applied to columns thereof. Here, each of the transform configuration groups may have four transform combination candidates. The four transform combination candidates may be selected or determined using transform combination indexes 0 to 3, and a transform combination index may be encoded and transmitted from an encoder to a decoder. In an embodiment, residual data (or a residual signal) obtained through intra-prediction may have different statistical characteristics according to intra-prediction modes. Accordingly, transforms other than normal cosine transform may be applied for respective intra-prediction modes as shown in Table 4. In the present description, a transform type may be represented as DCT-Type 2, DCT-II or DCT-2, for example. Referring to Table 4, a case in which 35 intra-prediction modes are used and a case in which 67 intra-prediction modes are used are shown. A plurality of transform combinations may be applied for each transform configuration group classified in each intra-prediction mode column. For example, a plurality of transform combinations may be composed of four combinations (of transforms in the row direction and transforms in the column direction). As a specific example, DST-7 and DCT-5 can be applied to group 0 in both the row (horizontal) direction and the column (vertical) direction, and thus a total of four combinations can be applied. Since a total of four transform kernel combinations can be applied to each intra-prediction mode, a transform combination index for selecting one therefrom can be transmitted per transform unit.
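Table 4 is essentially a look-up: a configuration group determined by the prediction mode, and a transform combination index selecting one of four (horizontal, vertical) kernel pairs. A sketch of that look-up, with only groups G0, G2 and G5 filled in for brevity, is:

```python
# Sketch: transform combination selection per Table 4. Each group maps a
# transform combination index (0..3) to a (horizontal, vertical) kernel pair.
AMT_GROUPS = {
    "G0": [("DST7", "DST7"), ("DCT5", "DST7"), ("DST7", "DCT5"), ("DCT5", "DCT5")],
    "G2": [("DST7", "DST7"), ("DCT8", "DST7"), ("DST7", "DCT8"), ("DCT8", "DCT8")],
    "G5": [("DCT8", "DCT8"), ("DST7", "DCT8"), ("DCT8", "DST7"), ("DST7", "DST7")],  # inter prediction
}

def select_combination(group, amt_index):
    horizontal, vertical = AMT_GROUPS[group][amt_index]
    return horizontal, vertical

print(select_combination("G0", 1))  # ('DCT5', 'DST7')
print(select_combination("G5", 2))  # ('DCT8', 'DST7')
```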
In the present description, a transform combination index may be referred to as an AMT index and may be represented by amt_idx. Furthermore, owing to the characteristics of a residual signal, a case in which DCT-2 is optimal for both the row direction and the column direction may occur in addition to the transform kernels shown in Table 4. Accordingly, transform can be adaptively applied by defining an AMT flag for each coding unit. Here, DCT-2 can be applied to both the row direction and the column direction when the AMT flag is 0, and one of the four combinations can be selected or determined through an AMT index when the AMT flag is 1. In an embodiment, if the number of transform coefficients is less than 3 for one transform unit when the AMT flag is 1, the transform kernels of Table 4 are not applied and DST-7 may be applied to both the row direction and the column direction. In an embodiment, if the transform coefficient values are parsed first and the number of transform coefficients is thereby found to be less than 3, the AMT index is not parsed and DST-7 is applied, and thus the amount of additional information transmitted can be reduced. In an embodiment, AMT can be applied only when both the width and height of a transform unit are equal to or less than 32. In an embodiment, Table 4 can be preset through off-line training. In an embodiment, the AMT index can be defined as one index that can indicate a combination of horizontal transform and vertical transform. Alternatively, the AMT index can be defined as a separate horizontal transform index and vertical transform index. FIG. 8 is a flowchart showing a process of performing adaptive multiple transform (AMT). Although an embodiment with respect to a separable transform that is separately applied in the horizontal direction and the vertical direction is basically described in the present description, a transform combination may also be composed of non-separable transforms. Alternatively, a transform combination may be configured as a mixture of separable transforms and non-separable transforms. In this case, row/column-wise transform selection or selection in the horizontal/vertical direction is unnecessary when a non-separable transform is used, and the transform combinations of Table 4 can be used only when a separable transform is selected. In addition, the methods proposed in the present description can be applied irrespective of primary transform and secondary transform. That is, the methods can be applied to both transforms. Here, primary transform can refer to a transform for initially transforming a residual block, and secondary transform can refer to a transform applied to a block generated as a result of the primary transform. First, the encoding apparatus 100 can determine a transform group corresponding to a current block (S805). Here, the transform group may refer to a transform group of Table 4, but the present disclosure is not limited thereto and the transform group may be composed of other transform combinations. The encoding apparatus 100 can perform transform on the available candidate transform combinations in the transform group (S810). As a result of the transform, the encoding apparatus 100 can determine or select the transform combination with the lowest rate-distortion (RD) cost (S815). The encoding apparatus 100 can encode a transform combination index corresponding to the selected transform combination (S820). FIG. 9 is a flowchart showing a decoding process of performing AMT.
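A decoder-side decision following this signaling scheme can be sketched as below; the function name, the string kernel labels and the exact placement of the size check are illustrative assumptions rather than normative behavior.

```python
def decide_amt_kernels(amt_flag, amt_idx, num_nonzero_coeffs, tu_width, tu_height,
                       table4_combinations):
    """Return the (horizontal, vertical) primary transform pair for one TU.

    table4_combinations: the four (horizontal, vertical) candidates of the
    transform configuration group selected for the current prediction mode.
    """
    # AMT is only considered for TUs whose width and height are both <= 32.
    if tu_width > 32 or tu_height > 32 or amt_flag == 0:
        return ("DCT-2", "DCT-2")
    # With fewer than 3 coefficients the AMT index is not parsed; DST-7 is used.
    if num_nonzero_coeffs < 3:
        return ("DST-7", "DST-7")
    return table4_combinations[amt_idx]

# Example with the group G0 candidates of Table 4.
g0 = [("DST-7", "DST-7"), ("DCT-5", "DST-7"), ("DST-7", "DCT-5"), ("DCT-5", "DCT-5")]
print(decide_amt_kernels(amt_flag=1, amt_idx=2, num_nonzero_coeffs=7,
                         tu_width=16, tu_height=16, table4_combinations=g0))
```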
First, the decoding apparatus200can determine a transform group for the current block (S905). The decoding apparatus200can parse a transform combination index, and the transform combination index can correspond to one of a plurality of transform combinations in the transform group (S910). The decoding apparatus200can derive a transform combination corresponding to the transform combination index (S915). Here, although the transform combination may refer to a transform combination shown in Table 4, the present disclosure is not limited thereto. That is, the transform combination may be configured as other transform combinations. The decoding apparatus200can perform inverse transform on the current block based on the transform combination (S920). When the transform combination is composed of row transform and column transform, row transform may be applied and then column transform may be applied. However, the present disclosure is not limited thereto, and row transform may be applied after column transform is applied, and when the transform combination is composed of non-separable transforms, a non-separable transform can be immediately applied. In another embodiment, the process of determining a transform group and the process of parsing a transform combination index may be simultaneously performed. In the embodiment of the present disclosure, the aforementioned term “AMT” may be redefined as “multiple transform set or multiple transform selection (MTS)”. MTS related syntaxes and semantics described below are summarized in Versatile Video coding (VVC) JVET-K1001-v4. In an embodiment of the present disclosure, two MTS candidates can be used for directional modes and four MTS candidates can be used for nondirectional modes as follows. A) Nondirectional modes (DC and planar) DST-7 is used for horizontal and vertical transforms when MTS index is 0. DST-7 is used for vertical transform and DCT-8 is used for horizontal transforms when MTS index is 1. DCT-8 is used for vertical transform and DST-7 is used for horizontal transforms when MTS index is 2. DCT-8 is used for horizontal and vertical transforms when MTS index is 3. B) Modes belonging to horizontal group modes DST-7 is used for horizontal and vertical transforms when MTS index is 0. DCT-8 is used for vertical transform and DST-7 is used for horizontal transforms when MTS index is 1. C) Modes belonging to vertical group modes DST-7 is used for horizontal and vertical transforms when MTS index is 0. DST-7 is used for vertical transform and DCT-8 is used for horizontal transforms when MTS index is 1. Here (In VTM 2.0 in which 67 modes are used), horizontal group modes include intra-prediction modes 2 to 34 and vertical modes include intra-prediction modes 35 to 66. In another embodiment of the present disclosure, three MTS candidates are used for all intra-prediction modes. DST-7 is used for horizontal and vertical transforms when MTS index is 0. DST-7 is used for vertical transform and DCT-8 is used for horizontal transforms when MTS index is 1. DCT-8 is used for vertical transform and DST-7 is used for horizontal transforms when MTS index is 2. In another embodiment of the present disclosure, two MTS candidates are used for directional prediction modes and three MTS candidates are used for nondirectional modes. A) Nondirectional modes (DC and planar) DST-7 is used for horizontal and vertical transforms when MTS index is 0. DST-7 is used for vertical transform and DCT-8 is used for horizontal transforms when MTS index is 1. 
DCT-8 is used for vertical transform and DST-7 is used for horizontal transform when the MTS index is 2. B) Prediction modes corresponding to horizontal group modes: DST-7 is used for horizontal and vertical transforms when the MTS index is 0. DCT-8 is used for vertical transform and DST-7 is used for horizontal transform when the MTS index is 1. C) Prediction modes corresponding to vertical group modes: DST-7 is used for horizontal and vertical transforms when the MTS index is 0. DST-7 is used for vertical transform and DCT-8 is used for horizontal transform when the MTS index is 1. In another embodiment of the present disclosure, one MTS candidate (e.g., DST-7) can be used for all intra-modes. In this case, encoding time can be reduced by 40% with some minor coding loss. In addition, one flag may be used to indicate the choice between DCT-2 and DST-7. FIG. 10 is a flowchart showing an inverse transform process based on MTS according to an embodiment of the present disclosure. The decoding apparatus 200 to which the present disclosure is applied can obtain sps_mts_intra_enabled_flag or sps_mts_inter_enabled_flag (S1005). Here, sps_mts_intra_enabled_flag indicates whether cu_mts_flag is present in the residual coding syntax of an intra-coding unit. For example, cu_mts_flag is not present in the residual coding syntax of the intra-coding unit if sps_mts_intra_enabled_flag=0, and cu_mts_flag is present in the residual coding syntax of the intra-coding unit if sps_mts_intra_enabled_flag=1. In addition, sps_mts_inter_enabled_flag indicates whether cu_mts_flag is present in the residual coding syntax of an inter-coding unit. For example, cu_mts_flag is not present in the residual coding syntax of the inter-coding unit if sps_mts_inter_enabled_flag=0, and cu_mts_flag is present in the residual coding syntax of the inter-coding unit if sps_mts_inter_enabled_flag=1. The decoding apparatus 200 can obtain cu_mts_flag based on sps_mts_intra_enabled_flag or sps_mts_inter_enabled_flag (S1010). For example, the decoding apparatus 200 can obtain cu_mts_flag when sps_mts_intra_enabled_flag=1 or sps_mts_inter_enabled_flag=1. Here, cu_mts_flag indicates whether MTS is applied to the residual samples of a luma transform block. For example, MTS is not applied to the residual samples of the luma transform block if cu_mts_flag=0, and MTS is applied to the residual samples of the luma transform block if cu_mts_flag=1. The decoding apparatus 200 can obtain mts_idx based on cu_mts_flag (S1015). For example, when cu_mts_flag=1, the decoding apparatus 200 can obtain mts_idx. Here, mts_idx indicates which transform kernel is applied to the luma residual samples of the current transform block in the horizontal direction and/or the vertical direction. For example, at least one of the embodiments described in the present description can be applied to mts_idx. The decoding apparatus 200 can derive the transform kernel corresponding to mts_idx (S1020). For example, the transform kernel corresponding to mts_idx can be separately defined as a horizontal transform and a vertical transform. For example, when MTS is applied to the current block (i.e., cu_mts_flag=1), the decoding apparatus 200 can configure MTS candidates based on the intra-prediction mode of the current block. In this case, the decoding flowchart of FIG. 10 may further include a step of configuring the MTS candidates. Then, the decoding apparatus 200 can determine the MTS candidate to be applied to the current block from among the configured MTS candidates using mts_idx.
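The FIG. 10 parsing flow, combined with the embodiment that uses three MTS candidates for every intra-prediction mode, can be sketched as follows; the bitstream reader methods are placeholders, and the dictionary reflects only one of the candidate configurations described above.

```python
# mts_idx -> (horizontal, vertical) kernels for the embodiment that uses three
# MTS candidates for every intra-prediction mode.
MTS_CANDIDATES = {
    0: ("DST-7", "DST-7"),
    1: ("DCT-8", "DST-7"),  # DST-7 vertical, DCT-8 horizontal
    2: ("DST-7", "DCT-8"),  # DCT-8 vertical, DST-7 horizontal
}

def parse_mts(reader, is_intra, sps_mts_intra_enabled_flag, sps_mts_inter_enabled_flag):
    """reader.read_flag() / reader.read_index() are placeholder bitstream accessors."""
    enabled = sps_mts_intra_enabled_flag if is_intra else sps_mts_inter_enabled_flag
    cu_mts_flag = reader.read_flag() if enabled else 0          # S1010
    if not cu_mts_flag:
        return ("DCT-2", "DCT-2")                               # MTS not applied
    mts_idx = reader.read_index()                               # S1015
    return MTS_CANDIDATES[mts_idx]                              # S1020
```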
As another example, different transform kernels can be applied to the horizontal transform and the vertical transform. However, the present disclosure is not limited thereto, and the same transform kernel may be applied to the horizontal transform and the vertical transform. The decoding apparatus 200 can perform inverse transform based on the transform kernel (S1025). Furthermore, in the specification, MTS may be represented as AMT or EMT, and mts_idx may be represented as AMT_idx, EMT_idx, AMT_TU_idx, EMT_TU_idx, or the like, but the present disclosure is not limited thereto. The present disclosure is described by being divided into a case in which the MTS is applied and a case in which the MTS is not applied based on the MTS flag, but is not limited to such an expression. For example, whether or not the MTS is applied may have the same meaning as whether to use a transform type (or transform kernel) other than a predefined specific transform type (which may be referred to as a basic transform type, a default transform type, etc.). If the MTS is applied, a transform type (e.g., any one transform type or a combined transform type of two or more transform types among a plurality of transform types) other than the basic transform type may be used for the transform. Further, if the MTS is not applied, the basic transform type may be used for the transform. In an embodiment, the basic transform type may be configured (or defined) as DCT-2. As an example, an MTS flag syntax indicating whether the MTS is applied to the current transform block and, when the MTS is applied, an MTS index syntax indicating the transform type applied to the current transform block may be individually transmitted from the encoder to the decoder. As another example, instead of separately signaling whether the MTS is applied, a single syntax element (e.g., an MTS index) covering all of the transform types applicable to the current transform block may be transmitted from the encoder to the decoder. That is, in the latter example, a syntax (or syntax element) indicating the transform type applied to the current transform block (or unit) may be transmitted from the encoder to the decoder within an entire transform type group (or transform type set) including the above-described basic transform type. Accordingly, despite the expressions, a syntax (MTS index) indicating the transform type applied to the current transform block may include information on whether MTS is applied. In other words, in the latter embodiment, only an MTS index may be signaled without an MTS flag, and in this case DCT-2 may be interpreted as being included in MTS. However, in the present disclosure, a case where DCT-2 is applied may be described as a case where MTS is not applied. Nevertheless, the technical range related to MTS is not limited to the corresponding defined contents. FIG. 11 is a block diagram of an apparatus that performs decoding based on MTS according to an embodiment of the present disclosure. The decoding apparatus 200 to which the present disclosure is applied may include a sequence parameter acquisition unit 1105, an MTS flag acquisition unit 1110, an MTS index acquisition unit 1115, and a transform kernel derivation unit 1120. The sequence parameter acquisition unit 1105 can acquire sps_mts_intra_enabled_flag or sps_mts_inter_enabled_flag.
Here, sps_mts_intra_enabled_flag indicates whether cu_mts_flag is present in the residual coding syntax of an intra-coding unit, and sps_mts_inter_enabled_flag indicates whether cu_mts_flag is present in the residual coding syntax of an inter-coding unit. The description with reference to FIG. 10 may be applied as a specific example. The MTS flag acquisition unit 1110 can acquire cu_mts_flag based on sps_mts_intra_enabled_flag or sps_mts_inter_enabled_flag. For example, the MTS flag acquisition unit 1110 can acquire cu_mts_flag when sps_mts_intra_enabled_flag=1 or sps_mts_inter_enabled_flag=1. Here, cu_mts_flag indicates whether MTS is applied to the residual samples of a luma transform block. The description with reference to FIG. 10 may be applied as a specific example. The MTS index acquisition unit 1115 can acquire mts_idx based on cu_mts_flag. For example, the MTS index acquisition unit 1115 can acquire mts_idx when cu_mts_flag=1. Here, mts_idx indicates which transform kernel is applied to the luma residual samples of the current transform block in the horizontal direction and/or the vertical direction. The description with reference to FIG. 10 may be applied as a specific example. The transform kernel derivation unit 1120 can derive the transform kernel corresponding to mts_idx. Then, the decoding apparatus 200 can perform inverse transform based on the derived transform kernel. A method of generating, by the decoding apparatus 200, a block including residual samples from a transform block through an inverse transform may be as follows. Input to this process is as follows:
a luma position (xTbY, yTbY) indicating the top-left sample of the current luma transform block relative to the top-left luma sample of the current picture,
a variable nTbW indicating the width of the current transform block,
a variable nTbH indicating the height of the current transform block,
a variable cIdx indicating the chroma component of the current block,
an (nTbW)×(nTbH) array d[x][y] of scaled transform coefficients for x = 0 . . . nTbW−1, y = 0 . . . nTbH−1.
Output from this process is the (nTbW)×(nTbH) array r[x][y] of residual samples for x = 0 . . . nTbW−1, y = 0 . . . nTbH−1. A variable trTypeHor indicating the horizontal transform kernel and a variable trTypeVer indicating the vertical transform kernel may be derived based on mts_idx[x][y] and CuPredMode[x][y] according to Table 5. The (nTbW)×(nTbH) array of residual samples may be derived as follows:
1. Each (vertical) column of the scaled transform coefficients d[x][y] (x = 0 . . . nTbW−1, y = 0 . . . nTbH−1) is transformed into e[x][y] (x = 0 . . . nTbW−1, y = 0 . . . nTbH−1) by a one-dimensional transform process for each column x = 0 . . . nTbW−1. In this case, the inputs are the height nTbH of the transform block, the list d[x][y] (y = 0 . . . nTbH−1), and the transform type variable trType set equal to trTypeVer. The list e[x][y] (y = 0 . . . nTbH−1) is output.
2. The intermediate sample values g[x][y] (x = 0 . . . nTbW−1, y = 0 . . . nTbH−1) are derived as in Equation 4.
g[x][y] = Clip3(CoeffMin, CoeffMax, (e[x][y] + 256) >> 9)   [Equation 4]
3. Each (horizontal) row of the intermediate array g[x][y] (x = 0 . . . nTbW−1, y = 0 . . . nTbH−1) is transformed into r[x][y] (x = 0 . . . nTbW−1, y = 0 . . . nTbH−1) by a one-dimensional transform process for each row y = 0 . . . nTbH−1. In this case, the inputs are the width nTbW of the transform block, the list g[x][y] (x = 0 . . . nTbW−1), and the transform type variable trType set equal to trTypeHor. The list r[x][y] (x = 0 . . . nTbW−1) is output.
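The column-then-row inverse transform with the intermediate clipping of Equation 4 can be sketched as below; the floating-point kernels and the 16-bit CoeffMin/CoeffMax range are assumptions made only for illustration.

```python
import numpy as np

# Assumed 16-bit coefficient range; the actual CoeffMin/CoeffMax are defined by the spec.
COEFF_MIN, COEFF_MAX = -(1 << 15), (1 << 15) - 1

def inverse_primary_transform(d, inv_t_ver, inv_t_hor):
    """d: (nTbH, nTbW) array of scaled transform coefficients (d[y][x] layout here).
    inv_t_ver / inv_t_hor: inverse vertical / horizontal transform matrices."""
    # 1. One-dimensional inverse transform of each column (trTypeVer).
    e = inv_t_ver @ d
    # 2. Intermediate rounding and clipping (Equation 4).
    g = np.clip((e + 256).astype(np.int64) >> 9, COEFF_MIN, COEFF_MAX)
    # 3. One-dimensional inverse transform of each row (trTypeHor).
    r = g @ inv_t_hor.T
    return r
```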
TABLE 5
                 CuPredMode[x][y] == MODE_INTRA   CuPredMode[x][y] == MODE_INTER
mts_idx[x][y]    trTypeHor   trTypeVer            trTypeHor   trTypeVer
−1 (inferred)        0           0                    0           0
0 (00)               1           1                    2           2
1 (01)               2           1                    1           2
2 (10)               1           2                    2           1
3 (11)               2           2                    1           1

CuPredMode indicates the prediction mode applied to the current CU. Mode-dependent non-separable secondary transform (MDNSST) is introduced. To maintain low complexity, MDNSST is applied only to low-frequency coefficients after primary transform. Further, a non-separable transform chiefly applied to low-frequency coefficients may be called a low-frequency non-separable transform (LFNST). If both the width (W) and height (H) of a transform coefficient block are equal to or greater than 8, an 8×8 non-separable secondary transform is applied to the 8×8 top-left region of the transform coefficient block. If the width or height is less than 8, a 4×4 non-separable secondary transform is applied, and the 4×4 non-separable secondary transform can be performed on the top-left min(8, W)×min(8, H) region of the transform coefficient block. Here, min(A, B) is a function that outputs the smaller value of A and B. Further, W×H is the block size, where W represents the width and H represents the height. In an embodiment, a total of 35×3 non-separable secondary transforms may be present for the 4×4 and/or 8×8 block sizes. In this case, 35 is the number of transform sets specified by the intra-prediction mode, and 3 is the number of NSST candidates for each prediction mode. The mapping from the intra-prediction mode to the transform set may be variously defined. In order to indicate a transform kernel among the transform sets, an NSST index (NSST idx) can be coded. When NSST is not applied, an NSST index equal to 0 is signaled. FIGS. 12 and 13 are flowcharts showing encoding/decoding to which secondary transform is applied as an embodiment to which the present disclosure is applied. In JEM, secondary transform (MDNSST) is not applied to a block coded with transform skip mode. When the MDNSST index is signaled for a CU and is not equal to zero, MDNSST is not used for a block of a component that is coded with transform skip mode in the CU. The overall coding structure including coefficient coding and NSST index coding is shown in FIGS. 12 and 13. A CBF flag is encoded to determine whether coefficient coding and NSST coding are performed. In FIGS. 12 and 13, the CBF flag can represent a luma block cbf flag (cbf_luma_flag) or a chroma block cbf flag (cbf_cb_flag or cbf_cr_flag). When the CBF flag is 1, transform coefficients are coded. Referring to FIG. 12, the encoding apparatus 100 checks whether CBF is 1 (S1205). If CBF is 0, the encoding apparatus 100 does not perform transform coefficient encoding and NSST index encoding. If CBF is 1, the encoding apparatus 100 performs encoding on the transform coefficients (S1210). Thereafter, the encoding apparatus 100 determines whether to perform NSST index coding (S1215) and performs NSST index coding (S1220). When NSST index coding is not applied, the encoding apparatus 100 can end the transform procedure without applying NSST and perform the subsequent step (e.g., quantization). Referring to FIG. 13, the decoding apparatus 200 checks whether CBF is 1 (S1305). If CBF is 0, the decoding apparatus 200 does not perform transform coefficient decoding and NSST index decoding. If CBF is 1, the decoding apparatus 200 performs decoding on the transform coefficients (S1310). Thereafter, the decoding apparatus 200 determines whether to perform NSST index coding (S1315) and parses an NSST index (S1320).
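Table 5 amounts to a small lookup, sketched below; the interpretation of transform type values 0/1/2 as, e.g., DCT-2/DST-7/DCT-8 is the usual convention and is not stated in the table itself.

```python
# Table 5: mts_idx -> (trTypeHor, trTypeVer), separately for intra and inter CUs.
# Transform type values: 0 = default kernel (e.g., DCT-2), 1/2 = the additional MTS kernels.
TR_TYPE_TABLE = {
    "MODE_INTRA": {-1: (0, 0), 0: (1, 1), 1: (2, 1), 2: (1, 2), 3: (2, 2)},
    "MODE_INTER": {-1: (0, 0), 0: (2, 2), 1: (1, 2), 2: (2, 1), 3: (1, 1)},
}

def derive_tr_types(cu_pred_mode, mts_idx):
    tr_type_hor, tr_type_ver = TR_TYPE_TABLE[cu_pred_mode][mts_idx]
    return tr_type_hor, tr_type_ver
```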
NSST can be applied to the 8×8 or 4×4 top-left region instead of being applied to the entire block (TU in the case of HEVC) to which primary transform has been applied. For example, 8×8 NSST can be applied when the block size is 8×8 or more (that is, both the width and height of the block are greater than or equal to 8), and 4×4 NSST can be applied when the block size is less than 8×8 (that is, the width or height is less than 8). Further, when 8×8 NSST is applied (that is, when the block size is 8×8 or more), 4×4 NSST can be applied per 4×4 block (that is, the top-left 8×8 region is divided into 4×4 blocks and 4×4 NSST is applied to each 4×4 block). Both 8×8 NSST and 4×4 NSST can be determined according to the above-described transform set configuration, and, because they are non-separable transforms, 8×8 NSST has 64 pieces of input data and 64 pieces of output data, and 4×4 NSST has 16 inputs and 16 outputs. FIGS. 14 and 15 show an embodiment to which the present disclosure is applied. FIG. 14 is a diagram for describing a Givens rotation, and FIG. 15 shows a configuration of one round in 4×4 NSST composed of Givens rotation layers and permutations. Both 8×8 NSST and 4×4 NSST can be configured as hierarchical combinations of Givens rotations. The matrix corresponding to one Givens rotation is represented as Equation 1, and the matrix product is represented in FIG. 14.

Rθ = [ cos θ   −sin θ ]
     [ sin θ    cos θ ]   [Equation 1]

In FIG. 14, the outputs tm and tn of the Givens rotation can be calculated as represented by Equation 2.

tm = xm cos θ − xn sin θ
tn = xm sin θ + xn cos θ   [Equation 2]

Since a Givens rotation rotates two pieces of data as shown in FIG. 14, 32 or 8 Givens rotations are required to process 64 (in the case of 8×8 NSST) or 16 (in the case of 4×4 NSST) pieces of data. Accordingly, a group of 32 or 8 Givens rotations forms a Givens rotation layer. As shown in FIG. 15, the output data of one Givens rotation layer is transmitted as the input data of the next Givens rotation layer through a permutation (shuffling). The permutation pattern shown in FIG. 15 is regularly defined, and in the case of 4×4 NSST, four Givens rotation layers and the corresponding permutations form one round. 4×4 NSST is performed by two rounds and 8×8 NSST is performed by four rounds. Although different rounds use the same permutation pattern, the applied Givens rotation angles are different. Accordingly, angle data for all the Givens rotations constituting each transform needs to be stored. As a final step, one more permutation is performed on the data output through the Givens rotation layers, and information about the corresponding permutation is stored separately for each transform. The corresponding permutation is performed at the end of forward NSST, and the corresponding inverse permutation is applied first in inverse NSST. Inverse NSST performs the Givens rotation layers and permutations applied to forward NSST in reverse order and performs rotation by taking a negative value for each Givens rotation angle.

Reduced Secondary Transform (RST)

FIG. 16 shows the operation of RST as an embodiment to which the present disclosure is applied. When the orthogonal matrix representing a transform is N×N, a reduced transform (RT) leaves only R of the N transform basis vectors (R < N). The matrix for the forward RT that generates transform coefficients can be defined by Equation 3.
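One Givens rotation layer followed by a permutation, as in FIG. 15, can be sketched as follows; the rotation angles, the pairing of indices and the permutation are placeholder values, since the trained NSST data are not reproduced here.

```python
import numpy as np

def givens_rotation_layer(x, angles, pairs):
    """Apply one Givens rotation layer (Equation 2) to a 1D vector x.

    angles: one rotation angle per data pair.
    pairs:  list of (m, n) index pairs rotated together; illustrative values.
    """
    t = x.copy()
    for theta, (m, n) in zip(angles, pairs):
        t[m] = x[m] * np.cos(theta) - x[n] * np.sin(theta)
        t[n] = x[m] * np.sin(theta) + x[n] * np.cos(theta)
    return t

# 4x4 NSST example: 16 inputs, 8 rotations per layer, followed by a permutation.
x = np.arange(16, dtype=float)
pairs = [(2 * k, 2 * k + 1) for k in range(8)]   # placeholder pairing
angles = np.linspace(0.1, 0.8, 8)                # placeholder angles
permutation = np.random.permutation(16)          # placeholder shuffling pattern
y = givens_rotation_layer(x, angles, pairs)[permutation]
```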
TR×N = [ t11  t12  t13  …  t1N ]
       [ t21  t22  t23  …  t2N ]
       [  ⋮               ⋱  ⋮ ]
       [ tR1  tR2  tR3  …  tRN ]   [Equation 3]

Since the matrix for the inverse RT is the transpose of the forward RT matrix, the application of forward RT and inverse RT is schematized as shown in FIGS. 16A and 16B. The RT applied to the 8×8 top-left block of a transform coefficient block to which primary transform has been applied can be referred to as 8×8 RST. When R is set to 16 in Equation 3, forward 8×8 RST has the form of a 16×64 matrix and inverse 8×8 RST has the form of a 64×16 matrix. In this case, an M×N matrix consists of M rows and N columns. Further, the transform set configuration as shown in Table 5 can be applied to 8×8 RST. That is, 8×8 RST can be determined based on transform sets according to intra-prediction modes as shown in Table 5. Since one transform set is composed of two or three transforms according to the intra-prediction mode, one of a maximum of four transforms, including the case in which secondary transform is not applied, can be selected (one transform can correspond to the identity matrix). When indices 0, 1, 2 and 3 are assigned to the four transforms, the transform to be applied can be designated by signaling a syntax element corresponding to the NSST index for each transform coefficient block. For example, the index 0 can be assigned to the identity matrix, that is, the case in which secondary transform is not applied. Consequently, through the NSST index, 8×8 NSST can be designated according to the JEM NSST configuration and 8×8 RST can be designated according to the RST configuration for the 8×8 top-left block. FIG. 17 is a diagram showing a process of performing reverse scanning from the sixty-fourth coefficient to the seventeenth coefficient in reverse scanning order as an embodiment to which the present disclosure is applied. When 8×8 RST as represented by Equation 3 is applied, 16 valid transform coefficients are generated; thus the 64 pieces of input data constituting the 8×8 region are reduced to 16 pieces of output data, and, from the viewpoint of the two-dimensional region, only a quarter of the region is filled with valid transform coefficients. Accordingly, the 16 pieces of output data obtained by applying forward 8×8 RST fill the top-left region of FIG. 17. In FIG. 17, the 4×4 top-left region becomes a region of interest (ROI) filled with valid transform coefficients, and the remaining region is vacant. The vacant region may be filled with 0 as a default value. If non-zero valid transform coefficients are discovered in regions other than the ROI of FIG. 17, 8×8 RST has definitely not been applied, and thus the corresponding NSST index coding may be omitted. On the other hand, if non-zero valid transform coefficients are not discovered in regions other than the ROI of FIG. 17 (8×8 RST is applied, or the regions other than the ROI are filled with 0), the NSST index may be coded because 8×8 RST might have been applied. Such conditional NSST index coding requires checking for the presence or absence of a non-zero transform coefficient and thus can be performed after the residual coding process. FIG. 18 is an exemplary flowchart showing encoding using a single transform indicator as an embodiment to which the present disclosure is applied. In an embodiment of the present disclosure, the single transform indicator (STI) is introduced. A single transform can be applied when the STI is enabled (STI_coding == 1), instead of two transforms (primary transform and secondary transform) applied sequentially. Here, the single transform may be any type of transform.
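Forward and inverse 8×8 RST with R = 16 reduce to a single matrix multiplication per direction, as sketched below; the random kernel values are placeholders for the trained matrices.

```python
import numpy as np

R, N = 16, 64                       # Equation 3 with R = 16: forward RST is 16x64
rst_kernel = np.random.randn(R, N)  # placeholder values; a real kernel is trained offline

def forward_rst(topleft_8x8):
    """64 primary-transform coefficients of the top-left 8x8 block -> 16 RST coefficients."""
    return rst_kernel @ topleft_8x8.reshape(64)

def inverse_rst(rst_coeffs):
    """Inverse RST uses the transpose of the forward matrix (64x16)."""
    return (rst_kernel.T @ rst_coeffs).reshape(8, 8)

coeffs16 = forward_rst(np.random.randn(8, 8))
reconstructed = inverse_rst(coeffs16)   # fills the 8x8 top-left region again
```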
For example, the single transform may be a separable transform or a non-separable transform. The single transform may be a transform approximated from a non-separable transform. A single transform index (ST_idx in FIG. 18) can be signaled when the STI is enabled. Here, the single transform index can indicate the transform to be applied from among the available transform candidates. Referring to FIG. 18, the encoding apparatus 100 determines whether CBF is 1 (S1805). When CBF is 1, the encoding apparatus 100 determines whether STI coding is applied (S1810). When STI coding is applied, the encoding apparatus 100 encodes an STI index STI_idx (S1845) and performs coding on the transform coefficients (S1850). When STI coding is not applied, the encoding apparatus 100 encodes a flag EMT_CU_Flag indicating whether EMT (or MTS) is applied at the CU level (S1815). Thereafter, the encoding apparatus 100 performs coding on the transform coefficients (S1820). Then, the encoding apparatus 100 determines whether EMT is applied to the transform unit (TU) (S1825). When EMT is applied to the TU, the encoding apparatus 100 encodes a primary transform index EMT_TU_Idx applied to the TU (S1830). Subsequently, the encoding apparatus 100 determines whether NSST is applied (S1835). When NSST is applied, the encoding apparatus 100 encodes an index NSST_Idx indicating the NSST to be applied (S1840). In an example, if the single transform coding conditions are satisfied/enabled (e.g., STI_coding == 1), the single transform index ST_idx may be implicitly derived instead of being signaled. ST_idx can be implicitly determined based on the block size and the intra-prediction mode. Here, ST_idx can indicate the transform (or transform kernel) applied to the current transform block. The STI can be enabled (STI_coding == 1) if one or more of the following conditions are satisfied:
1) The block size corresponds to a predetermined value such as 4 or 8.
2) Block width == block height (square block).
3) The intra-prediction mode is one of predetermined modes such as the DC and planar modes.
In another example, an STI coding flag can be signaled in order to indicate whether the single transform is applied. The STI coding flag can be signaled based on an STI coding value and CBF. For example, the STI coding flag can be signaled when CBF is 1 and STI coding is enabled. Furthermore, the STI coding flag can be conditionally signaled in consideration of the block size, the block shape (square block or non-square block) or the intra-prediction mode. To use information acquired during coefficient coding, ST_idx may be determined after coefficient coding. In an example, ST_idx can be implicitly determined based on the block size, the intra-prediction mode and the number of non-zero coefficients. In another example, ST_idx can be conditionally encoded/decoded based on the block size, the block shape, the intra-prediction mode and/or the number of non-zero coefficients. In another example, ST_idx signaling may be omitted depending on the distribution of the non-zero coefficients (i.e., the positions of the non-zero coefficients). In particular, when non-zero coefficients are discovered in a region other than the 4×4 top-left region, ST_idx signaling can be omitted. FIG. 19 is an exemplary flowchart showing encoding using a unified transform indicator (UTI) as an embodiment to which the present disclosure is applied. In an embodiment of the present disclosure, the unified transform indicator is introduced. The UTI includes a primary transform indicator and a secondary transform indicator.
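Referring back to the STI enabling conditions listed above, a hedged sketch of the check is given below; treating the conditions as alternatives (one or more satisfied) and the planar/DC mode numbering are assumptions taken from the text, not a normative rule.

```python
def sti_enabled(block_width, block_height, intra_mode):
    """STI is enabled (STI_coding == 1) if one or more of the listed conditions hold.
    The size values 4/8 and the DC/planar modes are the examples given above."""
    size_ok = block_width in (4, 8) and block_height in (4, 8)
    square = block_width == block_height
    nondirectional = intra_mode in (0, 1)   # assumed numbering: 0 = planar, 1 = DC
    return size_ok or square or nondirectional

# When enabled, ST_idx may either be signaled or be derived implicitly from the block
# size, the intra-prediction mode and the number of non-zero coefficients.
```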
Referring to FIG. 19, the encoding apparatus 100 determines whether CBF is 1 (S1905). When CBF is 1, the encoding apparatus 100 determines whether UTI coding is applied (S1910). When UTI coding is applied, the encoding apparatus 100 encodes a UTI index UTI_idx (S1945) and performs coding on the transform coefficients (S1950). When UTI coding is not applied, the encoding apparatus 100 encodes the flag EMT_CU_Flag indicating whether EMT (or MTS) is applied at the CU level (S1915). Thereafter, the encoding apparatus 100 performs coding on the transform coefficients (S1920). Then, the encoding apparatus 100 determines whether EMT is applied to the transform unit (TU) (S1925). When EMT is applied to the TU, the encoding apparatus 100 encodes a primary transform index EMT_TU_Idx applied to the TU (S1930). Subsequently, the encoding apparatus 100 determines whether NSST is applied (S1935). When NSST is applied, the encoding apparatus 100 encodes an index NSST_Idx indicating the NSST to be applied (S1940). The UTI may be coded for each predetermined unit (CTU or CU). The UTI coding mode may depend on the following conditions:
1) Block size
2) Block shape
3) Intra-prediction mode
How to derive/extract the core transform index from the UTI is defined in advance. How to derive/extract the secondary transform index from the UTI is defined in advance. A syntax structure for the UTI can be optionally used. The UTI can depend on the CU (TU) size. For example, a smaller CU (TU) may have a UTI index in a narrower range. In an example, the UTI can indicate only the core transform index if a predefined condition (e.g., the block size is less than a predefined threshold value) is satisfied.

TABLE 6
UTI Index   Binarization (FLC)   Core Transform Idx   Secondary Transform Idx
0           00000                0                    0
1           00001                0                    1
2           00010                0                    2
3           00011                0                    3
4           00100                1                    0
5           00101                1                    1
6           00110                1                    2
7           00111                1                    3
. . .       . . .                . . .                . . .
31          11111                5                    3

In another example, the UTI index may be considered as the core transform index when the secondary transform is not indicated to be used (e.g., the secondary transform index == 0, or the secondary transform is already predetermined). In the same manner, the UTI index may be considered as the secondary transform index when the core transform index is considered to be known. Specifically, considering the intra-prediction mode and the block size, a predetermined core transform may be used. FIGS. 20A and 20B illustrate two exemplary flowcharts showing encoding using the UTI as an embodiment to which the present disclosure is applied. In another example, the transform coding structure may use UTI index coding as shown in FIGS. 20A and 20B. Here, the UTI index may be coded earlier than coefficient coding or later than coefficient coding. Referring to the left flowchart of FIG. 20A, the encoding apparatus 100 checks whether CBF is 1 (S2005). When CBF is 1, the encoding apparatus 100 codes the UTI index UTI_idx (S2010) and performs coding on the transform coefficients (S2015). Referring to the right flowchart of FIG. 20B, the encoding apparatus 100 checks whether CBF is 1 (S2055). When CBF is 1, the encoding apparatus 100 performs coding on the transform coefficients (S2060) and codes the UTI index UTI_idx (S2065). In another embodiment of the present disclosure, data hiding and implicit coding methods for transform indicators are introduced. Here, the transform indicators may include ST_idx, UTI_idx, EMT_CU_Flag, EMT_TU_Flag, NSST_Idx and any sort of transform-related index which may be used to indicate a transform kernel.
The above-mentioned transform indicators may not be signaled, and the corresponding information may instead be inserted into the coefficient coding process (it can be extracted during the coefficient coding process). The coefficient coding process may include the following parts:
Last_position_x, Last_position_y
Group flag
Significance map
Greater_than_1 flag
Greater_than_2 flag
Remaining level coding
Sign coding
For example, the transform indicator information may be inserted in one or more of the above-mentioned coefficient coding processes. In order to insert the transform indicator information, the following may be considered jointly:
Pattern of sign coding
The absolute value of the remaining level
The number of Greater_than_1 flags
The values of Last_position_X and Last_position_Y
The above-mentioned data hiding method may be considered conditionally. For example, the data hiding method may depend on the number of non-zero coefficients. In another example, NSST_idx and EMT_idx may be dependent. For example, NSST_idx may not be zero when EMT_CU_Flag is equal to zero (or one). In this case, NSST_idx−1 may be signaled instead of NSST_idx. In another embodiment of the present disclosure, an NSST transform set mapping based on the intra-prediction mode is introduced, as shown in the following Table 7. Although NSST is described below as an example of non-separable transform, another known terminology (e.g., LFNST) may be used for the non-separable transform. For example, NSST set and NSST index may be replaced with LFNST set and LFNST index. Further, RST described in this specification may also be replaced with LFNST, as an example of a non-separable transform (e.g., LFNST) using a non-square transform matrix having a reduced input length and/or a reduced output length of a square non-separable transform matrix applied to at least a region of a transform block (the 4×4 or 8×8 top-left region, or the region other than the 4×4 bottom-right region of an 8×8 block).

TABLE 7
Intra mode   0–1   2–12   13–23   24–44   45–55   56–66
NSST Set      0     2      18      34      18       2

The NSST set numbers may be rearranged from 0 to 3 as shown in Table 8.

TABLE 8
Intra mode   0–1   2–12   13–23   24–44   45–55   56–66
NSST Set      0     1       2       3       2       1

In the NSST transform set mapping, only four transform sets (instead of 35) are used, so that the required memory space can be reduced. In addition, various numbers of transform kernels per transform set may be used as follows.
Case A: Two available transform kernels are used for each transform set, so that the NSST index range is from 0 to 2. For example, when the NSST index is 0, secondary transform (inverse secondary transform at the decoder) may not be applied. When the NSST index is 1 or 2, secondary transform may be applied. The transform set may include two transform kernels, to which an index of 1 or 2 may be mapped.

TABLE 9
NSST Set                 0 (DC, Planar)   1   2   3
# of transform kernels         2          2   2   2

Referring to Table 9, two transform kernels are used for each of the non-separable transform (NSST or LFNST) sets 0 to 3.
Case B: Two available transform kernels are used for transform set 0 and one is used for the others.
Available NSST indices for transform set 0 (DC and planar) are 0 to 2. However, NSST indices for the other modes (transform sets 1, 2 and 3) are 0 to 1.

TABLE 10
NSST Set                 0 (DC, Planar)   1   2   3
# of transform kernels         2          1   1   1

Referring to Table 10, two non-separable transform kernels are set for the non-separable transform (NSST) set corresponding to index 0, and one non-separable transform kernel is set for each of the non-separable transform (NSST) sets corresponding to indices 1, 2 and 3.
Case C: One transform kernel is used per transform set, and the NSST index range is 0 to 1.

TABLE 11
NSST Set                 0 (DC, Planar)   1   2   3
# of transform kernels         1          1   1   1

FIG. 21 is an exemplary flowchart showing encoding for performing transform as an embodiment to which the present disclosure is applied. The encoding apparatus 100 performs primary transform on a residual block (S2105). The primary transform may be referred to as core transform. As an embodiment, the encoding apparatus 100 may perform the primary transform using the above-mentioned MTS. Further, the encoding apparatus 100 may transmit an MTS index indicating a specific MTS from among the MTS candidates to the decoding apparatus 200. Here, the MTS candidates may be configured based on the intra-prediction mode of the current block. The encoding apparatus 100 determines whether to apply secondary transform (S2110). For example, the encoding apparatus 100 may determine whether to apply the secondary transform based on the transform coefficients of the primarily transformed residual block. For example, the secondary transform may be NSST or RST. The encoding apparatus 100 determines the secondary transform (S2115). Here, the encoding apparatus 100 may determine the secondary transform based on the NSST (or RST) transform set designated according to the intra-prediction mode. For example, the encoding apparatus 100 may determine the region to which the secondary transform will be applied based on the size of the current block, prior to step S2115. The encoding apparatus 100 performs the secondary transform determined in step S2115 (S2120). FIG. 22 is an exemplary flowchart showing decoding for performing transform as an embodiment to which the present disclosure is applied. The decoding apparatus 200 determines whether to apply inverse secondary transform (S2205). For example, the inverse secondary transform may be NSST or RST. For example, the decoding apparatus 200 may determine whether to apply the inverse secondary transform based on a secondary transform flag received from the encoding apparatus 100. The decoding apparatus 200 determines the inverse secondary transform (S2210). Here, the decoding apparatus 200 may determine the inverse secondary transform applied to the current block based on the NSST (or RST) transform set designated according to the aforementioned intra-prediction mode. Further, for example, the decoding apparatus 200 may determine the region to which the inverse secondary transform will be applied based on the size of the current block, prior to step S2210. The decoding apparatus 200 performs inverse secondary transform on an inversely quantized residual block using the inverse secondary transform determined in step S2210 (S2215). The decoding apparatus 200 performs inverse primary transform on the inversely secondarily transformed residual block (S2220). The inverse primary transform may be called inverse core transform. In an embodiment, the decoding apparatus 200 may perform the inverse primary transform using the aforementioned MTS.
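The decoder-side flow of FIG. 22 (S2205 to S2220) can be sketched as below; the callables for applying the kernels, and the convention that index 0 means no secondary transform, are illustrative assumptions.

```python
def decode_transform(dequantized_block, secondary_transform_flag, nsst_set, nsst_idx,
                     mts_kernels, apply_inverse_secondary, apply_inverse_primary):
    """Decoder-side flow of FIG. 22 (S2205-S2220); apply_* callables and the way the
    kernels are stored are placeholders, not a normative implementation."""
    block = dequantized_block
    # S2205/S2210: decide whether to apply and select the inverse secondary transform.
    if secondary_transform_flag and nsst_idx > 0:
        kernel = nsst_set[nsst_idx - 1]          # assumes index 0 means "not applied"
        # S2215: inverse secondary transform on the top-left region.
        block = apply_inverse_secondary(block, kernel)
    # S2220: inverse primary (core) transform, e.g., with the MTS pair selected earlier.
    return apply_inverse_primary(block, mts_kernels)
```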
Further, as an example, the decoding apparatus200may determine whether MTS is applied to the current block prior to step S2220. In this case, the decoding flowchart ofFIG.22may further include a step of determining whether MTS is applied. For example, when MTS is applied to the current block (i.e., cu_mts_flag=1), the decoding apparatus200may configure MTS candidates based on the intra-prediction mode of the current block. In this case, the decoding flowchart ofFIG.22may further include a step of configuring MTS candidates. In addition, the decoding apparatus200may determine inverse primary transform applied to the current block using mtx_idx indicating a specific MTS from among the configured MTS candidates. FIG.23is a detailed block diagram of the transform unit120in the encoding apparatus100as an embodiment to which the present disclosure is applied. The encoding apparatus100to which an embodiment of the present disclosure is applied may include a primary transform unit2310, a secondary transform application determination unit2320, a secondary transform determination unit2330, and a secondary transform unit2340. The primary transform unit2310can perform primary transform on a residual block. The primary transform may be referred to as core transform. As an embodiment, the primary transform unit2310may perform the primary transform using the above-mentioned MTS. Further, the primary transform unit2310may transmit an MTS index indicating a specific MTS from among MTS candidates to the decoding apparatus200. Here, the MTS candidates may be configured based on the intra-prediction mode of the current block. The secondary transform application determination unit2320can determine whether to apply secondary transform. For example, the secondary transform application determination unit2320may determine whether to apply the secondary transform based on transform coefficients of the primarily transformed residual block. For example, the secondary transform may be NSST or RST. The secondary transform determination unit2330determines the secondary transform. Here, the secondary transform determination unit2330may determine the secondary transform based on an NSST (or RST) transform set designated according to the intra-prediction mode as described above. For example, the secondary transform determination unit2330may determine a region to which the secondary transform will be applied based on the size of the current block. The secondary transform unit2340can perform the determined secondary transform. FIG.24is a detailed block diagram of the inverse transform unit230in the decoding apparatus200as an embodiment to which the present disclosure is applied. The decoding apparatus200to which the present disclosure is applied includes an inverse secondary transform application determination unit2410, an inverse secondary transform determination unit2420, an inverse secondary transform unit2430, and an inverse primary transform unit2440. The inverse secondary transform application determination unit2410can determine whether to apply inverse secondary transform. For example, the inverse secondary transform may be NSST or RST. For example, the inverse secondary transform application determination unit2410may determine whether to apply the inverse secondary transform based on a secondary transform flag received from the encoding apparatus100. The inverse secondary transform determination unit2420can determine the inverse secondary transform. 
Here, the inverse secondary transform determination unit 2420 may determine the inverse secondary transform applied to the current block based on the NSST (or RST) transform set designated according to the intra-prediction mode. Further, for example, the inverse secondary transform determination unit 2420 may determine the region to which the inverse secondary transform will be applied based on the size of the current block. The inverse secondary transform unit 2430 can perform inverse secondary transform on an inversely quantized residual block using the determined inverse secondary transform. The inverse primary transform unit 2440 can perform inverse primary transform on the inversely secondarily transformed residual block. In an embodiment, the inverse primary transform unit 2440 may perform the inverse primary transform using the aforementioned MTS. Further, as an example, the inverse primary transform unit 2440 may determine whether MTS is applied to the current block. For example, when MTS is applied to the current block (i.e., cu_mts_flag=1), the inverse primary transform unit 2440 may configure MTS candidates based on the intra-prediction mode of the current block. In addition, the inverse primary transform unit 2440 may determine the inverse primary transform applied to the current block using mts_idx indicating a specific MTS from among the configured MTS candidates. FIG. 25 is a flowchart for processing a video signal as an embodiment to which the present disclosure is applied. The process of the flowchart of FIG. 25 can be executed by the decoding apparatus 200 or the inverse transform unit 230. First, the decoding apparatus 200 can determine whether reverse non-separable transform is applied to the current block based on a non-separable transform index and the width and height of the current block. For example, if the non-separable transform index is not 0 and the width and height of the current block are equal to or greater than 4, the decoding apparatus 200 can determine that the non-separable transform is applied. If the non-separable transform index is 0, or the width or height of the current block is less than 4, the decoding apparatus 200 can omit the reverse non-separable transform and perform inverse primary transform. In step S2505, the decoding apparatus 200 determines a non-separable transform set index indicating the non-separable transform set used for the non-separable transform of the current block from among non-separable transform sets predefined based on the intra-prediction mode of the current block. A non-separable transform set index can be set such that it is allocated to each of the four transform sets configured according to the range of the intra-prediction mode, as shown in Table 7 or Table 8. That is, the non-separable transform set index can be determined as a first index value when the intra-prediction mode is 0 or 1, as a second index value when the intra-prediction mode is 2 to 12 or 56 to 66, as a third index value when the intra-prediction mode is 13 to 23 or 45 to 55, and as a fourth index value when the intra-prediction mode is 24 to 44, as shown in Table 7 or Table 8. Here, each of the predefined non-separable transform sets may include two transform kernels, as shown in Table 9. Further, each of the predefined non-separable transform sets may include one or two transform kernels, as shown in Table 10 or 11.
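The mode-to-set mapping described here (and in Table 8) and the kernel selection by the non-separable transform index can be sketched as follows; the container holding the kernel sets is an illustrative assumption.

```python
def nsst_set_index(intra_mode):
    """Map the intra-prediction mode to one of the four non-separable transform
    sets of Table 8 (index values 0 to 3)."""
    if intra_mode in (0, 1):
        return 0
    if 2 <= intra_mode <= 12 or 56 <= intra_mode <= 66:
        return 1
    if 13 <= intra_mode <= 23 or 45 <= intra_mode <= 55:
        return 2
    return 3   # modes 24 to 44

def select_nsst_kernel(kernel_sets, intra_mode, nsst_idx):
    """kernel_sets: list of four kernel lists (illustrative container).
    nsst_idx == 0 means the secondary transform is not applied."""
    if nsst_idx == 0:
        return None
    return kernel_sets[nsst_set_index(intra_mode)][nsst_idx - 1]
```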
In step S2510, the decoding apparatus200determines, as a non-separable transform matrix, a transform kernel indicated by the non-separable transform index for the current block from among transform kernels included in the non-separable transform set indicated by the non-separable transform set index. For example, two non-separable transform kernels may be configured for each non-separable transform set index value and the decoding apparatus200may determine a non-separable transform matrix based on the transform kernel indicated by the non-separable transform index between two transform matrix kernels corresponding to the non-separable transform set index. In step S2515, the decoding apparatus200applies the non-separable transform matrix to a top left region of the current block determined based on the width and height of the current block. For example, non-separable transform may be applied to an 8×8 top left region of the current block if both the width and height of the current block are equal to or greater than 8 and non-separable transform may be applied to a 4×4 region of the current block if the width or height of the current block is less than 8. The size of non-separable transform may also be set to a size (e.g. 48×16, 16×16) corresponding to 8×8 or 4×4 in response to a region to which non-separable transform will be applied. Furthermore, the decoding apparatus200may apply horizontal transform and vertical transform to the current block to which non-separable transform has been applied. Here, the horizontal transform and vertical transform may be determined based on an MTS index for selection of the prediction mode and transform matrix applied to the current block. Hereinafter, a method of applying a primary transform and a secondary transform in a combined manner is described. That is, an embodiment of the present disclosure proposes a method of efficiently designing a transform used in the primary transform and the secondary transform. In this instance, the methods illustrated inFIGS.1to25can be applied, and the redundant description is omitted. As described above, the primary transform represents a transform that is first applied to a residual block in an encoder. If the secondary transform is applied, the encoder may perform the secondary transform on the primary transformed residual block. If the secondary transform was applied, a secondary inverse transform may be performed before a primary inverse transform in a decoder. The decoder may perform the primary inverse transform on a secondary inverse transformed transform coefficient block to derive a residual block. In addition, as described above, a non-separable transform may be used as the secondary transform, and the secondary transform may be applied only to coefficients of a low frequency of a top-left specific region in order to maintain low complexity. The secondary transform applied to these coefficients of the low frequency may be referred to as a non-separable secondary transform (NSST), a low frequency non-separable transform (LFNST), or a reduced secondary transform (RST). The primary transform may be referred to as a core transform. In an embodiment of the present disclosure, a primary transform candidate used in the primary transform and a secondary transform kernel used in the secondary transform may be predefined as various combinations. In the present disclosure, the primary transform candidate used in the primary transform may be referred to as a MTS candidate, but is not limited to the name. 
For example, the primary transform candidate may be a combination of transform kernels (or transform types) respectively applied to the horizontal and vertical directions, and each transform kernel may be one of DCT-2, DST-7 and/or DCT-8. In other words, the primary transform candidate may be at least one combination of DCT-2, DST-7 and/or DCT-8. The following description is given with detailed examples.

Combination A

In combination A, as illustrated in the following Table 12, a primary transform candidate and a secondary transform kernel may be defined according to the intra prediction mode.

TABLE 12
         Primary transform                            Secondary transform
Case 1   2 MTS candidates for angular modes           2 transform kernels for angular modes
         4 MTS candidates for non-angular modes       2 transform kernels for non-angular modes
Case 2   2 MTS candidates for angular modes           1 transform kernel for angular modes
         4 MTS candidates for non-angular modes       2 transform kernels for non-angular modes
Case 3   2 MTS candidates for angular modes           1 transform kernel for angular modes
         4 MTS candidates for non-angular modes       1 transform kernel for non-angular modes

Referring to the above Table 12, as an example (Case 1), two primary transform candidates may be used if the intra prediction mode has directionality, and four primary transform candidates may be used if the intra prediction mode has no directionality (e.g., DC mode, planar mode). In this instance, a secondary transform candidate may include two transform kernels irrespective of the directionality of the intra prediction mode. That is, as described above, a plurality of secondary transform kernel sets may be predefined according to the intra prediction mode, and each of the plurality of predefined secondary transform kernel sets may include two transform kernels. Further, as an example (Case 2), two primary transform candidates may be used if the intra prediction mode has directionality, and four primary transform candidates may be used if the intra prediction mode has no directionality. In this instance, a secondary transform candidate may include one transform kernel if the intra prediction mode has directionality, and a secondary transform candidate may include two transform kernels if the intra prediction mode has no directionality. Further, as an example (Case 3), two primary transform candidates may be used if the intra prediction mode has directionality, and four primary transform candidates may be used if the intra prediction mode has no directionality. In this instance, a secondary transform candidate may include one transform kernel irrespective of the directionality of the intra prediction mode.

Combination B

In combination B, as illustrated in the following Table 13, a primary transform candidate and a secondary transform kernel may be defined according to the intra prediction mode.

TABLE 13
         Primary transform                            Secondary transform
Case 1   3 MTS candidates for angular modes           2 transform kernels for angular modes
         3 MTS candidates for non-angular modes       2 transform kernels for non-angular modes
Case 2   3 MTS candidates for angular modes           1 transform kernel for angular modes
         3 MTS candidates for non-angular modes       2 transform kernels for non-angular modes
Case 3   3 MTS candidates for angular modes           1 transform kernel for angular modes
         3 MTS candidates for non-angular modes       1 transform kernel for non-angular modes

Referring to the above Table 13, as an example (Case 1), three primary transform candidates may be used irrespective of the directionality of the intra prediction mode.
In this instance, a secondary transform candidate may include two transform kernels irrespective of the directionality of the intra prediction mode. That is, as described above, a plurality of secondary transform kernel sets may be predefined according to the intra prediction mode, and each of the plurality of predefined secondary transform kernel sets may include two transform kernels. Further, as an example (Case 2), three primary transform candidates may be used irrespective of the directionality of the intra prediction mode. In this instance, a secondary transform candidate may include one transform kernel if the intra prediction mode has directionality, and the secondary transform candidate may include two transform kernels if the intra prediction mode has no directionality. Further, as an example (Case 3), three primary transform candidates may be used irrespective of the directionality of the intra prediction mode. In this instance, a secondary transform candidate may include one transform kernel irrespective of the directionality of the intra prediction mode.

Combination C

In combination C, as illustrated in the following Table 14, a primary transform candidate and a secondary transform kernel may be defined according to the intra prediction mode.

TABLE 14
         Primary transform                            Secondary transform
Case 1   2 MTS candidates for angular modes           2 transform kernels for angular modes
         3 MTS candidates for non-angular modes       2 transform kernels for non-angular modes
Case 2   2 MTS candidates for angular modes           1 transform kernel for angular modes
         3 MTS candidates for non-angular modes       2 transform kernels for non-angular modes
Case 3   2 MTS candidates for angular modes           1 transform kernel for angular modes
         3 MTS candidates for non-angular modes       1 transform kernel for non-angular modes

Referring to the above Table 14, as an example (Case 1), two primary transform candidates may be used if the intra prediction mode has directionality, and three primary transform candidates may be used if the intra prediction mode has no directionality (e.g., DC mode, planar mode). In this instance, a secondary transform candidate may include two transform kernels irrespective of the directionality of the intra prediction mode. That is, as described above, a plurality of secondary transform kernel sets may be predefined according to the intra prediction mode, and each of the plurality of predefined secondary transform kernel sets may include two transform kernels. Further, as an example (Case 2), two primary transform candidates may be used if the intra prediction mode has directionality, and three primary transform candidates may be used if the intra prediction mode has no directionality. In this instance, a secondary transform candidate may include one transform kernel if the intra prediction mode has directionality, and the secondary transform candidate may include two transform kernels if the intra prediction mode has no directionality. Further, as an example (Case 3), two primary transform candidates may be used if the intra prediction mode has directionality, and three primary transform candidates may be used if the intra prediction mode has no directionality. In this instance, a secondary transform candidate may include one transform kernel irrespective of the directionality of the intra prediction mode. The above description was given focusing on the case of using a plurality of primary transform candidates.
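Combinations A to C can be summarized as a small configuration table; the sketch below is only a compact restatement of Tables 12 to 14, with illustrative names.

```python
# (combination, case) -> (primary candidates for angular modes, primary candidates for
# non-angular modes, secondary kernels for angular modes, secondary kernels for non-angular)
TRANSFORM_COMBINATIONS = {
    ("A", 1): (2, 4, 2, 2), ("A", 2): (2, 4, 1, 2), ("A", 3): (2, 4, 1, 1),
    ("B", 1): (3, 3, 2, 2), ("B", 2): (3, 3, 1, 2), ("B", 3): (3, 3, 1, 1),
    ("C", 1): (2, 3, 2, 2), ("C", 2): (2, 3, 1, 2), ("C", 3): (2, 3, 1, 1),
}

def candidate_counts(combination, case, is_angular):
    p_ang, p_non, s_ang, s_non = TRANSFORM_COMBINATIONS[(combination, case)]
    return (p_ang, s_ang) if is_angular else (p_non, s_non)

# e.g., combination A, case 2, non-angular mode -> 4 primary candidates, 2 secondary kernels
print(candidate_counts("A", 2, is_angular=False))
```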
The following describes, by way of example, combinations of a primary transform and a secondary transform in the case of using a fixed primary transform candidate.

Combination D

In a combination D, as illustrated in the following Table 15, a primary transform candidate and a secondary transform kernel may be defined according to an intra prediction mode.

TABLE 15
          Primary transform                 Secondary Transform
Case 1    1 fixed MTS candidate             2 transform kernels for angular mode
          for all modes                     2 transform kernels for non-angular mode
Case 2    1 fixed MTS candidate             1 transform kernel for angular mode
          for all modes                     2 transform kernels for non-angular mode
Case 3    1 fixed MTS candidate             1 transform kernel for angular mode
          for all modes                     1 transform kernel for non-angular mode

Referring to the above Table 15, as an embodiment, one primary transform candidate may be fixedly used irrespective of the intra prediction mode. For example, the fixed primary transform candidate may be at least one combination of DCT-2, DST-7 and/or DCT-8.

As an example (Case 1), one primary transform candidate may be fixedly used irrespective of the intra prediction mode. In this instance, a secondary transform candidate may include two transform kernels irrespective of the directionality of the intra prediction mode. That is, as described above, a plurality of secondary transform kernel sets may be predefined according to the intra prediction mode, and each of the plurality of predefined secondary transform kernel sets may include two transform kernels.

Further, as an example (Case 2), one primary transform candidate may be fixedly used irrespective of the intra prediction mode. In this instance, a secondary transform candidate may include one transform kernel if the intra prediction mode has directionality, and the secondary transform candidate may include two transform kernels if the intra prediction mode has no directionality.

Further, as an example (Case 3), one primary transform candidate may be fixedly used irrespective of the intra prediction mode. In this instance, a secondary transform candidate may include one transform kernel irrespective of the directionality of the intra prediction mode.

Combination E

In a combination E, as illustrated in the following Table 16, a primary transform candidate and a secondary transform kernel may be defined according to an intra prediction mode.

TABLE 16
          Primary transform (DCT-2 applied)    Secondary Transform
Case 1    DCT-2 is applied                     2 transform kernels for angular mode
                                               2 transform kernels for non-angular mode
Case 2    DCT-2 is applied                     1 transform kernel for angular mode
                                               2 transform kernels for non-angular mode
Case 3    DCT-2 is applied                     1 transform kernel for angular mode
                                               1 transform kernel for non-angular mode

Referring to the above Table 16, as long as DCT-2 is applied as the primary transform, a secondary transform may be defined. In other words, if the MTS is not applied (i.e., if DCT-2 is applied as the primary transform), a secondary transform can be applied. As illustrated in FIG. 10 above, the present disclosure is described by being divided into a case in which the MTS is applied and a case in which the MTS is not applied, but is not limited to such an expression. For example, whether or not the MTS is applied may have the same meaning as whether a transform type (or transform kernel) other than a predefined specific transform type (which may be referred to as a basic transform type, a default transform type, etc.) is used.
If the MTS is applied, a transform type (e.g., any one transform type or a combined transform type of two or more transform types among a plurality of transform types) other than the basic transform type may be used for the transform. Further, if the MTS is not applied, the basic transform type may be used for the transform. In an embodiment, the basic transform type may be configured (or defined) as DCT-2.

As an example (Case 1), when DCT-2 is applied to the primary transform, a secondary transform can be applied. In this instance, a secondary transform candidate may include two transform kernels irrespective of the directionality of the intra prediction mode. That is, as described above, a plurality of secondary transform kernel sets may be predefined according to the intra prediction mode, and each of the plurality of predefined secondary transform kernel sets may include two transform kernels.

Further, as an example (Case 2), when DCT-2 is applied to the primary transform, a secondary transform can be applied. In this instance, a secondary transform candidate may include one transform kernel if the intra prediction mode has directionality, and the secondary transform candidate may include two transform kernels if the intra prediction mode has no directionality.

Further, as an example (Case 3), when DCT-2 is applied to the primary transform, a secondary transform can be applied. In this instance, a secondary transform candidate may include one transform kernel irrespective of the directionality of the intra prediction mode.

FIG. 26 is a flow chart illustrating a method for transforming a video signal according to an embodiment to which the present disclosure is applied. Referring to FIG. 26, the present disclosure is described based on a decoder for convenience of explanation, but is not limited thereto. A transform method for a video signal according to an embodiment of the disclosure can be substantially equally applied to an encoder. The flow chart illustrated in FIG. 26 may be performed by the decoding device 200 or the inverse transform unit 230.

The decoding device 200 parses a first syntax element indicating a primary transform kernel applied to the primary transform of a current block in S2601.

The decoding device 200 determines whether a secondary transform is applicable to the current block based on the first syntax element in S2602.

If the secondary transform is applicable to the current block, the decoding device 200 parses a second syntax element indicating a secondary transform kernel applied to the secondary transform of the current block in S2603.

The decoding device 200 derives a secondary inverse-transformed block by performing a secondary inverse transform for a top-left specific region of the current block using the secondary transform kernel indicated by the second syntax element in S2604.

The decoding device 200 derives a residual block of the current block by performing a primary inverse transform for the secondary inverse-transformed block using the primary transform kernel indicated by the first syntax element in S2605.

As described above, the step S2602 may be performed by determining that the secondary transform is applicable to the current block if the first syntax element indicates a predefined first transform kernel. In this instance, the first transform kernel may be defined as DCT-2.
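The parsing and inverse-transform order of FIG. 26 can be summarized by the following minimal Python sketch. The placeholder callables (parse_primary, parse_secondary, inv_secondary, inv_primary) and the label DCT2 are illustrative assumptions standing in for the syntax parsing and the transform kernels described above; only the control flow of steps S2601 to S2605 is shown.

DCT2 = "DCT-2"  # assumed label for the predefined first transform kernel

def decode_transform_flow(parse_primary, parse_secondary,
                          inv_secondary, inv_primary, coeff_block):
    primary_kernel = parse_primary()                    # S2601
    if primary_kernel == DCT2:                          # S2602: secondary transform applicable?
        secondary_kernel = parse_secondary()            # S2603
        # S2604: secondary inverse transform on a top-left specific region only
        coeff_block = inv_secondary(coeff_block, secondary_kernel)
    # S2605: primary inverse transform using the signaled primary kernel
    return inv_primary(coeff_block, primary_kernel)

# Usage with trivial placeholders (identity "transforms"):
residual = decode_transform_flow(
    parse_primary=lambda: DCT2,
    parse_secondary=lambda: 1,
    inv_secondary=lambda blk, k: blk,
    inv_primary=lambda blk, k: blk,
    coeff_block=[[0] * 4 for _ in range(4)],
)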
Further, as described above, the decoding device200may determine a secondary transform kernel set used for a secondary transform of the current block among predefined secondary transform kernel sets based on an intra prediction mode of the current block. The second syntax element may indicate a secondary transform kernel applied to the secondary transform of the current block in the determined secondary transform kernel set. Further, as described above, each of the predefined secondary transform kernel sets may include two transform kernels. In an embodiment of the present disclosure, an example of a syntax structure in which a multiple transform set (MTS) is used will be described. For example, the following table 17 shows an example of a syntax structure of a sequence parameter set. TABLE 17Descriptorseq_parameter_set_rbsp( ) {sps_seq_parameter_set_idue(v)chroma_format_idcue(v)if( chroma_format_idc = = 3 )separate_colour_plane_flagu(1)pic_width_in_luma_samplesue(v)pic_height_in_luma_samplesue(v)bit_depth_luma_minus8ue(v)bit_depth_chroma_minus8ue(v)qtbtt_dual_tree_intra_flague(v)log2_ctu_size_minus2ue(v)log2_min_qt_size_intra_slices_minus2ue(v)log2_min_qt_size_inter_slices_minus2ue(v)max_mtt_hierarchy_depth_inter_slicesue(v)max_mtt_hierarchy_depth_intra_slicesue(v)sps_cclm_enabled_flague(1)sps_mts_intra_enabled_flague(1)sps_mts_inter_enabled_flague(1)rbsp_trailing_bits( )} Referring to Table 17, whether the MTS according to an embodiment of the present disclosure can be used may be signaled through a sequence parameter set syntax. Here, sps_mts_intraenabled_flag indicates presence or absence of an MTS flag or an MTS index in a lower level syntax (e.g., a residual coding syntax or a transform unit syntax) with respect to an intra-coding unit. In addition, sps_mts_inter_enabled_flag indicates presence or absence of an MTS flag or an MTS index in a lower level syntax with respect to an inter-coding unit. As another example, the following table 18 shows an example of a transform unit syntax structure. TABLE 18Descriptortransform_unit( x0, y0, tbWidth, tbHeight, treeType ) {if( treeType = = SINGLE_TREE || treeType = = DUAL_TREE_LUMA )tu_cbf_luma[ x0 ][ y0 ]ae(v)if( treeType = = SINGLE_TREE || treeType = = DUAL_TREE_CHROMA ) {tu_cbf_cb[ x0 ][ y0 ]ae(v)tu_cbf_cr[ x0 ][ y0 ]ae(v)}if( ( ( ( CuPredMode[ x0 ][ y0 ] = = MODE_INTRA ) && sps_mts_intra_enabled_flag ) ||( ( CuPredMode[ x0 ][ y0 ] = = MODE_INTER ) && sps_mts_inter_enabled_flag ) )&& tu_cbf_luma[ x0 ][ y0 ] && treeType ! = DUAL_TREE_CHROMA&& ( tbWidth <= 32 ) && ( tbHeight <= 32 ) )cu_mts_flag[ x0 ][ y0 ]ae(v)if( tu_cbf_luma[ x0 ][ y0 ] )residual_coding( x0, y0, log2( tbWidth), log2( tbHeight ), 0 )if( tu_cbf_cb[ x0 ][ y0 ] )residual_coding( x0, y0, log2( tbWidth / 2 ), log2( tbHeight / 2 ), 1 )if( tu_cbf_cr[ x0 ][ y0 ] )residual_coding( x0, y0, log2( tbWidth / 2 ), log2( tbHeight / 2 ), 2 )} Referring to Table 18, cu_mts_flag indicates whether MTS is applied to a residual sample of a luma transform block. For example, MTS is not applied to the residual sample of the luma transform block if cu_mts_flag=0, and MTS is applied to the residual sample of the luma transform block if cu_mts_flag=1. Although a case in which MTS is applied and a case in which MTS is not applied based on the MTS flag are separately described in the present disclosure, as described above, the present disclosure is not limited thereto. 
For example, whether MTS is applied may mean whether a transform type (or transform kernel) other than a predefined specific transform type (which may be referred to as a basic transform type, a default transform type, or the like) is used. A transform type (e.g., any one of a plurality of transform types or a combination of two or more thereof) other than the default transform type may be used for a transform if MTS is applied, and the default transform type may be used if MTS is not applied. In an embodiment, the default transform type may be set (or defined) as DCT-2. For example, an MTS flag syntax indicating whether MTS is applied to a current transform block, and an MTS flag syntax indicating a transform type applied to the current block when MTS is applied can be individually transmitted from an encoder to a decoder. As another example, a syntax (e.g., MTS index) including both information on whether MTS is applied to the current transform block and a transform type applied to the current block when MTS is applied can be transmitted from the encoder to the decoder. That is, in the latter embodiment, a syntax (or syntax element) indicating a transform type applied to the current transform block (or unit) in a transform type groups (or transform type set) including the aforementioned default transform type can be transmitted from the encoder to the decoder. Accordingly, despite the expressions, a syntax (MTS index) indicating a transform type applied to a current transform block may include information on whether MTS is applied. In other words, in the latter embodiment, only an MTS index may be signaled without an MTS flag. In this case, DCT-2 may be interpreted as being included in MTS. However, in the present disclosure, a case where DCT-2 is applied may be described as a case where MTS is not applied. Nevertheless, a technical range related to MTS is not limited to corresponding defined contents. As another example, the following table 19 shows an example of a residual unit syntax structure. TABLE 19Descriptorresidual_coding( x0, y0, log2TbWidth, log2TbHeight, cIdx ) {if( transform_skip_enabled_flag &&( cIdx ! = 0 || cu_mts_flag[ x0 ][ y0 ] = = 0 ) &&( log2TbWidth <= 2 ) && ( log2TbHeight <= 2 ) )transform_skip_flag[ x0 ][ y0 ][ cIdx ]ae(v)last_sig_coeff_x_prefixae(v)last_sig_coeff_y_prefixae(v)if( last_sig_coeff_x_prefix > 3 )last_sig_coeff_x_suffixae(v)if( last_sig_coeff_y_prefix > 3 )last_sig_coeff_y_suffixae(v)log2SbSize = ( Min( log2TbWidth, log2TbHeight ) < 2 ? 
1 : 2 )numSbCoeff = 1 << ( log2SbSize << 1 )lastScanPos = numSbCoefflastSubBlock = ( 1 << ( log2TbWidth + log2TbHeight − 2 * log2SbSize ) ) − 1do {if( lastScanPos = = 0 ) {lastScanPos = numSbCoefflastSubBlock− −}lastScanPos− −xS = DiagScanOrder[ log2TbWidth − log2SbSize ][ log2TbHeight − log2SbSize ][ lastSubBlock ][ 0 ]yS = DiagScanOrder[ log2TbWidth − log2SbSize ][ log2TbHeight − log2SbSize ][ lastSubBlock ][ 1 ]xC = ( xS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ lastScanPos ][ 0 ]yC = ( yS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ lastScanPos ][ 1 ]} while( ( xC != LastSignificantCoeffX ) || ( yC !=LastSignificantCoeffY ) )QState = 0for( i = lastSubBlock; i >= 0; i− − ) {startQStateSb = QStatexS = DiagScanOrder[ log2TbWidth − log2SbSize ][ log2TbHeight − log2SbSize ][ lastSubBlock ][ 0 ]yS = DiagScanOrder[ log2Tb Width − log2SbSize ][ log2TbHeight − log2SbSize ][ lastSubBlock ][ 1 ]inferSbDcSigCoeffFlag = 0if( ( i < lastSubBlock ) && ( i > 0 ) ) {coded_sub_block_flag[ xS ][ yS ]ae(v)inferSbDcSigCoeffFlag = 1}firstSigScanPosSb = numSbCoefflastSigScanPosSb = −1for( n = ( i = = lastSubBlock ) ? lastScanPos − 1 : numSbCoeff − 1; n >=0; n− − ) {xC = ( xS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 0 ]yC = ( yS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 1 ]if( coded_sub_block_flag[ xS ][ yS ] && ( n > 0 ||!inferSbDcSigCoeffFlag ) ) {sig_coeff_flag[ xC ][ yC ]ae(v)}if( sig_coeff_flag[ xC ][ yC ] ) {par_level_flag[ n ]ae(v)rem_abs_gt1_flag[ n ]ae(v)if( lastSigScanPosSb = = −1 )lastSigScanPosSb = nfirstSigScanPosSb = n}AbsLevelPass1[ xC ][ yC ] =sig_coeff_flag[ xC ][ yC ] + par_level_flag[ n ] + 2 *rem_abs_gt1_flag[ n ]if( dep_quant_enabled_flag )QState = QStateTransTable[ QState ][ par_level_flag[ n ] ]}for( n = numSbCoeff − 1; n >= 0; n− − ) {if( rem_abs_gt1_flag[ n ] )rem_abs_gt2_flag[ n ]ae(v)}for( n = numSbCoeff − 1; n >= 0; n− − ) {xC = ( xS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 0 ]yC = ( yS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 1 ]if( rem_abs_gt2_flag[ n ] )abs_remainder[ n ]AbsLevel[ xC ][ yC ] = AbsLevelPass1[ xC ][ yC ] +2 * ( rem_abs_gt2_flag[ n ] +abs_remainder[ n ] )}if( dep_quant_enabled_flag || !sign_data_hiding_enabled_flag )signHidden = 0elsesignHidden = ( lastSigScanPosSb − firstSigScanPosSb > 3 ? 1 : 0 )for( n = numSbCoeff − 1; n >= 0; n− − ) {xC = ( xS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 0 ]yC = ( yS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 1 ]if( sig_coeff_flag[ xC ][ yC ] &&( !signHidden || ( n != firstSigScanPosSb ) ) )coeff_sign_flag[ n ]ae(v)}if( dep_quant_enabled_flag ) {QState = startQStateSbfor( n = numSbCoeff − 1; n >= 0; n− − ) {xC = ( xS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 0 ]yC = ( yS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 1 ]if( sig_coeff_flag[ xC ][ yC ] )TransCoeffLevel[ x0 ][ y0 ][ cIdx ][ xC ][ yC ] =( 2 * AbsLevel[ xC ][ yC ] − ( QState > 1 ? 
1 : 0 ) ) *( 1 − 2 * coeff_sign_flag[ n ] )QState = QStateTransTable[ QState ][ par_level_flag[ n ] ]} else {sumAbsLevel = 0for( n = numSbCoeff − 1; n >= 0; n− − ) {xC = ( xS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 0 ]yC = ( yS << log2SbSize ) +DiagScanOrder[ log2SbSize ][ log2SbSize ][ n ][ 1 ]if( sig_coeff_flag[ xC ][ yC ] ) {TransCoeffLevel[ x0 ][ y0 ][ cIdx ][ xC ][ yC ] =AbsLevel[ xC ][ yC ] * ( 1 − 2 * coeff_sign_flag[ n ] )if( signHidden ) {sumAbsLevel += AbsLevel[ xC ][ yC ]if( ( n = = firstSigScanPosSb ) && ( sumAbsLevel % 2 ) = =1 ) )TransCoeffLevel[ x0 ][ y0 ][ cIdx ][ xC ][ yC ] =−TransCoeffLevel[ x0 ][ y0 ][ cIdx ][ xC ][ yC ]}}}}}if( cu_mts_flag[ x0 ][ y0 ] && ( cIdx = = 0 ) &&!transform_skip_flag[ x0 ][ y0 ][ cIdx ] &&( ( CuPredMode[ x0 ][ y0 ] = = MODE_INTRA && numSigCoeff > 2 ) ||( CuPredMode[ x0 ][ y0 ] = = MODE_INTER ) ) ) {mts_idx[ x0 ][ y0 ]ae(v)} Referring to Table 19, transform_skip_flag and/or mts_idx syntax (or syntax element) can be signaled through a residual syntax. However, this is merely an example and the present disclosure is not limited thereto. For example, transform_skip_flag and/or mts_idx syntax may be signaled through a transform unit syntax. Hereinafter, a method for improving complexity by applying primary transform only to a predefined region is proposed. When combinations of various transforms (or transform kernels) such as MTS (e.g., DCT-2, DST-7, DCT-8, DST-1, DCT-5, etc.) are selectively applied to primary transform, complexity may increase. Particularly, various transforms need to be considered as a coding block (or transform block) size increases, which may considerably increase complexity. Accordingly, in an embodiment of the present disclosure, a method for performing a transform only on a predefined region according to specific conditions instead of performing the transform on (or applying to) all regions in order to reduce complexity is proposed. As an embodiment, an encoder may obtain an R×R transform block instead of an M×M transform block by applying forward primary transform to an M×M pixel block (luma block) based on the reduced transform (RT) method described above with respect toFIGS.16to24. For example, an R×R region may be a top-left R×R region in a current block (coding block or transform block). A decoder may obtain an M×M transform block by performing inverse primary transform only on an R×R (MR) region. Consequently, non-zero coefficients may be present only in the R×R region. In this case, the decoder can zero-out coefficients present in regions other than the R×R region without performing calculation therefor. The encoder can perform forward transform such that only the R×R region remains (such that non-zero coefficients are present only in the R×R region). Further, the decoder may apply primary transform (i.e., reverse transform) only to a predefined region determined according to the size of a coding block (or transform block) and/or transform (or transform kernel) type. The following table 20 shows Reduced Adaptive Multiple Transform (RAMT) using a predefined R value (which may be referred to as a reduced factor, a reduced transform factor, or the like) depending on the size of a transform (or the size of a transform block). 
In the present disclosure, Reduced Adaptive Multiple Transform (RAMT), representing a reduced transform adaptively determined depending on a block size, may be referred to as Reduced MTS (Multiple Transform Selection), Reduced explicit multiple transform, Reduced primary transform, and the like.

TABLE 20
Transform Size    Reduced transform 1    Reduced transform 2    Reduced transform 3
8 × 8             4 × 4                  6 × 6                  6 × 6
16 × 16           8 × 8                  12 × 12                8 × 8
32 × 32           16 × 16                16 × 16                16 × 16
64 × 64           32 × 32                16 × 16                16 × 16
128 × 128         32 × 32                16 × 16                16 × 16

Referring to Table 20, at least one reduced transform can be defined depending on a transform size (or transform block size). In an embodiment, which reduced transform among the reduced transforms shown in Table 20 will be used may be determined according to a transform (or transform kernel) applied to a current block (coding block or transform block). Although a case in which three reduced transforms are used is assumed in Table 20, the present disclosure is not limited thereto, and one or more various reduced transforms may be predefined depending on transform sizes.

Further, in an embodiment of the present disclosure, a reduced transform factor (R) may be determined depending on the primary transform in application of the aforementioned reduced adaptive multiple transform. For example, when the primary transform is DCT-2, coding performance deterioration can be minimized by not using a reduced transform for a small block or by using a relatively large R value, because the computational complexity of DCT-2 is lower than those of other primary transforms (e.g., a combination of DST-7 and/or DCT-8). The following table 21 shows Reduced Adaptive Multiple Transform (RAMT) using a predefined R value depending on a transform size (or transform block size) and a transform kernel.

TABLE 21
Transform Size    Reduced transform for DCT-2    Reduced transform except DCT-2
8 × 8             8 × 8                          4 × 4
16 × 16           16 × 16                        8 × 8
32 × 32           32 × 32                        16 × 16
64 × 64           32 × 32                        32 × 32
128 × 128         32 × 32                        32 × 32

Referring to Table 21, different reduced transform factors can be used in a case in which the transform applied as the primary transform is DCT-2 and in a case in which the transform applied as the primary transform is a transform except DCT-2 (e.g., a combination of DST-7 and/or DCT-8).

FIG. 27 is a diagram illustrating a method for encoding a video signal using a reduced transform as an embodiment to which the present disclosure is applied.

Referring to FIG. 27, an encoder determines whether to apply a transform to a current block (S2701). The encoder may encode a transform skip flag according to a determination result. In this case, the step of encoding the transform skip flag may be included in step S2701.

When the transform is applied to the current block, the encoder determines a transform kernel applied to the primary transform of the current block (S2702). The encoder may encode a transform index indicating the determined transform kernel. In this case, the step of encoding the transform index may be included in step S2702.

The encoder determines a region in which a significant coefficient is present within the current block based on the transform kernel applied to the primary transform of the current block and the size of the current block (S2703).
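As a non-normative illustration of the size-dependent selection in Table 21, the following Python sketch looks up the reduced transform size from the transform size and from whether the primary transform is DCT-2. The dictionaries mirror the rows of Table 21; the function name reduced_transform_size is an assumption.

REDUCED_SIZE_DCT2     = {8: 8, 16: 16, 32: 32, 64: 32, 128: 32}
REDUCED_SIZE_NON_DCT2 = {8: 4, 16: 8, 32: 16, 64: 32, 128: 32}

def reduced_transform_size(transform_size, is_dct2):
    """Return the reduced size R for a transform_size x transform_size block (Table 21)."""
    table = REDUCED_SIZE_DCT2 if is_dct2 else REDUCED_SIZE_NON_DCT2
    return table[transform_size]

# Example: a 64x64 block with DST-7/DCT-8 keeps a 32x32 low-frequency region,
# while an 8x8 block with the same kernels keeps only a 4x4 region.
assert reduced_transform_size(64, is_dct2=False) == 32
assert reduced_transform_size(8, is_dct2=False) == 4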
As an embodiment, the encoder may determine a region having a width and/or a height corresponding to a predefined size as the region in which the significant coefficient is present when the transform kernel indicated by the transform index are a predefined transform and the width and/or the height of the current block are greater than the predefined size. For example, the predefined transform may be one of a plurality of transform combinations of DST-7 and/or DCT-8, and the predefined size may be 16. Alternatively, the predefined transform may be a transform except DCT-2. As an example, the encoder may determine a region having a width and/or a height of 32 as the region to which the primary transform is applied when the transform kernel indicated by the transform index is DCT-2 and the width and/or the height of the current block are greater than 32. Further, as an embodiment, the encoder may determine a smaller value between the width of the current block and a first threshold value as the width of the region to which the primary transform is applied and determine a smaller value between the height of the current block and the first threshold value as the height of the region in which the significant coefficient is present when the transform kernel indicated by the transform index belongs to a first transform group. For example, the first threshold value may be 32, but the present disclosure is not limited thereto and the first threshold value may be 4, 8, or 16 as shown in Table 20 or Table 21. In addition, the encoder may determine a smaller value between the width of the current block and a second threshold value as the width of the region to which the primary transform is applied and determine a smaller value between the height of the current block and the second threshold value as the height of the region in which the significant coefficient is present when the transform kernel indicated by the transform index belongs to a second transform group. For example, the second threshold value may be 16, but the present disclosure is not limited thereto and the second threshold value may be 4, 6, 8, 12, or 32 as shown in Table 20 or Table 21. As an embodiment, the first transform group may include DCT2 and the second transform group may include a plurality of transform combinations of DST7 and/or DCT8. The encoder performs forward primary transform using the transform kernel applied to the primary transform of the current block (S2704). The encoder can obtain primarily transformed transform coefficients in the region in which the significant coefficient is present by performing the forward primary transform. As an embodiment, the encoder may apply secondary transform to the primarily transformed transform coefficients. In this case, the methods described above with reference toFIG.6toFIG.26can be applied. FIG.28is a diagram illustrating a method for decoding a video signal using reduced transform as an embodiment to which the present disclosure is applied. A decoder checks whether transform skip is applied to a current block (S2801). When transform skip is not applied to the current block, the decoder obtains a transform index indicating a transform kernel applied to the current block from a video signal (S2802). The decoder determines a region in which primary transform (i.e., primary inverse transform) is applied to the current block based on the transform kernel indicated by the transform index and the size (i.e., the width and/or the height) of the current block (S2803). 
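A minimal Python sketch of the per-dimension region sizing described for steps S2703 and S2803 is given below, assuming the first transform group (e.g., DCT-2) uses a threshold of 32 and the second transform group (combinations of DST-7 and/or DCT-8) uses a threshold of 16, as in the examples above; the thresholds are configurable and the function name is illustrative.

def primary_transform_region(width, height, first_group, threshold1=32, threshold2=16):
    """Width and height of the region in which significant coefficients may be present."""
    threshold = threshold1 if first_group else threshold2
    return min(width, threshold), min(height, threshold)

# A 64x16 block keeps a 32x16 region with DCT-2 (first transform group) and a
# 16x16 region with a DST-7/DCT-8 combination (second transform group).
assert primary_transform_region(64, 16, first_group=True) == (32, 16)
assert primary_transform_region(64, 16, first_group=False) == (16, 16)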
As an embodiment, the decoder may set coefficients of the remaining region except the region to which the primary transform is applied as 0. In addition, as an embodiment, when the transform kernel indicated by the transform index is a predefined transform and the width and/or the height of the current block are greater than a predefined size, the decoder may determine a region having a width and/or a height corresponding to the predefined size as the region to which the primary transform is applied. For example, the predefined transform may be any one of a plurality of transform combinations of DST-7 and/or DCT-8, and the predefined size may be 16. Alternatively, the predefined transform may be a transform except DCT-2. For example, when the transform kernel indicated by the transform index is DCT-2 and the width and/or the height of the current block are greater than 32, the decoder may determine a region having a width and/or a height of 32 as the region to which the primary transform is applied. Furthermore, as an embodiment, the decoder may determine a smaller value between the width of the current block and a first threshold value as the width of the region to which the primary transform is applied and determine a smaller value between the height of the current block and the first threshold value as the height of the region to which the primary transform is applied when the transform kernel indicated by the transform index belongs to a first transform group. For example, the first threshold value may be 32, but the present disclosure is not limited thereto and the first threshold value may be 4, 8, or 16 as shown in Table 20 or Table 21. In addition, the decoder may determine a smaller value between the width of the current block and a second threshold value as the width of the region to which the primary transform is applied and determine a smaller value between the height of the current block and the second threshold value as the height of the region to which the primary transform is applied is present when the transform kernel indicated by the transform index belongs to a second transform group. For example, the second threshold value may be 16, but the present disclosure is not limited thereto and the second threshold value may be 4, 6, 8, 12, or 32 as shown in Table 20 or Table 21. As an embodiment, the first transform group may include DCT-2 and the second transform group may include a plurality of transform combinations of DST7 and/or DCT8. The decoder performs inverse primary transform on the region to which the primary transform is applied using the transform kernel indicated by the transform index (S2804). The decoder can obtain primarily inversely transformed transform coefficients by performing the inverse primary transform. As an embodiment, the decoder may apply secondary transform to inversely quantized transform coefficients prior to the primary transform. In this case, the methods described above with reference toFIG.6toFIG.26may be applied. First Embodiment According to the embodiments of the present disclosure, it is possible to considerably reduce worst case complexity by performing a transform only on a predefined region according to specific conditions. 
In addition, in an embodiment of the present disclosure, when the MTS (EMT or AMT) flag is 0 (i.e., when DCT-2 transform is applied in both the horizontal (lateral) direction and the vertical (longitudinal) direction), the encoder/decoder can perform zero-out for high frequency components (i.e., derive or set the high frequency components as 0) except 32 top-left coefficients in the horizontal and vertical directions. Although the present embodiment is referred to as a first embodiment for convenience of description in embodiments which will be described later, embodiments of the present disclosure are not limited thereto. For example, in the case of a 64×64 TU (or CU), the encoder/decoder can keep transform coefficients only for a top-left 32×32 region and perform zero-out for coefficients of the remaining region. Further, in the case of a 64×16 TU, the encoder/decoder can keep transform coefficients only for a top-left 32×16 region and perform zero-out for coefficients of the remaining region. In addition, in the case of an 8×64 TU, the encoder/decoder can keep transform coefficients only for a top-left 8×32 region and perform zero-out for coefficients of the remaining region. That is, transform coefficients can be set such that transform coefficients are present only for a maximum length of 32 in both the horizontal and vertical directions, which can improve transform efficiency. As an embodiment, such a zero-out method may be applied to only a residual signal to which intra-prediction is applied, applied to only a residual signal to which inter-prediction is applied, or applied to both a residual signal to which intra-prediction is applied and a residual signal to which inter-prediction is applied. Second Embodiment In addition, in an embodiment of the present disclosure, when the MTS flag is 1 (i.e., when a transform (e.g., DST-7 or DCT-8) other than DCT-2 transform is applied in the horizontal direction and the vertical direction), the encoder/decoder can perform zero-out for high frequency components (i.e., derive or set the high frequency components as 0) except coefficients of a specific top-left region. Although the present embodiment is referred to as a second embodiment for convenience of description in embodiments which will be described later, embodiments of the present disclosure are not limited thereto. As an embodiment, the encoder/decoder may keep only a transform coefficient region corresponding to a part of the top-left region as in the following examples. That is, the encoder/decoder can preset the length (or number) of transform coefficients in the horizontal and/or vertical directions to which primary transform is applied depending on a width and/or a height. For example, coefficients out of the length to which primary transform is applied can be zero-out.When the width (w) is equal to or greater than 2n, transform coefficients only for a length of w/2p from the left side may be kept and transform coefficients of the remaining region may be fixed (or set) to 0 (zero-out).When the height (h) is equal to or greater than 2m, transform coefficients only for a length of h/2q from the top may be kept and transform coefficients of the remaining region may be fixed to 0. For example, the values m, n, p, and q may be predefined as various values. For example, the values m, n, p, and q may be set to integer values equal to or greater than 0. 
Alternatively, they may be set as in the following examples.
1) (m, n, p, q) = (5, 5, 1, 1)
2) (m, n, p, q) = (4, 4, 1, 1)

When the configuration of 1) is predefined, for example, transform coefficients may be kept only for a top-left 16×16 region with respect to a 32×16 TU, and transform coefficients may be kept only for a top-left 8×16 region with respect to an 8×32 TU.

As an embodiment, such a zero-out method may be applied only to a residual signal to which intra-prediction is applied, only to a residual signal to which inter-prediction is applied, or to both a residual signal to which intra-prediction is applied and a residual signal to which inter-prediction is applied.

Third Embodiment

In another embodiment of the present disclosure, when the MTS flag is 1 (i.e., when a transform (e.g., DST-7 or DCT-8) other than the DCT-2 transform is applicable in the horizontal direction and the vertical direction), the encoder/decoder can perform zero-out for high frequency components (i.e., derive or set the high frequency components as 0) except coefficients of a specific top-left region. More specifically, the encoder can keep the coefficients of the specific top-left region and perform zero-out for the remaining high frequency components, and the decoder can recognize the zero-out region in advance and perform decoding using the coefficients of the non-zero-out region. However, embodiments of the present disclosure are not limited thereto, and the zero-out process from the viewpoint of the decoder can be understood as a process of deriving (recognizing or setting) the zero-out region as 0. Although the present embodiment is referred to as a third embodiment for convenience of description in embodiments which will be described later, embodiments of the present disclosure are not limited thereto.

As an embodiment, the encoder/decoder may keep only a transform coefficient region corresponding to a part of the top-left region as in the following examples. That is, the encoder/decoder can preset the length (or number) of transform coefficients in the horizontal and/or vertical directions to which the primary transform is applied depending on a width and/or a height. For example, coefficients outside the length to which the primary transform is applied can be zeroed out.

When the height (h) is equal to or greater than the width (w) and equal to or greater than 2n, transform coefficients of only a top-left w×(h/2p) region may be kept and transform coefficients of the remaining region may be fixed (or set) to 0 (zero-out).

When the width (w) is greater than the height (h) and equal to or greater than 2m, transform coefficients of only a top-left (w/2q)×h region may be kept and transform coefficients of the remaining region may be fixed to 0.

Although the length in the vertical direction is reduced (h/2p) when the height (h) equals the width (w) in the above-described example, the length in the horizontal direction may instead be reduced (w/2q).

For example, the values m, n, p, and q may be predefined as various values. For example, the values m, n, p, and q may be set to integer values equal to or greater than 0. Alternatively, they may be set as in the following examples.
1) (m, n, p, q) = (4, 4, 1, 1)
2) (m, n, p, q) = (5, 5, 1, 1)

When the configuration of 1) is predefined, for example, transform coefficients may be kept only for a top-left 16×16 region with respect to a 32×16 TU, and transform coefficients may be kept only for a top-left 8×8 region with respect to an 8×16 TU.
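The kept regions in these examples follow directly from the listed conditions when 2n and 2m are read as powers of two (2^n, 2^m) and w/2p, h/2q as w/2^p, h/2^q. A minimal Python sketch of the third-embodiment region, under configuration 1) with (m, n, p, q) = (4, 4, 1, 1), is given below; the function name and the (kept width, kept height) return convention are assumptions.

def kept_region(w, h, m=4, n=4, p=1, q=1):
    """Top-left region kept before zero-out (third embodiment, configuration 1)."""
    if h >= w and h >= (1 << n):
        return w, h >> p          # keep a top-left w x (h / 2^p) region
    if w > h and w >= (1 << m):
        return w >> q, h          # keep a top-left (w / 2^q) x h region
    return w, h                   # no zero-out for smaller blocks

# Reproduces the examples above: a 32x16 TU keeps 16x16, an 8x16 TU keeps 8x8.
assert kept_region(32, 16) == (16, 16)
assert kept_region(8, 16) == (8, 8)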
As an embodiment, such a zero-out method may be applied to only a residual signal to which intra-prediction is applied, applied to only a residual signal to which inter-prediction is applied, or applied to both a residual signal to which intra-prediction is applied and a residual signal to which inter-prediction is applied. The first embodiment pertaining to a method of limiting a transform coefficient region when the MTS flag is 0, and the second and third embodiments pertaining to a method of limiting a transform coefficient region when the MTS flag is 1 may be individually applied or may be applied in a combined manner. As an embodiment, configurations combined as follows may be applied.1) First embodiment+second embodiment2) First embodiment+third embodiment As mentioned in the second and third embodiments, the zero-out method may be applied to only a residual signal to which intra-prediction is applied, applied to only a residual signal to which inter-prediction is applied, or applied to both a residual signal to which intra-prediction is applied and a residual signal to which inter-prediction is applied as an embodiment. Accordingly, configurations combined as follows may be applied to a case in which the MTS flag is 1. Here, the first embodiment may be applied to a case in which the MTS flag is 0. TABLE 22Config.Intra-predictionInter-predictionIndexresidual signalresidual signal1Zero-out is not appliedZero-out is not applied2Zero-out is not appliedFirst embodiment3Zero-out is not appliedSecond embodiment4First embodimentZero-out is not applied5First embodimentFirst embodiment6First embodimentSecond embodiment7Second embodimentZero-out is not applied8Second embodimentFirst embodiment9Second embodimentSecond embodiment In an embodiment of the present disclosure, the encoder/decoder may not perform residual coding for a region regarded as a region having transform coefficients of 0 according to zero-out. That is, the encoder/decoder can be defined such that they perform residual coding only for regions other than zero-out regions. In the above-described first, second and third embodiments, a region (or coefficient) that needs to have a value of 0 is obviously determined. That is, regions other than the top-left region in which presence of transform coefficients is permitted are zero-out. Accordingly, in an entropy coding (or residual coding) process, the encoder/decoder may be configured to bypass a region guaranteed to have a value of 0 without performing residual coding thereon. In an embodiment, the encoder/decoder may code a flag (referred to as subblock_flag) (or a syntax, or a syntax element) indicating presence or absence of a non-zero transform coefficient in a coefficient group (CG). Here, the CG is a subblock of a TU and may be set to a 4×4 or 2×2 block according to the shape of the TU block and/or whether the TU is a chroma/luma component. Here, the encoder/decoder can scan the CG to code coefficient values (or coefficient level values) only in a case where the subblock_flag is 1. Accordingly, the encoder/decoder may configure CGs belonging to a zero-out region such that they have a value of 0 by default without performing subblock_flag coding thereon. In an embodiment, the encoder may code the position of the last coefficient in forward scanning order (or a syntax or a syntax element indicating the position of the last significant coefficient). 
For example, the encoder may code last_coefficient_position_x, which is a horizontal position, and last_coefficient_position_y, which is a vertical position. Although the maximum available values of last_coefficient_position_x and last_coefficient_position_y may be determined as (width−1) and (height−1) of a TU, when the region in which non-zero coefficients can be present is limited according to zero-out, the maximum available values of last_coefficient_position_x and last_coefficient_position_y may also be limited. Accordingly, the encoder/decoder may limit the maximum available values of last_coefficient_position_x and last_coefficient_position_y in consideration of zero-out and then code them. For example, when the binarization method applied to last_coefficient_position_x and last_coefficient_position_y is a truncated unary (or truncated Rice (TR), or truncated binary (TB)) binarization method, the encoder/decoder can control (reduce) the maximum length of the truncated unary code such that it corresponds to the adjusted maximum values (i.e., the maximum available values of last_coefficient_position_x and last_coefficient_position_y).

FIG. 29 is an example of a case where a separable transform is applied according to an embodiment of the present disclosure. FIG. 29a illustrates a region where a significant coefficient is present and a region where zero-out is applied upon forward transform. FIG. 29b illustrates a region where a significant coefficient is present and a region where zero-out is applied upon backward transform.

A technique of saving only the coefficients of a low-frequency region (e.g., a top-left 16×16 region in a 32×32 block) and applying zero-out to the remaining coefficients (setting or deriving the coefficients as 0) for a block to which MTS is applied, based on the forward transform, may be referred to as reduced multiple transform selection (RMTS).

For example, when the size of a block including residual sample values is 32×32 and a reduced 16×16 block according to the application of RMTS is output in FIG. 29a, the horizontal transform is applied such that only the left region including 16 samples in the row direction is kept, and the right region is regarded as having coefficients of 0 after the horizontal transform is applied. Thereafter, the vertical transform is applied such that only the upper region including 16 samples in the column direction is kept, and the remaining lower region is regarded as having coefficients of 0.

Referring to FIG. 29b, the size of a transform block including transform coefficients is 32×32, a transform is applied to a top-left 16×16 region by the application of RMTS, and the remaining regions are considered to have coefficients of 0. Since a vertical inverse transform is applied to the 16×16 region, significant coefficients are generated in the left region of the transform block, and the right region is still considered to have coefficients of 0. Thereafter, since a horizontal inverse transform is applied to each row of the left region, significant coefficients may be present in the entire 32×32 region of the transform block.

As an embodiment of the present disclosure, reduced 32-point MTS (RMTS32), in which the transform for high frequency coefficients is omitted, is proposed. In this case, the 32-point MTS indicates a method of applying a transform to a row or a column having a length of 32. In this case, the 32-point MTS may require a maximum of 64 multiplication operations per output sample considering the worst-case computational complexity.
RMTS32 is proposed to reduce operational complexity and also reduce memory usage. According to RMTS32, when an MTS flag is 1 (or when an MTS index is greater than 0) and a block width (height) is greater than or equal to 32, a maximum top left 16×16 region is maintained and the remaining regions are considered (zero-out) to be 0, and up to left (top) 16 coefficients are maintained. Zero-out is independently applied horizontally or vertically, RMTS may be applied to all block shapes. Assuming that RMTS is applied to a 32 length, the top left 16×16 region may be maintained for a 32×32 transform block, a top left 16×8 region may be maintained for a 32×8 transform block, and a top left 16×16 region may be maintained for 16×32. Operational complexity of 32-point MTS can be reduced to a half by using RMTS32 from a viewpoint of an operation count. In this case, the 32-point MTS is a transform matrix applied to a row or a column having a length 32 of a block (when an MTS flag is 1 or an MTS index is greater than 0). Furthermore, from a viewpoint of memory usage, only half the transform base vectors of 32-point MTS matrices may need to be stored. With respect to a region considered to be 0, residual coding may be omitted because a related subblock flag is implicitly derived as 0. Truncated unary binarization of the last coefficient position may be also adjusted by considering a maximum possible position. From a viewpoint of memory usage, RMTS32 generates 16 coefficients for a row or a column having a 32-length. Accordingly, in 32×32 DST-7/DCT-8, only the first 16 transform base vectors need to be stored. Accordingly, memory usage for storing 32-length DST-7/DCT-8 can be reduced to a half (e.g., from 2 KB to 1 KB). For example, a residual coding syntax for implementing the aforementioned RMTS32 may be set as in Table 23. TABLE 23Descriptorresidual_coding( x0, y0, log2TbWidth, log2TbHeight, cIdx ) {if( transform_skip_enabled_flag &&( cIdx ! = 0 || tu_mts_flag[ x0 ][ y0 ] = = 0 ) &&( log2TbWidth <= 2 ) && ( log2TbHeight <= 2 ) )transform_skip_flag[ x0 ][ y0 ][ cIdx ]ae(v)last_sig_coeff_x_prefixae(v)last_sig_coeff_y_prefixae(v)if( last_sig_coeff_x_prefix > 3 )last_sig_coeff_x_suffixae(v)if( last_sig_coeff_y_prefix > 3 )last_sig_coeff_y_suffixae(v)......for( i = lastSubBlock; i >= 0; i− − ) {......if( ( i < lastSubBlock ) && ( i > 0 ) ) {if( transform_skip_flag[ x0 ][ y0 ][ cIdx ] == 1 ||( tu_mts_flag[ x0 ][ y0 ] == 0 &&( xS << log2SbSize ) < 32 && ( yS <<log2SbSize ) < 32 ) ||( tu_mts_flag[ x0 ][ y0 ] == 1 && ( cIdx != 0 ||( ( xS << log2SbSize ) < 16 ) &&( ( yS << log2SbSize ) < 16 ) ) ) )coded_sub_block_flag[ xS ][ yS ]ae(v)inferSbDcSigCoeffFlag = 1}......}if( tu_mts_flag[ x0 ][ y0 ] && ( cIdx = = 0 ) )mts_idx[ x0 ][ y0 ][ cIdx ]ae(v)} Furthermore, a valid width (nonZeroW) and a valid height (nonZeroH) of a region to which a transform is applied in a transform block may be determined as follows. nonZeroW=Min(nTbW,trTypeHor==0 ?32:16) nonZeroH=Min(nTbH,trTypeVer==0 ? 32:16) In this case, nTbW indicates the width of a current block (transform block), nTbH indicates the height of the current block (transform block), and trTypeHor and trTypeVer indicate the type of horizontal transform kernel and the type of vertical transform kernel, respectively. Min(A, B) is a function for outputting a smaller value among A and B. For example, trTypeHor and trTypeVer may be determined as in Table 24 based on an MTS index (mts_idx), that is, an index indicating a transform type. 
TABLE 24mts_idxtrTypeHortrTypeVer000111221312422 In this case, a value of trTypeHor, trTypeVer indicates one of types of transform kernel. For example, 0 may indicate DCT-2, 1 may indicate DST-7, and 2 may indicate DCT-8. In Table 24, when mts_idx is 0, trTypeHor and trTypeVer are also determined as 0. When mts_idx is not 0 (when it is greater than 0), trTypeHor and trTypeVer are also determined as a non-zero (greater than 0) value. In other words, the valid width (nonZeroW) of the region to which a transform is applied is determined as a smaller value among the width (nTbW) of a current block (transform block) and 16 when a transform index (trTypeHor) is greater than a reference value (e.g., 0) (i.e., when trTypeHor is not 0), and may be determined a smaller value among the width of the current block and 32 when the transform index (trTypeHor) is not greater than a reference value (e.g., 0) (i.e., when trTypeHor is 0). Furthermore, the valid height (nonZeroH) of the region to which a transform is applied may be determined as a smaller value among the height (nTbH) of a current block (transform block) and 16 when a vertical transform type index (trTypeVer) is greater than a reference value (e.g., 0), and may be determined as a smaller value among the height (nTbH) of the current block (transform block) and 32 when the vertical transform type index (trTypeVer) is not greater than a reference value (e.g., 0). According to an embodiment of the present disclosure, as follows, only half the 32-length DST-7/DCT-8 may be stored in the memory. For example, when a (horizontal or vertical) transform type index (trType) is 1 (e.g., DST-7) and the number of samples of a row or column (nTbs) of a transform target block is 32, a transform matrix may be derived as in Tables 25 and 26. Matrices of Tables 25 and 26 are horizontally concatenated to constitute one matrix. In [m][n], m is a transversal index, and n is a longitudinal index. If the matrix of Table 25 and the matrix of Table 26 are concatenated, a 16×32 matrix is derived, and the corresponding matrix becomes a matrix for a forward transform. An inverse transform may be performed through the matrix configured by Tables 25 and 26 and proper indexing. Furthermore, in Tables 25 and 26, the strike-out for 16 rows below means that the 16 rows are deleted because they become unnecessary by the application of the reduced transform. TABLE 25transMatrix[ m ][ n ] = transMatrixCol0to15[ m ][ n ] with m = 0 . . . 15,n = 0 . . . 15transMatrixCol0to15 ={{491317212630343842455053566063 }{13263850606877828689908885807466 }{2142607484898984746042210−21−42−60 }{30567788898063389−21−50−72−85−90−84−68 }{3868868874459−306384−90−78−53−172156 }{4578907742−4−50−80−90−74−38953828972 }{538585530−53−85−85−530538585530−53 }{60897421−42−8484−42217489600−60−89−74 }{669056−13−74−88−4526808434−38−85−78−2150 }{728634−45−89−6313788221−56−90−53268477 }{77809−72−84−1766862660−88−3453904245 }{8072−17−86−60349045−50−89−30638513−74−78 }{8460−42−89−217474−21−89−4260840−84−6042 }{8645−63−78219026−77−664288485−506080 }{8830−78−566077−34−884892680−536374−38 }{9013−88−268438−78−507260−63−685377−42−82 }}, TABLE 26transMatrix[ m ][ n ] = transMatrixCol16to31 [ m − 16 ][ n ] withm = 16 . . . 31, n = 0 . . . 
15                (8-826)transMatrixCol16to31 ={{66687274777880828485868888899090 }{564534219−4−17−30−42−53−63−72−78−84−88−90 }{−74−84−89−89−84−74−60−42−210214260748489 }{−45−17134266829086745326−4−34−60−78−88 }{8090826026−13−50−77−89−85−66−344427288 }{34−13−56−84−88−68−30176085886626−21−63−86 }{−85−85−530538585530−53−85−85−5305385 }{−2142848442−21−74−89−60060897421−4284 }{88729−60−90−634688953−17−77−86−423082 }{9−66−88−42388868−4−74−85−30509060−17−80 }{−90−5038895630−88−63218568−13−82−74478 }{48268−21−88−56389042−5388−2666849−77 }{8921−74−74218942−60−8408460−42−89−2174 }{−17−90−307468−38−88−98453−5682138934−72 }{−8699021−82−506672−42−85139017−84−4568 }{3086−17−894909−88−218534−80−45745666 }}, Furthermore, when a (horizontal or vertical) transform type index (trType) is 2 (e.g., DCT-8) and the number of samples of a row or column (nTbs) of a transform target block is 32, a transform matrix may be derived as in Tables 27 and 28. Matrices of Tables 27 and 28 are horizontally concatenated to constitute one matrix. In [m][n], m is a transversal index, and n is a longitudinal index. If the matrix of Table 27 and the matrix of Table 28 are concatenated, a 16×32 matrix is derived, and the corresponding matrix becomes a matrix for a forward transform. An inverse transform may be performed through the matrix configured by Tables 27 and 28 and proper indexing. Furthermore, in Tables 27 and 28, the strike-out for 16 rows below means that the 16 rows are deleted because they become unnecessary by the application of the reduced transform. TABLE 27transMatrix[ m ][ n ] = transMatrixCol0to15[ m ][ n ] with m = 0 . . . 15,n = 0 . . . 15transMatrixCol0to15 ={{90908988888685848280787774726866 }{908884787263534230174−9−21−34−45−56 }{8984746042210−21−4260−74−84−89−89−84−74 }{88786034426−53−74−86−90−82−66−42−131745 }{8872424−3466−85−89−77−50−132660829080 }{866321−26−66−88−8560−1730688884561334 }{85530−53−85−85−530538585530−538585 }{8442−21−74−89−60060897421−42−84−84−4221 }{8230−42−86−77−175389684−63−90−6097288 }{8017−60−90−50308574468−88−38428866−9 }{784−74−82−1368852163−88−30568938−5090 }{77−9−84−66268853−42−90−38568821−68−82−4 }{74−21−89−4260840−84−60428921−74−742189 }{72−34−89−138256−53−849883868−74309017 }{68−45−84179013−85−427266−50−8221909−86 }{66−56−744580−34−852188−9−90−48917−86−30 }}, TABLE 28transMatrix[ m ][ n ] = transMatrixCol16to31[ m − 16 ][ n ] withm = 16 . . . 31, n = 0 . . . 
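The data flow of the reduced 32-length transform can be illustrated with the following Python sketch. The 16×32 matrix here is a random placeholder, not the DST-7/DCT-8 coefficients of Tables 25 to 28, and the transposed-matrix inverse direction ignores the normalization and clipping used in an actual codec; only the 32-in/16-out and 16-in/32-out shapes described above are shown.

import numpy as np

rng = np.random.default_rng(0)
T = rng.integers(-90, 91, size=(16, 32))        # placeholder for a 16x32 reduced kernel

residual_row = rng.integers(-256, 256, size=32) # one 32-length row or column of residuals

coeffs = T @ residual_row        # forward reduced transform: 32 samples -> 16 coefficients
reconstructed = T.T @ coeffs     # inverse direction via the transpose: 16 -> 32 samples

# Only 16 of the 32 basis vectors are stored, halving the memory needed for the
# 32-length DST-7/DCT-8 kernels (e.g., from 2 KB to 1 KB), as noted above.
print(coeffs.shape, reconstructed.shape)        # (16,) (32,)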
15transMatrixCol16to31 ={{636056535045423834302621171394 }{−66−74−80−85−88−90−89−86−82−77−68−60−50−38−26−13 }{−60−42−210214260748489898474604221 }{68849085725021−9−3863−80−89−88−77−56−30 }{5621−17−53−78−90−84−63−309457488866838 }{−72−89−82−53−938749080504−42−77−90−78−45 }{−530538585530−53−8585−53053858553 }{7489600−60−89−74−2142848442−21−74−89−60 }{50−21−7885−3834848026−45−88−74−13569066 }{−77−84−26539056−21−82−78−13638945−34−86−72 }{−45429053−34−88−60268666−17−84−7298077 }{7874−13−85−63308950−45−90−34608617−72−80 }{42−60−8408460−42−89−217474−21−89−426084 }{−80−605085−488−426677−26−90−21786345−86 }{−387463−53−802689488−347760−56−783088 }{8242−77−536863−60−725078−38−842688−13−90 }}, By constructing a transform matrix for generating an output vector including 16 values with respect to an input vector including 32 residual signal samples (upon forward transform) as in Tables 25 and 26 or Tables 27 and 28 and performing an inverse transform that outputs an output vector including 32 residual signal samples with respect to an input vector including 16 values by using the matrices of Tables 25 and 26 or Tables 27 and 28 and indexing for an inverse transform (upon backward transform), operational complexity and memory usage can be reduced. In an embodiment, information on the position of a non-zero last significant coefficient (last_sig_coeff_x_prefix, last_sig_coeff_y_prefix) may be binarized as in Table 29. TABLE 29BinarizationSyntax structureSyntax elementProcessInput parameters. . .. . .. . .. . .residual_coding( ). . .. . .. . .last_sig_coeff_x_prefixTRlog2MaxX = ( transform_skip_flag [ x0 ][ y0 ][ cIdx ] == 1 ) ?log2TbWidth : Min( log2TbWidth, ( tu_mts_flag[ x0 ][ y0 ] == 1&& cIdx == 0 ) ? 4 : 5 )cMax = ( log2MaxX << 1 ) − 1, cRiceParam = 0last_sig_coeff_y_prefixTRlog2MaxY = ( transform_skip_flag [ x0 ][ y0 ][ cIdx ] == 1 ) ?log2TbHeight : Min( log2TbHeight, ( tu_mts_flag[ x0 ][ y0 ] == 1&& cIdx == 0 ) ? 4 : 5 )cMax = ( log2MaxY << 1 ) − 1, cRiceParam = 0. . .. . .. . . In Table 29, information about the position of a non-zero last significant coefficient, last_sig_coeff_x_prefix, last_sig_coeff_y_prefix, may be determined in consideration of whether a flag tu_mts_flag indicating application of MTS is 1 (or whether an MTS index is greater than 1) and information regarding the width and height of a transform block, log 2TbWidth and log 2TbHeight. Further, Process in Table 29 is an item indicating a binarization type, and TR represents a truncated rice (or truncated unary) binarization method. Although some of the above-described embodiments of the present disclosure have been separately described for convenience of description, the present disclosure is not limited thereto. That is, the above-described embodiments may be independently performed or one or more embodiments may be performed in combination. FIG.30illustrates an example of a flowchart for encoding a video signal according to an embodiment of the present disclosure. Although operations inFIG.30are performed by an encoder in the following description, a method for transforming a video signal according to the present embodiment may be substantially equally applied to a decoder. The process of the flowchart ofFIG.30may be performed by the encoding apparatus100or the transform unit120. The encoder determines transform kernels to be applied to horizontal and vertical directions of a current block including a residual signal except a predicted signal in a video signal (S3010). 
Here, the transform kernels include a horizontal transform kernel and a vertical transform kernel. In an embodiment, the transform kernels may include at least one of DCT-2, DST-7, and DCT-8. The encoder generates transform information including information about the determined transform kernels (S3020). Here, the transform information may include a horizontal transform type index trTypeHor for the horizontal transform kernel and a vertical transform type index trTypeVer for the vertical transform kernel. The encoder generates a transformed block by applying a horizontal transform and a vertical transform to the horizontal direction and the vertical direction of the current block based on the transform kernels related to the transform information (S3030). Here, a reduced transform according to an embodiment of the present disclosure may be applied. For example, the encoder may generate a 16×16 transformed block by applying the transform to the current block having a size of 32×32 if the horizontal transform type index trTypeHor and the vertical transform type index trTypeVer are greater than 0 (if MTS is applied). On the other hand, the encoder may generate a 32×32 transformed block by applying the transform to the current block having a size of 32×32 if the horizontal transform type index trTypeHor and the vertical transform type index trTypeVer are 0 (if MTS is not applied). However, the embodiment of the present disclosure may also be applied to a nonsquare block such as a 32×8 or 8×32 block as well as a square block such as a 32×32 block. If RMTS is applied to a 32×8 block or an 8×32 block when MTS is applied based on the encoder, transform coefficients are saved for only a top left 16×8 block or a top left 8×16 and the remaining region may be zero out. That is, only 16 transform coefficients are saved in the row direction or the column direction in the 32×8 block or the 8×32 block. In other words, the size of the transformed block may be determined based on the size of the current block and transform indexes, and the width nonZeroW of the transformed block may be determined as a smaller value between the width nTbW of the current block and a first size (e.g., 16) if the horizontal transform type index trTypeHor is greater than a reference value (e.g., 0) and may be determined as a smaller value between the width nTbW of the current block and a second size (e.g., 32) if the horizontal transform type index trTypeHor is not greater than the reference value (e.g., 0). Further, the height nonZeroH of the transformed block may be determined as a smaller value between the height nTbH of the current block and the first size if the vertical transform type index is greater than the reference value (e.g., 0) and may be determined as a smaller value between the height nTbH of the current block and the second size if the vertical transform type index trTypeVer is not greater than the reference value (e.g., 0). Here, the first size may be less than the second size, and the second size may be the same as the height or width of the transformed block. In an embodiment, the encoder may apply a horizontal transform to each row of the current block and apply a vertical transform to each column of the current block for at least a part (left region) of the current block to which the horizontal transform has been applied, as shown inFIG.29a. 
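A minimal sketch of this separable application with zero-out (as in FIG. 29a) is given below in Python; the 1-D transforms are placeholder callables rather than the signaled MTS kernels, and the function name forward_rmts is an assumption.

import numpy as np

def forward_rmts(block, hor_1d, ver_1d, non_zero_w, non_zero_h):
    """Forward transform keeping only a top-left non_zero_w x non_zero_h region."""
    h, w = block.shape
    out = np.zeros_like(block)
    # Horizontal stage: transform each row, keep only the left non_zero_w outputs.
    for y in range(h):
        out[y, :non_zero_w] = hor_1d(block[y, :])[:non_zero_w]
    # Vertical stage: transform each kept column, keep only the top non_zero_h outputs.
    for x in range(non_zero_w):
        col = ver_1d(out[:, x].copy())
        out[:, x] = 0
        out[:non_zero_h, x] = col[:non_zero_h]
    return out   # all other coefficients remain zero (zero-out region)

# 32x32 block of residuals with identity 1-D "transforms", keeping a 16x16 region.
residuals = np.ones((32, 32), dtype=np.int64)
coeffs = forward_rmts(residuals, hor_1d=lambda v: v, ver_1d=lambda v: v,
                      non_zero_w=16, non_zero_h=16)
assert coeffs[16:, :].sum() == 0 and coeffs[:, 16:].sum() == 0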
In an embodiment, when the row size or the column size of the current block is 32, an input vector including 32 residual sample values arranged in the row direction or the column direction of the current block may be applied as input to the horizontal transform or the vertical transform, and an output vector including 16 transform coefficients may be output therefrom. For example, the horizontal transform or the vertical transform may be configured as shown in Tables 25 and 26 (in the case of trType = 1 (DST-7)) or Tables 27 and 28 (in the case of trType = 2 (DCT-8)). In an embodiment, coefficients existing in a region other than a region corresponding to the transformed block in the current block may be regarded as 0. In addition, the encoder may encode syntax elements (e.g., last_coefficient_position_x, last_coefficient_position_y) related to the position of the last significant coefficient in scanning order in the current block using TR binarization in consideration of the region regarded as 0. FIG. 31 illustrates an example of a flowchart for decoding a video signal according to an embodiment of the present disclosure. Although operations in FIG. 31 are performed by a decoder in the following description, the present disclosure is not limited thereto and a method for transforming a video signal according to the present embodiment may be substantially equally applied to an encoder. The process of the flowchart of FIG. 31 may be performed by the decoding apparatus 200 or the inverse transform unit 230. The decoder obtains a horizontal transform type index trTypeHor for a horizontal transform kernel and a vertical transform type index trTypeVer for a vertical transform kernel of a transform block from a video signal (S3110). In an embodiment, the horizontal transform kernel and the vertical transform kernel may include at least one of DCT-2, DST-7, and DCT-8. The decoder determines a transform region in the transform block based on the horizontal transform type index trTypeHor, the vertical transform type index trTypeVer, and the size of the transform block (row size and column size) nTbW and nTbH (S3120). The decoder applies an inverse transform to the transform region based on the horizontal transform type index trTypeHor and the vertical transform type index trTypeVer. Here, the width nonZeroW of the transform region may be determined as the smaller value between the width nTbW of the transform block and a first size (e.g., 16) if the horizontal transform type index trTypeHor is greater than a reference value (e.g., 0) (that is, if MTS is applied) and may be determined as the smaller value between the width nTbW of the transform block and a second size (e.g., 32) if the horizontal transform type index trTypeHor is not greater than the reference value (e.g., 0) (that is, if MTS is not applied). Application of MTS means that a plurality of transform combinations can be used in the row direction and the column direction of a block (if an MTS flag is 1 or an MTS index is greater than 0). If MTS is not applied (if the MTS flag is 0 or the MTS index is 0), a predetermined transform, DCT-2, is applied in the row direction and the column direction of the block. In addition, the height nonZeroH of the transform region may be determined as the smaller value between the height nTbH of the transform block and the first size if the vertical transform type index trTypeVer is greater than the reference value (e.g., 0) and may be determined as the smaller value between the height nTbH of the transform block and the second size if the vertical transform type index trTypeVer is not greater than the reference value (e.g., 0). Here, the first size may be less than the second size, and the second size may be the same as the height or width of the transform block.
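For the 32-point case described above, the reduced transform can be sketched as a plain matrix-vector product in C. The sketch below assumes a 16×32 integer kernel such as the matrices of Tables 25 to 28; the forward pass maps 32 residual samples of one row or column to 16 coefficients, and the inverse pass reads the same kernel with transposed indexing to map 16 coefficients back to 32 samples. The function names and the rounding shift (assumed to be at least 1) are illustrative assumptions.

#include <stdint.h>

#define RMTS_IN  32   /* residual samples per row or column */
#define RMTS_OUT 16   /* retained transform coefficients    */

/* Forward reduced transform: dst[k] = sum over n of kernel[k][n] * src[n]. */
static void forward_rmts32(const int16_t kernel[RMTS_OUT][RMTS_IN],
                           const int16_t src[RMTS_IN],
                           int16_t dst[RMTS_OUT], int shift)
{
    for (int k = 0; k < RMTS_OUT; k++) {
        int32_t acc = 0;
        for (int n = 0; n < RMTS_IN; n++)
            acc += (int32_t)kernel[k][n] * src[n];
        dst[k] = (int16_t)((acc + (1 << (shift - 1))) >> shift);
    }
}

/* Inverse reduced transform: same kernel, transposed indexing. */
static void inverse_rmts32(const int16_t kernel[RMTS_OUT][RMTS_IN],
                           const int16_t coeff[RMTS_OUT],
                           int16_t dst[RMTS_IN], int shift)
{
    for (int n = 0; n < RMTS_IN; n++) {
        int32_t acc = 0;
        for (int k = 0; k < RMTS_OUT; k++)
            acc += (int32_t)kernel[k][n] * coeff[k];
        dst[n] = (int16_t)((acc + (1 << (shift - 1))) >> shift);
    }
}

Because only 16 basis rows are evaluated per 32-sample line, both the multiplication count and the kernel storage are roughly halved relative to a full 32×32 transform, which is the reduction in operational complexity and memory usage noted above.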
In an embodiment, the decoder may generate an intermediate matrix including intermediate sample values by applying a vertical transform related to the vertical transform type index trTypeVer to each column of the transform region in the transform block and apply a horizontal transform related to the horizontal transform type index trTypeHor to each row of the intermediate matrix, as shown in FIG. 29b. In an embodiment, when the row size or the column size of the transform block is 32, the inverse transform may correspond to a matrix to which an input vector including 16 transform coefficients arranged in the row direction or the column direction of the transform block is applied and from which an output vector including 32 residual samples is output. For example, the horizontal inverse transform or the vertical inverse transform may be performed through a (forward) transform matrix of Tables 25 and 26 (in the case of trType = 1 (DST-7)) or Tables 27 and 28 (in the case of trType = 2 (DCT-8)) and indexing for the inverse transform. In an embodiment, coefficients existing in a region other than the transform region in the transform block may be regarded as 0. In addition, the decoder may obtain a syntax element related to the position of the last significant coefficient in scanning order in the transform block, the syntax element having been encoded using TR binarization in consideration of the region regarded as 0 in the transform block, and coefficients to which the inverse transform is applied may be obtained based on the position of the last significant coefficient.
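The interaction between the zero-out region and the last-significant-coefficient syntax can be illustrated with a short C sketch of the cMax derivation in Table 29 for last_sig_coeff_x_prefix (the y-direction derivation is identical with log2TbHeight); the function name is illustrative.

/* Smaller of two integers. */
static int min_int(int a, int b) { return a < b ? a : b; }

/*
 * cMax for the TR-binarized last_sig_coeff_x_prefix, per Table 29:
 * when MTS is applied to a luma block the coefficient region is capped
 * at 16 columns (log2 = 4), otherwise at 32 (log2 = 5); transform-skip
 * blocks keep the full block width.
 */
static int last_sig_x_prefix_cmax(int log2TbWidth, int transform_skip_flag,
                                  int tu_mts_flag, int cIdx)
{
    int log2MaxX = transform_skip_flag
        ? log2TbWidth
        : min_int(log2TbWidth, (tu_mts_flag == 1 && cIdx == 0) ? 4 : 5);
    return (log2MaxX << 1) - 1;
}

For a 32-wide luma block with MTS applied, log2MaxX is 4 and cMax is 7, so the prefix never addresses a position outside the 16 retained coefficients in that direction.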
FIG. 32 is an embodiment to which the present disclosure is applied and illustrates an example of a block diagram of an apparatus for processing a video signal. The apparatus for processing a video signal of FIG. 32 may correspond to the encoding apparatus 100 of FIG. 1 or the decoding apparatus 200 of FIG. 2. An image processing apparatus 3200 for processing an image signal includes a memory 3220 for storing an image signal and a processor 3210 coupled to the memory, for processing an image signal. The processor 3210 according to an embodiment of the present disclosure may consist of at least one processing circuit for processing an image signal, and may process an image signal by executing instructions for encoding or decoding the image signal. That is, the processor 3210 may encode the original image data or decode an encoded image signal by executing the aforementioned encoding or decoding methods. Furthermore, the processing methods to which the present disclosure is applied may be manufactured in the form of a program executed by a computer and stored in computer-readable recording media. Multimedia data having the data structure according to the present disclosure may also be stored in computer-readable recording media. The computer-readable recording media include all types of storage devices and distributed storage devices in which data readable by a computer is stored. The computer-readable recording media may include a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a PROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, for example. Furthermore, the computer-readable recording media include media implemented in the form of carrier waves (e.g., transmission through the Internet). Furthermore, a bit stream generated by the encoding method may be stored in a computer-readable recording medium or may be transmitted over wired/wireless communication networks. Moreover, embodiments of the present disclosure may be implemented as computer program products according to program code, and the program code may be executed in a computer according to an embodiment of the present disclosure. The program code may be stored on computer-readable carriers. As described above, the embodiments of the present disclosure may be implemented and executed on a processor, a microprocessor, a controller or a chip. For example, functional units shown in each figure may be implemented and executed on a computer, a processor, a microprocessor, a controller or a chip. Furthermore, the decoder and the encoder to which the present disclosure is applied may be included in multimedia broadcast transmission/reception apparatuses, mobile communication terminals, home cinema video systems, digital cinema video systems, monitoring cameras, video conversation apparatuses, real-time communication apparatuses such as video communication, mobile streaming devices, storage media, camcorders, video-on-demand (VoD) service providing apparatuses, over the top (OTT) video systems, Internet streaming service providing apparatuses, 3D video systems, video phone video systems, medical video systems, etc., and may be used to process video signals or data signals. For example, OTT video systems may include game consoles, Blu-ray players, Internet access TVs, home theater systems, smartphones, tablet PCs, digital video recorders (DVRs), etc. Embodiments described above are combinations of elements and features of the present disclosure. The elements or features may be considered selective unless otherwise mentioned.
Each element or feature may be practiced without being combined with other elements or features. Further, an embodiment of the present disclosure may be constructed by combining parts of the elements and/or features. Operation orders described in embodiments of the present disclosure may be rearranged. Some constructions of any one embodiment may be included in another embodiment and may be replaced with corresponding constructions of another embodiment. It is obvious to those skilled in the art that claims that are not explicitly cited in each other in the appended claims may be presented in combination as an exemplary embodiment or included as a new claim by a subsequent amendment after the application is filed. The implementations of the present disclosure may be achieved by various means, for example, hardware, firmware, software, or a combination thereof. In a hardware configuration, the methods according to the implementations of the present disclosure may be achieved by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, etc. In a firmware or software configuration, the implementations of the present disclosure may be implemented in the form of a module, a procedure, a function, etc. Software code may be stored in the memory and executed by the processor. The memory may be located at the interior or exterior of the processor and may transmit data to and receive data from the processor via various known means. Those skilled in the art will appreciate that the present disclosure may be carried out in other specific ways than those set forth herein without departing from the spirit and essential characteristics of the present disclosure. Accordingly, the above embodiments are therefore to be construed in all aspects as illustrative and not restrictive. The scope of the present disclosure should be determined by the appended claims and their legal equivalents, not by the above description, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein. INDUSTRIAL APPLICABILITY Although exemplary aspects of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from essential characteristics of the disclosure.
DETAILED DESCRIPTION The detailed description presents innovations for encoding and decoding bitstreams having clean random access (CRA) pictures and other random access point (RAP) pictures. In particular, the detailed description describes embodiments in which a bitstream is allowed to have a CRA picture at the beginning of a bitstream and is also allowed to have a CRA picture that is not at the beginning of the bitstream, where any of such CRA pictures is allowed to have one or more non-decodable leading pictures. Such CRA pictures are sometimes referred to herein as "broken link access" (BLA) pictures. The detailed description further describes embodiments in which new definitions of unit types for RAP pictures, and strategic constraints on RAP pictures, simplify mapping of units of video elementary stream data to a container format, and redundant unit types are eliminated. Some of the innovations described herein are illustrated with reference to syntax elements and operations specific to the HEVC standard. For example, reference is made to certain draft versions of the HEVC standard, including JCTVC-I1003 of the HEVC standard, "High efficiency video coding (HEVC) text specification draft 7", JCTVC-I1003_d5, 9th meeting, Geneva, April 2012 (hereinafter "JCTVC-I1003_d5"). The innovations described herein can also be implemented for other standards or formats. More generally, various alternatives to the examples described herein are possible. For example, any of the methods described herein can be altered by changing the ordering of the method acts described, by splitting, repeating, or omitting certain method acts, etc. The various aspects of the disclosed technology can be used in combination or separately. Different embodiments use one or more of the described innovations. Some of the innovations described herein address one or more of the problems noted in the background. Typically, a given technique/tool does not solve all such problems. I. EXAMPLE COMPUTING SYSTEMS FIG. 1 illustrates a generalized example of a suitable computing system (100) in which several of the described innovations may be implemented. The computing system (100) is not intended to suggest any limitation as to scope of use or functionality, as the innovations may be implemented in diverse general-purpose or special-purpose computing systems. With reference to FIG. 1, the computing system (100) includes one or more processing units (110,115) and memory (120,125). In FIG. 1, this most basic configuration (130) is included within a dashed line. The processing units (110,115) execute computer-executable instructions. A processing unit can be a general-purpose central processing unit (CPU), processor in an application-specific integrated circuit (ASIC) or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example, FIG. 1 shows a central processing unit (110) as well as a graphics processing unit or co-processing unit (115). The tangible memory (120,125) may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s).
The memory (120,125) stores software (180) implementing one or more innovations for encoding or decoding RAP pictures with unit types and/or strategic constraints that simplify mapping to a media container format (see Sections V, VI, and VII), in the form of computer-executable instructions suitable for execution by the processing unit(s). A computing system may have additional features. For example, the computing system (100) includes storage (140), one or more input devices (150), one or more output devices (160), and one or more communication connections (170). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing system (100). Typically, operating system software (not shown) provides an operating environment for other software executing in the computing system (100), and coordinates activities of the components of the computing system (100). The tangible storage (140) may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing system (100). The storage (140) stores instructions for the software (180) implementing one or more innovations for encoding or decoding RAP pictures with unit types and/or strategic constraints that simplify mapping to a media container format (see Sections V, VI, and VII). The input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing system (100). For video encoding, the input device(s) (150) may be a camera, video card, TV tuner card, or similar device that accepts video input in analog or digital form, or a CD-ROM or CD-RW that reads video samples into the computing system (100). The output device(s) (160) may be a display, printer, speaker, CD-writer, or another device that provides output from the computing system (100). The communication connection(s) (170) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier. The innovations can be described in the general context of computer-readable media. Computer-readable media are any available tangible media that can be accessed within a computing environment. By way of example, and not limitation, with the computing system (100), computer-readable media include memory (120,125), storage (140), and combinations of any of the above. The innovations can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing system on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. 
Computer-executable instructions for program modules may be executed within a local or distributed computing system. The terms "system" and "device" are used interchangeably herein. Unless the context clearly indicates otherwise, neither term implies any limitation on a type of computing system or computing device. In general, a computing system or computing device can be local or distributed, and can include any combination of special-purpose hardware and/or general-purpose hardware with software implementing the functionality described herein. The disclosed methods can also be implemented using specialized computing hardware configured to perform any of the disclosed methods. For example, the disclosed methods can be implemented by an integrated circuit (e.g., an application specific integrated circuit (ASIC), such as an ASIC digital signal process unit (DSP), a graphics processing unit (GPU), or a programmable logic device (PLD), such as a field programmable gate array (FPGA)) specially designed or configured to implement any of the disclosed methods. For the sake of presentation, the detailed description uses terms like "determine" and "use" to describe computer operations in a computing system. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation. II. EXAMPLE NETWORK ENVIRONMENTS FIGS. 2a and 2b show example network environments (201,202) that include video encoders (220) and video decoders (270). The encoders (220) and decoders (270) are connected over a network (250) using an appropriate communication protocol. The network (250) can include the Internet or another computer network. In the network environment (201) shown in FIG. 2a, each real-time communication ("RTC") tool (210) includes both an encoder (220) and a decoder (270) for bidirectional communication. A given encoder (220) can produce output compliant with the SMPTE 421M standard, ISO/IEC 14496-10 standard (also known as H.264 or AVC), HEVC standard, another standard, or a proprietary format, with a corresponding decoder (270) accepting encoded data from the encoder (220). The bidirectional communication can be part of a video conference, video telephone call, or other two-party communication scenario. Although the network environment (201) in FIG. 2a includes two real-time communication tools (210), the network environment (201) can instead include three or more real-time communication tools (210) that participate in multi-party communication. A real-time communication tool (210) manages encoding by an encoder (220). FIG. 3 shows an example encoder system (300) that can be included in the real-time communication tool (210). Alternatively, the real-time communication tool (210) uses another encoder system. A real-time communication tool (210) also manages decoding by a decoder (270). FIG. 4 shows an example decoder system (400), which can be included in the real-time communication tool (210). Alternatively, the real-time communication tool (210) uses another decoder system. In the network environment (202) shown in FIG. 2b, an encoding tool (212) includes an encoder (220) that encodes video for delivery to multiple playback tools (214), which include decoders (270).
The unidirectional communication can be provided for a video surveillance system, web camera monitoring system, remote desktop conferencing presentation, video distribution system (e.g., a video streaming distribution system) or other scenario in which video is encoded and sent from one location to one or more other locations. Although the network environment (202) in FIG. 2b includes two playback tools (214), the network environment (202) can include more or fewer playback tools (214). In general, a playback tool (214) communicates with the encoding tool (212) to determine a stream of video for the playback tool (214) to receive. The playback tool (214) receives the stream, buffers the received encoded data for an appropriate period, and begins decoding and playback. FIG. 3 shows an example encoder system (300) that can be included in the encoding tool (212). Alternatively, the encoding tool (212) uses another encoder system. The encoding tool (212) can also include server-side controller logic for managing connections with one or more playback tools (214). FIG. 4 shows an example decoder system (400), which can be included in the playback tool (214). Alternatively, the playback tool (214) uses another decoder system. A playback tool (214) can also include client-side controller logic for managing connections with the encoding tool (212). III. EXAMPLE ENCODER SYSTEMS FIG. 3 is a block diagram of an example encoder system (300) in conjunction with which some described embodiments may be implemented. The encoder system (300) can be a general-purpose encoding tool capable of operating in any of multiple encoding modes such as a low-latency encoding mode for real-time communication, transcoding mode, and regular encoding mode for media playback from a file or stream, or it can be a special-purpose encoding tool adapted for one such encoding mode. The encoder system (300) can be implemented as an operating system module, as part of an application library or as a standalone application. Overall, the encoder system (300) receives a sequence of source video frames (311) from a video source (310) and produces encoded data as output to a channel (390). The encoded data output to the channel can include coded data for RAP pictures having the strategic constraints and/or unit types described in Sections V, VI, and VII. The video source (310) can be a camera, tuner card, storage media, or other digital video source. The video source (310) produces a sequence of video frames at a frame rate of, for example, 30 frames per second. As used herein, the term "frame" generally refers to source, coded or reconstructed image data. For progressive video, a frame is a progressive video frame. For interlaced video, in example embodiments, an interlaced video frame is de-interlaced prior to encoding. Alternatively, two complementary interlaced video fields are encoded as an interlaced video frame or separate fields. Aside from indicating a progressive video frame, the term "frame" can indicate a single non-paired video field, a complementary pair of video fields, a video object plane that represents a video object at a given time, or a region of interest in a larger image. The video object plane or region can be part of a larger image that includes multiple objects or regions of a scene. An arriving source frame (311) is stored in a source frame temporary memory storage area (320) that includes multiple frame buffer storage areas (321,322, . . . ,32n). A frame buffer (321,322, etc.)
holds one source frame in the source frame storage area (320). After one or more of the source frames (311) have been stored in frame buffers (321,322, etc.), a frame selector (330) periodically selects an individual source frame from the source frame storage area (320). The order in which frames are selected by the frame selector (330) for input to the encoder (340) may differ from the order in which the frames are produced by the video source (310), e.g., a frame may be ahead in order, to facilitate temporally backward prediction. Before the encoder (340), the encoder system (300) can include a pre-processor (not shown) that performs pre-processing (e.g., filtering) of the selected frame (331) before encoding. The encoder (340) encodes the selected frame (331) to produce a coded frame (341) and also produces memory management control operation (MMCO) signals (342) or reference picture set (RPS) information. If the current frame is not the first frame that has been encoded, when performing its encoding process, the encoder (340) may use one or more previously encoded/decoded frames (369) that have been stored in a decoded frame temporary memory storage area (360). Such stored decoded frames (369) are used as reference frames for inter-frame prediction of the content of the current source frame (331). Generally, the encoder (340) includes multiple encoding modules that perform encoding tasks such as motion estimation and compensation, frequency transforms, quantization and entropy coding. The exact operations performed by the encoder (340) can vary depending on compression format. The format of the output encoded data can be a Windows Media Video format, VC-1 format, MPEG-x format (e.g., MPEG-1, MPEG-2, or MPEG-4), H.26x format (e.g., H.261, H.262, H.263, H.264), HEVC format or other format. For example, within the encoder (340), an inter-coded, predicted frame is represented in terms of prediction from reference frames. A motion estimator estimates motion of macroblocks, blocks or other sets of samples of a source frame (341) with respect to one or more reference frames (369). When multiple reference frames are used, the multiple reference frames can be from different temporal directions or the same temporal direction. The motion estimator outputs motion information such as motion vector information, which is entropy coded. A motion compensator applies motion vectors to reference frames to determine motion-compensated prediction values. The encoder determines the differences (if any) between a block's motion-compensated prediction values and corresponding original values. These prediction residual values are further encoded using a frequency transform, quantization and entropy encoding. Similarly, for intra prediction, the encoder (340) can determine intra-prediction values for a block, determine prediction residual values, and encode the prediction residual values. In particular, the entropy coder of the encoder (340) compresses quantized transform coefficient values as well as certain side information (e.g., motion vector information, quantization parameter values, mode decisions, parameter choices). Typical entropy coding techniques include Exp-Golomb coding, arithmetic coding, differential coding, Huffman coding, run length coding, variable-length-to-variable-length (V2V) coding, variable-length-to-fixed-length (V2F) coding, LZ coding, dictionary coding, probability interval partitioning entropy coding (PIPE), and combinations of the above. 
The entropy coder can use different coding techniques for different kinds of information, and can choose from among multiple code tables within a particular coding technique. The coded frames (341) and MMCO/RPS information (342) are processed by a decoding process emulator (350). The decoding process emulator (350) implements some of the functionality of a decoder, for example, decoding tasks to reconstruct reference frames that are used by the encoder (340) in motion estimation and compensation. The decoding process emulator (350) uses the MMCO/RPS information (342) to determine whether a given coded frame (341) needs to be reconstructed and stored for use as a reference frame in inter-frame prediction of subsequent frames to be encoded. If the MMCO/RPS information (342) indicates that a coded frame (341) needs to be stored, the decoding process emulator (350) models the decoding process that would be conducted by a decoder that receives the coded frame (341) and produces a corresponding decoded frame (351). In doing so, when the encoder (340) has used decoded frame(s) (369) that have been stored in the decoded frame storage area (360), the decoding process emulator (350) also uses the decoded frame(s) (369) from the storage area (360) as part of the decoding process. The decoded frame temporary memory storage area (360) includes multiple frame buffer storage areas (361,362, . . . ,36n). The decoding process emulator (350) uses the MMCO/RPS information (342) to manage the contents of the storage area (360) in order to identify any frame buffers (361,362, etc.) with frames that are no longer needed by the encoder (340) for use as reference frames. After modeling the decoding process, the decoding process emulator (350) stores a newly decoded frame (351) in a frame buffer (361,362, etc.) that has been identified in this manner. The coded frames (341) and MMCO/RPS information (342) are also buffered in a temporary coded data area (370). The coded data that is aggregated in the coded data area (370) can contain, as part of the syntax of an elementary coded video bitstream, coded data for RAP pictures having the strategic constraints and/or unit types described in Sections V, VI, and VII. The coded data that is aggregated in the coded data area (370) can also include media metadata relating to the coded video data (e.g., as one or more parameters in one or more supplemental enhancement information (SEI) messages or video usability information (VUI) messages). The aggregated data (371) from the temporary coded data area (370) are processed by a channel encoder (380). The channel encoder (380) can packetize the aggregated data for transmission as a media stream (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel encoder (380) can add syntax elements as part of the syntax of the media transmission stream. Or, the channel encoder (380) can organize the aggregated data for storage as a file (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel encoder (380) can add syntax elements as part of the syntax of the media storage file. Or, more generally, the channel encoder (380) can implement one or more media system multiplexing protocols or transport protocols, in which case the channel encoder (380) can add syntax elements as part of the syntax of the protocol(s). 
The channel encoder (380) provides output to a channel (390), which represents storage, a communications connection, or another channel for the output. IV. EXAMPLE DECODER SYSTEMS FIG. 4 is a block diagram of an example decoder system (400) in conjunction with which some described embodiments may be implemented. The decoder system (400) can be a general-purpose decoding tool capable of operating in any of multiple decoding modes such as a low-latency decoding mode for real-time communication and regular decoding mode for media playback from a file or stream, or it can be a special-purpose decoding tool adapted for one such decoding mode. The decoder system (400) can be implemented as an operating system module, as part of an application library or as a standalone application. Overall, the decoder system (400) receives coded data from a channel (410) and produces reconstructed frames as output for an output destination (490). The coded data can include coded data for RAP pictures having the strategic constraints and/or unit types described in Sections V, VI, and VII. The decoder system (400) includes a channel (410), which can represent storage, a communications connection, or another channel for coded data as input. The channel (410) produces coded data that has been channel coded. A channel decoder (420) can process the coded data. For example, the channel decoder (420) de-packetizes data that has been aggregated for transmission as a media stream (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel decoder (420) can parse syntax elements added as part of the syntax of the media transmission stream. Or, the channel decoder (420) separates coded video data that has been aggregated for storage as a file (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel decoder (420) can parse syntax elements added as part of the syntax of the media storage file. Or, more generally, the channel decoder (420) can implement one or more media system demultiplexing protocols or transport protocols, in which case the channel decoder (420) can parse syntax elements added as part of the syntax of the protocol(s). The coded data (421) that is output from the channel decoder (420) is stored in a temporary coded data area (430) until a sufficient quantity of such data has been received. The coded data (421) includes coded frames (431) and MMCO/RPS information (432). The coded data (421) in the coded data area (430) can contain, as part of the syntax of an elementary coded video bitstream, coded data for RAP pictures having the strategic constraints and/or unit types described in Sections V, VI or VII. The coded data (421) in the coded data area (430) can also include media metadata relating to the encoded video data (e.g., as one or more parameters in one or more SEI messages or VUI messages). In general, the coded data area (430) temporarily stores coded data (421) until such coded data (421) is used by the decoder (450). At that point, coded data for a coded frame (431) and MMCO/RPS information (432) are transferred from the coded data area (430) to the decoder (450). As decoding continues, new coded data is added to the coded data area (430) and the oldest coded data remaining in the coded data area (430) is transferred to the decoder (450). The decoder (450) periodically decodes a coded frame (431) to produce a corresponding decoded frame (451).
As appropriate, when performing its decoding process, the decoder (450) may use one or more previously decoded frames (469) as reference frames for inter-frame prediction. The decoder (450) reads such previously decoded frames (469) from a decoded frame temporary memory storage area (460). Generally, the decoder (450) includes multiple decoding modules that perform decoding tasks such as entropy decoding, inverse quantization, inverse frequency transforms and motion compensation. The exact operations performed by the decoder (450) can vary depending on compression format. For example, the decoder (450) receives encoded data for a compressed frame or sequence of frames and produces output including decoded frame (451). In the decoder (450), a buffer receives encoded data for a compressed frame and makes the received encoded data available to an entropy decoder. The entropy decoder entropy decodes entropy-coded quantized data as well as entropy-coded side information, typically applying the inverse of entropy encoding performed in the encoder. Sections V, VI, and VII describe examples of coded data for RAP pictures, strategic constraints, and/or unit types that can be decoded by the decoder (450). A motion compensator applies motion information to one or more reference frames to form motion-compensated predictions of sub-blocks, blocks and/or macroblocks (generally, blocks) of the frame being reconstructed. An intra prediction module can spatially predict sample values of a current block from neighboring, previously reconstructed sample values. The decoder (450) also reconstructs prediction residuals. An inverse quantizer inverse quantizes entropy-decoded data. An inverse frequency transformer converts the quantized, frequency domain data into spatial domain information. For a predicted frame, the decoder (450) combines reconstructed prediction residuals with motion-compensated predictions to form a reconstructed frame. The decoder (450) can similarly combine prediction residuals with spatial predictions from intra prediction. A motion compensation loop in the video decoder (450) includes an adaptive de-blocking filter to smooth discontinuities across block boundary rows and/or columns in the decoded frame (451). The decoded frame temporary memory storage area (460) includes multiple frame buffer storage areas (461,462, . . . ,46n). The decoded frame storage area (460) is an example of a DPB. The decoder (450) uses the MMCO/RPS information (432) to identify a frame buffer (461,462, etc.) in which it can store a decoded frame (451). The decoder (450) stores the decoded frame (451) in that frame buffer. An output sequencer (480) uses the MMCO/RPS information (432) to identify when the next frame to be produced in output order is available in the decoded frame storage area (460). When the next frame (481) to be produced in output order is available in the decoded frame storage area (460), it is read by the output sequencer (480) and output to the output destination (490) (e.g., display). In general, the order in which frames are output from the decoded frame storage area (460) by the output sequencer (480) may differ from the order in which the frames are decoded by the decoder (450). V. IMPROVEMENTS TO BITSTREAMS HAVING CRA PICTURES This section describes several variations for encoding and/or decoding bitstreams having clean random access (CRA) pictures.
In particular, this section presents examples in which bitstreams having CRA pictures are allowed to have mid-bitstream CRA pictures potentially having one or more non-decodable leading pictures. Any of the encoders or decoders described above can be adapted to use the disclosed encoding and decoding techniques. According to JCTVC-I1003_d5, a CRA picture is a coded picture containing only I slices (slices that are decoded using intra prediction only). Further, all coded pictures that follow a CRA picture both in decoding order and output order must not use inter prediction from any picture that precedes the CRA picture either in decoding order or output order; and any picture that precedes the CRA picture in decoding order also precedes the CRA picture in output order. A "leading picture" is a coded picture that follows some other particular picture in decoding order and precedes it in output order. When a leading picture is associated with a CRA picture, it is a coded picture that follows the CRA picture in decoding order but precedes the CRA picture in output order. A leading picture associated with a CRA picture has a picture order count that is less than the picture order count of the CRA picture. According to JCTVC-I1003_d5, an "instantaneous decoding refresh picture" or "IDR picture" is a coded picture that causes the decoding process to mark all reference pictures as "unused for reference." All coded pictures that follow an IDR picture in decoding order can be decoded without inter prediction from any picture that precedes the IDR picture in decoding order. The first picture of each coded video sequence in decoding order is an IDR picture or a CRA picture. FIG. 5 is a block diagram (500) illustrating a series of pictures comprising three CRA pictures, multiple pictures comprising bi-directionally predicted slices ("B" pictures), and multiple pictures comprising uni-directionally predicted slices ("P" pictures). The arrows in FIG. 5 are used to illustrate from which pictures the B pictures and the P pictures depend for purposes of motion compensation (e.g., according to one or more motion vectors). FIG. 5 also illustrates the output order and the decode order of the pictures. The first CRA picture in the series shown in FIG. 5 is at the beginning of a bitstream in decode order, and the remaining two CRA pictures are in the middle of the bitstream (that is, after the beginning of the bitstream in decode order). For the second CRA picture (picture 5 in decode order) or third CRA picture (picture 11 in decode order), there are several leading pictures that follow the CRA picture in decode order but precede it in output order. For each of these CRA pictures, a flag indicates information about the leading pictures that may follow the CRA picture, as explained below. According to a previous HEVC submission, JCTVC-H0496, a bitstream was allowed to begin with a CRA picture that is not an IDR picture. For example, the series shown in FIG. 5 begins with a CRA picture. Further, such a CRA picture was allowed to have non-decodable leading pictures (pictures that follow the CRA picture in decoding order but precede it in output/display order that contain references to reference pictures that are not actually present in the bitstream). According to JCTVC-H0496, if the bitstream starts with a CRA picture, the leading pictures associated with the CRA picture, if present in the bitstream, are ignored (removed from the bitstream or discarded).
(FIG. 5 does not show such leading pictures after the 1st CRA picture, which is picture 1 in decode order.) Allowing a bitstream to begin with a CRA picture that is not an IDR picture is intended to increase editing flexibility. JCTVC-I1003_d5 required a CRA splice point that lies in the middle of the bitstream to be "sensible." That is, it required all of the leading pictures of the CRA picture to be fully decodable when the decoder starts decoding the bitstream at an IDR or CRA earlier than the current CRA picture. The decoded picture buffer (DPB) was required to contain all of the pictures that are referenced by the syntax of the leading pictures (in the DPB picture set description syntax or referenced for inter prediction). Thus, if a CRA picture after the start of the bitstream had leading pictures, the pictures were understood to be decodable. For example, in FIG. 5 the third CRA picture (which is the 11th picture in decode order) is followed by two pictures in decode order (pictures 12 and 13 in decode order) that precede it in output order. These two leading pictures are dependent only on the third CRA picture. For that reason, they would be decodable even if decoding starts at the third CRA point as a random access point. If placement of a CRA picture is constrained such that any leading pictures are guaranteed to be decodable, however, an encoder may be strictly limited in where it can designate pictures as CRA pictures. According to certain embodiments of the disclosed technology, the requirement about decodability of leading pictures of a CRA picture is removed as being unnecessary and undesirable. Embodiments of the disclosed technology additionally allow CRA pictures that are not at the beginning of the bitstream to provide information to a decoder indicative of the presence and type of leading pictures associated with the mid-stream CRA picture. As more fully explained below, such CRA pictures are sometimes referred to herein as BLA pictures and can have one of a plurality of broken link access picture types. Accordingly, encoders or video processing devices using such embodiments can more flexibly place BLA-type CRA pictures within a bitstream, as illustrated in FIG. 5. In FIG. 5, the second CRA picture (picture 5 in decode order) is followed by two leading pictures in decode order (pictures 6 and 7 in decode order) that precede the CRA picture in output order (pictures 5 and 6 in output order versus picture 7 in output order). In the previous approach, a mid-bitstream CRA picture could not be used as a splice point or as a random access point to begin decoding as part of a scan, fast-forward, rewind, or bitstream switching operation because the second CRA has leading pictures with motion compensation dependencies on reference pictures before the CRA picture in decoding order, and such reference pictures would not be guaranteed to be available. In FIG. 5, for example, the leading pictures that are 6th and 7th in decode order are dependent on the picture that is 2nd in decode order. Using embodiments of the disclosed technology, however, the second CRA picture can be designated as a BLA picture (e.g., using a flag or syntax element that identifies the picture as a BLA-type CRA picture (sometimes referred to herein as just a "BLA picture")) when the splicing operation or random access operation or bitstream switching operation occurs.
Such an indication can be used by a decoder to properly process any non-decodable leading pictures associated with the BLA (e.g., by not decoding the non-decodable leading pictures, by not outputting the non-decodable leading pictures, or otherwise dropping the non-decodable pictures). Furthermore, in some implementations and as more fully explained below, multiple types of BLA pictures can be specified, thereby providing the decoder with additional information about whether and what type of leading pictures may be associated with the BLA picture. These multiple BLA types provide additional information so that the decoder can properly decode the bitstream and output only decodable pictures. In certain implementations, a syntax element for a CRA picture indicates the potential presence of a "broken link" in that leading pictures for the CRA picture may be missing reference pictures needed for decoding those leading pictures. For example, a flag signalling whether non-decodable leading pictures are potentially present is added to the picture-level information of a CRA picture. The flag can be added to the slice header or to another syntax location that can be established (e.g., another appropriate place for picture-level information, such as the APS). In one particular implementation, when this flag is equal to "1", the bitstream is allowed to contain leading pictures of the CRA picture that are not decodable due to missing preceding reference pictures (as is currently the case with leading pictures of a CRA picture that starts a bitstream). Thus, during decoding, the flag signals the decoder to ignore or discard leading pictures associated with the CRA picture (including leading pictures that might be decodable). In a particular implementation, a CRA picture with a broken link flag equal to "1" would act essentially the same way as an IDR picture, except as follows: The CRA picture would be allowed to be followed (in bitstream order) by leading pictures that might refer to pictures that are not present in the bitstream. Leading pictures of the CRA picture would be ignored and discarded by the decoder. For instance, the standard for the decoder would specify that the decoder skip the decoding process for all leading pictures of the CRA picture and not output them (as is already the case for a CRA picture at the beginning of the bitstream). The broken link flag therefore indicates to the decoder that the leading pictures associated with the CRA picture should be ignored and discarded, even though one or more of the leading pictures might, in fact, be decodable. Further, the CRA's picture order count would not be required to be equal to "0". Instead, and in one example implementation, the picture order count MSBs would be set to "0" and the LSBs would be set to the LSB value sent in the CRA picture (as is already specified for CRA pictures at the beginning of the bitstream). Furthermore, in some implementations, the picture order count of an IDR picture is allowed to be non-zero. In other words, the picture order count of an IDR picture is not required to be equal to "0". In certain implementations, a CRA picture with a broken link flag (e.g., broken_link_flag) equal to "1" also contains a no_output_of_prior_pics_flag that acts in the same way as for an IDR picture, and a random_access_pic_id that acts in the same way as the idr_pic_id of IDR pictures.
In some implementations, the current idr_pic_id is renamed to random_access_pic_id and its constraints made to apply to both CRA pictures and IDR pictures rather than to IDR pictures only. Furthermore, like an IDR picture, a CRA picture with the broken link flag equal to "1" could activate a different SPS, change the picture size, etc. In this implementation, when the value of the broken link flag is "0" for a CRA picture, the bitstream is not allowed to contain leading pictures of the CRA picture that might not be decodable due to missing preceding reference pictures unless that CRA picture is the first picture in the bitstream (in decode order). That is, the bitstream contains no leading pictures after a CRA picture with broken link flag of "0" in decode order unless such leading pictures are fully decodable when decoding starts at an earlier CRA or IDR picture in decode order. Thus, during decoding, the flag signals the decoder to decode the leading pictures associated with the CRA picture. When the CRA picture is the first picture in the bitstream and has a broken link flag of "0", however, then the flag can be ignored and the CRA picture can be treated "as if" the broken link flag was "1". In the example shown in FIG. 5, for the second CRA picture (picture 5 in decode order), the value of the flag could be "1" since some leading pictures may be missing a reference picture upon random access. This allows the second CRA picture in FIG. 5 to be used for greater random access functionality than previously possible. For example, the second CRA picture could now be used as a starting picture as part of a scan, fast forward, rewind operation, or bitstream switching operation. Furthermore, the second CRA picture could be used as a splice point where the bitstream is cropped to begin at the second picture and then appended to the end of another bitstream. Because the second CRA picture is identified as a BLA picture (broken link flag value of "1"), the resulting bitstream can be properly decoded and represents a valid bitstream. Additionally, in certain implementations, a video encoder or video processing device can alter the status of a CRA picture to become a BLA picture. For example, as part of the splicing operation, a video processing device can modify the designation of a CRA picture to indicate that it is a BLA picture so that the resulting spliced bitstream will be valid. For the third CRA picture (picture 11 in decode order), the value of the flag would be "0" since no leading picture will be missing a reference picture upon random access. Although the above-described embodiments refer to a "flag" for signalling whether the decoder should skip non-decodable leading pictures associated with a CRA picture, any suitable indicator can be used. For example, in some implementations, another picture-level indicator or syntax element that specifies various characteristics of a picture is used. In some implementations, the syntax element used for this purpose may be the syntax indicator known as the network abstraction layer ("NAL") unit type (or other indicator associated with a picture in the bitstream) associated with a given CRA picture. For example, one NAL unit type value may be used for CRA pictures that are indicated to have a potential "broken link" status, and another NAL unit type value may be used for CRA pictures that are indicated not to have such a potential "broken link" status.
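The decision logic implied by the broken link indication can be summarized in a short C sketch; the structure and field names below are illustrative assumptions rather than syntax of the draft standard.

#include <stdbool.h>

/* Illustrative picture-level inputs for a CRA picture. */
struct cra_info {
    bool broken_link_flag;   /* "1": non-decodable leading pictures may follow     */
    bool starts_bitstream;   /* CRA picture is the first picture in decode order   */
};

/*
 * Should the decoder skip (neither decode nor output) the leading
 * pictures associated with this CRA picture?  Per the description above,
 * they are skipped when the broken link flag is "1", and a CRA picture
 * that starts the bitstream is treated as if the flag were "1".
 */
static bool skip_associated_leading_pictures(const struct cra_info *cra)
{
    return cra->broken_link_flag || cra->starts_bitstream;
}

A NAL-unit-type-based variant would simply replace the flag test with a comparison against the unit type value reserved for broken-link CRA pictures.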
Furthermore, although the above-described embodiments refer to “clean” random access pictures, the innovations disclosed herein can be used in connection with any random access picture or equivalent (such as a recovery frame or other picture potentially used to begin a bitstream). Furthermore, in such alternative embodiments, the indicator can be used to signal the possibility of associated non-decodable pictures of any type (not just leading pictures that are identified based on temporal output order; e.g., including leading pictures identified in some other way). Although the above-described embodiments refer to the identification of a potentially non-decodable picture by determination of whether or not a picture is a leading picture of the CRA picture (that is, by identifying whether a picture that follows the CRA picture in decoding order precedes it in output order), other or additional classification rules or indicators may be used to identify the potentially non-decodable pictures. For example, a “flag” or syntax element value, or other indicator associated with a picture in the bitstream, can be sent with each picture to indicate whether or not it is a potentially non-decodable picture, regardless of its output order position relative to the output order position of an associated CRA picture. In other words, the indicator is signaled for the picture that is potentially non-decodable. In some implementations, the syntax element used for this purpose may be the syntax indicator known as the NAL unit type. For example, for pictures that are indicated not to be CRA pictures, one NAL unit type value may be used by a picture that is to be discarded as a potentially non-decodable picture when a random access decoding process begins at the location of the CRA picture or a “broken link” CRA picture has been indicated, and another NAL unit type value may be used by pictures that are indicated to be decodable. A leading picture that is to be discarded as a potentially non-decodable picture is sometimes referred to herein (or indicated in a bitstream) as a tagged-for-discard (TFD) picture or a random access skipped leading (RASL) picture. A leading picture that is decodable is sometimes referred to herein (or indicated in a bitstream) as a random access decodable leading (RADL) picture (or decodable leading picture (DLP)). In some embodiments, the determination of whether or not a picture can be decoded may not only include identification of whether the decoding process of a picture may depend on some pictures that appear prior to a CRA picture in bitstream order, but also may include identification of whether the decoding process of a picture may depend on some pictures that appear prior to more than one CRA picture in bitstream order. This determination can be helpful, for example, as it is typically necessary for the decoder to be able to identify whether pictures can be decoded that follow more than one CRA picture that is encountered after a random access decoding process is initiated. In such scenarios, it may be helpful to constrain reference picture selection as follows—a picture shall not use any picture in the bitstream as a reference for inter-picture prediction that precedes more than x CRA pictures in decoding order. For example, x is two. Without such a constraint, recovery may not be assured when performing random access by a decoder—even after multiple CRA pictures have been encountered. 
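The per-picture indicator discussed above leads to an equally simple rule at the decoder; the following C sketch assumes hypothetical names for the RASL/TFD (potentially non-decodable) and RADL/DLP (decodable) categories.

#include <stdbool.h>

enum leading_category { NOT_LEADING, LEADING_RADL, LEADING_RASL };

/*
 * Decide whether a picture is decoded or dropped when decoding starts at
 * its associated RAP picture (random access) or when that RAP picture is
 * indicated as a "broken link": RASL/TFD pictures are dropped, everything
 * else is decoded.
 */
static bool decode_picture(enum leading_category cat,
                           bool random_access_at_assoc_rap,
                           bool assoc_rap_is_broken_link)
{
    if (cat == LEADING_RASL &&
        (random_access_at_assoc_rap || assoc_rap_is_broken_link))
        return false;
    return true;
}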
In many respects, use of IDR pictures is unnecessary if the pictures that are to be discarded under some circumstances are indicated explicitly. An IDR picture can be followed in bitstream order by leading pictures (in output order) that are to be decoded and output when performing random access by a decoder. If the classification of whether a picture is to be decoded or not is determined by an explicit syntax indicator (such as the NAL unit type) rather than determined implicitly by the picture order count, then a CRA picture can have all the functionality of an IDR picture. For instance, in certain specific implementations, a syntax element value or other indicator (such as the NAL unit type value) associated with a picture in a bitstream can be used to identify at least the following four types of pictures:
One type that identifies the picture to be a CRA picture without a “broken link”,
One type that identifies the picture to be a CRA picture with a “broken link”,
One type that identifies the picture to be a picture that is always to be decoded when the decoding process begins at the location of any preceding CRA picture in bitstream order, and
One type that identifies the picture to be a picture that is not to be decoded when random access has been performed at the random access point of the preceding CRA picture in bitstream order or when the preceding CRA picture in bitstream order is indicated as a “broken link”.
VI. STRATEGIC CONSTRAINTS AND UNIT TYPES FOR RAP PICTURES
In the HEVC draft JCTVC-I1003_d5, a RAP (“random access point”) picture is represented by NAL unit types 4 to 8. Depending on the characteristics of the RAP picture, for some types of media container format, the unit type can be mapped to one of the three SAP (“stream access point”) types described below, which are also defined in ISO/IEC 14496-12 4th Edition, “Information technology—Coding of audio-visual objects—Part 12: ISO base media file format”, w12640, 100th MPEG meeting, Geneva, April 2012. Although a total of 6 SAP types are defined, a RAP picture can only be mapped to three of the SAP types of that document. The available SAP types include: Type 1, Type 2, and Type 3. Type 1 corresponds to some examples of a “closed GOP random access point” (in which all access units, in decoding order, starting from the point ISAP can be correctly decoded, resulting in a continuous time sequence of correctly decoded access units with no gaps), for which the access unit in decoding order is also the first access unit in presentation order. Type 2 corresponds to other examples of a “closed GOP random access point”, for which the first access unit in decoding order in the media stream starting from the point ISAU is not the first access unit in presentation order. Type 3 corresponds to examples of an “open GOP random access point”, in which there are some access units in decoding order following the point ISAU that cannot be correctly decoded and have presentation times less than the time TSAP. From a systems perspective, it is desirable to make the SAP mapping as simple as possible while allowing the use of as many types as possible. In some embodiments of the disclosed technology, a RAP picture includes one or more of the following constraints and adjustments to permitted unit types. In the following examples, a RAP picture can be further classified into a CRA picture, a BLA (“broken link access”) picture or an IDR picture, depending on the NAL unit type.
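The mapping from a RAP picture to an SAP type discussed in this section can be summarized by the following sketch. It is a simplification for illustration only (the structure and function names are hypothetical), not a statement of the normative container-format rules.

    /* Illustrative properties of a RAP picture relevant to SAP classification. */
    typedef struct {
        int has_leading_pictures;               /* any associated leading pictures          */
        int has_non_decodable_leading_pictures; /* any associated TFD/RASL leading pictures */
    } RapProperties;

    /* Map a RAP picture to SAP type 1, 2 or 3 as described above. */
    static int sap_type_for_rap(const RapProperties *p)
    {
        if (p->has_non_decodable_leading_pictures)
            return 3;  /* open GOP random access point                           */
        if (p->has_leading_pictures)
            return 2;  /* closed GOP; the RAP is not first in presentation order */
        return 1;      /* closed GOP; the RAP is also first in presentation order */
    }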
In the HEVC design in JCTVC-I1003_d5, a CRA picture is represented by NAL unit types 4 and 5, a BLA picture is represented by NAL unit types 6 and 7, and an IDR picture is represented by a NAL unit type of 8. NAL unit types 5 and 7 can be used for a CRA picture and a BLA picture, respectively, only when the CRA or BLA picture does not have any associated TFD (“tagged for discard”) pictures.
A. Removal of IDR Pictures or Constraint on IDR Pictures
Consistent with some of the embodiments disclosed above in Section V, the concept of BLA pictures has been adopted into the HEVC design in JCTVC-I1003_d5 from the proposal JCTVC-I0404 (G. J. Sullivan, “CRA pictures with broken links”, JCTVC-I0404, 9th meeting, Geneva, April 2012). That proposal also pointed out that a CRA/BLA picture can achieve the functionality of an IDR picture (and more) and hence recommended that the concept of an IDR picture be dropped from the HEVC design, but IDR pictures remained in the HEVC design in JCTVC-I1003_d5. In certain embodiments of the disclosed technology, IDR pictures are still used, but encoding follows a further constraint that simplifies the mapping of an IDR picture to an SAP type. In the HEVC design in JCTVC-I1003, an IDR picture can map to SAP types 1 or 2. If the IDR picture has leading pictures (coded (and decodable) pictures that follow the current picture in decoding order but precede it in output order), it will be mapped to SAP type 2. If the IDR picture does not have leading pictures, it will be mapped to SAP type 1. So, when a system encounters an IDR picture, the system must check whether there are leading pictures or not in order to determine the correct mapping to an SAP type, which can unnecessarily consume computing and storage resources for what is a rare case. According to one exemplary embodiment of the disclosed technology, IDR pictures are constrained to not have leading pictures. With this constraint, an IDR picture always maps to a SAP of type 1.
B. NAL Unit Types for CRA/BLA Pictures
In certain implementations of the disclosed technology, when there are no TFD pictures, the functionality of a CRA picture is identical to that of a BLA picture. Hence, the necessity of defining two NAL unit types for this purpose can be avoided, and a single type value can indicate a CRA picture or BLA picture with no associated TFD pictures. Moreover, a CRA/BLA picture with no associated TFD pictures can map to SAP types 1 or 2 depending on whether it has leading pictures or not. In particular implementations, one of the redundant NAL unit types can be used to indicate the case where a CRA/BLA picture directly maps to SAP type 1 (which occurs when the CRA/BLA picture has no leading pictures). This simplifies mapping to an appropriate SAP type for the common case of a CRA/BLA with no leading pictures. One specific exemplary implementation comprises NAL unit types (in this example, NAL unit types 4 through 7) defined as in Table 1 below:
TABLE 1
NAL unit type   Description                                        SAP types possible
4               CRA picture                                        1, 2, 3
5               BLA picture                                        1, 2, 3
6               CRA/BLA picture with no associated TFD pictures    1, 2
7               CRA/BLA picture with no leading pictures           1
Another specific exemplary implementation comprises NAL unit types (in this example, NAL unit types 16-21) as defined below. In this example, TFD leading pictures are referred to as random access skipped leading (“RASL”) pictures. In particular implementations, all RASL pictures are leading pictures of an associated BLA or CRA picture.
When the associated RAP picture is a BLA picture or is the first coded picture in the bitstream, the RASL picture is not output and may not be correctly decodable, as the RASL picture may contain references to pictures that are not present in the bitstream. Further, RASL pictures are not used as reference pictures for the decoding process of non-RASL pictures. In certain example implementations, when present, all RASL pictures precede, in decoding order, all trailing pictures of the same associated RAP picture. Furthermore, in the example below, decodable leading pictures are referred to as random access decodable leading (RADL) pictures. In particular implementations, all RADL pictures are leading pictures, and RADL pictures are not used as reference pictures for the decoding process of trailing pictures of the same associated RAP picture. In certain example implementations, when present, all RADL pictures precede, in decoding order, all trailing pictures of the same associated RAP picture. Further, in particular implementations, the BLA picture (a) contains only I slices, and may be the first picture in the bitstream in decoding order, or may appear later in the bitstream; (b) begins a new coded video sequence, and has the same effect on the decoding process as an IDR picture; and (c) contains syntax elements that specify a non-empty reference picture set.
TABLE 2
NAL unit type   Description                                                       NAL unit type name
16              A BLA picture that may have associated RASL pictures, which      BLA_W_LP
                are not output by the decoder and may not be decodable, as
                they may contain references to pictures that are not present
                in the bitstream. The BLA picture may also have associated
                RADL pictures, which are specified to be decoded.
17              A BLA picture that does not have associated RASL pictures but    BLA_W_DLP
                may have associated RADL pictures, which are specified to be
                decoded.
18              A BLA picture that does not have any associated leading          BLA_N_LP
                pictures.
19              An IDR picture that does not have associated RASL pictures       IDR_W_DLP
                present in the bitstream, but may have associated RADL
                pictures in the bitstream.
20              An IDR picture that does not have associated leading pictures    IDR_N_LP
                present in the bitstream.
21              A CRA picture.                                                   CRA_NUT
Alternatively, other type values are used for the video elementary bitstream data (e.g., other NAL unit type values, or other video type values) and/or the media container format data (e.g., other SAP type values or other container format values), consistent with one or more of these constraints on RAPs and permitted combinations of types of pictures.
C. Constraint on the Bitstream Order of Leading Pictures
When an encoding system maps a RAP picture into one of the possible SAP types, it checks for the existence of leading pictures and, if present, whether any of the pictures is a TFD picture. According to constraints on inter-picture dependencies in the HEVC design in JCTVC-I1003_d5, leading pictures of a current RAP picture can be present anywhere in the bitstream after the current RAP picture and before the next RAP picture. The extent of the search for leading pictures is potentially very long. In order to make this search simpler, and according to certain implementations of the disclosed technology, a constraint exists to ensure the occurrence of all leading pictures in the bitstream (that is, in decoding order) prior to any non-leading picture for a RAP picture.
That is, for a given RAP picture, the bitstream is constrained so that all leading pictures for that RAP picture occur in the bitstream (that is, in decoding order) prior to any non-leading pictures for that RAP picture.
VII. GENERAL EMBODIMENTS FOR IMPLEMENTING ASPECTS OF THE DISCLOSED TECHNOLOGY
FIGS. 6-11 are flow charts illustrating example embodiments according to the disclosed technology. The methods shown in FIGS. 6-11 can include any one or more of the specific aspects disclosed above or below. Furthermore, the methods shown in FIGS. 6-11 should not be construed as limiting, as any one or more of the method acts shown therein can be used alone or in various combinations or sub-combinations with one another. Furthermore, the sequence of the method acts can, in some cases, be re-arranged or performed at least partially concurrently. Additionally, and as noted above, the methods disclosed in FIGS. 6-11 can be implemented as computer-executable instructions stored on a computer-readable storage medium (where such storage medium does not include propagating waves) or by a digital media processing system.
FIG. 6 is an example method 600 that can be performed by an encoder or digital media processing tool or device. At 610, a picture (e.g., a picture from a group of pictures in a video sequence) is designated as being one of a plurality of picture types. In certain embodiments, the picture types include any one or more of the following: (1) a type indicating that the picture is a broken link access (BLA) picture that is capable of being used as a random access point (RAP) picture and further indicating that the picture does not have any associated non-decodable leading pictures but may have one or more associated decodable leading pictures; (2) a type indicating that the picture is a BLA picture that is capable of being used as a RAP picture and further indicating that the picture does not have any associated leading pictures; (3) a type indicating that the picture is a BLA picture that is capable of being used as a RAP picture and further indicating that the picture may have one or more associated decodable or non-decodable leading pictures; (4) a type indicating that the encoded picture is an instantaneous decoding refresh (IDR) picture that may have associated RADL pictures; (5) a type indicating that the encoded picture is an IDR picture that does not have any associated leading pictures; and/or (6) a type indicating that the encoded picture is a clean random access (CRA) picture that is capable of being used as a RAP picture.
As noted above, one or more of the types indicate that the picture is a BLA picture. In certain embodiments, a BLA picture contains only I slices, and may be the first picture in the bitstream in decoding order, or may appear later in the bitstream. Further, in these embodiments, a BLA picture begins a new coded video sequence, and has the same effect on the decoding process as an IDR picture; however, a BLA picture contains syntax elements that specify a non-empty reference picture set (which can be ignored during decoding). In some embodiments, the first BLA type noted above indicates that the BLA picture does not have associated random access skipped leading (RASL) pictures but may have associated random access decodable leading (RADL) pictures, which are specified to be decoded (e.g., a NAL unit type can be used to specify the leading picture as either a RASL picture or RADL picture).
In certain implementations, all RASL pictures are leading pictures of an associated BLA or CRA picture. When the associated RAP picture is a BLA picture or is the first coded picture in the bitstream, the RASL picture is not output by the decoder and may not be correctly decodable, as the RASL picture may contain references to pictures that are not present in the bitstream. RASL pictures are not used as reference pictures for the decoding process of non-RASL pictures. Further, in certain implementations, when present, all RASL pictures precede, in decoding order, all trailing pictures of the same associated RAP picture. Additionally, in some implementations, all RADL pictures are leading pictures. RADL pictures are not used as reference pictures for the decoding process of trailing pictures of the same associated RAP picture. Further, in certain implementations, when present, all RADL pictures precede, in decoding order, all trailing pictures of the same associated RAP picture. Although this type can have a wide variety of names, this type is named “BLA_W_DLP” in one particular implementation.
In some embodiments, the second BLA type noted above indicates that the BLA picture does not have any associated leading pictures. Although this type can have a wide variety of names, in one particular implementation, this type is named “BLA_N_LP”.
In certain embodiments, the third BLA type noted above indicates that the BLA picture may have associated RASL pictures, which are not output by the decoder and may not be decodable, as they may contain references to pictures that are not present in the bitstream. This type also indicates that the BLA picture may have associated RADL pictures, which are specified to be decoded. Although this type can have a wide variety of names, this type is named “BLA_W_LP” in one particular implementation.
In some embodiments, the fourth type noted above indicates that the picture is an IDR picture that does not have associated RASL pictures present in the bitstream, but may have associated RADL pictures in the bitstream. In particular implementations, an IDR picture contains only I slices, and may be the first picture in the bitstream in decoding order, or may appear later in the bitstream. Each IDR picture is the first picture of a coded video sequence in decoding order. An IDR picture does not have associated RASL pictures. Although this type can have a wide variety of names, this type is named “IDR_W_DLP” in one particular implementation.
In certain embodiments, the fifth type noted above indicates that the picture is an IDR picture that does not have any associated leading pictures. Although this type can have a wide variety of names, this type is named “IDR_N_LP” in one particular implementation.
In some embodiments, the sixth type noted above indicates that the picture is a CRA picture. In particular implementations, a CRA picture contains only I slices, and may be the first picture in the bitstream in decoding order, or may appear later in the bitstream. Further, a CRA picture may have associated RADL or RASL pictures. When a CRA picture is the first picture in the bitstream in decoding order, the CRA picture is the first picture of a coded video sequence in decoding order, and any associated RASL pictures are not output by the decoder and may not be decodable, as they may contain references to pictures that are not present in the bitstream.
Although this type can have a wide variety of names, in one particular implementation, this type is named “CRA_NUT”.
At 612, at least part of a bitstream is generated. In the illustrated embodiment, the at least part of the bitstream comprises the picture type designated for the picture (e.g., as a syntax element, such as the NAL unit type). In certain implementations, the method 600 is performed by an encoder and the method further comprises encoding the picture. The bitstream can further include the encoded picture. A wide variety of encoding techniques can be used. For example, any of the encoding techniques described above can be used. In certain embodiments, the encoded picture that is designated as a BLA picture is not the first picture of the bitstream. In some embodiments, the method further comprises encoding one or more leading pictures and non-leading pictures associated with the encoded picture. In such embodiments, the act of generating the at least a portion of a bitstream can further comprise ordering the encoded leading pictures and encoded non-leading pictures such that all of the encoded leading pictures precede all of the encoded non-leading pictures in the at least a portion of a bitstream. The leading pictures can also be designated as either a RADL or RASL picture (e.g., using a NAL unit type value).
FIG. 7 is an example method 700 performed by a decoder or digital media processing tool or device. In general, the method 700 can be performed to decode the bitstream generated from, for example, the method 600 of FIG. 6. At 710, at least part of a bitstream is received (e.g., buffered, accessed, loaded, or otherwise prepared for further processing). In the illustrated embodiment, the at least part of the bitstream comprises an encoded picture and a picture type designated for the encoded picture. The picture type is selected from one of a plurality of picture types. In certain embodiments, the plurality of picture types include one or more of the following: (1) a type indicating that the encoded picture is a broken link access (BLA) picture that is capable of being used as a random access point (RAP) picture and further indicating that the encoded picture does not have any associated non-decodable leading pictures but may have one or more associated decodable leading pictures; (2) a type indicating that the encoded picture is a BLA picture that is capable of being used as a RAP picture and further indicating that the encoded picture does not have any associated leading pictures; (3) a type indicating that the encoded picture is a BLA picture that is capable of being used as a RAP picture and further indicating that the encoded picture may have one or more associated decodable or non-decodable leading pictures; (4) a type indicating that the encoded picture is an instantaneous decoding refresh (IDR) picture that may have associated RADL pictures; (5) a type indicating that the encoded picture is an IDR picture that does not have any associated leading pictures; and/or (6) a type indicating that the encoded picture is a clean random access (CRA) picture that is capable of being used as a RAP picture. Further details concerning exemplary implementations for the picture types are described above with respect to FIG. 6.
At 712, the encoded picture is decoded. A wide variety of decoding techniques can be used. For example, any of the decoding techniques described above can be used. In certain embodiments, the encoded picture is not the first picture of the bitstream.
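As one illustration of how the designated picture type received at 710 might be interpreted, the following sketch maps the NAL unit type values of Table 2 (one particular exemplary implementation) to simple queries that a decoder could use; the enum and function names are illustrative only and are not asserted to be part of any standard.

    /* NAL unit type values from the exemplary implementation of Table 2. */
    typedef enum {
        NUT_BLA_W_LP  = 16,
        NUT_BLA_W_DLP = 17,
        NUT_BLA_N_LP  = 18,
        NUT_IDR_W_DLP = 19,
        NUT_IDR_N_LP  = 20,
        NUT_CRA_NUT   = 21
    } NalUnitType;

    /* The picture can serve as a random access point (RAP) picture. */
    static int is_rap_picture(NalUnitType t)
    {
        return t >= NUT_BLA_W_LP && t <= NUT_CRA_NUT;
    }

    /* The RAP picture may have associated RASL (potentially non-decodable) leading pictures. */
    static int may_have_rasl_pictures(NalUnitType t)
    {
        return t == NUT_BLA_W_LP || t == NUT_CRA_NUT;
    }

    /* The RAP picture is indicated to have no associated leading pictures at all. */
    static int has_no_leading_pictures(NalUnitType t)
    {
        return t == NUT_BLA_N_LP || t == NUT_IDR_N_LP;
    }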
In some embodiments, the method further comprises decoding one or more leading pictures associated with the encoded picture of 710 and one or more non-leading pictures associated with the encoded picture of 710. In such embodiments, the at least a portion of the bitstream can be ordered such that all of the encoded leading pictures associated with the first picture precede all of the encoded non-leading pictures associated with the first picture. Additionally, the leading pictures can be designated as either a RADL or RASL picture (e.g., using a NAL unit type value).
FIG. 8 is an example method 800 performed by an encoder or digital media processing tool or device. At 810, at least a portion of a bitstream is generated. In the illustrated embodiment, the bitstream is generated such that it includes a random access point picture that is not the initial picture of the bitstream (e.g., the random access point picture is in a picture order position subsequent to an initial picture of the bitstream) and such that the random access point picture has one or more associated non-decodable leading pictures. Further, in the illustrated embodiment, the bitstream is generated to include an indication for a decoder that signals that the random access point picture is a picture from which a decoder can begin decoding. In certain implementations, the indication includes further information. For example, the indication can further indicate whether a RAP picture has any associated leading pictures or not and, if the encoded picture has any associated leading pictures, whether all the associated leading pictures are decodable or not. The indication can have a variety of formats. For instance, in one implementation, the indication is a syntax element (such as a NAL unit type as shown, for example, in Table 2). In some implementations, the at least a portion of the bitstream further comprises one or more leading pictures for the encoded picture and one or more non-leading pictures for the encoded picture. In such implementations, the act of generating the at least a portion of the bitstream can comprise ordering the leading pictures for the encoded picture and the non-leading pictures for the encoded picture such that all of the leading pictures precede the non-leading pictures. At 812, the at least a portion of the bitstream is output (e.g., by storing in a computer-readable storage medium, writing to a file, or other such form of outputting).
FIG. 9 is an example method 900 performed by a decoder or digital media processing tool or device. At 910, at least a portion of a bitstream is received. In the illustrated embodiment, the at least a portion of the bitstream comprises a random access point picture at a picture order position subsequent to an initial picture of the bitstream. Further, the at least a portion of the bitstream includes one or more non-decodable leading pictures associated with the random access point picture. The at least a portion of the bitstream can also comprise an indication of whether the random access point picture has any associated leading pictures or not and, if the random access point picture has any associated leading pictures, an indication of whether all the associated leading pictures are decodable. The indication can have a variety of formats.
For instance, in one implementation, the indication is a syntax element (such as a NAL unit value as shown, for example, in Table 2) that signals whether the random access point picture has any associated leading pictures or not and, if the random access point picture has any associated leading pictures, an indication of whether all the associated leading pictures are decodable. At 912, the random access point picture is decoded.
FIG. 10 is an example method 1000 performed by an encoder or digital media processing tool or device. At 1010, a bitstream is generated that includes a picture designated to be a random access point (“RAP”). Furthermore, the generating is performed such that any and all leading pictures for the RAP precede any non-leading picture for the RAP in decoding order. At 1012, the bitstream is output (e.g., by storing the bitstream in a computer-readable storage medium or by writing the bitstream to a file).
FIG. 11 is an example method 1100 performed by a decoder or digital media processing tool or device. At 1110, a bitstream comprising encoded data for plural pictures is received (e.g., buffered into memory, accessed, loaded, or otherwise input for further processing). In the illustrated embodiment, the bitstream includes a picture designated to be a random access point (“RAP”) picture. Further, in the illustrated embodiment, the bitstream has been generated under a constraint that any and all leading pictures for the RAP precede in decoding order any non-leading picture for the RAP. At 1112, the plural pictures are decoded.
Embodiments of the disclosed technology can be used to increase the number of available random access points from which a decoder can begin decoding without substantially affecting video playback quality. Thus, embodiments of the disclosed technology can improve the speed and/or seamlessness with which video coding systems can operate. For example, the use of BLA pictures and associated indicators can improve the performance of a wide variety of operations—such as fast forward operations, rewind operations, scanning operations, splicing operations, or switching operations between video streams—by presenting increased numbers of random access points for beginning the decoding process and by presenting information that can be used by a decoder to appropriately handle the video that begins at the random access point (e.g., at the BLA picture). FIGS. 12 and 13 below present exemplary video processing methods that take advantage of the BLA techniques disclosed herein. The disclosed techniques can be used in video encoding or video decoding systems to more flexibly perform adaptive video delivery, production editing, commercial insertion, and the like.
FIG. 12 is an example method 1200 performed by a media processing tool or device. At 1210, a first bitstream portion is decoded. At 1212, an indication that an encoded picture in a second bitstream portion is a broken link access picture is detected (e.g., by parsing and processing a NAL unit type value for the picture). At 1214, at least some of the second bitstream portion is decoded beginning at the broken link access picture. In certain implementations, the decoding further includes skipping (e.g., not decoding or not outputting) one or more pictures of the second bitstream associated with the broken link access picture. For example, the decoder can decode leading pictures designated as RADL pictures and skip the decoding of pictures designated as RASL pictures.
At 1216, the decoded pictures from the first bitstream portion are output followed by decoded pictures from the second bitstream portion. The example method 1200 can be performed as part of a fast forward operation, rewind operation, or a scan operation (e.g., initiated by a user who wishes to scan to a particular point or time in a video stream) implemented by a media processing device (e.g., a video playback tool or device). In such instances, the bitstream portions are part of the same bitstream. The example method 1200 can also be performed when a stream, broadcast, or channel switching operation is performed (e.g., as performed by a video decoder used in connection with a cable, satellite, or Internet TV system). In such instances, the bitstream portions are from different bitstreams. Furthermore, in certain implementations, the indication indicates that the encoded picture is one of a plurality of types of broken link access pictures, where the plurality of types include two or more of the following: a type that may include one or more leading pictures, a type that may contain one or more leading pictures but no non-decodable leading pictures, or a type that contains no leading pictures. The indication may signal any one or more of the other types disclosed herein as well.
FIG. 13 is an example method 1300 performed by a media processing device or application. At 1310, at least a portion of a first bitstream is received. At 1312, at least a portion of a second bitstream is received. At 1314, the at least a portion of the first bitstream is spliced with the at least a portion of the second bitstream at a broken link access picture. In certain embodiments, the splicing operation additionally comprises omitting random access skipped leading (RASL) pictures associated with the broken link access picture. Furthermore, in some embodiments, the splicing can include identifying a clean random access picture as the splice point and designating the clean random access picture as the broken link access picture in the spliced bitstream. Furthermore, in certain embodiments, the method can further comprise detecting an indication (e.g., by parsing and processing a NAL unit type value for the picture) that the broken link access picture is one of a plurality of broken link access picture types (e.g., any of the types disclosed herein). The method 1300 can be performed, for example, by a video editing device or application, or by a media playback device or application. In certain embodiments, the indication indicates that the encoded picture is one of a plurality of types of broken link access pictures, wherein the plurality of types include any two or more of the following: a type that may include one or more leading pictures, a type that may contain one or more leading pictures but no non-decodable leading pictures, or a type that contains no leading pictures. The indication may signal any one or more of the other types disclosed herein as well.
FIG. 14 is an example method 1400 that can be performed by an encoder or digital media processing tool or device. At 1410, a BLA picture is encoded. At 1412, one or more leading pictures associated with the BLA picture are encoded. At 1414, a bitstream is generated that comprises the encoded BLA picture and the one or more encoded associated leading pictures.
Furthermore, in the illustrated embodiment, the act of generating the bitstream further comprises generating in the bitstream explicit indications for each of the one or more encoded associated leading pictures indicating whether the respective leading picture is decodable or not decodable when pictures from before the BLA picture in decoding order are unavailable to a decoder (e.g., as may occur after a splicing, fast forward, rewind, video stream changing operation, or the like). In particular implementations, the indications are NAL unit types that identify whether the respective leading picture is a RASL picture or a RADL picture. Further, in certain implementations, the act of generating the bitstream can further comprise generating an explicit indication that the BLA picture is one of a plurality of types of BLA pictures. For example, the picture can be designated as a BLA type that may have one or more associated decodable or non-decodable leading pictures (e.g., a BLA_W_LP type).
FIG. 15 is an example method 1500 performed by a decoder or digital media processing tool or device. For example, the method can be performed to decode the bitstream generated in FIG. 14. At 1510, a bitstream is received that comprises a BLA picture and one or more encoded leading pictures associated with the BLA picture. In the illustrated example, the bitstream further comprises explicit indications for each of the one or more encoded associated leading pictures indicating whether a respective leading picture is decodable or not decodable when pictures from before the BLA picture are unavailable to the decoder (e.g., as may occur after a splicing, fast forward, rewind, video stream changing operation, or the like). At 1512, the encoded BLA picture and the one or more encoded associated leading pictures are decoded in accordance with the explicit indications. In some instances, the bitstream further comprises an explicit indication that the BLA picture is one of a plurality of types of BLA pictures. For example, the BLA picture can be a broken link access type that may have one or more associated decodable or non-decodable leading pictures. Further, in some examples, the explicit indications for each of the one or more encoded associated leading pictures indicate that a respective leading picture is either a decodable leading picture or a non-decodable picture when pictures from before the BLA picture in decoding order are unavailable (e.g., the leading pictures can be designated as RASL or RADL pictures).
VIII. CONCLUDING REMARKS
In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.
DETAILED DESCRIPTION
The present document provides various techniques that can be used by a decoder of image or video bitstreams to improve the quality of decompressed or decoded digital video or images. For brevity, the term “video” is used herein to include both a sequence of pictures (traditionally called video) and individual images. Furthermore, a video encoder may also implement these techniques during the process of encoding in order to reconstruct decoded frames used for further encoding. Section headings are used in the present document for ease of understanding and do not limit the embodiments and techniques to the corresponding sections. As such, embodiments from one section can be combined with embodiments from other sections.
1. SUMMARY
This document is related to video coding technologies. Specifically, it is related to index and escape symbols coding in palette coding. It may be applied to an existing video coding standard like High Efficiency Video Coding (HEVC), or to the standard to be finalized (Versatile Video Coding (VVC)). It may also be applicable to future video coding standards or video codecs.
2. BACKGROUND
Video coding standards have evolved primarily through the development of the well-known International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standards. The ITU-T produced H.261 and H.263, ISO/IEC produced Moving Picture Experts Group (MPEG)-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, video coding standards have been based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by the Video Coding Experts Group (VCEG) and MPEG in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard targeting a 50% bitrate reduction compared to HEVC. The latest version of the VVC draft, i.e., Versatile Video Coding (Draft 6), could be found at: http://phenix.it-sudparis.eu/jvet/doc_end_user/documents/15_Gothenburg/wg11/JVET-02001-v14.zip The latest reference software of VVC, named VTM, could be found at: https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/tags/VTM-5.0
2.1 Palette Mode in HEVC Screen Content Coding Extensions (HEVC-SCC)
2.1.1 Concept of Palette Mode
The basic idea behind a palette mode is that the pixels in the coding unit (CU) are represented by a small set of representative color values. This set is referred to as the palette. It is also possible to indicate a sample that is outside the palette by signalling an escape symbol followed by (possibly quantized) component values. This kind of pixel is called an escape pixel. The palette mode is illustrated in FIG. 1. As depicted in FIG. 1, for each pixel with three color components (luma, and two chroma components), an index to the palette is identified, and the block could be reconstructed based on the identified values in the palette.
2.1.2 Coding of the Palette Entries
For coding of the palette entries, a palette predictor is maintained.
The maximum size of the palette as well as the palette predictor is signalled in the sequence parameter set (SPS). In HEVC-SCC, a palette_predictor_initializer_present_flag is introduced in the picture parameter set (PPS). When this flag is 1, entries for initializing the palette predictor are signalled in the bitstream. The palette predictor is initialized at the beginning of each coding tree unit (CTU) row, each slice and each tile. Depending on the value of the palette_predictor_initializer_present_flag, the palette predictor is reset to 0 or initialized using the palette predictor initializer entries signalled in the PPS. In HEVC-SCC, a palette predictor initializer of size 0 was enabled to allow explicit disabling of the palette predictor initialization at the PPS level. For each entry in the palette predictor, a reuse flag is signalled to indicate whether it is part of the current palette. This is illustrated in FIG. 2. The reuse flags are sent using run-length coding of zeros. After this, the number of new palette entries is signalled using Exponential Golomb (EG) code of order 0, i.e., EG-0. Finally, the component values for the new palette entries are signalled.
2.1.3 Coding of Palette Indices
The palette indices are coded using horizontal and vertical traverse scans as shown in FIG. 3. The scan order is explicitly signalled in the bitstream using the palette_transpose_flag. For the rest of the subsection it is assumed that the scan is horizontal. The palette indices are coded using two palette sample modes: ‘COPY_LEFT’ and ‘COPY_ABOVE’. In the ‘COPY_LEFT’ mode, the palette index is set to a decoded index value. In the ‘COPY_ABOVE’ mode, the palette index of the sample in the row above is copied. For both ‘COPY_LEFT’ and ‘COPY_ABOVE’ modes, a run value is signalled which specifies the number of subsequent samples that are also coded using the same mode. In the palette mode, the value of an index for the escape symbol is the number of palette entries. When an escape symbol is part of the run in ‘COPY_LEFT’ or ‘COPY_ABOVE’ mode, the escape component values are signalled for each escape symbol. The coding of palette indices is illustrated in FIG. 4. This syntax order is accomplished as follows. First the number of index values for the CU is signalled. This is followed by signalling of the actual index values for the entire CU using truncated binary coding. Both the number of indices as well as the index values are coded in bypass mode. This groups the index-related bypass bins together. Then the palette sample mode (if necessary) and run are signalled in an interleaved manner. Finally, the component escape values corresponding to the escape symbols for the entire CU are grouped together and coded in bypass mode. The binarization of escape symbols is EG coding with 3rd order, i.e., EG-3. An additional syntax element, last_run_type_flag, is signalled after signalling the index values. This syntax element, in conjunction with the number of indices, eliminates the need to signal the run value corresponding to the last run in the block. In HEVC-SCC, the palette mode is also enabled for 4:2:2, 4:2:0, and monochrome chroma formats. The signalling of the palette entries and palette indices is almost identical for all the chroma formats. In case of non-monochrome formats, each palette entry consists of 3 components. For the monochrome format, each palette entry consists of a single component.
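As an informal illustration of the two palette sample modes described above, the following simplified sketch in C fills part of a CU's index map; it is not the normative HEVC-SCC parsing process, the function and parameter names are made up, and a plain raster scan is used instead of the traverse scan for brevity.

    /* Fill run_plus1 samples of a CU's palette index map starting at (x, y),
     * for a block of the given width, advancing in simple raster order.
     * copy_above == 0: 'COPY_LEFT' (INDEX) mode, one decoded index is repeated.
     * copy_above == 1: 'COPY_ABOVE' mode, each index is copied from the row above. */
    static void apply_palette_run(int index_map[][64], int x, int y, int width,
                                  int run_plus1, int copy_above, int decoded_index)
    {
        for (int i = 0; i < run_plus1; i++) {
            index_map[y][x] = copy_above ? index_map[y - 1][x] : decoded_index;
            if (++x == width) {   /* move to the next row */
                x = 0;
                y++;
            }
        }
    }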
For subsampled chroma directions, the chroma samples are associated with luma sample indices that are divisible by 2. After reconstructing the palette indices for the CU, if a sample has only a single component associated with it, only the first component of the palette entry is used. The only difference in signalling is for the escape component values. For each escape symbol, the number of escape component values signalled may be different depending on the number of components associated with that symbol.
In addition, there is an index adjustment process in the palette index coding. When signalling a palette index, the left neighboring index or the above neighboring index should be different from the current index. Therefore, the range of the current palette index could be reduced by 1 by removing one possibility. After that, the index is signalled with truncated binary (TB) binarization. The text related to this part is shown as follows, where CurrPaletteIndex is the current palette index and adjustedRefPaletteIndex is the prediction index.
The variable PaletteIndexMap[xC][yC] specifies a palette index, which is an index to the array represented by CurrentPaletteEntries. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The value of PaletteIndexMap[xC][yC] shall be in the range of 0 to MaxPaletteIndex, inclusive.
The variable adjustedRefPaletteIndex is derived as follows:
adjustedRefPaletteIndex = MaxPaletteIndex + 1
if( PaletteScanPos > 0 ) {
    xcPrev = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]
    ycPrev = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]
    if( CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] = = 0 ) {
        adjustedRefPaletteIndex = PaletteIndexMap[ xcPrev ][ ycPrev ]    (7-157)
    } else {
        if( !palette_transpose_flag )
            adjustedRefPaletteIndex = PaletteIndexMap[ xC ][ yC − 1 ]
        else
            adjustedRefPaletteIndex = PaletteIndexMap[ xC − 1 ][ yC ]
    }
}
When CopyAboveIndicesFlag[xC][yC] is equal to 0, the variable CurrPaletteIndex is derived as follows:
if( CurrPaletteIndex >= adjustedRefPaletteIndex )
    CurrPaletteIndex++
In addition, the run length elements in the palette mode are context coded. The related context derivation process described in JVET-02011-vE is shown as follows.
Derivation Process of ctxInc for the Syntax Element palette_run_prefix
Inputs to this process are the bin index binIdx and the syntax elements copy_above_palette_indices_flag and palette_idx_idc. Output of this process is the variable ctxInc. The variable ctxInc is derived as follows:
— If copy_above_palette_indices_flag is equal to 0 and binIdx is equal to 0, ctxInc is derived as follows:
ctxInc = ( palette_idx_idc < 1 ) ? 0 : ( ( palette_idx_idc < 3 ) ? 1 : 2 )    (9-69)
— Otherwise, ctxInc is provided by Table 1:
TABLE 1
Specification of ctxIdxMap[ copy_above_palette_indices_flag ][ binIdx ]
binIdx                                      0        1   2   3   4   >4
copy_above_palette_indices_flag = = 1       5        6   6   7   7   bypass
copy_above_palette_indices_flag = = 0       0, 1, 2  3   3   4   4   bypass
2.2 Palette Mode in VVC
2.2.1 Palette in Dual Tree
In VVC, the dual tree coding structure is used in coding the intra slices, so the luma component and the two chroma components may have different palettes and palette indices. In addition, the two chroma components share the same palette and palette indices.
2.2.2 Palette as a Separate Mode
In JVET-N0258 and the current VTM, the prediction modes for a coding unit can be MODE_INTRA, MODE_INTER, MODE_IBC and MODE_PLT. The binarization of prediction modes is changed accordingly.
When IBC is turned off, on I tiles, the first one bin is employed to indicate whether the current prediction mode is MODE_PLT or not. While on P/B tiles, the first bin is employed to indicate whether the current prediction mode is MODE_INTRA or not. If not, one additional bin is employed to indicate the current prediction mode is MODE_PLT or MODE_INTER. When IBC is turned on, on I tiles, the first bin is employed to indicate whether the current prediction mode is MODE_IBC or not. If not, the second bin is employed to indicate whether the current prediction mode is MODE_PLT or MODE_INTRA. While on P/B tiles, the first bin is employed to indicate whether the current prediction mode is MODE_INTRA or not. If it is an intra mode, the second bin is employed to indicate the current prediction mode is MODE_PLT or MODE_INTRA. If not, the second bin is employed to indicate the current prediction mode is MODE_IBC or MODE_INTER. The related texts in JVET-O2001-vE are shown as follows. Coding Unit Syntax coding_unit( x0, y0, cbWidth, cbHeight, cqtDepth, treeType, modeType ) {DescriptorchType = treeType = = DUAL_TREE_CHROMA? 1 : 0if( slice_type != I ∥ sps_ibc_enabled_flag ∥ sps_palette_enabled_flag) {if(   treeType   !=   DUAL TREE CHROMA    &&!( ( ( cbWidth = = 4 && cbHeight = = 4 ) ∥ modeType = = MODE_TYPE_INTRA )&& !sps_ibc_enabled_flag ) )cu_skip_flag[ x0 ][ y0 ]ae(v)if(  cu_skip_flag[ x0 ][ y0 ]  = =  0   &&   slice_type   !=   I&& !( cbWidth = = 4 && cbHeight = = 4 ) && modeType = = MODE_TYPE_ALL )pred_mode_flagae(v)if(  ( ( slice_type  = =  I   &&   cu_skip_flag[ x0 ][ y0 ]  = =0 )   ∥( slice_type != I && ( CuPredMode[ chType ][ x0 ][ y0 ] != MODE_INTRA  ∥( cbWidth = = 4 && cbHeight = = 4 && cu_skip_flag[ x0 ][ y0 ] = = 0 ) ) ) ) &&cbWidth <= 64 && cbHeight <= 64 && modeType != MODE_TYPE_INTER &&sps_ibc_enabled_flag && treeType != DUAL_TREE_CHROMA )pred_mode_ibc_flagae(v)if( ( ( ( slice_type = = I ∥ ( cbWidth = = 4 && cbHeight = = 4) ∥ sps_ibc_enabled_flag)                                     &&CuPredMode[ x0 ][ y0 ]    = =    MODE INTRA)    ∥( slice_type != I  &&  !( cbWidth = = 4 && cbHeight = = 4 )  &&!sps_ibc_enabled_flag&& CuPredMode[ x0 ][ y0 ] != MODE_INTRA ) ) && sps_palette_enabled_flag&&cbWidth <= 64 && cbHeight <= 64 && cu_skip_flag[ x0 ][ y0 ] = = 0 &&modeType != MODE_TYPE_INTER )pred_mode_plt_flagae(v)}...} 2.2.3 Palette Mode Syntax palette_coding( x0, y0, cbWidth, cbHeight, startComp, numComps ) {DescriptorpalettePredictionFinished = 0NumPredictedPaletteEntries = 0for( predictorEntryIdx = 0; predictorEntryIdx < PredictorPaletteSize[ startComp ] &&!palettePredictionFinished                      &&NumPredictedPaletteEntries[ startComp ] < palette_max_size; predictorEntryIdx++ ) {palette_predictor_runae(v)if( palette_predictor_run != 1 ) {if( palette_predictor_run > 1 )predictorEntryIdx += palette_predictor_run − 1PalettePredictorEntryReuseFlags[ predictorEntryIdx ] = 1NumPredictedPaletteEntries++} elsepalettePredictionFinished = 1}if( NumPredictedPaletteEntries < palette_max_size )num_signalled_palette_entriesae(v)for( cIdx = startComp; cIdx < ( startComp + numComps); cIdx++ )for( i = 0; i < num_signalled_palette_entries; i++ )new_palette_entries[ cIdx ][ i ]ae(v)if( CurrentPaletteSize[ startComp ] > 0 )palette_escape_val_present_flagae(v)if( MaxPaletteIndex > 0 ) {num_palette_indices_ minus1ae(v)adjust = 0for( i = 0; i <= num_palette_indices_ minus1; i++ ) {if( MaxPaletteIndex − adjust > 0 ) {palette_idx_idcae(v)PaletteIndexIdc[ i ] = palette_idx_idc}adjust = 
1}copy_above_indices_for_final_run_flagae(v)palette_transpose_flagae(v)}if( treeType != DUAL_TREE_CHROMA && palette_escape_val_present_flag ) {if( cu_qp_delta_enabled_flag && !IsCuQpDeltaCoded ) {cu_qp_delta_absae(v)if( cu_qp_delta_abs )cu_qp_delta_sign_flagae(v)}}if( treeType != DUAL_TREE_LUMA && palette_escape_val_present_flag ) {if( cu_chroma_qp_offset_enabled_flag && !IsCuChromaQpOffsetCoded ) {cu_chroma_qp_offset_flagae(v)if( cu_chroma_qp_offset_flag )cu_chroma_qp_offset_idxae(v)}}remainingNumIndices = num_palette_indices_ minus1 + 1PaletteScanPos = 0log2CbWidth = Log2( cbWidth)log2CbHeight = Log2( cbHeight)while( PaletteScanPos < cbWidth*cbHeightt ) {xC = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 1 ]if( PaletteScanPos > 0 ) {xcPrev                                =x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]ycPrev                                =y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]}PaletteRunMinus1 = cbWidth * cbHeight − PaletteScanPos − 1RunToEnd = 1CopyAboveIndicesFlag[ xC ][ yC ] = 0if( MaxPaletteIndex > 0 )if( ( ( !palette_transpose_flag && yC > 0 ) ∥ ( palette_transpose_flag && xC > 0 ) )&& CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] = = 0 )if( remainingNumIndices > 0 && PaletteScanPos < cbWidth* cbHeight − 1 ) {copy_above_palette_indices_flagae(v)CopyAboveIndicesFlag[ xC ][ yC ] = copy_above_palette_indices_flag} else {if( PaletteScanPos = = cbWidth * cbHeight − 1 && remainingNumIndices > 0 )CopyAboveIndicesFlag[ xC ][ yC ] = 0elseCopyAboveIndicesFlag[ xC ][ yC ] = 1}if( CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {currNumIndices = num_palette_indices_ minus1 + 1 − remainingNumIndicesPaletteIndexMap[ xC ][ yC ] = PaletteIndexIdc[ currNumIndices ]}if( MaxPaletteIndex > 0 ) {if( CopyAboveIndicesFlag[ xC ][ yC ] = = 0 )remainingNumIndices − = 1if( remainingNumIndices > 0  ∥  CopyAboveIndicesFlag[ xC ][ yC ]   !=copy_above_indices_for_final_run_flag ) {PaletteMaxRunMinus1   =   cbWidth * cbHeight − PaletteScanPos − 1 −remainingNumIndices − copy_above_indices_for_final_run_flagRunToEnd = 0if( PaletteMaxRunMinus1 > 0 ) {palette_run_prefixae(v)if( ( palette_run_prefix  >  1  )  &&  ( PaletteMaxRunMinus1  !=( 1 << ( palette_run_prefix − 1 ) ) ) )palette_run_suffixae(v)}}}runPos = 0while ( runPos <= PaletteRunMinus1 ) {xR = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 0 ]yR = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 1 ]if( CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {CopyAboveIndicesFlag[ xR ][ yR ] = 0PaletteIndexMap[ xR ][ yR ] = PaletteIndexMap[ xC ][ yC ]} else {CopyAboveIndicesFlag[ xR ][ yR ] = 1if ( !palette_transpose_flag )PaletteIndexMap[ xR ][ yR ] = PaletteIndexMap[ xR ][ yR − 1 ]elsePaletteIndexMap[ xR ][ yR ] = PaletteIndexMap[ xR − 1 ][ yR ]}runPos++PaletteScanPos ++}}if( palette_escape_val_present_flag ) {for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )for( sPos = 0; sPos < cbWidth* cbHeight; sPos++ ) {xC = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ sPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ sPos ][ 1 ]if( PaletteIndexMap[ cIdx ][ xC ][ yC ] = = MaxPaletteIndex ) {palette_escape_valae(v)PaletteEscapeVal[ cIdx ][[ xC ][ yC ] = palette_escape_val}}}} 2.2.4 Palette Mode Semantics In the following semantics, the array indices x0, y0 specify the location (x0, y0) of the top-left luma 
sample of the considered coding block relative to the top-left luma sample of the picture. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The array index startComp specifies the first color component of the current palette table. startComp equal to 0 indicates the Y component; startComp equal to 1 indicates the Cb component; startComp equal to 2 indicates the Cr component. numComps specifies the number of color components in the current palette table. The predictor palette consists of palette entries from previous coding units that are used to predict the entries in the current palette. The variable PredictorPaletteSize[startComp] specifies the size of the predictor palette for the first color component of the current palette table startComp. PredictorPaletteSize is derived as specified in clause 8.4.5.3. The variable PalettePredictorEntryReuseFlags[i] equal to 1 specifies that the i-th entry in the predictor palette is reused in the current palette. PalettePredictorEntryReuseFlags[i] equal to 0 specifies that the i-th entry in the predictor palette is not an entry in the current palette. All elements of the array PalettePredictorEntryReuseFlags[i] are initialized to 0. palette_predictor_run is used to determine the number of zeros that precede a non-zero entry in the array PalettePredictorEntryReuseFlags. It is a requirement of bitstream conformance that the value of palette_predictor_run shall be in the range of 0 to (PredictorPaletteSize-predictorEntryIdx), inclusive, where predictorEntryIdx corresponds to the current position in the array PalettePredictorEntryReuseFlags. The variable NumPredictedPaletteEntries specifies the number of entries in the current palette that are reused from the predictor palette. The value of NumPredictedPaletteEntries shall be in the range of 0 to palette_max_size, inclusive. num_signalled_palette_entries specifies the number of entries in the current palette that are explicitly signalled for the first color component of the current palette table startComp. When num_signalled_palette_entries is not present, it is inferred to be equal to 0. The variable CurrentPaletteSize[startComp] specifies the size of the current palette for the first color component of the current palette table startComp and is derived as follows: CurrentPaletteSize[startComp]=NumPredictedPaletteEntries+num_signalled_palette_entries  (7-155) The value of CurrentPaletteSize[startComp] shall be in the range of 0 to palette_max_size, inclusive. new_palette_entries[cIdx][i] specifies the value for the i-th signalled palette entry for the color component cIdx. The variable PredictorPaletteEntries[cIdx][i] specifies the i-th element in the predictor palette for the color component cIdx. 
The variable CurrentPaletteEntries[cIdx][i] specifies the i-th element in the current palette for the color component cIdx and is derived as follows:
numPredictedPaletteEntries = 0
for( i = 0; i < PredictorPaletteSize[ startComp ]; i++ )
    if( PalettePredictorEntryReuseFlags[ i ] ) {
        for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )
            CurrentPaletteEntries[ cIdx ][ numPredictedPaletteEntries ] = PredictorPaletteEntries[ cIdx ][ i ]
        numPredictedPaletteEntries++
    }
for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )    (7-156)
    for( i = 0; i < num_signalled_palette_entries[ startComp ]; i++ )
        CurrentPaletteEntries[ cIdx ][ numPredictedPaletteEntries + i ] = new_palette_entries[ cIdx ][ i ]
palette_escape_val_present_flag equal to 1 specifies that the current coding unit contains at least one escape coded sample. palette_escape_val_present_flag equal to 0 specifies that there are no escape coded samples in the current coding unit. When not present, the value of palette_escape_val_present_flag is inferred to be equal to 1.
The variable MaxPaletteIndex specifies the maximum possible value for a palette index for the current coding unit. The value of MaxPaletteIndex is set equal to CurrentPaletteSize[startComp]−1+palette_escape_val_present_flag.
num_palette_indices_minus1 plus 1 is the number of palette indices explicitly signalled or inferred for the current block. When num_palette_indices_minus1 is not present, it is inferred to be equal to 0.
palette_idx_idc is an indication of an index to the palette table, CurrentPaletteEntries. The value of palette_idx_idc shall be in the range of 0 to MaxPaletteIndex, inclusive, for the first index in the block and in the range of 0 to (MaxPaletteIndex−1), inclusive, for the remaining indices in the block. When palette_idx_idc is not present, it is inferred to be equal to 0.
The variable PaletteIndexIdc[i] stores the i-th palette_idx_idc explicitly signalled or inferred. All elements of the array PaletteIndexIdc[i] are initialized to 0.
copy_above_indices_for_final_run_flag equal to 1 specifies that the palette indices of the last positions in the coding unit are copied from the palette indices in the row above if horizontal traverse scan is used or the palette indices in the left column if vertical traverse scan is used. copy_above_indices_for_final_run_flag equal to 0 specifies that the palette indices of the last positions in the coding unit are copied from PaletteIndexIdc[num_palette_indices_minus1]. When copy_above_indices_for_final_run_flag is not present, it is inferred to be equal to 0.
palette_transpose_flag equal to 1 specifies that vertical traverse scan is applied for scanning the indices for samples in the current coding unit. palette_transpose_flag equal to 0 specifies that horizontal traverse scan is applied for scanning the indices for samples in the current coding unit. When not present, the value of palette_transpose_flag is inferred to be equal to 0.
The array TraverseScanOrder specifies the scan order array for palette coding. TraverseScanOrder is assigned the horizontal scan order HorTravScanOrder if palette_transpose_flag is equal to 0 and TraverseScanOrder is assigned the vertical scan order VerTravScanOrder if palette_transpose_flag is equal to 1.
copy_above_palette_indices_flag equal to 1 specifies that the palette index is equal to the palette index at the same location in the row above if horizontal traverse scan is used or the same location in the left column if vertical traverse scan is used.
copy_above_palette_indices_flag equal to 0 specifies that an indication of the palette index of the sample is coded in the bitstream or inferred. The variable CopyAboveIndicesFlag[xC][yC] equal to 1 specifies that the palette index is copied from the palette index in the row above (horizontal scan) or left column (vertical scan). CopyAboveIndicesFlag[xC][yC] equal to 0 specifies that the palette index is explicitly coded in the bitstream or inferred. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The value of PaletteIndexMap[xC][yC] shall be in the range of 0 to (MaxPaletteIndex−1), inclusive. The variable PaletteIndexMap[xC][yC] specifies a palette index, which is an index to the array represented by CurrentPaletteEntries. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The value of PaletteIndexMap[xC][yC] shall be in the range of 0 to MaxPaletteIndex, inclusive. The variable adjustedRefPaletteIndex is derived as follows:

adjustedRefPaletteIndex = MaxPaletteIndex + 1
if( PaletteScanPos > 0 ) {
  xcPrev = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]
  ycPrev = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]
  if( CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] == 0 ) {
    adjustedRefPaletteIndex = PaletteIndexMap[ xcPrev ][ ycPrev ]      (7-157)
  } else {
    if( !palette_transpose_flag )
      adjustedRefPaletteIndex = PaletteIndexMap[ xC ][ yC − 1 ]
    else
      adjustedRefPaletteIndex = PaletteIndexMap[ xC − 1 ][ yC ]
  }
}

When CopyAboveIndicesFlag[xC][yC] is equal to 0, the variable CurrPaletteIndex is derived as follows:

if( CurrPaletteIndex >= adjustedRefPaletteIndex )
  CurrPaletteIndex++      (7-158)

palette_run_prefix, when present, specifies the prefix part in the binarization of PaletteRunMinus1. palette_run_suffix is used in the derivation of the variable PaletteRunMinus1. When not present, the value of palette_run_suffix is inferred to be equal to 0. When RunToEnd is equal to 0, the variable PaletteRunMinus1 is derived as follows:
If PaletteMaxRunMinus1 is equal to 0, PaletteRunMinus1 is set equal to 0.
Otherwise (PaletteMaxRunMinus1 is greater than 0) the following applies:
If palette_run_prefix is less than 2, the following applies:
PaletteRunMinus1 = palette_run_prefix      (7-159)
Otherwise (palette_run_prefix is greater than or equal to 2), the following applies:
PrefixOffset = 1 << ( palette_run_prefix − 1 )
PaletteRunMinus1 = PrefixOffset + palette_run_suffix      (7-160)
The variable PaletteRunMinus1 is used as follows:
If CopyAboveIndicesFlag[xC][yC] is equal to 0, PaletteRunMinus1 specifies the number of consecutive locations minus 1 with the same palette index.
Otherwise, if palette_transpose_flag is equal to 0, PaletteRunMinus1 specifies the number of consecutive locations minus 1 with the same palette index as used in the corresponding position in the row above.
Otherwise, PaletteRunMinus1 specifies the number of consecutive locations minus 1 with the same palette index as used in the corresponding position in the left column.
When RunToEnd is equal to 0, the variable PaletteMaxRunMinus1 represents the maximum possible value for PaletteRunMinus1 and it is a requirement of bitstream conformance that the value of PaletteMaxRunMinus1 shall be greater than or equal to 0. palette_escape_val specifies the quantized escape coded sample value for a component.
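The derivations (7-157) to (7-160) above may be summarized, for illustration only, by the following C sketch; the function names and the example values in main() are hypothetical and the sketch is not a normative restatement of the semantics.

#include <stdio.h>

/* (7-159)/(7-160): combine the decoded palette_run_prefix and palette_run_suffix. */
static int palette_run_minus1(int paletteMaxRunMinus1,
                              int palette_run_prefix,
                              int palette_run_suffix)
{
    if (paletteMaxRunMinus1 == 0)
        return 0;
    if (palette_run_prefix < 2)
        return palette_run_prefix;
    /* prefix >= 2: PrefixOffset = 1 << (prefix - 1) */
    return (1 << (palette_run_prefix - 1)) + palette_run_suffix;
}

/* (7-158): when CopyAboveIndicesFlag is 0, skip over the reference index. */
static int adjust_index(int currPaletteIndex, int adjustedRefPaletteIndex)
{
    if (currPaletteIndex >= adjustedRefPaletteIndex)
        currPaletteIndex++;
    return currPaletteIndex;
}

int main(void)
{
    /* e.g. prefix 3, suffix 2 gives (1 << 2) + 2 = 6 */
    printf("PaletteRunMinus1 = %d\n", palette_run_minus1(10, 3, 2));
    /* a decoded index of 2 with reference index 2 is adjusted to 3 */
    printf("CurrPaletteIndex = %d\n", adjust_index(2, 2));
    return 0;
}

For instance, a prefix of 3 and a suffix of 2 yield PaletteRunMinus1 equal to 6, and a decoded index equal to the reference index 2 is adjusted to 3.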
The variable PaletteEscapeVal[cIdx][xC][yC] specifies the escape value of a sample for which PaletteIndexMap[xC][yC] is equal to MaxPaletteIndex and palette_escape_val_present_flag is equal to 1. The array index cIdx specifies the color component. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. It is a requirement of bitstream conformance that PaletteEscapeVal[cIdx][xC][yC] shall be in the range of 0 to (1<<(BitDepthY+1))−1, inclusive, for cIdx equal to 0, and in the range of 0 to (1<<(BitDepthC+1))−1, inclusive, for cIdx not equal to 0. 1.1.1 Line-Based CG Palette Mode Line-based CG palette mode was adopted to VVC. In this method, each CU of palette mode is divided into multiple segments of m samples (m=16 in this test) based on the traverse scan mode. The encoding order for palette run coding in each segment is as follows: For each pixel, 1 context coded bin run_copy_flag=0 is signalled indicating if the pixel is of the same mode as the previous pixel, i.e., if the previous scanned pixel and the current pixel are both of run type COPY_ABOVE or if the previous scanned pixel and the current pixel are both of run type INDEX and the same index value. Otherwise, run_copy_flag=1 is signalled. If the pixel and the previous pixel are of different modes, one context coded bin copy_above_palette_indices_flag is signalled indicating the run type, i.e., INDEX or COPY_ABOVE, of the pixel. Same as the palette mode in VTM6.0, decoder does not have to parse run type if the sample is in the first row (horizontal traverse scan) or in the first column (vertical traverse scan) since the INDEX mode is used by default. Also, decoder does not have to parse run type if the previously parsed run type is COPY_ABOVE. After palette run coding of pixels in one segment, the index values (for INDEX mode) and quantized escape colors are bypass coded and grouped apart from encoding/parsing of context coded bins to improve throughput within each line CG. Since the index value is now coded/parsed after run coding, instead of processed before palette run coding as in VTM, encoder does not have to signal the number of index values num_palette_indices_minus1 and the last run type copy_above_indices_for_final_run_flag. The text of line-based CG palette mode in JVET-P0077 is shown as follows. 
Palette Coding Syntax palette_coding( x0, y0, cbWidth, cbHeight, startComp, numComps ) {DescriptorpalettePredictionFinished = 0NumPredictedPaletteEntries = 0for( predictorEntryIdx = 0; predictorEntryIdx < PredictorPaletteSize startComp &&!palettePredictionFinished &&NumPredictedPaletteEntries[ startComp ] < palette_max_size; predictorEntryIdx++ ) {palette_predictor_runae(v)if( palette_predictor_run != 1 ) {if( palette_predictor_run > 1 )predictorEntryIdx += palette_predictor_run − 1PalettePredictorEntryReuseFlags[ predictorEntryIdx ] = 1NumPredictedPaletteEntries++} elsepalettePredictionFinished = 1}if( NumPredictedPaletteEntries < palette_max_size )num_signalled_palette_entriesae(v)for( cIdx = startComp; cIdx < ( startComp + numComps); cIdx++ )for( i = 0; i < num_signalled_palette_entries; i++ )new_palette_entries[ cIdx ][ i ]ae(v)if( CurrentPaletteSize[ startComp ] > 0 )palette_escape_val_present_flagae(v)if( MaxPaletteIndex > 0 ) {adjust = 0palette_transpose_flagae(v)}if( treeType != DUAL_TREE_CHROMA && palette_escape_val_present_flag ) {if( cu_qp_delta_enabled_flag && !IsCuQpDeltaCoded ) {cu_qp_delta_absae(v)if( cu_qp_delta_abs )cu_qp_delta_sign_flagae(v)}}if( treeType != DUAL_TREE_LUMA && palette_escape_val_present_flag ) {if( cu_chroma_qp_offset_enabled_flag && !IsCuChromaQpOffsetCoded ) {cu_chroma_qp_offset_flagae(v)if( cu_chroma_qp_offset_flag )cu_chroma_qp_offset_idxae(v)}}PreviousRunTypePosition = 0PreviousRunType = 0for (subSetId = 0; subSetId <= (cbWidth* cbHeight − 1) >> 4; subSetId++) {minSubPos = subSetId << 4if( minSubPos + 16 > cbWidth * cbHeight)maxSubPos = cbWidth * cbHeightelsemaxSubPos = minSubPos + 16RunCopyMap[ 0 ][ 0 ] = 0log2CbWidth = Log2( cbWidth )log2CbHeight = Log2( cbHeight )PaletteScanPos = minSubPoswhile( PaletteScanPos < maxSubPos ) {xC = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 1 ]if( PaletteScanPos > 0 ) {xcPrev =x0 +TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]ycPrev =y0 +TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]}if ( MaxPaletteIndex > 0 && PaletteScanPos > 0) {run_copy_flagae(v)RunCopyMap[ xC ][ yC ] = run_copy_flag}CopyAboveIndicesFlag[ xC ][ yC ] = 0if( MaxPaletteIndex > 0 && ! RunCopyMap[startComp][xC][yC] ) {if( ( ( !palette_transpose_flag && yC > 0 ) ∥ ( palette_transpose_flag && xC >0 ) )&& CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] = = 0 ) {copy_above_palette_indices_flagae(v)CopyAboveIndicesFlag[ xC ][ yC ] = copy_above_palette_indices_flag}PreviousRunType = CopyAboveIndicesFlag[ xC ][ yC ]PreviousRunTypePosition = curPos} else {CopyAboveIndicesFlag[ xC ][ yC ] = CopyAboveIndicesFlag[xcPrev][ ycPrev]}}PaletteScanPos ++}PaletteScanPos = minSubPoswhile( PaletteScanPos < maxSubPos ) {xC =x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 0 ]yC =y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 1 ]if( PaletteScanPos > 0 ) {xcPrev =x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]ycPrev =y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]}if ( MaxPaletteIndex > 0 ) {if ( ! RunCopyMap[ x C][ yC ] && CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {if( MaxPaletteIndex − adjust > 0 ) {palette_idx_idcae(v)}adjust = 1}}if ( ! 
RunCopyMap[ xC][ yC ] && CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {CurrPaletteIndex = palette_idx_idcif( CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {PaletteIndexMap[ xC ][ yC ] = CurrPaletteIndex} else {if ( !palette_transpose_flag )PaletteIndexMap[ xC ][ yC ] = PaletteIndexMap[ xC ][ yC − 1 ]elsePaletteIndexMap[ xC ][ yC ] = PaletteIndexMap[ xC − 1 ][ yC ]}}if( palette_escape_val_present_flag ) {for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )for( sPos = minSubPos ; sPos < maxSubPos; sPos++ ) {xC = x0 + TraverseScanOrder[ log2CbWidth][ log2CbHeight ][ sPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth][ log2CbHeight ][ sPos ][ 1 ]if( PaletteIndexMap[ cIdx ][ xC ][ yC ] = = MaxPaletteIndex ) {palette_escape_valae(v)PaletteEscapeVal[ cIdx ][ xC ][ yC ] = palette_escape_val}}}}} 7.4.9.6. Palette Coding Semantics In the following semantics, the array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered coding block relative to the top-left luma sample of the picture. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The array index startComp specifies the first color component of the current palette table. startComp equal to 0 indicates the Y component; startComp equal to 1 indicates the Cb component; startComp equal to 2 indicates the Cr component. numComps specifies the number of color components in the current palette table. The predictor palette consists of palette entries from previous coding units that are used to predict the entries in the current palette. The variable PredictorPaletteSize[startComp] specifies the size of the predictor palette for the first color component of the current palette table startComp. PredictorPaletteSize is derived as specified in clause 8.4.5.3. The variable PalettePredictorEntryReuseFlags[i] equal to 1 specifies that the i-th entry in the predictor palette is reused in the current palette. PalettePredictorEntryReuseFlags[i] equal to 0 specifies that the i-th entry in the predictor palette is not an entry in the current palette. All elements of the array PalettePredictorEntryReuseFlags[i] are initialized to 0. palette_predictor_run is used to determine the number of zeros that precede a non-zero entry in the array PalettePredictorEntryReuseFlags. It is a requirement of bitstream conformance that the value of palette_predictor_run shall be in the range of 0 to (PredictorPaletteSize-predictorEntryIdx), inclusive, where predictorEntryIdx corresponds to the current position in the array PalettePredictorEntryReuseFlags. The variable NumPredictedPaletteEntries specifies the number of entries in the current palette that are reused from the predictor palette. The value of NumPredictedPaletteEntries shall be in the range of 0 to palette_max_size, inclusive. num_signalled_palette_entries specifies the number of entries in the current palette that are explicitly signalled for the first color component of the current palette table startComp. When num_signalled_palette_entries is not present, it is inferred to be equal to 0. The variable CurrentPaletteSize[startComp] specifies the size of the current palette for the first color component of the current palette table startComp and is derived as follows: CurrentPaletteSize[startComp]=NumPredictedPaletteEntries+num_signalled_palette_entries   (7-155) The value of CurrentPaletteSize[startComp] shall be in the range of 0 to palette_max_size, inclusive. 
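For illustration only, the following minimal C sketch shows how a decoder might assemble the current palette from the reused predictor entries and the explicitly signalled entries, corresponding to equation (7-155) above and derivation (7-156) quoted below; the component count of 3 and the array bounds are example assumptions rather than normative values.

#include <stdio.h>

#define MAX_PALETTE_SIZE 63
#define NUM_COMPS        3    /* assumption: Y, Cb and Cr components */

/* Illustrative sketch mirroring derivation (7-156) quoted below:
 * copy the reused predictor entries first, then append the signalled entries. */
static int build_current_palette(int predictorPaletteSize,
                                 const int reuseFlags[],
                                 int predictor[NUM_COMPS][MAX_PALETTE_SIZE],
                                 int numSignalled,
                                 int newEntries[NUM_COMPS][MAX_PALETTE_SIZE],
                                 int current[NUM_COMPS][MAX_PALETTE_SIZE])
{
    int numPredicted = 0;

    /* copy the reused predictor entries, in predictor order */
    for (int i = 0; i < predictorPaletteSize; i++) {
        if (reuseFlags[i]) {
            for (int c = 0; c < NUM_COMPS; c++)
                current[c][numPredicted] = predictor[c][i];
            numPredicted++;
        }
    }

    /* append the explicitly signalled entries */
    for (int c = 0; c < NUM_COMPS; c++)
        for (int i = 0; i < numSignalled; i++)
            current[c][numPredicted + i] = newEntries[c][i];

    /* CurrentPaletteSize = NumPredictedPaletteEntries + num_signalled_palette_entries, cf. (7-155) */
    return numPredicted + numSignalled;
}

int main(void)
{
    int reuse[MAX_PALETTE_SIZE] = { 1, 0, 1 };   /* reuse predictor entries 0 and 2 */
    int pred[NUM_COMPS][MAX_PALETTE_SIZE] = { { 10, 20, 30 }, { 40, 50, 60 }, { 70, 80, 90 } };
    int sig[NUM_COMPS][MAX_PALETTE_SIZE]  = { { 11 }, { 41 }, { 71 } };
    int cur[NUM_COMPS][MAX_PALETTE_SIZE];
    int size = build_current_palette(3, reuse, pred, 1, sig, cur);
    printf("CurrentPaletteSize = %d\n", size);   /* 2 predicted + 1 signalled = 3 */
    return 0;
}

The returned value corresponds to CurrentPaletteSize[startComp]; in the example in main(), two predictor entries are reused and one new entry is signalled, giving a current palette size of 3.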
new_palette_entries[cIdx][i] specifies the value for the i-th signalled palette entry for the color component cIdx. The variable PredictorPaletteEntries[cIdx][i] specifies the i-th element in the predictor palette for the color component cIdx. The variable CurrentPaletteEntries[cIdx][i] specifies the i-th element in the current palette for the color component cIdx and is derived as follows:

numPredictedPaletteEntries = 0
for( i = 0; i < PredictorPaletteSize[ startComp ]; i++ )
  if( PalettePredictorEntryReuseFlags[ i ] ) {
    for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )
      CurrentPaletteEntries[ cIdx ][ numPredictedPaletteEntries ] = PredictorPaletteEntries[ cIdx ][ i ]
    numPredictedPaletteEntries++
  }
for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )      (7-156)
  for( i = 0; i < num_signalled_palette_entries[ startComp ]; i++ )
    CurrentPaletteEntries[ cIdx ][ numPredictedPaletteEntries + i ] = new_palette_entries[ cIdx ][ i ]

palette_escape_val_present_flag equal to 1 specifies that the current coding unit contains at least one escape coded sample. palette_escape_val_present_flag equal to 0 specifies that there are no escape coded samples in the current coding unit. When not present, the value of palette_escape_val_present_flag is inferred to be equal to 1. The variable MaxPaletteIndex specifies the maximum possible value for a palette index for the current coding unit. The value of MaxPaletteIndex is set equal to CurrentPaletteSize[startComp]−1+palette_escape_val_present_flag. palette_idx_idc is an indication of an index to the palette table, CurrentPaletteEntries. The value of palette_idx_idc shall be in the range of 0 to MaxPaletteIndex, inclusive, for the first index in the block and in the range of 0 to (MaxPaletteIndex−1), inclusive, for the remaining indices in the block. When palette_idx_idc is not present, it is inferred to be equal to 0. palette_transpose_flag equal to 1 specifies that vertical traverse scan is applied for scanning the indices for samples in the current coding unit. palette_transpose_flag equal to 0 specifies that horizontal traverse scan is applied for scanning the indices for samples in the current coding unit. When not present, the value of palette_transpose_flag is inferred to be equal to 0. The array TraverseScanOrder specifies the scan order array for palette coding. TraverseScanOrder is assigned the horizontal scan order HorTravScanOrder if palette_transpose_flag is equal to 0 and TraverseScanOrder is assigned the vertical scan order VerTravScanOrder if palette_transpose_flag is equal to 1. run_copy_flag equal to 1 specifies that the palette run type is the same as the run type at the previously scanned position and that the palette index is the same as the index at the previous position if copy_above_palette_indices_flag is equal to 0. Otherwise, run_copy_flag is equal to 0. copy_above_palette_indices_flag equal to 1 specifies that the palette index is equal to the palette index at the same location in the row above if horizontal traverse scan is used or the same location in the left column if vertical traverse scan is used. copy_above_palette_indices_flag equal to 0 specifies that an indication of the palette index of the sample is coded in the bitstream or inferred. The variable CopyAboveIndicesFlag[xC][yC] equal to 1 specifies that the palette index is copied from the palette index in the row above (horizontal scan) or left column (vertical scan).
CopyAboveIndicesFlag[xC][yC] equal to 0 specifies that the palette index is explicitly coded in the bitstream or inferred. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The variable PaletteIndexMap[xC][yC] specifies a palette index, which is an index to the array represented by CurrentPaletteEntries. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. The value of PaletteIndexMap[xC][yC] shall be in the range of 0 to MaxPaletteIndex, inclusive. The variable adjustedRefPaletteIndex is derived as follows:

adjustedRefPaletteIndex = MaxPaletteIndex + 1
if( PaletteScanPos > 0 ) {
  xcPrev = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]
  ycPrev = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]
  if( CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] == 0 ) {
    adjustedRefPaletteIndex = PaletteIndexMap[ xcPrev ][ ycPrev ]      (7-157)
  } else {
    if( !palette_transpose_flag )
      adjustedRefPaletteIndex = PaletteIndexMap[ xC ][ yC − 1 ]
    else
      adjustedRefPaletteIndex = PaletteIndexMap[ xC − 1 ][ yC ]
  }
}

When CopyAboveIndicesFlag[xC][yC] is equal to 0, the variable CurrPaletteIndex is derived as follows:

if( CurrPaletteIndex >= adjustedRefPaletteIndex )
  CurrPaletteIndex++      (7-158)

palette_escape_val specifies the quantized escape coded sample value for a component. The variable PaletteEscapeVal[cIdx][xC][yC] specifies the escape value of a sample for which PaletteIndexMap[xC][yC] is equal to MaxPaletteIndex and palette_escape_val_present_flag is equal to 1. The array index cIdx specifies the color component. The array indices xC, yC specify the location (xC, yC) of the sample relative to the top-left luma sample of the picture. It is a requirement of bitstream conformance that PaletteEscapeVal[cIdx][xC][yC] shall be in the range of 0 to (1<<(BitDepthY+1))−1, inclusive, for cIdx equal to 0, and in the range of 0 to (1<<(BitDepthC+1))−1, inclusive, for cIdx not equal to 0. 2.3 Local Dual Tree in VVC In typical hardware video encoders and decoders, processing throughput drops when a picture has more small intra blocks because of sample processing data dependency between neighboring intra blocks. The predictor generation of an intra block requires top and left boundary reconstructed samples from neighboring blocks. Therefore, intra prediction has to be sequentially processed block by block. In HEVC, the smallest intra CU is 8×8 luma samples. The luma component of the smallest intra CU can be further split into four 4×4 luma intra prediction units (PUs), but the chroma components of the smallest intra CU cannot be further split. Therefore, the worst-case hardware processing throughput occurs when 4×4 chroma intra blocks or 4×4 luma intra blocks are processed. In VTM5.0, in single coding tree, since chroma partitions always follow luma and the smallest intra CU is 4×4 luma samples, the smallest chroma intra CB in single coding tree is 2×2. The worst-case hardware processing throughput for VVC decoding is only ¼ of that for HEVC decoding. Moreover, the reconstruction process of a chroma intra CB becomes much more complex than that in HEVC after adopting tools including cross-component linear model (CCLM), 4-tap interpolation filters, position-dependent intra prediction combination (PDPC), and combined inter intra prediction (CIIP).
It is challenging to achieve high processing throughput in hardware decoders. In this section, a method that improves the worst-case hardware processing throughput is proposed. The goal of this method is to disallow chroma intra CBs smaller than 16 chroma samples by constraining the partitioning of chroma intra CBs. In single coding tree, a SCIPU is defined as a coding tree node whose chroma block size is larger than or equal to a threshold number (TH) of chroma samples and has at least one child luma block smaller than 4TH luma samples, where TH is set to 16 in this contribution. It is required that in each SCIPU, all CBs are inter, or all CBs are non-inter, i.e., either intra or intra block copy (IBC). In case of a non-inter SCIPU, it is further required that chroma of the non-inter SCIPU shall not be further split and luma of the SCIPU is allowed to be further split. In this way, the smallest chroma intra CB size is 16 chroma samples, and 2×2, 2×4, and 4×2 chroma CBs are removed. In addition, chroma scaling is not applied in case of a non-inter SCIPU. In addition, when luma blocks are further split and chroma blocks are not split, a local dual tree coding structure is constructed. Two SCIPU examples are shown inFIGS.5A and5B. InFIG.5A, one chroma CB of 8×4 chroma samples and three luma CBs (4×8, 8×8, 4×8 luma CBs) form one SCIPU because the ternary tree (TT) split from the 8×4 chroma samples would result in chroma CBs smaller than 16 chroma samples. InFIG.5B, one chroma CB of 4×4 chroma samples (the left side of the 8×4 chroma samples) and three luma CBs (8×4, 4×4, 4×4 luma CBs) form one SCIPU, and the other one chroma CB of 4×4 samples (the right side of the 8×4 chroma samples) and two luma CBs (8×4, 8×4 luma CBs) form one SCIPU because the binary tree (BT) split from the 4×4 chroma samples would result in chroma CBs smaller than 16 chroma samples. In the proposed method, the type of a SCIPU is inferred to be non-inter if the current slice is an I-slice or the current SCIPU has a 4×4 luma partition in it after further split one time (because no inter 4×4 is allowed in VVC); otherwise, the type of the SCIPU (inter or non-inter) is indicated by one signalled flag before parsing the CUs in the SCIPU. By applying the above method, the worst-case hardware processing throughput occurs when 4×4, 2×8, or 8×2 chroma blocks, instead of a 2×2 chroma blocks, are processed. The worst-case hardware processing throughput is the same as that in HEVC and is 4× of that in VTM5.0. 2.4 Transform Skip (TS) As in HEVC, the residual of a block can be coded with transform skip mode. To avoid the redundancy of syntax coding, the transform skip flag is not signalled when the CU level MTS_CU_flag is not equal to zero. The block size limitation for transform skip is the same to that for multiple transform set (MTS) in JEM4, which indicates that transform skip is applicable for a CU when both block width and height are equal to or less than 32. Note that implicit MTS transform is set to DCT2 when low frequency non-separable transform (LFNST) or matrix-based intra prediction (MIP) is activated for the current CU. Also, the implicit MTS can be still enabled when MTS is enabled for inter coded blocks. In addition, for transform skip block, minimum allowed Quantization Parameter (QP) is defined as 6*(internalBitDepth−inputBitDepth)+4. 2.5 Alternative Luma Half-Pel Interpolation Filters In JVET-N0309, alternative half-pel interpolation filters are proposed. 
The switching of the half-pel luma interpolation filter is done depending on the motion vector accuracy. In addition to the existing quarter-pel, full-pel, and 4-pel adaptive motion vector resolution (AMVR) modes, a new half-pel accuracy AMVR mode is introduced. Only in case of half-pel motion vector accuracy can an alternative half-pel luma interpolation filter be selected. For a non-affine non-merge inter-coded CU which uses half-pel motion vector accuracy (i.e., the half-pel AMVR mode), a switching between the HEVC/VVC half-pel luma interpolation filter and one or more alternative half-pel interpolation filters is made based on the value of a new syntax element hpelIfIdx. The syntax element hpelIfIdx is only signalled in case of half-pel AMVR mode. In case of skip/merge mode using a spatial merging candidate, the value of the syntax element hpelIfIdx is inherited from the neighboring block. 2.6 Adaptive Color Transform (ACT) FIG. 6 illustrates the decoding flowchart with the ACT applied. As illustrated in FIG. 6, the color space conversion is carried out in the residual domain. Specifically, one additional decoding module, namely inverse ACT, is introduced after the inverse transform to convert the residuals from the YCgCo domain back to the original domain. In VVC, unless the maximum transform size is smaller than the width or height of one coding unit (CU), one CU leaf node is also used as the unit of transform processing. Therefore, in the proposed implementation, the ACT flag is signalled for one CU to select the color space for coding its residuals. Additionally, following the HEVC ACT design, for inter and IBC CUs, the ACT is only enabled when there is at least one non-zero coefficient in the CU. For intra CUs, the ACT is only enabled when the chroma components select the same intra prediction mode as the luma component, i.e., DM mode. The core transforms used for the color space conversions are kept the same as those used for HEVC. Specifically, the following forward and inverse YCgCo color transforms are applied:

Forward transform:
C0′ = ( 2*C0 + C1 + C2 ) / 4
C1′ = ( 2*C0 − C1 − C2 ) / 4
C2′ = ( −2*C1 + 2*C2 ) / 4

Inverse transform:
C0 = C0′ + C1′
C1 = C0′ − C1′ − C2′
C2 = C0′ − C1′ + C2′

Additionally, to compensate for the dynamic range change of the residual signals before and after the color transform, the QP adjustments of (−5, −5, −3) are applied to the transform residuals. On the other hand, the forward and inverse color transforms need to access the residuals of all three components. Correspondingly, in the proposed implementation, the ACT is disabled in the following two scenarios where not all residuals of the three components are available. 1. Separate-tree partition: when separate-tree is applied, luma and chroma samples inside one CTU are partitioned by different structures. As a result, the CUs in the luma-tree contain only the luma component and the CUs in the chroma-tree contain only the two chroma components. 2. Intra sub-partition prediction (ISP): the ISP sub-partitions are only applied to luma, while chroma signals are coded without splitting. In the current ISP design, except for the last ISP sub-partition, the other sub-partitions contain only the luma component. 3. TECHNICAL PROBLEMS SOLVED BY TECHNICAL SOLUTIONS AND EMBODIMENTS DESCRIBED HEREIN 1. The current binarization of escape symbols is not fixed length, although fixed-length coding may be suitable for a source with a uniform distribution. 2. The current palette coding design performs an index adjustment process to remove possible redundancy, which may introduce parsing dependency, e.g., when an escape value index is wrongly derived. 3.
The reference index employed to derive the current index may need an encoder constraint which is not considered in the current design and not desirable for a codec design. 4. When local dual tree is enabled, previous block and current block's palette entries may have different number of color components. How to handle such a case is not clear. 5. The local dual tree and PLT could not be applied simultaneously since some palette entries may be repeated when coding from a single tree region to a dual tree region. One example is shown inFIG.7. 6. Chroma QP table for joint_cbcr mode may be restricted. 7. Escape samples may be redundant under certain conditions. 8. The line-based CG mode could not be processed with a high throughput. 4. A LISTING OF EMBODIMENTS AND SOLUTIONS The list below should be considered as examples to explain general concepts. These items should not be interpreted in a narrow way. Furthermore, these items can be combined in any manner. The following examples may be applied on palette scheme in VVC and all other palette related schemes. In the following bullets, Qp may denote the qP in section 8.4.5.3 in JVET-P2001-vE. In the following bullets, QpPrimeTsMin is the minimum allowed quantization parameter for transform skip mode. Modulo(x, M) is defined as (x % M) when x is a positive integer; otherwise, it is defined as M−((−x) % M). In the following, a block coded in lossless mode may mean that a block is coded with tranquant_bypass_flag is equal to 1; or coded with QP is no greater than a given threshold and transform_skip_flag is equal to 1. The following examples may be applied on palette scheme in VVC and all other palette related schemes.1. Fixed-length coding may be applied to code escape symbols.a. In one example, escape symbols may be signalled with fixed length binarization.b. In one example, an escape symbol may be signalled in fixed length binarization using N bits.c. In one example, the code length (e.g., N mentioned in bullet 1.b) to signal an escape symbol may depend on internal bit depth.i. Alternatively, the code length to signal an escape symbol may depend on input bit depth.ii. Alternatively, the code length to signal an escape symbol may depend on the difference between internal bit depth and input bit depth.iii. In one example N is set equal to input/internal bit depth.d. In one example, the length of the fixed-length coding may be signalled in a video processing unit level, e.g., slice subpicture, tile, picture, video.e. In one example, the code length to signal an escape symbol (e.g., N mentioned in bullet 1.b) may depend on the quantization parameter, i.e., Qp.i. In one example, the code length for signalling an escape symbol may be a function of quantization parameter, such as denoted by f(Qp).1. In one example, the function f may be defined as (internal bitdepth−g(Qp)).2. In one example, N may be set to (internal bitdepth−max (16, (Qp−4)/6)).3. In one example, N may be set to (internal bitdepth−max (QpPrimeTsMin, (Qp−4)/6)), wherein qP is the decoded quantization parameter and QpPrimeTsMin is the minimum allowed quantization parameter for transform skip mode.4. Alternatively, furthermore, the code length N may be set to max(A, internal bitDepth−(Max(QpPrimeTsMin, Qp)−4)/6) wherein A is non-negative integer value, such as 0 or 1.ii. Qp mentioned in the above sub-bullet may refer to slice QP.1. Alternatively, Qp may refer to slice QP plus a constant value.f. In the above examples, N may be greater than or equal to 0.2. 
Dequantization Qp for escape symbols may be based on slice/picture/PPS level Qp.a. In one example, dequantization Qp for escape symbols may be based on slice/picture/PPS level Qp plus a given offset.i. The offset may be a constant.ii. The offset may be indicated, implicitly or explicitly, in bitstreams.b. In one example, block-level Qp difference may be skipped in the bitstream.i. In one example, cbf may be inferred as 0.3. A left shift may be applied before dequantization for escape symbols.a. In one example, N bits' left shift (N>=0) may be applied before dequantization.i. In one example, N may be equal to Min(bitDepth−1, (QpPrimeTsMin−4)/6), where bitDepth is internal bitdepth, where bitDepth is internal bitdepth.ii. Alternatively, N may be equal to bitDepth−inputBD, where inputBD is input bitdepth.1. In one example, inputBD may be indicated in the bitstream.iii. Alternatively, N may be equal to deltaBD, where deltaBD is indicated in the bitstream.4. Escape symbol dequantization may depend on (Qp−QpPrimeTsMin).a. In one example, (Qp−QpPrimeTsMin+4) may be applied for escape symbol dequantization as the dequantization Qp.b. In one example, Min(Qp−QpPrimeTsMin+4, 63+QpBdOffset) may be applied for escape symbol dequantization as the dequantization Qp.5. Escape symbol dequantization may depend on (Qp−N*6).a. In one example, N may refer to the number of left shifting in bullet 3.a.b. In one example, Max(0, Qp−N*6) may be applied as dequantization Qp.6. Escape symbol dequantization may depend on deltaBD, i.e., the difference between internal bit depth and input bit depth.a. In one example, (Qp−deltaBD*6) may be applied for escape symbol dequantization as the dequantization Qp.b. In one example, Min(Max(0, Qp−deltaBD*6), 63+QpBdOffset) may be applied for escape symbol dequantization as the dequantization Qp.7. It is proposed to disable the usage of escape symbols in one video unit (e.g., a CU).a. Alternatively, furthermore, the signalling of indication of escape symbol presence is skipped.b. In one example, whether to enable/disable the usage of escape symbols may depend on the quantization parameters and/or bit depth.i. In one example, if (internal bitDepth−(Max(QpPrimeTsMin, Qp)−4)/6) is no greater than 0, the usage of escape symbols may be disabled.8. Variable length coding excluding EG with 3rdorder may be applied to code escape symbols.a. In one example, the binarization of an escape symbol may be truncated binary (TB) with an input parameter K.b. In one example, the binarization of an escape symbol may be EG with Kth order wherein K is unequal to 3.i. In one example, the binarization of an escape symbol may be EG with 0th order.1. Alternatively, in one example, the binarization of an escape symbol may be EG with 1st order.2. Alternatively, in one example, the binarization of an escape symbol may be EG with 2nd order.c. In above examples, K may be an integer number and may depend oni. A message signalled in the SPS/video parameter set (VPS)/PPS/picture header/slice header/tile group header/largest coding unit (LCU) row/group of LCUs/bricks.ii. Internal bit depthiii. Input bit depthiv. Difference between internal bit depth and input depthv. Block dimension of current blockvi. Current quantization parameter of current blockvii. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)viii. Coding structure (such as single tree or dual tree)ix. Color component (such as luma component and/or chroma components)9. 
Multiple binarization methods for coding escape symbols may be applied to a video unit (e.g., a sequence/picture/slice/tile/brick/subpicture/CTU row/CTU/coding tree block (CTB)/CB/CU/a sub-region within a picture) and/or for one or multiple values of escape symbols.a. In one example, how to select one of the multiple binarization methods may be signalled for the video unit and/or for one or multiple values of escape symbols.b. In one example, how to select one of the multiple binarization methods may be derived for the video unit and/or for one or multiple values of escape symbols.c. In one example, for one video unit and/or for one or multiple values of escape symbols, two or more binarization methods may be applied.i. In one example, an index or a flag may be encoded/decoded to tell the selected binarization method. In the following bullets, p may denote the symbol value of a color component, bd may denote bit-depth (e.g., the internal bit depth or input bit depth), ibd may denote input bit depth, and Qp may denote the quantization parameter for transform skip blocks or transform blocks. In addition, QPs for luma component and chroma component may be different or same. Bit depth may be associated with a given color component.10. How to apply the quantization and/or inverse quantization process may depend on whether the block is coded with palette mode or not.a. In one example, the quantization and/or inverse quantization process for escape symbols may be different from those used for normal intra/inter coded blocks with quantization applied.11. The quantization and/or inverse quantization process for escape symbols may use bit-shifting.a. In one example, right bit-shifting may be used for quantizing escape symbols.i. In one example, the escape symbol may be signalled as f (p, Qp) wherein p is the input symbol value (e.g., input luma/chroma sample value), and Qp is the derived quantization parameter for the corresponding color component.1. In one example, the function f may be defined as p>>g(Qp).2. In one example, the function f may be defined as (p+(1<<(g(QP)−1)))>>g(Qp).3. In one example, the function f may be defined as (0, (1<<bd)−1, (p+(1<<(g(QP)−1)))>>g(Qp)).ii. In one example, the escape symbol may be signalled as h(p).1. In one example, the function h may be defined as p>>N.2. In one example, the function h may be defined as (p+(1<<(N−1)))>>N.3. In one example, when cu_transquant_bypass_flag is equal to 1, N may be set to 0.4. In one example, when cu_transquant_bypass_flag is equal to 1, N may be equal to (bd-ibd), where bd is internal bit-depth and ibd is input bit-depth.5. In one example, the function h may be defined as clip(0, (1<<(bd−N)−1, p>>N), where bd is the internal bit depth for the current color component.6. In one example, the function h may be defined as clip(0, (1<<(bd−N)−1, (p+(1<<(N−1)))>>N), where bd is the internal bit depth for the current color component.7. In the above example, N may be in the range of [0, (bd−1)].b. In one example, left bit-shifting may be used for inverse quantizing escape symbols.i. In one example, the escape symbol may be dequantized as f(p, Qp), where p is the decoded escape symbol, and Qp is the derived quantization parameter for the corresponding color component.1. In one example, f may be defined as p<<g(Qp).2. In one example, f may be defined as (p<<g(Qp))+(1<<(g(Qp)−1)).ii. In one example, the escape symbol may be reconstructed as f(p, Qp), where p is the decoded escape symbol.1. 
In one example, f may be defined as clip (0, (1<<bd)−1, p<<g(Qp)).2. In one example, f may be defined as clip (0, (1<<bd)−1, (p<<g(Qp))+(1<<(g(Qp)−1))).iii. In one example, the escape symbol may be reconstructed as h(p).1. In one example, the function h may be defined as p<<N.2. In one example, the function h may be defined as (p<<N)+(1<<(N−1)).3. In one example, when cu_transquant_bypass_flag is equal to 1, N may be set to 0.4. In one example, when cu_transquant_bypass_flag is equal to 1, N may be equal to (bd-ibd), where bd is internal bit-depth and ibd is input bit-depth.5. In one example, N is set to (max (QpPrimeTsMin, qP)−4)/6, wherein qP is the decoded quantization parameter and QpPrimeTsMin is the minimum allowed quantization parameter for transform skip mode.a) In the above example, if both luma and chroma have transform skip modes, different minimum allowed quantization parameters for transform skip mode may be applied for different color components.6. Alternatively, for the above examples, N may be further clipped, such as min(bd−1, N).7. In the above example, N may be in the range of [0, (bd−1)].12. When applying left-shift as dequantization, reconstruction offset of an escape symbol p may depend on bitdepth information.a. In one example, it may be dependent on the difference between internal bitdepth and input bitdepth, i.e. deltaBD=internal bitdepth−input bitdepth.b. When K is smaller or equal to deltaBD, the reconstructed value may be p<<K.c. When K is larger than deltaBD, the reconstruction value may be (p<<K)+(1<<(K−1)).d. When K is smaller or equal to T0 (e.g., T0=2), the reconstructed value may be p<<K.e. When K is larger than T1 (e.g., T1=2), the reconstruction value may be (p<<K)+(1<<(K−1)).f. In one example, T0 and T1 in bullet d and e may be signalled in the bitstream, such as in sequence/picture/slice/tile/brick/subpicture-level.g. In one example, the reconstruction value may be (p<<K)+((1<<(K−1))>>deltaBD<<deltaBD).h. In one example, the reconstruction value may be ((p<<(K+1))+(1<<K))>>(deltaBD+1)<<deltaBD.i. In one example, the deltaBD may be signalled in the bitstream, such as in sequence/picture/slice/tile/brick/subpicture-level.j. In one example, which reconstruction value shall be used (e.g., bullets b to e) may depend on the quantization parameter of current block.k. In one example, which reconstruction value shall be used (e.g., bullets b to e) may depend on the value of deltaBD.l. In one example, K may be set to g(Qp).13. In the above examples, the following may apply:a. In one example, the escape symbols may be context coded.b. In one example, the escape symbols may be bypass coded.c. In one example, g(Qp) may be defined as (Qp−4)/6 or QP/8.i. Alternatively, g(Qp) may be defined as Qp/6 or QP/8.ii. Alternatively, g(Qp) may be defined as max (16, Qp/6)).iii. Alternatively, g(Qp) may be defined as max (16, (Qp−4)/6).iv. Alternatively, g(Qp) may be defined as max ((bd−ibd)*6+4, (Qp−4)/6).v. Alternatively, g(Qp) may be defined as max (M, (Qp−4)/6).1. In one example, M may be signalled to the decoder.vi. Alternatively, g(Qp) may be defined as max ((M, Qp)−4)/6.1. In one example, M may be indicated in the SPS.2. In one example, same or different M may be applied on luma and chroma components.3. In one example, M may be equal to (bd−ibd)*6+4.vii. Alternatively, g(Qp) may be defined as Qp/6 or QP/8.viii. Alternatively, g(Qp) may be defined as (max (16, Qp)/6).ix. Alternatively, g(Qp) may be defined as (max (16, Qp)−4)/6.d. 
In one example, the value of g(Qp) may be in the range of [0, (bd−1)].e. In one example, the max function max (a,i) may be defined as (i<=a ? a: i).i. Alternatively, in one example, the max function max (a,i) may be defined as (i<a ? a: i).f. In one example, N may be an integer number (e.g., 8 or 10) and may depend on:i. A message signalled in the SPS/VPS/PPS/picture header/slice header/tile group header/LCU row/group of LCUs/bricks.ii. Internal bit depthiii. Input bit depthiv. Difference between internal bit depth and input depthv. Block dimension of current blockvi. Current quantization parameter of current blockvii. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)viii. Coding structure (such as single tree or dual tree)ix. Color component (such as luma component and/or chroma components)x. Slice/tile group type and/or picture typeg. In one example, N may be signalled to the decoder.14. Qp for escape values may be clipped.a. In one example, the lowest Qp applied to escape values may be equal to min_qp_prime_ts_minus4.b. In one example, the lowest Qp applied to escape values may be related to min_qp_prime_ts_minus4.i. In one example, the lowest Qp applied to escape values may be equal to min_qp_prime_ts_minus4+4.c. In one example, the lowest Qp for each color component may be indicated in the SPS/PPS/VPS/dependency parameter set (DPS)/Tile/Slice header.d. In one example, the lowest Qp applied to escape values may be (bd−ibd)*6+4, where bd is the internal bit depth and ibd denotes the input bit depth for a certain color component.e. In one example, the above examples may be applied to a certain color component.15. In the above examples, the chroma Qp for escape values may use the Qp before/after mapping.16. It is proposed to not use a reference index when deriving the current palette index in the palette mode.a. In one example, the palette index may be directly signalled without excluding the possibility of a reference index (e.g. adjustedRefPaletteIndex).i. Alternatively, in one example, the encoder may be constrained to enable the reference index always being different from the current index. In such as case, the palette index may be signalled by excluding the possibility of a reference index.b. In one example, the binarization of a palette index may be Truncated binary (TB) with using maximal palette index as a binarization input parameter.c. In one example, the binarization of a palette index may be fixed length.d. In one example, the binarization of a palette index may be EG with Kth order.i. In one example, K may be an integer number (e.g., 1, 2 or 3) and may depend on:1. A message signalled in the SPS/VPS/PPS/picture header/slice header/tile group header/LCU row/group of LCUs/bricks.2. Internal bit depth3. Input bit depth4. Difference between internal bit depth and input depth5. Block dimension of current block6. Current quantization parameter of current block7. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)8. Coding structure (such as single tree or dual tree)9. Color component (such as luma component and/or chroma components)e. In one example, the above examples may be applied only when the current block has one escape sample at least.17. Current palette index may be signalled independent from the previous palette indices.a. In one example, whether and/or how to use previous palette indices may depend on whether there is escape sample(s) in the current block.18. 
Derivation from an index for escape symbols to an index for non-escape symbols may be disallowed.a. In one example, when escape symbols are applied and the palette index is not equal to the index for escape symbols, it may be disallowed to decode the symbols as an escape symbol.19. Derivation from an index for non-escape symbols to an index for escape symbols may be disallowed.a. In one example, when escape symbols are applied and the palette index is equal to the index for escape symbols, it may be disallowed to decode the symbols as a non-escape symbol.20. A derived palette index may be capped by the current palette table size.a. In one example, when the palette index is larger than MaxPaletteIndex, it may be modified to equal to MaxPaletteIndex.21. A derived palette index may be capped by the current palette table size excluding the index for escape symbols.a. In one example, when escape symbols are not applied and the palette index is larger than MaxPaletteIndex, it may be modified to equal to MaxPaletteIndex.b. In one example, when escape symbols are applied and the palette index is larger than (MaxPaletteIndex−1), it may be modified to equal to (MaxPaletteIndex−1).22. The index to indicate escape symbol may be disallowed to be modified.a. In one example, index being equal to be MaxPaletteIndex may always indicate escape symbol when escape symbols are present in the current block.b. In one example, index not equal to be MaxPaletteIndex cannot be decoded as an index to indicate escape symbol.23. It is proposed to code the difference between a reference index and current indexa. In one example, the difference equal to be 0 may be disallowed to be coded.b. Alternatively, for the first index in a palette coded block, the index may be directly coded.24. It is proposed to code the modulo of the difference between a reference index (denoted as R), and the current index (denoted as C).a. In one example, I=Modulo(C−R, MaxPaletteIndex) may be coded.i. In one example, the index may be reconstructed as Modulo(I+R, MaxPaletteIndex).ii. In one example, Modulo(C−R, MaxPaletteIndex) equal to be 0 may be disallowed in the bitstream.iii. In one example, truncated binary code with cMax=MaxPaletteIndex may be used to code the value.iv. Alternatively, for the first index in a palette coded block, the index may be directly coded.b. In one example, I=Modulo(C−R, MaxPaletteIndex)−1 may be coded.i. In one example, the index may be reconstructed as Modulo(+1+R, MaxPaletteIndex).ii. In one example, Modulo(C−R, MaxPaletteIndex)−1 smaller than 0 may be disallowed in the bitstream.iii. In one example, truncated binary code with cMax=(MaxPaletteIndex−1) may be used to code the value I.iv. Alternatively, for the first index in a palette coded block, Modulo(C−R, MaxPaletteIndex) may be coded.v. Alternatively, for the first index in a palette coded block, the index may be directly coded.25. At the beginning of decoding a palette block, the reference index R may be set equal to −1a. Alternatively, the reference index R may be set equal to 0.26. It is proposed to enable the palette mode and the local dual tree exclusively.a. In one example, the local dual tree may be not allowed when the palette mode is enabled.i. Alternatively, in one example, the palette mode may be not allowed when the local dual tree is enabled.b. In one example, the local dual tree is not enabled on a specific color format, such as 4:4:4.c. In one example, palette mode may be disallowed when a coding tree is of MODE_TYPE_INTRA.d. 
It is proposed to reset the palette predictor based on the usage of local dual tree.i. In one example, the palette predictor may be reset when single tree is switched to local dual tree.ii. In one example, the palette predictor may be reset when local dual tree is switched to single tree.iii. Alternatively, furthermore, whether to signal usage of entries in the palette predictor (e.g., palette_predictor_run) may depend on the tree type.1. In one example, the signalling of usage of entries in the palette predictor (e.g., palette_predictor_run) is omitted when meeting the switch between local dual tree and single tree.27. It is proposed to remove repeated palette entries in the palette prediction table when local dual tree is applied.a. In one example, the palette prediction table may be reset when local dual tree is applied.i. Alternatively, in one example, the decoder may check all palette entries in the prediction table and remove repeated ones when local dual tree is applied.ii. Alternatively, in one example, the decoder may check partial palette entries in the prediction table and remove repeated ones when local dual tree is applied.iii. In one example, full pruning or partial pruning may be applied when checking the palette entries.1. In one example, a set of selected entries may be checked (e.g., the set includes all or partial palette entries in the palette predictor).a) In one example, full or partial pruning may be applied on the selected entries.2. In one example, full pruning may denote that one entry is compared to all entries that may be added.3. In one example, partial pruning may denote that one entry is compared to partial entries that may be added.iv. In one example, whether two palette entries are same may be only based on whether their luma component values are same.1. Alternatively, in one example, whether two palette entries are same may be only based on whether their chroma component values are same.2. Alternatively, in one example, whether two palette entries are same may be based on whether both of their luma and chroma component values are same.v. In one example, the above method may be applied on luma blocks only when the local dual tree starts to process the luma component.1. Alternatively, in one example, the above method may be applied on chroma blocks only when the local dual tree starts to process the chroma component.vi. Alternatively, in one example, the encoder may add a constraint that is considering two palette entries different when three components of their entries are different.28. When the current palette entry has a different number of color components from an entry the palette prediction table, the palette prediction table may be disallowed to be used.a. In one example, reused flags for all entries in the palette prediction table may be marked as true but may not be used for the current block when the current palette entry has a different number of color components from prediction.b. In one example, reused flags for all entries in the palette prediction table may be marked as false when the current palette entry has a different number of color components from prediction.29. When the prediction table and current palette table have different color component(s), the palette prediction table may be disallowed to be used.a. In one example, reused flags for all entries in the palette prediction table may be marked as true but may not be used for the current block when prediction table and current palette table have different color components.b. 
In one example, reused flags for all entries in the palette prediction table may be marked as false when prediction table and current palette table have different color components.30. The escape symbols may be predictively coded, such as based on previously coded escape symbols.a. In one example, an escape symbol of one component may be predicted by coded values in the same color component.i. In one example, the escape symbol may employ the previously one coded escape symbol in the same component as a predictor and the residue between them may be signalled.ii. Alternatively, the escape symbol may employ the previously Kthcoded escape symbol in the same component as a predictor and the residue between them may be signalled.iii. Alternatively, the escape symbol may be predicted from multiple (e.g., K) coded escape symbols in the same component.1. In one example, K may be an integer number (e.g., 1, 2 or 3) and may depend on:a) A message signalled in the SPS/VPS/PPS/picture header/slice header/tile group header/LCU row/group of LCUs/bricks.b) Internal bit depthc) Input bit depthd) Difference between internal bit depth and input depthe) Block dimension of current blockf) Current quantization parameter of current blockg) Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)h) Coding structure (such as single tree or dual tree)i) Color component (such as luma component and/or chroma components)b. In one example, an escape symbol of one component may be predicted by coded values of another component.c. In one example, a pixel may have multiple color components, and if the pixel is treated as escape symbol, the value of one component may be predicted by the values of samples of other components.i. In one example, the U component of an escape symbol may be predicted by the V component of that symbol.d. In one example, the above methods may be only applied to certain color component (e.g., on luma component or chroma components), or under certain conditions such as based on coded information.31. Signalling of palette related syntax elements may depend on the maximum size of palette, and/or block dimension, and/or usage of lossless mode and/or quantization parameters (QP).a. In one example, for a lossless code block and/or QP is no greater than a threshold and/or transform skip is applied, the block's palette size is inferred to be equal to block dimension.i. Alternatively, for a lossless code block and/or QP is no greater than a threshold, the block's palette size is inferred to be equal to min(block dimension, maximum palette size).b. Whether to signal the usage of escape samples in a block may depend on the block dimension and/or usage of lossless coded mode (e.g., QP is equal to given value (e.g., 4) or not; and/or transform_skip_flag is equal to 1; or transquant_bypass_flag is equal to true or not) and/or QPs.i. Alternatively, furthermore, whether to signal the usage of escape samples may depend on the relationship between the block dimension and current palette size of the current block.1. In one example, whether to signal the usage of escape samples may depend on whether the block dimension is equal to current palette size.a) Alternatively, furthermore, if block dimension is equal to current palette size, it is not signalled and inferred to be false.2. 
Alternatively, whether to signal the usage of escape samples may depend on whether the block dimension is no smaller than current palette size.a) Alternatively, furthermore, if block dimension is no smaller than current palette size, it is not signalled and inferred to be false.ii. Alternatively, furthermore, whether to signal the usage of escape samples may depend on the relationship between the block dimension, maximum size of palette, and/or lossless mode.1. In one example, if one block is coded with lossless mode and the block dimension is smaller than the maximum size of palette, the signalling of usage of escape samples may be omitted and it is inferred to be false.2. In one example, if one block is coded with QP no greater than a threshold and the block dimension is smaller than the maximum size of palette, the signalling of usage of escape samples may be omitted and it is inferred to be false.iii. The indication of usage of escape samples (e.g. palette_escape_val_present_flag) may be inferred under certain conditions.1. In one example, the indication of usage of escape samples may be inferred to false when the current block size is smaller than or equal to the maximally allowed palette size (e.g. palette_max_size).a) Alternatively, in one example, the indication of usage of escape samples may be signalled when the current block size is greater than the maximally allowed palette size.b) Alternatively, in one example, the indication of usage of escape samples may be inferred to false when the current block size is greater than the maximally allowed palette size.2. In one example, the above methods may be applied under the lossless coding condition.3. In one example, the above methods may be applied to CUs that are lossless coded.4. In one example, the indication of usage of escape samples may be inferred to false when the current block size is smaller than or equal to the palette size of the current block.5. In one example, when the usage flag of escape samples is inferred, the corresponding syntax element, e.g. palette_escape_val_present_flag, may be skipped in the bitstream.32. The contexts for run-length coding in palette mode may depend on the palette index for indexing the palette entries.a. In one example, the palette index after the index adjustment process at the decoder (mentioned in section 2.1.3) may be employed to derive contexts for the prefix of a length element (e.g., palette_run_prefix).b. Alternatively, in one example, the I defined in the bullet 13 may replace the palette index to derive contexts for the prefix of a length element (e.g., palette_run_prefix).33. It is proposed to align the positions of left neighboring block and/or above neighboring block employed in the derivation process for the quantization parameter predictors with the positions of left neighboring block and/or above neighboring block used in the mode/MV (e.g., MPM) derivation.a. The positions of left neighboring block and/or above neighboring block employed in the derivation process for the quantization parameter may be aligned with that used in the merge/AMVP candidate list derivation process.b. In one example, the positions of left neighboring block and/or above block employed in the derivation process for the quantization parameter may be the left/above neighboring blocks shown inFIG.8.34. Block-level QP difference may be sent independent of whether escape samples exist in the current block.a. 
  a. In one example, whether and/or how to send the block-level QP difference may follow blocks coded in modes other than palette.
  b. In one example, the block-level QP difference may never be sent for a palette block.
  c. In one example, the block-level QP difference may be sent for a palette block when the block width is larger than a threshold.
  d. In one example, the block-level QP difference may be sent for a palette block when the block height is larger than a threshold.
  e. In one example, the block-level QP difference may be sent for a palette block when the block size is larger than a threshold.
  f. In one example, the above examples may only apply to luma or chroma blocks.
35. One or more of the coded block flags (CBFs) (e.g., cbf_luma, cbf_cb, cbf_cr) for a palette block may be set equal to 1.
  a. In one example, the CBF for a palette block may always be set equal to 1.
  b. One or more of the CBFs for a palette block may depend on whether escape pixels exist in the current block.
    i. In one example, when a palette block has escape samples, its cbf may be set equal to 1.
    ii. Alternatively, when a palette block does not have escape samples, its cbf may be set equal to 0.
  c. Alternatively, when accessing a neighboring palette coded block, it may be treated as an intra coded block with CBF equal to 1.
36. The difference between the luma and/or chroma QP applied to a palette block and the QP derived for the block (e.g., QpY or Qp′Y in the JVET-O2001-vE spec) may be set equal to a fixed value for palette blocks.
  a. In one example, the luma and/or chroma QP offset may be set equal to 0.
  b. In one example, the chroma QP offsets for Cb and Cr may be different.
  c. In one example, the luma QP offset and chroma QP offsets may be different.
  d. In one example, the chroma QP offset(s) may be indicated in the DPS/VPS/SPS/PPS/Slice/Brick/Tile header.
37. The number of palette indices explicitly signalled or inferred for the current block (e.g., num_palette_indices_minus1+1), denoted by NumPltIdx, may be restricted to be greater than or equal to K.
  a. In one example, K may be determined based on the current palette size, the escape flag and/or other information of palette coded blocks. Let S be the current palette size of a current block and E be the value of the escape present flag (e.g., palette_escape_val_present_flag). Let BlkS be the current block size.
    i. In one example, K may be set equal to S.
    ii. Alternatively, in one example, K may be set equal to S+E.
    iii. Alternatively, in one example, K may be set equal to (number of predicted palette entries + number of signalled palette entries + palette_escape_val_present_flag) (e.g., NumPredictedPaletteEntries+num_signalled_palette_entries+palette_escape_val_present_flag).
    iv. Alternatively, in one example, K may be set equal to (the maximal value of the palette index (e.g., MaxPaletteIndex) plus 1).
    v. Alternatively, in one example, K may be signalled to the decoder.
    i. In one example, K may be a fixed integer value.
    ii. In one example, K is an integer number and may be determined based on:
      1. Decoded information of previously coded blocks/current block
      2. Quantization parameters of current block/neighboring (adjacent or non-adjacent) blocks
      3. Video contents (e.g., screen contents or natural contents)
      4. A message signalled in the DPS/SPS/VPS/PPS/adaptation parameter set (APS)/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs/transform unit (TU)/picture unit (PU) block/Video coding unit
      5. Position of CU/PU/TU/block/Video coding unit
      6. Block dimension of current block and/or its neighboring blocks
      7. Block shape of current block and/or its neighboring blocks
      8. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)
      9. Coding tree structure (such as dual tree or single tree)
      10. Slice/tile group type and/or picture type
      11. Color component (e.g., may be only applied on luma component and/or chroma component)
      12. Temporal layer ID
      13. Profiles/Levels/Tiers of a standard
  b. In one example, (NumPltIdx minus K) instead of num_palette_indices_minus1 may be signalled/parsed.
    i. Alternatively, furthermore, it may be signalled only when (S+E) is no smaller than 1.
    ii. In one example, the value of (NumPltIdx minus K) may be signalled with a binarization method in which the binarized bin string may have a prefix (e.g., truncated unary) and/or a suffix with an m-th EG code.
    iii. In one example, the value of (NumPltIdx minus K) may be signalled with a truncated binary binarization method.
    iv. In one example, the value of (NumPltIdx minus K) may be signalled with a truncated unary binarization method.
    v. In one example, the value of (NumPltIdx minus K) may be signalled with an m-th EG binarization method.
    vi. In one example, the value of BlkS−K may be used as an input parameter (e.g., cMax) in the above binarization methods, such as being used as the maximum value for the truncated unary/truncated binary binarization method.
  c. In one example, a conformance bitstream shall satisfy that NumPltIdx is greater than or equal to K.
  d. In one example, a conformance bitstream shall satisfy that NumPltIdx is smaller than or equal to K′.
    i. In one example, K′ is set to (block width*block height).
    ii. In one example, K′ is set to (block width*block height−K).
38. Whether and/or how to apply the above methods may be based on:
  a. Video contents (e.g., screen contents or natural contents)
  b. A message signalled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs/TU/PU block/Video coding unit
  c. Position of CU/PU/TU/block/Video coding unit
  d. Block dimension of current block and/or its neighboring blocks
  e. Block shape of current block and/or its neighboring blocks
  f. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)
  g. Coding tree structure (such as dual tree or single tree)
  h. Slice/tile group type and/or picture type
  i. Color component (e.g., may be only applied on luma component and/or chroma component)
  j. Temporal layer ID
  k. Profiles/Levels/Tiers of a standard
  l. Whether the current block has one escape sample or not.
    i. In one example, the above methods may be applied only when the current block has at least one escape sample.
  m. Whether the current block is coded with lossless mode or not (e.g., cu_transquant_bypass_flag)
    i. In one example, the above methods may be applied only when the current block is NOT coded with lossless mode.
  n. Whether lossless coding is enabled or not (e.g., transquant_bypass_enabled, cu_transquant_bypass_flag)
    i. In one example, the above methods may be applied only when lossless coding is disabled.
Line Based CG Palette Mode Related
39. Whether there are escape samples may be indicated for each CG.
  a. In one example, for each CG, a syntax element, e.g., palette_escape_val_present_flag, may be sent in the bitstream to indicate whether escape samples are present or not.
    i. In one example, the palette_escape_val_present_flag may be signalled or inferred based on the CG size, the number of decoded samples in the current block, and/or the palette size of the current block.
  b. In one example, for the current CG, when escape samples are not present, index adjustment may be applied.
  c. In one example, for the current CG, when escape samples are present, index adjustment should not be applied.
  d. Alternatively, the above methods may be applied only when the current block contains escape samples.
40. In the line-based CG palette mode, the indication of the usage of copying the above index (e.g., copy_above_palette_indices_flag) may not be context coded.
  e. Alternatively, in one example, the indication of the usage of copying the above index (e.g., copy_above_palette_indices_flag) may be bypass coded without using any contexts.
    i. In one example, the indication of the usage of copying the above index (e.g., copy_above_palette_indices_flag) and the copy flags (e.g., run_copy_flag) in the current segment may be signalled in an interleaved manner.
  f. In one example, the indication of the usage of copying the above index (e.g., copy_above_palette_indices_flag) may be coded after all the copy flags (e.g., run_copy_flag) in the current segment are signalled.
  g. In one example, the indication of the usage of copying the above index (e.g., copy_above_palette_indices_flag) and the signalled index may be coded in an interleaved manner.
  h. The above methods may also be applied to other palette-based coding modes.
41. The copy flags, the run types, the indications of the usage of copying the above index, and the escape values may be signalled in an interleaved manner.
  i. In one example, a first copy flag, run type, indication of the usage of copying the above index, and escape values may be coded in order, followed by a second copy flag, run type, indication of the usage of copying the above index, and escape values.
  j. Alternatively, furthermore, for a given CG, the above method may be applied.
42. The line-based CG palette mode may be disabled for blocks with a size smaller than or equal to a given threshold, denoted as Th.
  k. In one example, Th is equal to the number of samples of a segment in the line-based CG palette mode.
  l. In one example, Th is a fixed number (e.g., 16) and may be based on:
    i. Video contents (e.g., screen contents or natural contents)
    ii. A message signalled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs/TU/PU block/Video coding unit
    iii. Position of CU/PU/TU/block/Video coding unit
    iv. Block dimension of current block and/or its neighboring blocks
    v. Block shape of current block and/or its neighboring blocks
    vi. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)
    vii. Coding tree structure (such as dual tree or single tree)
    viii. Slice/tile group type and/or picture type
    ix. Color component (e.g., may be only applied on luma component and/or chroma component)
    x. Temporal layer ID
    xi. Profiles/Levels/Tiers of a standard
    xii. The quantization parameter of the current block
    xiii. Whether the current block has one escape sample or not
    xiv. Whether lossless coding is enabled or not (e.g., transquant_bypass_enabled, cu_transquant_bypass_flag)
BDPCM Related
43. When one block is coded with BDPCM and is split into multiple transform blocks or sub-blocks, the residual prediction may be done at block level, and the signalling of residuals is done at sub-block/transform block level.
  a. Alternatively, furthermore, using the reconstruction of one sub-block in the reconstruction process of another sub-block is disallowed.
  b. Alternatively, the residual prediction and the signalling of residuals are done at sub-block/transform block level.
    i. In this way, the reconstruction of one sub-block may be utilized in the reconstruction process of another sub-block.
Chroma QP Table Related
44. For a given index, the value of the chroma QP table for joint_cb_cr mode may be constrained by both the value of the chroma QP table for Cb and the value of the chroma QP table for Cr.
  c. In one example, the value of the chroma QP table for joint_cb_cr mode may be constrained to lie between the value of the chroma QP table for Cb and the value of the chroma QP table for Cr, inclusive.
Deblocking Related
45. Motion vector (MV) comparison in deblocking may depend on whether the alternative half-pel interpolation filter is used (e.g., indicated by hpelIfIdx in the JVET-O2001-vE spec).
  d. In one example, blocks using different interpolation filters may be treated as having different MVs.
  e. In one example, a constant offset may be added to the MV difference for the deblocking comparison when the alternative half-pel interpolation filter is involved.
General Claims
46. Whether and/or how to apply the above methods may be based on:
  a. Video contents (e.g., screen contents or natural contents)
  b. A message signalled in the DPS/SPS/VPS/PPS/APS/picture header/slice header/tile group header/Largest coding unit (LCU)/Coding unit (CU)/LCU row/group of LCUs/TU/PU block/Video coding unit
  c. Position of CU/PU/TU/block/Video coding unit
  d. Block dimension of current block and/or its neighboring blocks
  e. Block shape of current block and/or its neighboring blocks
  f. Quantization parameter of the current block
  g. Indication of the color format (such as 4:2:0, 4:4:4, RGB or YUV)
  h. Coding tree structure (such as dual tree or single tree)
  i. Slice/tile group type and/or picture type
  j. Color component (e.g., may be only applied on luma component and/or chroma component)
  k. Temporal layer ID
  l. Profiles/Levels/Tiers of a standard
  m. Whether the current block has one escape sample or not.
    i. In one example, the above methods may be applied only when the current block has at least one escape sample.
  n. Whether the current block is coded with lossless mode or not (e.g., cu_transquant_bypass_flag).
    i. In one example, the above methods may be applied only when the current block is NOT coded with lossless mode.
  o. Whether lossless coding is enabled or not (e.g., transquant_bypass_enabled, cu_transquant_bypass_flag).
5. EMBODIMENTS
The embodiment is based on JVET-O2001-vE. Newly-added text is enclosed in double bolded braces, e.g., {{a}} indicates that “a” has been added. Deleted text is enclosed in double bolded brackets, e.g., [[b]] indicates that “b” has been deleted.
5.1 Embodiment #1
Decoding Process for Palette Mode
Inputs to this process are: a location (xCb, yCb) specifying the top-left luma sample of the current block relative to the top-left luma sample of the current picture,a variable startComp specifies the first color component in the palette table,a variable cIdx specifying the color component of the current block,two variables nCbW and nCbH specifying the width and height of the current block, respectively. Output of this process is an array recSamples[x][y], with x=0 . . . nCbW−1, y=0 . . . nCbH−1 specifying reconstructed sample values for the block.
Depending on the value of cIdx, the variables nSubWidth and nSubHeight are derived as follows:If cIdx is equal to 0, nSubWidth is set to 1 and nSubHeight is set to 1.Otherwise, nSubWidth is set to SubWidthC and nSubHeight is set to SubHeightC. The (nCbW×nCbH) block of the reconstructed sample array recSamples at location (xCb, yCb) is represented by recSamples[x][y] with x=0 . . . nCTbW−1 and y=0 . . . nCbH−1, and the value of recSamples[x][y] for each x in the range of 0 to nCbW−1, inclusive, and each y in the range of 0 to nCbH−1, inclusive, is derived as follows:The variables xL and yL are derived as follows: xL=palette_transpose_flag?x*nSubHeight:x*nSubWidth  (8-268) yL=palette_transpose_flag?y*nSubWidth:y*nSubHeight  (8-269)The variable bIsEscapeSample is derived as follows:If PaletteIndexMap[xCb+xL][yCb+yL] is equal to MaxPaletteIndex and palette_escape_val_present_flag is equal to 1, bIsEscapeSample is set equal to 1.Otherwise, bIsEscapeSample is set equal to 0.If bIsEscapeSample is equal to 0, the following applies: recSamples[x][y]=CurrentPaletteEntries[cIdx][PaletteIndexMap[xCb+xL][yCb+yL]](8-270)Otherwise, if cu_transquant_bypass_flag is equal to 1, the following applies: recSamples[x][y]=PaletteEscapeVal[cIdx][xCb+xL][yCb+yL](8-271)Otherwise (bIsEscapeSample is equal to 1 and cu_transquant_bypass_flag is equal to 0), the following ordered steps apply:1. The quantization parameter qP is derived as follows:—If cIdx is equal to 0, qP=Max(0,Qp′Y)  (8-272)Otherwise, if cIdx is equal to 1, qP=Max(0,Qp′Cb)  (8-273)Otherwise (cIdx is equal to 2), qP=Max(0,Qp′Cr)  (8-274)2. The variables bitDepth is derived as follows: bitDepth=(cIdx==0)?BitDepthY:BitDepthC(8-275)3. [[The list levelScale[ ] is specified as levelScale[k]={40, 45, 51, 57, 64, 72} with k=0 . . . 5.]]4. 
The following applies: [[tmpVal = ( PaletteEscapeVal[ cIdx ][ xCb + xL ][ yCb + yL ] *levelScale[ qP%6 ] ) << ( qP / 6 ) +32 ) >> 6 (8-276)]]{{ T is set equal to (internal_bit_depth − input_bit_depth) for component cIdxNbits = max(T, (qP − 4) / 6)− If Nbits is equal to TrecSamples[ x ][ y ] = PaletteEscapeVal[ cIdx ][ xCb + xL ][ yCb + yL ] <<Nbits− OtherwiserecSamples[ x ][ y ] = (PaletteEscapeVal[ cIdx ][ xCb + xL ][ yCb + yL ] <<Nbits) +(1 << (Nbits − 1) }}[[recSamples[ x ][ y ] = Clip3( 0, ( 1 << bitDepth ) − 1, tmpVal ) (8-277)]] When one of the following conditions is true:cIdx is equal to 0 and numComps is equal to 1;cIdx is equal to 2; the variable PredictorPaletteSize[startComp] and the array PredictorPaletteEntries are derived or modified as follows: for( i = 0; i < CurrentPaletteSize[ startComp ]; i++ )for( cIdx = startComp; cIdx < (startComp + numComps); cIdx++ )newPredictorPaletteEntries[ cIdx ][ i ] = CurrentPaletteEntries[ cIdx ][ i ]newPredictorPaletteSize = CurrentPaletteSize[ startComp ]for( i = 0; i < PredictorPaletteSize && newPredictorPaletteSize < PaletteMaxPredictorSize;i++ )if( !PalettePredictorEntryReuseFlags[ i ] ) {for( cIdx = startComp; cIdx < (startComp + numComps); cIdx++ ) (8-278)newPredictorPaletteEntries[ cIdx ][ newPredictorPaletteSize ] =PredictorPaletteEntries[ cIdx ][ i ]newPredictorPaletteSize++}for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )for( i = 0; i < newPredictorPaletteSize; i++ )PredictorPaletteEntries[ cIdx ][ i ] = newPredictorPaletteEntries[ cIdx ][ i ]PredictorPaletteSize[ startComp ] = newPredictorPaletteSize It is a requirement of bitstream conformance that the value of PredictorPaletteSize[startComp] shall be in the range of 0 to PaletteMaxPredictorSize, inclusive. 5.2 Embodiment #2 This embodiment describes palette index derivation. Palette Coding Semantics [[The variable adjustedRefPaletteIndex is derived as follows: adjustedRefPaletteIndex = MaxPaletteIndex + 1if( PaletteScanPos > 0 ) {xcPrev = x0 + TraverseScanOrder[ log2CbWidth ][ log2bHeight ][ PaletteScanPos −1 ][ 0 ]ycPrev = y0 + TraverseScanOrder[ log2CbWidth ][ log2bHeight ][ PaletteScanPos −1 ][ 1 ]if( CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] == 0 ) {adjustedRefPaletteIndex = PaletteIndexMap[ xcPrev ][ ycPrev ] { (7-157)}else {if( !palette_transpose_flag )adjustedRefPaletteIndex = PaletteIndexMap[ xC ][ yC − 1 ]elseadjustedRefPaletteIndex = PaletteIndexMap[ xC − 1 ][ yC ]}} When CopyAboveIndicesFlag[xC][yC] is equal to 0, the variable CurrPaletteIndex is derived as follows: if(CurrPaletteIndex>=adjustedRefPaletteIndex) CurrPaletteIndex++]] Binarization Process for Palette_Idx_Idc Input to this process is a request for a binarization for the syntax element palette_idx_idc and the variable MaxPaletteIndex. Output of this process is the binarization of the syntax element. The variable cMax is derived as follows:[[If this process is invoked for the first time for the current block,]]cMax is set equal to MaxPaletteIndex.[[Otherwise (this process is not invoked for the first time for the current block), cMax is set equal to MaxPaletteIndex minus 1.]] The binarization for the palette_idx_idc is derived by invoking the TB binarization process specified in clause 9.3.3.4 with cMax. 
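For illustration only, the following Python sketch mirrors the escape-sample reconstruction introduced by the added text of Embodiment #1 above (a binary shift replacing the levelScale-based dequantization). The default bit depths and the sample values in the usage example are hypothetical and not part of the embodiment:

def reconstruct_escape_sample(palette_escape_val, qp, internal_bit_depth=10, input_bit_depth=8):
    # T is the difference between internal and input bit depth for the component
    t = internal_bit_depth - input_bit_depth
    # Nbits = max(T, (qP - 4) / 6), with integer division as in the added text
    nbits = max(t, (qp - 4) // 6)
    if nbits == t:
        # pure left shift, no rounding offset
        return palette_escape_val << nbits
    # left shift plus a rounding offset of half the step size
    return (palette_escape_val << nbits) + (1 << (nbits - 1))

# example: an escape value of 100 decoded at qP = 22 into a 10-bit picture
print(reconstruct_escape_sample(100, 22))  # (100 << 3) + 4 = 804

Consistent with the deleted Clip3 call in Embodiment #1, no clipping is applied in this sketch.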
5.3 Embodiment #3 TABLE 9-77Syntax elements and associated binarizationsSyntax elementProcessInput parameterspalette_escape_val[[EG3]]{{bitDepth − (Max(QpPrimeTsMin, Qp′Y) −{{FL}}4)/6}} 8.4.5.3 Decoding Process for Palette Mode Inputs to this process are: a location (xCb, yCb) specifying the top-left luma sample of the current block relative to the top-left luma sample of the current picture,a variable startComp specifies the first color component in the palette table,a variable cIdx specifying the color component of the current block,two variables nCbW and nCbH specifying the width and height of the current block, respectively. Output of this process is an array recSamples[x][y], with x=0 . . . nCbW−1, y=0 . . . nCbH−1 specifying reconstructed sample values for the block. Depending on the value of cIdx, the variables nSubWidth and nSubHeight are derived as follows: . . .Otherwise (bIsEscapeSample is equal to 1 and cu_transquant_bypass_flag is equal to 0), the following ordered steps apply:5. The quantization parameter qP is derived as follows:If cIdx is equal to 0, qP=Max(0,Qp′Y)  (8-272)Otherwise, if cIdx is equal to 1, qP=Max(0,Qp′Cb)  (8-273)Otherwise (cIdx is equal to 2), qP=Max(0,Qp′Cr)  (8-274)6. The variables bitDepth is derived as follows: bitDepth=(cIdx==0)?BitDepthY:BitDepthC(8-275)7. [[The list levelScale[ ] is specified as levelScale[k]={40, 45, 51, 57, 64, 72} with k=0 . . . 5.]]8. The following applies: [[tmpVal=(PaletteEscapeVal[cIdx][xCb+xL][yCb+yL]*levelScale[qP%6])<<(qP/6)+32)>>6  (8-276)]] {{shift=(max(QpPrimeTsMin,qP)−4)/6 tmpVal=(PaletteEscapeVal[cIdx][xCb+xL][yCb+yL]<<shift)}} recSamples[x][y]=Clip3(0,(1<<bitDepth)−1,tmpVal)  (8-277) 5.4 Embodiment #4 copy_above_palette_indices_flag equal to 1 specifies that the palette index is equal to the palette index at the same location in the row above if horizontal traverse scan is used or the same location in the left column if vertical traverse scan is used. copy_above_palette_indices_flag equal to 0 specifies that an indication of the palette index of the sample is coded in the bitstream or inferred. . . . 
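The derivation below adds the condition {{&& !palette_escape_val_present_flag}}, so the index adjustment is bypassed whenever the block contains escape samples. As a non-normative illustration, the following Python sketch captures the resulting decoder behaviour with simplified scalar variables in place of the PaletteIndexMap and TraverseScanOrder arrays (the CopyAboveIndicesFlag handling is omitted for brevity):

def adjust_palette_index(curr_palette_index, ref_palette_index, max_palette_index, scan_pos, escape_present):
    # adjustedRefPaletteIndex defaults to MaxPaletteIndex + 1, which disables the adjustment
    adjusted_ref = max_palette_index + 1
    if scan_pos > 0 and not escape_present:
        # reference index taken from the previous sample in scan order
        adjusted_ref = ref_palette_index
    # the parsed index is bumped so that it never equals the reference index
    if curr_palette_index >= adjusted_ref:
        curr_palette_index += 1
    return curr_palette_index

# with escape samples present, the parsed index is used unchanged
assert adjust_palette_index(2, 2, 5, scan_pos=7, escape_present=True) == 2
# without escape samples, a parsed index >= the reference index is incremented
assert adjust_palette_index(2, 2, 5, scan_pos=7, escape_present=False) == 3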
The variable adjustedRefPaletteIndex is derived as follows: adjustedRefPaletteIndex = MaxPaletteIndex + 1if( PaletteScanPos > 0 {{&& !palette_escape_val_present_flag}}) {xcPrev = x0 + TraverseScanOrder[ log2CbWidth ][ log2bHeight ][ PaletteScanPos −1 ][ 0 ]ycPrev = y0 + TraverseScanOrder[ log2CbWidth ][ log2bHeight ][ PaletteScanPos −1 ][ 1 ]if( CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] == 0 ) {adjustedRefPaletteIndex = PaletteIndexMap[ xcPrev ][ ycPrev ] { (7-157)}else {if( !palette_transpose_flag )adjustedRefPaletteIndex = PaletteIndexMap[ xC ][ yC − 1 ]elseadjustedRefPaletteIndex = PaletteIndexMap[ xC − 1 ][ yC ]}} When CopyAboveIndicesFlag[xC][yC] is equal to 0, the variable CurrPaletteIndex is derived as follows: if(CurrPaletteIndex>=adjustedRefPaletteIndex) CurrPaletteIndex++  (7-158) 5.5 Embodiment #5 TABLE 9-77Syntax elements and associated binarizationsSyntax elementProcessInput parameterspalette_escape_val[[EG3]]{{max(1, bitDepth − (Max(QpPrimeTsMin,{{FL}}Qp′Y) − 4)/6)}} 8.4.5.3 Decoding Process for Palette Mode Inputs to this process are: a location (xCb, yCb) specifying the top-left luma sample of the current block relative to the top-left luma sample of the current picture,a variable startComp specifies the first color component in the palette table,a variable cIdx specifying the color component of the current block,two variables nCbW and nCbH specifying the width and height of the current block, respectively. Output of this process is an array recSamples[x][y], with x=0 . . . nCbW−1, y=0 . . . nCbH−1 specifying reconstructed sample values for the block. Depending on the value of cIdx, the variables nSubWidth and nSubHeight are derived as follows: . . .Otherwise (bIsEscapeSample is equal to 1 and cu_transquant_bypass_flag is equal to 0), the following ordered steps apply:9. The quantization parameter qP is derived as follows:—If cIdx is equal to 0, qP=Max(0,Qp′Y)  (8-272)Otherwise, if cIdx is equal to 1, qP=Max(0,Qp′Cb)  (8-273)Otherwise (cIdx is equal to 2), qP=Max(0,Qp′Cr)  (8-274)10. The variables bitDepth is derived as follows: bitDepth=(cIdx==0)?BitDepthY:BitDepthC(8-275)11. [[The list levelScale[ ] is specified as levelScale[k]={40, 45, 51, 57, 64, 72} with k=0 . . . 5.]]12. The following applies: [[tmpVal=(PaletteEscapeVal[cIdx][xCb+xL][yCb+yL]*levelScale[qP%6])<<(qP/6)+32)>>6  (8-276)]] {{shift=min(bitDepth−1,(max(QpPrimeTsMin,qP)−4)/6) tmpVal=(PaletteEscapeVal[cIdx][xCb+xL][yCb+yL]<<shift)}} recSamples[x][y]=Clip3(0,(1<<bitDepth)−1,tmpVal)  (8-277) 5.6 Embodiment #6 This embodiment illustrates a design to skip transform shift for transform skip, and is based on JVET-O2001-vE. 8.7.2 Scaling and Transformation Process Inputs to this process are: a luma location (xTbY, yTbY) specifying the top-left sample of the current luma transform block relative to the top-left luma sample of the current picture,a variable cIdx specifying the color component of the current block,a variable nTbW specifying the transform block width,a variable nTbH specifying the transform block height. Output of this process is the (nTbW)×(nTbH) array of residual samples resSamples[x][y] with x=0 . . . nTbW−1, y=0 . . . nTbH−1. 
The variables bitDepth, bdShift and tsShift are derived as follows: bitDepth=(cIdx==0)?BitDepthY:BitDepthC(8-942) bdShift=Max(20−bitDepth,0)  (8-943) [[tsShift=5+((Log 2(nTbW)+Log 2(nTbH))/2)   (8 944)]] The variable codedCIdx is derived as follows:If cIdx is equal to 0 or TuCResMode[xTbY][yTbY] is equal to 0, codedCIdx is set equal to cIdx.Otherwise, if TuCResMode[xTbY][yTbY] is equal to 1 or 2, codedCIdx is set equal to 1.Otherwise, codedCIdx is set equal to 2. The variable cSign is set equal to (1−2*slice_joint_cbcr_sign_flag). The (nTbW)×(nTbH) array of residual samples resSamples is derived as follows:1. The scaling process for transform coefficients as specified in clause 8.7.3 is invoked with the transform block location (xTbY, yTbY), the transform block width nTbW and the transform block height nTbH, the color component variable cIdx being set equal to codedCIdx and the bit depth of the current color component bitDepth as inputs, and the output is an (nTbW)×(nTbH) array of scaled transform coefficients d.2. The (nTbW)×(nTbH) array of residual samples r is derived as follows:[[If transform_skip_flag[xTbY][yTbY] is equal to 1 and cIdx is equal to 0, the residual sample array values r[x][y] with x=0 . . . nTbW−1, y=0 . . . nTbH−1 are derived as follows: r[x][y]=d[x][y]<<tsShift   (8-945)]][[Otherwise (transform_skip_flag[xTbY][yTbY] is equal to 0 or and cIdx is not equal to 0),]] the transformation process for scaled transform coefficients as specified in clause 8.7.4.1 is invoked with the transform block location (xTbY, yTbY), the transform block width nTbW and the transform block height nTbH, the color component variable cIdx and the (nTbW)×(nTbH) array of scaled transform coefficients d as inputs, and the output is an (nTbW)×(nTbH) array of residual samples r.3. The intermediate residual samples res [x][y] with x=0 . . . nTbW−1, y=0 . . . nTbH−1 are derived as follows:{{If transform_skip_flag[xTbY][yTbY] is equal to 1 and cIdx is equal to 0, the following applies: res[x][y]=d[x][y]}}{{Otherwise (transform_skip_flag[xTbY][yTbY] is equal to 0 or and cIdx is not equal to 0), the following applies:}} res[x][y]=(r[x][y]+(1<<(bdShift−1)))>>bdShift  (8-946)4. The residual samples resSamples[x][y] with x=0 . . . nTbW−1, y=0 . . . nTbH−1 are derived as follows:If cIdx is equal to codedCIdx, the following applies: resSamples[x][y]=res[x][y](8-947)Otherwise, if TuCResMode[xTbY][yTbY] is equal to 2, the following applies: resSamples[x][y]=cSign*res[x][y](8-948)Otherwise, the following applies: resSamples[x][y]=(cSign*res[x][y])>>1  (8-949) 8.7.3 Scaling Process for Transform Coefficients . . . The variable rectNonTsFlag is derived as follows: rect[[NonTx]]Flag=(((Log 2(nTbW)+Log 2(nTbH))&1)==1[[&&]] [[transform_skip_flag[xTbY][yTbY]=]]=0)  (8-955) The variables bdShift, rectNorm and bdOffset are derived as follows:{{If transform_skip_flag[xTbY][yTbY] is equal to 1 and cIdx is equal to 0, the following applies: bdShift=10}}{{Otherwise, the following applies:}} bdShift=bitDepth+((rect[[NonTx]]Flag?1:0)+(Log 2(nTbW)+Log 2(nTbH))/2)−5+dep_quant_enabled_flag  (8-956) bdOffset=(1<<bdShift)>>1  (8-957) The list levelScale[ ][ ] is specified as levelScale[j][k]={{40, 45, 51, 57, 64, 72}, {57, 64, 72, 80, 90, 102}} with j=0 . . . 1, k=0 . . . 5. The (nTbW)×(nTbH) array dz is set equal to the (nTbW)×(nTbH) array TransCoeffLevel[xTbY][yTbY][cIdx]. For the derivation of the scaled transform coefficients d[x][y] with x=0 . . . nTbW−1, y=0 . . . 
nTbH−1, the following applies:The intermediate scaling factor m[x][y] is derived as follows:If one or more of the following conditions are true, m[x][y] is set equal to 16:sps_scaling_list_enabled_flag is equal to 0.transform_skip_flag[xTbY][yTbY] is equal to 1.Otherwise, the following applies: m[x][y]=ScalingFactor[Log 2(nTbW)][Log 2(nTbH)][matrixId][x][y],(8-958)with matrixId as specified in Table 7-5The scaling factor ls[x][y] is derived as follows:—If dep_quant_enabled_flag is equal to 1, the following applies: ls[x][y]=(m[x][y]*levelScale[rect[[NonTx]]Flag][(qP+1)%6])<<((qP+1)/6)   (8-959)Otherwise (dep_quant_enabled_flag is equal to 0), the following applies: ls[x][y]=(m[x][y]*levelScale[rect[[NonTx]]Flag][qP %6])<<(qP/6)  (8-960)When BdpcmFlag[xTbY][yYbY] is equal to 1, dz[x][y] is modified as follows:If BdpcmDir[xTbY][yYbY] is equal to 0 and x is greater than 0, the following applies: dz[x][y]=Clip3(CoeffMin,CoeffMax,dz[x−1][y]+dz[x][y])  (8-961)Otherwise, if BdpcmDir[xTbY][yYbY] is equal to 1 and y is greater than 0, the following applies: dz[x][y]=Clip3(CoeffMin,CoeffMax,dz[x][y−1]+dz[x][y])  (8-962)The value dnc[x][y] is derived as follows: dnc[x][y]=(dz[x][y]*ls[x][y]+bdOffset)>>bdShift  (8-963)The scaled transform coefficient d[x][y] is derived as follows: d[x][y]=Clip3(CoeffMin,CoeffMax,dnc[x][y])  (8-964) 5.7 Embodiment #7 This embodiment illustrates a design to signal the number of palette indices. 7.3.8.6 Palette Coding Syntax palette_coding( x0, y0, cbWidth, cbHeight, startComp, numComps ) {Descriptor...if( MaxPaletteIndex > 0 ) {num_palette_indices{{_diff}}[[_minus1]]ae(v)adjust = 0for( i = 0; i <= [[num_palette_indices_minus 1]]{{NumPaletteIndices}}; i++ ) {if( MaxPaletteIndex − adjust > 0 ) {palette_idx_idcae(v)PaletteIndexIdc[ i ] = palette_idx_idc}adjust = 1}copy_above_indices_for_final_run_flagae(v)palette_transpose_flagae(v)}if( treeType != DUAL_TREE_CHROMA && palette_escape_val_present_flag ) {if( cu_qp_delta_enabled_flag && !IsCuQpDeltaCoded ) {cu_qp_delta_absae(v)if( cu_qp_delta_abs )cu_qp_delta_sign_flagae(v)}}if( treeType != DUAL_TREE_LUMA && palette_escape_val_present_flag ) {if( cu_chroma_qp_offset_enabled_flag && !IsCuChromaQpOffsetCoded ) {cu_chroma_qp_offset_flagae(v)if( cu_chroma_qp_offset_flag )cu_chroma_qp_offset_idxae(v)}}remainingNumIndices = [[num_palette_indices_minus1 + 1]]{{NumPaletteIndices}})...if( CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {currNumIndices = [[num_palette_indices_ minus1 + 1]]{{NumPaletteIndices}} −remainingNumIndicesPaletteIndexMap[ xC ][ yC ] = PaletteIndexIdc[ currNumIndices ]}...} num_palette_indices{{_diff}}[[_minus1]] plus [[1]] ({{MaxPaletteIndex+1}}) is the number of palette indices explicitly signalled or inferred for the current block. {{NumPaletteIndices is set to (num_palette_indices_diff+MaxPaletteIndex+1).}} When num_palette_indices{{_diff}}[[_minus1]] is not present, it is inferred to be equal to 0. {{The value of num_palette_indices_diff shall be in the range of 0 to cbWidth*cbHeight−(MaxPaletteIndex+1) inclusive.}} copy_above_indices_for_final_run_flag equal to 1 specifies that the palette indices of the last positions in the coding unit are copied from the palette indices in the row above if horizontal traverse scan is used or the palette indices in the left column if vertical traverse scan is used. copy_above_indices_for_final_run_flag equal to 0 specifies that the palette indices of the last positions in the coding unit are copied from PaletteIndexIdc[[[num_palette_indices_minus1]] {{NumPaletteIndices−1}}]. 
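Before the formal binarization process below, a small illustrative Python sketch (not part of the draft text) shows how NumPaletteIndices is recovered from num_palette_indices_diff and how the stated conformance bound on that syntax element can be checked:

def num_palette_indices(num_palette_indices_diff, max_palette_index, cb_width, cb_height):
    # conformance: 0 <= num_palette_indices_diff <= cbWidth * cbHeight - (MaxPaletteIndex + 1)
    upper = cb_width * cb_height - (max_palette_index + 1)
    if not 0 <= num_palette_indices_diff <= upper:
        raise ValueError("non-conforming num_palette_indices_diff")
    # NumPaletteIndices = num_palette_indices_diff + MaxPaletteIndex + 1
    return num_palette_indices_diff + max_palette_index + 1

# an 8x8 block with MaxPaletteIndex = 3 always signals at least 4 indices
print(num_palette_indices(0, 3, 8, 8))   # 4
print(num_palette_indices(10, 3, 8, 8))  # 14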
9.5.3.13 Binarization Process for num_palette_indices{{_diff}}[[_minus1]] Input to this process is a request for a binarization for the syntax element num_palette_indices{{_diff}}[[_minus1]], and MaxPaletteIndex. Output of this process is the binarization of the syntax element. The variables cRiceParam is derived as follows: cRiceParam=3+((MaxPaletteIndex+1)>>3)   (9-26) The variable cMax is derived from cRiceParam as: cMax=4<<cRiceParam  (9-27) The binarization of the syntax element num_palette_indices{{_diff}}[[_minus1]] is a concatenation of a prefix bin string and (when present) a suffix bin string. For the derivation of the prefix bin string, the following applies:The prefix value of num_palette_indices{{_diff}}[[minus1]], prefixVal, is derived as follows: prefixVal=Min(cMax,num_palette_indices{{_diff}}[[_minus1]])  (9-28)The prefix bin string is specified by invoking the TR binarization process as specified in clause 9.3.3.3 for prefixVal with the variables cMax and cRiceParam as inputs. When the prefix bin string is equal to the bit string of length4with all bits equal to 1, the suffix bin string is present and it is derived as follows:The suffix value of num_palette_indices{{_diff}}[[_minus1]], suffixVal, is derived as follows: suffixVal=num_palette_indices{{_diff}}[[_minus1]]−cMax  (9-29)The suffix bin string is specified by invoking the k-th order EGk binarization process as specified in clause 9.3.3.5 for the binarization of suffixVal with the Exp-Golomb order k set equal to cRiceParam+1. TABLE 9-77Syntax elements and associated binarizationsBinarizationSyntax structureSyntax elementProcessInput parameterspalette_coding( )num_palette_Indices9.5.3.13MaxPaletteIndex{{_diff}}[_MINUS1]] TABLE 9-82Assignment of ctxInc to syntax elements with context coded binsbinIdxSyntax element01234>=5num_palette_indices{{_diff}}[bypassbypassbypassbypassbypassbypass[_minus1]] 5.8 Embodiment #8 This embodiment illustrates a design of interleaved signalling in the line based CG palette mode. The embodiment is based on the draft provided in JVET-P2001-v4. 
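Before the detailed syntax below, the following simplified, non-normative Python sketch illustrates the per-sample parsing order that the interleaving produces for one 16-sample segment. The read_* callables stand in for the entropy decoder and, together with the toy values in the usage example, are assumptions made only for illustration; copy-above samples are simplified by leaving their copied index unresolved:

def parse_cg_interleaved(read_run_copy_flag, read_copy_above_flag,
                         read_palette_idx, read_escape_val,
                         seg_len, max_palette_index, escape_present):
    samples = []
    prev_copy_above, prev_idx = 0, None
    for pos in range(seg_len):
        run_copy = read_run_copy_flag() if pos > 0 else 0
        if run_copy:
            # continue the previous run: same run type and index as the previous sample
            copy_above, idx = prev_copy_above, prev_idx
        else:
            copy_above = read_copy_above_flag() if pos > 0 else 0
            idx = None if copy_above else read_palette_idx()
        esc = None
        # escape values are parsed right after the index of the same sample (interleaved),
        # instead of in a separate pass over the whole segment
        if escape_present and idx == max_palette_index:
            esc = read_escape_val()
        samples.append((copy_above, idx, esc))
        prev_copy_above, prev_idx = copy_above, idx
    return samples

# toy usage: 4 samples with indices 0, 0, 2, 2, where samples 2 and 4 continue runs
print(parse_cg_interleaved(
    read_run_copy_flag=iter([1, 0, 1]).__next__,
    read_copy_above_flag=iter([0]).__next__,
    read_palette_idx=iter([0, 2]).__next__,
    read_escape_val=iter([]).__next__,
    seg_len=4, max_palette_index=3, escape_present=False))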
palette_coding( x0, y0, cbWidth, cbHeight, startComp, numComps ) {DescriptorpalettePredictionFinished = 0...PreviousRunPosition = 0PreviousRunType = 0for( subSetId = 0; subSetId <= ( cbWidth * cbHeight − 1) / 16; subSetId++ ) {minSubPos = subSetId * 16if( minSubPos + 16 > cbWidth * cbHeight)maxSubPos = cbWidth * cbHeightelsemaxSubPos = minSubPos + 16RunCopyMap[ x0 ][ y0 ] = 0PaletteScanPos = minSubPoslog2CbWidth = Log2( cbWidth )log2CbHeight = Log2( cbHeight )while( PaletteScanPos < maxSubPos ) {xC = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 1 ]if( PaletteScanPos > 0 ) {xcPrev               =               x0 +TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]ycPrev               =               y0 +TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]}if ( MaxPaletteIndex > 0 && PaletteScanPos > 0 ) {run_copy_flagae(v)RunCopyMap[ xC ][ yC ] = run_copy_flag}CopyAboveIndicesFlag[ xC ][ yC ] = 0if( MaxPaletteIndex > 0 && !RunCopyMap[ xC ][ yC ] ) {if( ( ( !palette_transpose_flag && yC > 0 ) ∥ ( palette_transpose_flag && xC >0                   )                   )&& CopyAboveIndicesFlag[ xcPrev ][ ycPrev ] = = 0 ) {copy_above_palette_indices_flagae(v)CopyAboveIndicesFlag[ xC ][ yC ] = copy_above_palette_indices_flag}PreviousRunType = CopyAboveIndicesFlag[ xC ][ yC ]PreviousRunPosition = curPos} else {CopyAboveIndicesFlag[ xC ][ yC ] = CopyAboveIndicesFlag[ xcPrev ][ ycPrev ]}{{ if( MaxPaletteIndex > 0 ) {if( !RunCopyMap[ xC ][ yC ] && CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {if( MaxPaletteIndex − adjust > 0 ) {palette_idx_idc{{ae(v)}}}adjust = 1}}if( !RunCopyMap[ xC ][ yC ] && CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {CurrPaletteIndex = palette_idx_idcif( CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {PaletteIndexMap[ xC ][ yC ] = CurrPaletteIndex} else {if( !palette_transpose_flag )PaletteIndexMap[ xC ][ yC ] = PaletteIndexMap[ xC ][ yC − 1 ]elsePaletteIndexMap[ xC ][ yC ] = PaletteIndexMap[ xC − 1 ][ yC ]}}if( palette_escape_val_present_flag ) {for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ ){xC = x0 + TraverseScanOrder[ log2CbWidth][ log2CbHeight ][ sPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth][ log2CbHeight ][ sPos ][ 1 ]if( PaletteIndexMap[ cIdx ][ xC ][ yC ] = = MaxPaletteIndex ) {palette_escape_valae(v)PaletteEscapeVal[ cIdx ][ xC ][ yC ] = palette_escape_val}}} }}PaletteScanPos ++}[[PaletteScanPos = minSubPoswhile( PaletteScanPos < maxSubPos ) {xC = x0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos ][ 1 ]if( PaletteScanPos > 0 ) {xcPrev               =               x0 +TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 0 ]ycPrev               =               y0 +TraverseScanOrder[ log2CbWidth ][ log2CbHeight ][ PaletteScanPos − 1 ][ 1 ]}if( MaxPaletteIndex > 0 ) {if( !RunCopyMap[ xC ][ yC ] && CopyAboveIndicesFlag[ xC ][ yC ] = = 0 ) {if( MaxPaletteIndex − adjust > 0 ) {palette_idx_idc[[ae(v)]]}adjust = 1}}if( !RunCopyMap[ xC ][ yC ] && CopyAboveIndicesFlag[ xC ][ yC ] = = 0) {CurrPaletteIndex = palette_idx_idcif( CopyAboveIndicesFlag[ xC ][ yC ] = = 0) {PaletteIndexMap[ xC ][ yC ] = CurrPaletteIndex}else {if( !palette_transposeflag )PaletteIndexMap[ xC ][ yC ] = PaletteIndexMap[ xC ][ yC − 1 ]elsePaletteIndexMap[ xC ][ yC ] = PaletteIndexMap[ xC − 1 ][ yC ]}}if( 
palette_escape_val_present_flag ) {for( cIdx = startComp; cIdx < ( startComp + numComps ); cIdx++ )for( sPos = minSubPos; sPos < maxSubPos; sPos++ ) {xC = x0 + TraverseScanOrder[ log2CbWidth][ log2CbHeight ][ sPos ][ 0 ]yC = y0 + TraverseScanOrder[ log2CbWidth][ log2CbHeight ][ sPos ][ 1 ]if( PaletteIndexMap[ cIdx ][ xC ][ yC ] = = MaxPaletteIndex ) {palette_escape_val[[ae(v)]]PaletteEscapeVal[ cIdx ][ xC ][ yC ] = palette_escape_val}}}}]]} 5.9 Embodiment #9 The changes are based on JVET-P2001-vE. 8.4.5.3 Decoding Process for Palette Mode Inputs to this process are: a location (xCbComp, yCbComp) specifying the top-left sample of the current coding block relative to the top-left sample of the current picture,a variable treeType specifying whether a single or a dual tree is used and if a dual tree is used, it specifies whether the current tree corresponds to the luma or chroma components,a variable cIdx specifying the color component of the current block,two variables nCbW and nCbH specifying the width and height of the current coding block, respectively. Output of this process is an array recSamples[x][y], with x=0 . . . nCbW−1, y=0 . . . nCbH−1 specifying reconstructed sample values for the block. Depending on the value of treeType, the variables startComp and numComps are derived as follows:If treeType is equal to SINGLE_TREE: startComp=0  (444) numComps=3  (445)Otherwise, treeType is equal to DUAL_TREE_LUMA: startComp=0  (446) numComps=1  (447)Otherwise, treeType is equal to DUAL_TREE_CHROMA: startComp=1  (448) numComps=2  (449) Depending on the value of cIdx, the variables nSubWidth and nSubHeight are derived as follows:If cIdx is greater than 0 and startComp is equal to 0, nSubWidth is set to SubWidthC and nSubHeight is set to SubHeightC.Otherwise, nSubWidth is set to 1 and nSubHeight is set to 1. The (nCbW×nCbH) block of the reconstructed sample array recSamples at location (xCbComp, yCbComp) is represented by recSamples[x][y] with x=0 . . . nCTbW−1 and y=0 . . . nCbH−1, and the value of recSamples[x][y] for each x in the range of 0 to nCbW−1, inclusive, and each y in the range of 0 to nCbH−1, inclusive, is derived as follows:The variables xL, yL, xCbL, and yCbL are derived as follows: xL=x*nSubWidth  (450) yL=y*nSubHeight  (451) xCbL=xCbComp*nSubWidth  (452) yCbL=yCbComp*nSubHeight  (453)The variable bIsEscapeSample is derived as follows:If PaletteIndexMap[xCbL+xL][yCbL+yL] is equal to MaxPaletteIndex and palette_escape_val_present_flag is equal to 1, bIsEscapeSample is set equal to 1.Otherwise, bIsEscapeSample is set equal to 0.If bIsEscapeSample is equal to 0, the following applies: recSamples[x][y]=CurrentPaletteEntries[cIdx][PaletteIndexMap[xCbL+xL][yCbL+yL]](454)Otherwise (bIsEscapeSample is equal to 1), the following ordered steps apply:1. The quantization parameter qP is derived as follows:If cIdx is equal to 0, qP=Max(QpPrimeTsMin,Qp′Y)  (455)Otherwise, if cIdx is equal to 1, qP=Max(QpPrimeTsMin,Qp′Cb)  (456)Otherwise (cIdx is equal to 2), qP=Max(QpPrimeTsMin,Qp′Cr)  (457)2. The list levelScale[ ] is specified as levelScale[k]={40, 45, 51, 57, 64, 72} with k=0 . . . 5.3. 
The following applies: {{shift=Min(bitDepth−1,(QpPrimeTsMin−4)/6)}} [[tmpVal=(PaletteEscapeVal[cIdx][xCbL+xL][yCbL+yL]*levelScale[qP%6])<<(qP/6)+32)>>6  (458)]] {{tmpVal=((PaletteEscapeVal[cIdx][xCbL+xL][yCbL+yL]<<shift)*levelScale[(qP−QpPrimeTsMin+4)%6])<<((qP−QpPrimeTsMin+4)/6)+32)>>6   (458)}} recSamples[x][y]=Clip3(0,(1<<BitDepth)−1,tmpVal)  (459) 5.10 Embodiment #10 The changes are based on JVET-P2001-vE. 8.4.5.3 Decoding Process for Palette Mode Inputs to this process are: a location (xCbComp, yCbComp) specifying the top-left sample of the current coding block relative to the top-left sample of the current picture,a variable treeType specifying whether a single or a dual tree is used and if a dual tree is used, it specifies whether the current tree corresponds to the luma or chroma components,a variable cIdx specifying the color component of the current block,two variables nCbW and nCbH specifying the width and height of the current coding block, respectively. Output of this process is an array recSamples[x][y], with x=0 . . . nCbW−1, y=0 . . . nCbH−1 specifying reconstructed sample values for the block. Depending on the value of treeType, the variables startComp and numComps are derived as follows:If treeType is equal to SINGLE_TREE: startComp=0  (444) numComps=3  (445)Otherwise, treeType is equal to DUAL_TREE_LUMA: startComp=0  (446) numComps=1  (447)Otherwise, treeType is equal to DUAL_TREE_CHROMA: startComp=1  (448) numComps=2  (449) Depending on the value of cIdx, the variables nSubWidth and nSubHeight are derived as follows:If cIdx is greater than 0 and startComp is equal to 0, nSubWidth is set to SubWidthC and nSubHeight is set to SubHeightC.Otherwise, nSubWidth is set to 1 and nSubHeight is set to 1. The (nCbW×nCbH) block of the reconstructed sample array recSamples at location (xCbComp, yCbComp) is represented by recSamples[x][y] with x=0 . . . nCTbW−1 and y=0 . . . nCbH−1, and the value of recSamples[x][y] for each x in the range of 0 to nCbW−1, inclusive, and each y in the range of 0 to nCbH−1, inclusive, is derived as follows:The variables xL, yL, xCbL, and yCbL are derived as follows: xL=x*nSubWidth  (450) yL=y*nSubHeight  (451) xCbL=xCbComp*nSubWidth  (452) yCbL=yCbComp*nSubHeight  (453)The variable bIsEscapeSample is derived as follows:If PaletteIndexMap[xCbL+xL][yCbL+yL] is equal to MaxPaletteIndex and palette_escape_val_present_flag is equal to 1, bIsEscapeSample is set equal to 1.Otherwise, bIsEscapeSample is set equal to 0.If bIsEscapeSample is equal to 0, the following applies: recSamples[x][y]=CurrentPaletteEntries[cIdx][PaletteIndexMap[xCbL+xL][yCbL+yL]](454)Otherwise (bIsEscapeSample is equal to 1), the following ordered steps apply:4. The quantization parameter qP is derived as follows:If cIdx is equal to 0, qP=Max(QpPrimeTsMin,Qp′Y)  (455)Otherwise, if cIdx is equal to 1, qP=Max(QpPrimeTsMin,Qp′Cb)  (456)Otherwise (cIdx is equal to 2), qP=Max(QpPrimeTsMin,Qp′Cr)  (457)5. The list levelScale[ ] is specified as levelScale[k]={40, 45, 51, 57, 64, 72} with k=0 . . . 5.6. The following applies: {{shift=Min(bitDepth−1,(QpPrimeTsMin−4)/6)}} [[tmpVal=(PaletteEscapeVal[cIdx][xCbL+xL][yCbL+yL]*levelScale[qP % 6])<<(qP/6)+32)>>6  (458)]] {{qP′=Max(0,qP−6*shift) tmpVal=((PaletteEscapeVal[cIdx][xCbL+xL][yCbL+yL]<<shift)*levelScale[qP′%6])<<(qP′/6)+32)>>6  (458)}} recSamples[x][y]=Clip3(0,(1<<BitDepth)−1,tmpVal)  (459) FIG.9is a block diagram of a video processing apparatus900. The apparatus900may be used to implement one or more of the methods described herein. 
The apparatus900may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus900may include one or more processors902, one or more memories904and video processing hardware906. The processor(s)902may be configured to implement one or more methods described in the present document. The memory (memories)904may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware906may be used to implement, in hardware circuitry, some techniques described in the present document. In some embodiments, the hardware906may be at least partly internal to the processor902, e.g., a graphics co-processor. Some embodiments of the present disclosure include making a decision or determination to enable a video processing tool or mode. In an example, when the video processing tool or mode is enabled, the encoder will use or implement the tool or mode in the processing of a block of video, but may not necessarily modify the resulting bitstream based on the usage of the tool or mode. That is, a conversion from the block of video to the bitstream representation of the video will use the video processing tool or mode when it is enabled based on the decision or determination. In another example, when the video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, a conversion from the bitstream representation of the video to the block of video will be performed using the video processing tool or mode that was enabled based on the decision or determination. Some embodiments of the present disclosure include making a decision or determination to disable a video processing tool or mode. In an example, when the video processing tool or mode is disabled, the encoder will not use the tool or mode in the conversion of the block of video to the bitstream representation of the video. In another example, when the video processing tool or mode is disabled, the decoder will process the bitstream with the knowledge that the bitstream has not been modified using the video processing tool or mode that was disabled based on the decision or determination. FIG.10is a block diagram showing an example video processing system1000in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of the system1000. The system1000may include input1002for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8- or 10-bit multi-component pixel values, or may be in a compressed or encoded format. The input1002may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces such as Ethernet, passive optical network (PON), etc. and wireless interfaces such as Wi-Fi or cellular interfaces. The system1000may include a coding component1004that may implement the various coding or encoding methods described in the present document. The coding component1004may reduce the average bitrate of video from the input1002to the output of the coding component1004to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques.
The output of the coding component1004may be either stored, or transmitted via a communication connection, as represented by the component1006. The stored or communicated bitstream (or coded) representation of the video received at the input1002may be used by the component1008for generating pixel values or displayable video that is sent to a display interface1010. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder. Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or DisplayPort, and so on. Examples of storage interfaces include serial advanced technology attachment (SATA), peripheral component interface (PCI), integrated drive electronics (IDE) interface, and the like. The techniques described in the present document may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display. FIG.11is a block diagram that illustrates an example video coding system100that may utilize the techniques of this disclosure. As shown inFIG.11, video coding system100may include a source device110and a destination device120. Source device110, which may be referred to as a video encoding device, generates encoded video data. Destination device120, which may be referred to as a video decoding device, may decode the encoded video data generated by source device110. Source device110may include a video source112, a video encoder114, and an input/output (I/O) interface116. Video source112may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources. The video data may comprise one or more pictures. Video encoder114encodes the video data from video source112to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. The coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. I/O interface116may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to destination device120via I/O interface116through network130a. The encoded video data may also be stored onto a storage medium/server130bfor access by destination device120. Destination device120may include an I/O interface126, a video decoder124, and a display device122. I/O interface126may include a receiver and/or a modem. I/O interface126may acquire encoded video data from the source device110or the storage medium/server130b. Video decoder124may decode the encoded video data. Display device122may display the decoded video data to a user. Display device122may be integrated with the destination device120, or may be external to destination device120, which may be configured to interface with an external display device.
Video encoder114and video decoder124may operate according to a video compression standard, such as the HEVC standard, VVC standard and other current and/or further standards. FIG.12is a block diagram illustrating an example of video encoder200, which may be video encoder114in the system100illustrated inFIG.11. Video encoder200may be configured to perform any or all of the techniques of this disclosure. In the example ofFIG.12, video encoder200includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of video encoder200. In some examples, a processor may be configured to perform any or all of the techniques described in this disclosure. The functional components of video encoder200may include a partition unit201, a prediction unit202which may include a mode select unit203, a motion estimation unit204, a motion compensation unit205and an intra prediction unit206, a residual generation unit207, a transform unit208, a quantization unit209, an inverse quantization unit210, an inverse transform unit211, a reconstruction unit212, a buffer213, and an entropy encoding unit214. In other examples, video encoder200may include more, fewer, or different functional components. In an example, prediction unit202may include an intra block copy (IBC) unit. The IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located. Furthermore, some components, such as motion estimation unit204and motion compensation unit205may be highly integrated, but are represented in the example ofFIG.12separately for purposes of explanation. Partition unit201may partition a picture into one or more video blocks. Video encoder200and video decoder300may support various video block sizes. Mode select unit203may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit207to generate residual block data and to a reconstruction unit212to reconstruct the encoded block for use as a reference picture. In some examples, mode select unit203may select a combination of intra and inter prediction (CIIP) mode in which the prediction is based on an inter prediction signal and an intra prediction signal. Mode select unit203may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter-prediction. To perform inter prediction on a current video block, motion estimation unit204may generate motion information for the current video block by comparing one or more reference frames from buffer213to the current video block. Motion compensation unit205may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from buffer213other than the picture associated with the current video block. Motion estimation unit204and motion compensation unit205may perform different operations for a current video block, for example, depending on whether the current video block is in an I slice, a P slice, or a B slice. In some examples, motion estimation unit204may perform uni-directional prediction for the current video block, and motion estimation unit204may search reference pictures of list 0 or list 1 for a reference video block for the current video block. 
Motion estimation unit204may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. Motion estimation unit204may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. Motion compensation unit205may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block. In other examples, motion estimation unit204may perform bi-directional prediction for the current video block, motion estimation unit204may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. Motion estimation unit204may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. Motion estimation unit204may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. Motion compensation unit205may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block. In some examples, motion estimation unit204may output a full set of motion information for decoding processing of a decoder. In some examples, motion estimation unit204may not output a full set of motion information for the current video. Rather, motion estimation unit204may signal the motion information of the current video block with reference to the motion information of another video block. For example, motion estimation unit204may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block. In one example, motion estimation unit204may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder300that the current video block has the same motion information as the another video block. In another example, motion estimation unit204may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD). The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. The video decoder300may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block. As discussed above, video encoder200may predictively signal the motion vector. Two examples of predictive signalling techniques that may be implemented by video encoder200include advanced motion vector prediction (AMVP) and merge mode signalling. Intra prediction unit206may perform intra prediction on the current video block. When intra prediction unit206performs intra prediction on the current video block, intra prediction unit206may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. 
The prediction data for the current video block may include a predicted video block and various syntax elements. Residual generation unit207may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block(s) of the current video block from the current video block. The residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block. In other examples, there may be no residual data for the current video block, for example in a skip mode, and residual generation unit207may not perform the subtracting operation. Transform processing unit208may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block. After transform processing unit208generates a transform coefficient video block associated with the current video block, quantization unit209may quantize the transform coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block. Inverse quantization unit210and inverse transform unit211may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block. Reconstruction unit212may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit202to produce a reconstructed video block associated with the current block for storage in the buffer213. After reconstruction unit212reconstructs the video block, a loop filtering operation may be performed to reduce video blocking artifacts in the video block. Entropy encoding unit214may receive data from other functional components of the video encoder200. When entropy encoding unit214receives the data, entropy encoding unit214may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data. FIG.13is a block diagram illustrating an example of video decoder300which may be video decoder124in the system100illustrated inFIG.11. The video decoder300may be configured to perform any or all of the techniques of this disclosure. In the example ofFIG.13, the video decoder300includes a plurality of functional components. The techniques described in this disclosure may be shared among the various components of the video decoder300. In some examples, a processor may be configured to perform any or all of the techniques described in this disclosure. In the example ofFIG.13, video decoder300includes an entropy decoding unit301, a motion compensation unit302, an intra prediction unit303, an inverse quantization unit304, an inverse transformation unit305, a reconstruction unit306, and a buffer307. Video decoder300may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder200(FIG.12). Entropy decoding unit301may retrieve an encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). 
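Before turning to the decoder, the residual path on the encoder side (residual generation unit207, quantization unit209, inverse quantization unit210and reconstruction unit212) can be illustrated with the simplified sketch below. The transform stage is omitted for brevity, and the step size and default bit depth are illustrative assumptions, not values mandated by this disclosure.

```python
import numpy as np

def residual_round_trip(cur_block, pred_block, step=8, bit_depth=10):
    """Generate a residual, quantize/de-quantize it, and reconstruct the block."""
    residual = cur_block.astype(np.int64) - pred_block.astype(np.int64)
    # quantization stage: uniform scalar quantizer with rounding
    levels = np.sign(residual) * ((np.abs(residual) + step // 2) // step)
    # inverse quantization stage: scale the levels back
    recon_residual = levels * step
    # reconstruction stage: add back the prediction and clip to the sample range
    recon = np.clip(pred_block.astype(np.int64) + recon_residual,
                    0, (1 << bit_depth) - 1)
    return levels, recon
```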
Entropy decoding unit301may decode the entropy coded video data, and from the entropy decoded video data, motion compensation unit302may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information. Motion compensation unit302may, for example, determine such information by performing the AMVP and merge mode. Motion compensation unit302may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in the syntax elements. Motion compensation unit302may use interpolation filters as used by video encoder200during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit302may determine the interpolation filters used by video encoder200according to received syntax information and use the interpolation filters to produce predictive blocks. Motion compensation unit302may use some of the syntax information to determine sizes of blocks used to encode frame(s) and/or slice(s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence. Intra prediction unit303may use intra prediction modes for example received in the bitstream to form a prediction block from spatially adjacent blocks. Inverse quantization unit304inverse quantizes, i.e., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit301. Inverse transform unit305applies an inverse transform. Reconstruction unit306may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit302or intra prediction unit303to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in buffer307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device. In some embodiments, the following methods are based on the listing of examples and embodiments enumerated above. In an example, these methods can be implemented using, but not limited to, the implementations shown inFIG.9-13. FIG.14is a flowchart of an example method for video processing. As shown therein, the method1400includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (1410), wherein the bitstream representation conforms to a format rule that the current video block is coded using a palette mode coding tool, wherein a binarization of an escape symbol for the current video block uses an exponential-Golomb (EG) code of order K, wherein K is a non-negative integer that is unequal to three, and wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.15is a flowchart of an example method for video processing. 
As shown therein, the method1500includes performing a conversion between a video comprising one or more video regions comprising one or more video blocks and a bitstream representation of the video (1510), wherein the bitstream representation conforms to a format rule that a current video block of the one or more video blocks that is coded using a palette mode coding tool wherein a binarization of an escape symbol for the current video block uses a fixed length binarization, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.16is a flowchart of an example method for video processing. As shown therein, the method1600includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (1610), wherein the bitstream representation conforms to a format rule that the current video block is coded using a palette mode coding tool, wherein a binarization of an escape symbol of the current video block uses a variable length coding, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.17is a flowchart of an example method for video processing. As shown therein, the method1700includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (1710), wherein the conversion comprises an application of a quantization or an inverse quantization process on the current video block, wherein the bitstream representation conforms to a format rule that configures the application of the quantization or the inverse quantization process based on whether the current video block is coded using a palette mode coding tool, and wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.18is a flowchart of an example method for video processing. As shown therein, the method1800includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (1810), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented such that an escape symbol of the current video block is quantized and/or dequantized using a binary shift operation, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.19is a flowchart of an example method for video processing. 
As shown therein, the method1900includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (1910), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool, wherein one or more palette indexes of the palette mode coding tool are coded without using a reference index, and wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.20is a flowchart of an example method for video processing. As shown therein, the method2000includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2010), wherein the bitstream representation conforms to a format rule that the current video block is coded using a palette mode coding tool and constrains a derivation between an index of an escape symbol and an index of a non-escape symbol, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.21is a flowchart of an example method for video processing. As shown therein, the method2100includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2110), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool, wherein a derived palette index of the palette mode coding tool has a maximum value, and wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.22is a flowchart of an example method for video processing. As shown therein, the method2200includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2210), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising an escape symbol, wherein a value of an index indicating the escape symbol is unchanged for each of the one or more video regions, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.23is a flowchart of an example method for video processing. As shown therein, the method2300includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2310), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements that are coded based on the current index and a reference index, wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.24is a flowchart of an example method for video processing. 
As shown therein, the method2400includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2410), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising an escape symbol that is predictively coded, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.25is a flowchart of an example method for video processing. As shown therein, the method2500includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2510), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements that are run-length coded with a context based on a palette index for indexing palette entries, wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.26is a flowchart of an example method for video processing. As shown therein, the method2600includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2610), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising a current palette index that is signalled independently of previous palette indices, wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.27is a flowchart of an example method for video processing. As shown therein, the method2700includes determining, based on an alignment rule, a first neighboring video block used for predicting a quantization parameter for a current video block of one or more video regions of a video and a second neighboring video block used for predictively determining a coding mode of the current video block (2710), and performing, based on the determining, a conversion between the video and a bitstream representation of the video (2720). FIG.28is a flowchart of an example method for video processing. As shown therein, the method2800includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2810), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising a block-level quantization parameter (QP) difference regardless of whether the current video block comprises an escape symbol, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. FIG.29is a flowchart of an example method for video processing. 
As shown therein, the method2900includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (2910), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising one or more coded block flags (CBFs) for a palette block, wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.30is a flowchart of an example method for video processing. As shown therein, the method3000includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (3010), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising one or more palette indices, wherein a number of the one or more palette indices (NumPltIdx) is greater than or equal to K, wherein the palette mode coding tool represents the current video block using a palette of representative color values, and wherein K is a positive integer. FIG.31is a flowchart of an example method for video processing. As shown therein, the method3100includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (3110), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements based on a maximum size of a palette for the current block, a size of the current video block, a usage of a lossless mode, or a quantization parameter (QP), wherein the palette mode coding tool represents the current video block using a palette of representative color values. FIG.32is a flowchart of an example method for video processing. As shown therein, the method3200includes determining, for a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, that the current video block is coded with a block-based differential pulse code modulation (BDPCM) mode and split into multiple transform blocks or sub-blocks (3210), and performing, as part of performing the conversion, a residual prediction at a block level and an inclusion of one or more residuals in the bitstream representation at the sub-block or transform block level based on the determining (3220). FIG.33is a flowchart of an example method for video processing. As shown therein, the method3300includes performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video (3310), wherein the bitstream representation conforms to a format rule that the current video block that is coded using a line-based coefficient group (CG) palette mode, wherein the line-based CG palette mode represents multiple segments of each coding unit (CU) of the current video block using a palette of representative color values. The following solutions may be implemented together with additional techniques described in items listed in the previous section (e.g., item1) as preferred features of some embodiments. 1. 
A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block is coded using a palette mode coding tool, wherein a binarization of an escape symbol for the current video block uses an exponential-Golomb (EG) code of order K, wherein K is a non-negative integer that is unequal to three, and wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 2. The method of solution 1, wherein K=0. 3. The method of solution 1, wherein K=1. 4. The method of solution 1, wherein K=2. 5. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising one or more video blocks and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that a current video block of the one or more video blocks is coded using a palette mode coding tool, wherein a binarization of an escape symbol for the current video block uses a fixed length binarization, wherein the palette mode coding tool represents the current video block using a palette of representative color values, and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 6. The method of solution 5, wherein the fixed length binarization uses N bits, wherein N is an integer greater than one. 7. The method of solution 6, wherein N is based on an internal bit depth. 8. The method of solution 6, wherein a value of N is signalled in a slice, subpicture, tile, picture, or video. 9. The method of solution 6, wherein N is based on a quantization parameter. 10. The method of solution 9, wherein N is based on a function (f( )) of the quantization parameter (Qp), denoted as f(Qp). 11. The method of solution 9, wherein N is set to (ibd−max(16, (Qp−4)/6)), and wherein ibd is an internal bit depth. 12. The method of solution 9, wherein N is set to (ibd−max(QpPrimeTsMin, (Qp−4)/6)), wherein ibd is an internal bit depth and QpPrimeTsMin is a minimum allowed quantization parameter for a transform skip mode. 13. The method of solution 9, wherein N is set to max(A, (ibd−max(16, (QpPrimeTsMin−4)/6))), and wherein ibd is an internal bit depth, QpPrimeTsMin is a minimum allowed quantization parameter for a transform skip mode, and A is a non-negative integer. 14. The method of solution 13, wherein A=0 or A=1. 15. The method of any of solutions 9 to 14, wherein the quantization parameter is a sum of a quantization parameter of a slice of the video and a constant value, wherein the constant value is an integer. 
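The k-th order exponential-Golomb binarization referred to in solutions 1 to 4 can be sketched as below. This is the textbook EG(k) construction, shown only to illustrate how the order K changes the code lengths; it is not asserted to reproduce the exact binarization of any particular codec.

```python
def eg_k_encode(value, k):
    """k-th order exponential-Golomb code of a non-negative integer."""
    v = value + (1 << k)                       # offset so the suffix has >= k bits
    n = v.bit_length()
    return "0" * (n - k - 1) + format(v, "b")  # zero prefix + binary representation

def eg_k_decode(bits, k):
    """Decode a single EG(k) codeword from the front of a bit string."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    n = zeros + k + 1                          # length of the significant part
    v = int(bits[zeros:zeros + n], 2)
    return v - (1 << k), bits[zeros + n:]

# Example: with K=0 the value 3 maps to "00100"; with K=2 it maps to "111".
assert eg_k_encode(3, 0) == "00100"
assert eg_k_encode(3, 2) == "111"
assert eg_k_decode("00100", 0)[0] == 3
```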
16. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block is coded using a palette mode coding tool, wherein a binarization of an escape symbol of the current video block uses a variable length coding, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 17. The method of solution 16, wherein the variable length coding excludes an exponential-Golomb code of order 3. 18. The method of solution 16, wherein the variable length coding is a truncated binary (TB) code with an input parameter K, wherein K is an integer. 19. The method of solution 18, wherein K is based on (a) a message signalled in a sequence parameter set (SPS), a video parameter set (VPS), a picture parameter set (PPS), a picture header, a slice header, a tile group header, a largest coding unit (LCU) row, a group of LCUs, or a brick, (b) an internal bit depth, (c) an input bit depth, (d) a difference between the internal bit depth and the input bit depth, (e) a dimension of the current video block, (f) a current quantization parameter of the current video block, (g) an indication of a color format of the video, (h) a coding tree structure, or (i) a color component of the video. 20. The method of solution 5, wherein multiple values of the escape symbol are signalled using multiple binarization methods. 21. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the conversion comprises an application of a quantization or an inverse quantization process on the current video block, wherein the bitstream representation conforms to a format rule that configures the application of the quantization or the inverse quantization process based on whether the current video block is coded using a palette mode coding tool, and wherein the palette mode coding tool represents the current video block using a palette of representative color values. 22. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented such that an escape symbol of the current video block is quantized and/or dequantized using a binary shift operation, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 23. The method of solution 22, wherein the quantizing corresponds to right bit-shifting. 24. The method of solution 22, wherein the escape symbol is coded as f(p, Qp), wherein f( ) is a function, p is an input symbol value, and Qp is a derived quantization parameter for a corresponding color component representing the current video block. 25. The method of solution 24, wherein f is defined as p>>g(Qp). 
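The truncated binary (TB) binarization mentioned in solutions 18 and 19 can be sketched as follows. The parameter name c_max is an illustrative stand-in for the binarization input parameter; symbols in [0, c_max] receive either the shorter or the one-bit-longer codeword. This is a generic TB sketch under stated assumptions, not the normative binarization of any specific syntax element.

```python
import math

def tb_encode(symbol, c_max):
    """Truncated binary code for a symbol in the range [0, c_max]."""
    n = c_max + 1                      # number of distinct symbols
    if n <= 1:
        return ""                      # a single possible symbol needs no bits
    k = int(math.floor(math.log2(n)))  # length of the shorter codewords
    u = (1 << (k + 1)) - n             # how many symbols get the shorter length
    if symbol < u:
        return format(symbol, f"0{k}b")
    return format(symbol + u, f"0{k + 1}b")

# Example with c_max = 4 (five symbols):
# 0 -> "00", 1 -> "01", 2 -> "10", 3 -> "110", 4 -> "111".
assert [tb_encode(s, 4) for s in range(5)] == ["00", "01", "10", "110", "111"]
```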
26. The method of solution 24, wherein f is defined as (p+(1<<(g(Qp)−1)))>>g(Qp). 27. The method of solution 24, wherein f is defined as clip(0, (1<<bd)−1, (p+(1<<(g(Qp)−1)))>>g(Qp)), wherein clip(x, min, max) is a clipping function, and wherein x, min, and max are integers. 28. The method of solution 22, wherein the escape symbol is coded as h(p), wherein h( ) is a function and p is an input value symbol. 29. The method of solution 28, wherein h is defined as p>>N and N is a non-negative integer. 30. The method of solution 28, wherein h is defined as (p+(1<<(N−1)))>>N, and wherein N is a non-negative integer. 31. The method of solution 29 or 30, wherein N=0 when cu_transquant_bypass_flag=1. 32. The method of solution 29 or 30, wherein N=(bd−ibd) when cu_transquant_bypass_flag=1, wherein bd is an internal bit depth and ibd is an input bit depth. 33. The method of solution 28, wherein h is defined as clip(0, (1<<(bd−N))−1, p>>N), wherein bd is an internal bit depth for a current color component of the current video block and N is a non-negative integer, wherein clip(x, min, max) is a clipping function, and wherein x, min, and max are integers. 34. The method of solution 28, wherein h is defined as clip(0, (1<<(bd−N))−1, (p+(1<<(N−1)))>>N), wherein bd is an internal bit depth for a current color component of the current video block and N is a non-negative integer, wherein clip(x, min, max) is a clipping function, and wherein x, min, and max are integers. 35. The method of any of solutions 29 to 34, wherein N is in a range [0, (bd−1)], and wherein bd is an internal bit depth for a current color component of the current video block. 36. The method of solution 22, wherein the dequantizing corresponds to left bit-shifting. 37. The method of solution 36, wherein the escape symbol is dequantized as f(p, Qp), wherein f( ) is a function, p is a decoded escape symbol, and Qp is a derived quantization parameter for a corresponding color component representing the current video block. 38. The method of solution 37, wherein f is defined as p<<g(Qp). 39. The method of solution 36, wherein the escape symbol is reconstructed as f(p, Qp), wherein f( ) is a function, p is a decoded escape symbol, and Qp is a derived quantization parameter for a corresponding color component representing the current video block. 40. The method of solution 39, wherein f is defined as clip(0, (1<<bd)−1, p<<g(Qp)), wherein bd is an internal bit depth for a current color component of the current video block, and wherein clip(x, min, max) is a clipping function, and wherein x, min, and max are integers. 41. The method of solution 27, 33, 34, or 40, wherein the clipping function clip(x, min, max) is defined as clip(x, min, max) = min when x < min, x when min ≤ x ≤ max, and max when x > max. 42. The method of solution 36, wherein the escape symbol is reconstructed as h(p), wherein h( ) is a function and p is a decoded escape symbol. 43. The method of solution 42, wherein h is defined as p<<N and N is a non-negative integer. 44. The method of solution 42 or 43, wherein N=0 when cu_transquant_bypass_flag=1. 45. The method of solution 42 or 43, wherein N=(bd−ibd) when cu_transquant_bypass_flag=1, wherein bd is an internal bit depth and ibd is an input bit depth. 46. The method of solution 42 or 43, wherein N=(max(QpPrimeTsMin, qP)−4)/6, wherein qP is a decoded quantization parameter and QpPrimeTsMin is a minimum allowed quantization parameter for a transform skip mode. 
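Solutions 26 and 40 above describe escape-symbol quantization and reconstruction purely with bit shifts and a clip. A minimal sketch of that arithmetic is given below; the shift g(Qp) = (Qp − 4)/6 follows solution 58 further down, and the clipping follows solution 41, while the guard for g = 0 and the default bit depth are illustrative assumptions rather than normative behavior.

```python
def g_of_qp(qp):
    """Shift amount derived from the quantization parameter (cf. solution 58)."""
    return max(0, (qp - 4) // 6)

def quantize_escape(p, qp):
    """Right-shift quantization with a rounding offset (cf. solution 26)."""
    g = g_of_qp(qp)
    return (p + (1 << (g - 1))) >> g if g > 0 else p

def reconstruct_escape(q, qp, bit_depth=10):
    """Left-shift reconstruction clipped to the sample range (cf. solution 40)."""
    g = g_of_qp(qp)
    return min(max(0, q << g), (1 << bit_depth) - 1)

# Example: a 10-bit escape value 618 at Qp = 34 is quantized with a shift of 5
# and reconstructed to 608.
assert quantize_escape(618, 34) == 19 and reconstruct_escape(19, 34) == 608
```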
47. The method of any of solutions 43 to 46, wherein N is further clipped as min(bd−1, N), and wherein bd is an internal bit depth for a current color component of the current video block. 48. The method of any of solutions 43 to 47, wherein N is in a range [0, (bd−1)], and wherein bd is an internal bit depth for a current color component of the current video block. 49. The method of solution 36, wherein a reconstruction offset of the escape symbol is based on bit depth information. 50. The method of solution 49, wherein the bit depth information comprises a difference between an internal bit depth and an input bit depth (denoted ΔBD). 51. The method of solution 50, wherein the reconstructed offset is equal to p<<K when K≤ΔBD, wherein p is a decoded escape symbol and K is an integer. 52. The method of solution 49, wherein the reconstructed offset is equal to p<<K when K≤T0, wherein p is a decoded escape symbol and K and T0 are integers. 53. The method of solution 50, wherein T0=2. 54. The method of solution 50, wherein the reconstructed offset is equal to (p<<K)+((1<<(K−1))>>ΔBD<<ΔBD), wherein p is a decoded escape symbol and K is an integer. 55. The method of solution 50, wherein ΔBD is signalled in the bitstream representation in a sequence level, picture level, slice level, tile level, brick level, or subpicture level. 56. The method of any of solutions 22 to 55, wherein the escape symbol is context coded. 57. The method of any of solutions 22 to 55, wherein the escape symbol is bypass coded. 58. The method of any of solutions 25-27, 38 or 40, wherein g(Qp) is defined as (Qp−4)/6. 59. The method of any of solutions 25-27, 38 or 40, wherein g(Qp) is defined as (max(M, Qp)−4)/6, wherein M is an integer. 60. The method of solution 59, wherein M is signalled in a sequence parameter set (SPS). 61. The method of any of solutions 58 to 60, wherein g(Qp) is in a range [0, (bd−1)], and wherein bd is an internal bit depth for a current color component of the current video block. 62. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool, wherein one or more palette indexes of the palette mode coding tool are coded without using a reference index, and wherein the palette mode coding tool represents the current video block using a palette of representative color values. 63. The method of solution 62, wherein a binarization of the one or more palette indexes is a truncated binary (TB) code with a maximal palette index as a binarization input parameter. 64. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block is coded using a palette mode coding tool and constrains a derivation between an index of an escape symbol and an index of a non-escape symbol, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 
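Solutions 50 to 54 above tie the reconstruction offset of a decoded escape symbol p to the difference ΔBD between the internal and the input bit depth. The sketch below is a hedged reading of those solutions only: the first branch follows solution 51 and the second follows solution 54, and the branch condition K ≤ ΔBD is an assumption taken from solution 51.

```python
def escape_reconstruction_offset(p, k, delta_bd):
    """Reconstruction offset for a decoded escape symbol p with shift k,
    given delta_bd = internal bit depth - input bit depth (cf. solutions 50-54)."""
    if k <= delta_bd:
        return p << k                                   # cf. solution 51
    # the rounding term is zeroed out in its low delta_bd bits (cf. solution 54)
    return (p << k) + (((1 << (k - 1)) >> delta_bd) << delta_bd)

# Example: p = 5, k = 3, delta_bd = 1 gives (5 << 3) + ((4 >> 1) << 1) = 44.
assert escape_reconstruction_offset(5, 3, 1) == 44
```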
65. The method of solution 64, wherein deriving the index of the escape symbol from the index of the non-escape symbol is disallowed. 66. The method of solution 64, wherein deriving the index of the non-escape symbol from the index of the escape symbol is disallowed. 67. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool, wherein a derived palette index of the palette mode coding tool has a maximum value, and wherein the palette mode coding tool represents the current video block using a palette of representative color values. 68. The method of solution 67, wherein the maximum value is a current palette table size. 69. The method of solution 67, wherein the maximum value is a current palette table size that excludes an index for one or more escape symbols, and wherein an escape symbol of the one or more escape symbols is used for a sample of the current video block coded without using the representative color values. 70. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising an escape symbol, wherein a value of an index indicating the escape symbol is unchanged for each of the one or more video regions, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 71. The method of solution 70, wherein the index is equal to MaxPaletteIndex. 72. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements that are coded based on the current index and a reference index, wherein the palette mode coding tool represents the current video block using a palette of representative color values. 73. The method of solution 72, wherein a difference between the current index and the reference index is coded. 74. The method of solution 73, wherein a coded representation of the difference excludes zero-valued differences. 75. The method of solution 72, wherein a modulo of a difference between the current index and the reference index is coded. 76. The method of solution 75, wherein the modulo is represented as I=modulo(C−R, MaxPaletteIndex), wherein C is the current index, R is the reference index, and MaxPaletteIndex is a predefined non-negative integer. 77. The method of solution 72, wherein the reference index is set to −1 at a beginning of a palette block of the palette mode coding tool. 78. 
A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising an escape symbol that is predictively coded, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 79. The method of solution 78, wherein the escape symbol is predictively coded based on previously coded escape symbols. 80. The method of solution 78, wherein the escape symbol in a color component of the video is predictively coded based on values in the same color component. 81. The method of solution 78, wherein the escape symbol in a first color component of the video is predictively coded based on values in a second color component of the video that is different from the first color component. 82. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements that are run-length coded with a context based on a palette index for indexing palette entries, wherein the palette mode coding tool represents the current video block using a palette of representative color values. 83. The method of solution 82, wherein the context for a prefix of a length element is based on the palette index after an index adjustment process at a decoder. 84. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising a current palette index that is signalled independently of previous palette indices, wherein the palette mode coding tool represents the current video block using a palette of representative color values. 85. The method of solution 84, wherein using the previous palette indices is based on whether the current video block comprises one or more escape symbols, and wherein an escape symbol is used for a sample of the current video block coded without using the representative color values. 86. A method of video processing, comprising determining, based on an alignment rule, a first neighboring video block used for predicting a quantization parameter for a current video block of one or more video regions of a video and a second neighboring video block used for predictively determining a coding mode of the current video block; and performing, based on the determining, a conversion between the video and a bitstream representation of the video. 87. The method of solution 86, wherein the first neighboring video block is an above left neighboring video block or an above neighboring video block. 88. 
The method of solution 86 or 87, wherein the second neighboring video block is an above left neighboring video block or an above neighboring video block. 89. The method of any of solutions 86 to 88, wherein the coding mode comprises a most probable mode (MPM) for the current video block. 90. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising a block-level quantization parameter (QP) difference regardless of whether the current video block comprises an escape symbol, wherein the palette mode coding tool represents the current video block using a palette of representative color values and wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 91. The method of solution 90, wherein the QP difference is coded for a palette block with a width greater than a threshold. 92. The method of solution 90, wherein the QP difference is coded for a palette block with a height greater than a threshold. 93. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising one or more coded block flags (CBFs) for a palette block, wherein the palette mode coding tool represents the current video block using a palette of representative color values. 94. The method of solution 93, wherein each of the CBFs is set equal to one. 95. The method of solution 93, wherein a value of the one or more CBFs is based on whether the current video block comprises an escape symbol, wherein the escape symbol is used for a sample of the current video block coded without using the representative color values. 96. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements comprising one or more palette indices, wherein a number of the one or more palette indices (NumPltIdx) is greater than or equal to K, wherein the palette mode coding tool represents the current video block using a palette of representative color values, and wherein K is a positive integer. 97. The method of solution 96, wherein K is based on a current palette size (S), an escape flag (E), or a size of the current video block (BlkS). 98. The method of solution 97, wherein K=S+E. 99. The method of solution 96, wherein K is equal to a maximal value of a palette index (MaxPaletteIndex) plus one. 100. The method of solution 96, wherein one of the syntax elements comprises NumPltIdx−K. 101. The method of solution 100, wherein a binarization of a value of (NumPltIdx−K) is a truncated binary code. 102. The method of solution 100, wherein a binarization of a value of (NumPltIdx−K) is a truncated unary code. 103. 
The method of solution 101 or 102, wherein (BlkS−K) is a binarization input parameter, and wherein BlkS is a size of the current video block. 104. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a palette mode coding tool is represented using syntax elements based on a maximum size of a palette for the current block, a size of the current video block, a usage of a lossless mode, or a quantization parameter (QP), wherein the palette mode coding tool represents the current video block using a palette of representative color values. 105. The method of solution 104, wherein a size of the palette for the current block is inferred to be equal to the size of the current video block upon a determination that the lossless mode has been applied, the QP is greater than a threshold, or transform skip has been applied. 106. The method of any of solutions 1 to 105, wherein performing the conversion is further based on one or more of a video content of the video, a message signalled in a decoder parameter set (DPS), a sequence parameter set (SPS), a video parameter set (VPS), a picture parameter set (PPS), an adaptation parameter set (APS), a picture header, a slice header, a tile group header, a largest coding unit (LCU), a coding unit (CU), an LCU row, a group of LCUs, a transform unit (TU), a prediction unit (PU) block, or a video coding unit, an indication of a color format of the video, a coding tree structure, a temporal ID layer, or a profile, level, or tier of a standard. 107. A method of video processing, comprising determining, for a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, that the current video block is coded with a block-based differential pulse code modulation (BDPCM) mode and split into multiple transform blocks or sub-blocks; and performing, as part of performing the conversion, a residual prediction at a block level and an inclusion of one or more residuals in the bitstream representation at the sub-block or transform block level based on the determining. 108. A method of video processing, comprising performing a conversion between a video comprising one or more video regions comprising a current video block and a bitstream representation of the video, wherein the bitstream representation conforms to a format rule that the current video block that is coded using a line-based coefficient group (CG) palette mode, wherein the line-based CG palette mode represents multiple segments of each coding unit (CU) of the current video block using a palette of representative color values. 109. The method of solution 108, wherein the bitstream representation comprises an indication of whether an escape sample is present for each coefficient group, and wherein an escape sample is used for a sample of the current video block coded without using the representative color values. 110. The method of solution 108, wherein the bitstream representation comprises an indication of a usage of copying an above index that is not context coded. 111. The method of solution 110, wherein the indication is bypass coded. 
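Solution 107 refers to block-based differential pulse code modulation (BDPCM). The sketch below shows only the generic vertical-direction residual differencing idea behind BDPCM, where each row of the residual predicts the next one; it is an illustration under simplifying assumptions and not the specific block/sub-block arrangement of this disclosure.

```python
import numpy as np

def bdpcm_vertical_encode(residual):
    """Signal only row-wise differences: each residual row predicts the next one."""
    diffs = residual.astype(np.int64).copy()
    diffs[1:, :] -= residual[:-1, :]
    return diffs

def bdpcm_vertical_decode(diffs):
    """Undo the differencing with a cumulative sum down each column."""
    return np.cumsum(diffs, axis=0)

# Round trip on a small residual block.
res = np.array([[4, 4, 5], [6, 4, 5], [7, 3, 5]])
assert np.array_equal(bdpcm_vertical_decode(bdpcm_vertical_encode(res)), res)
```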
112. The method of solution 108, wherein one or more copy flags, one or more run types, one or more indications of a usage of copying an above index, and escape values are signalled in the bitstream representation in an interleaved manner. 113. The method of solution 108, wherein the line-based CG palette mode is disabled upon a determination that a size of the current video block is less than or equal to a threshold (Th). 114. The method of any of solutions 107 to 113, wherein performing the conversion is further based on one or more of a video content of the video, a message signalled in a decoder parameter set (DPS), a sequence parameter set (SPS), a video parameter set (VPS), a picture parameter set (PPS), an adaptation parameter set (APS), a picture header, a slice header, a tile group header, a largest coding unit (LCU), a coding unit (CU), an LCU row, a group of LCUs, a transform unit (TU), a prediction unit (PU) block, or a video coding unit, an indication of a color format of the video, a coding tree structure, a temporal ID layer, or a profile, level, or tier of a standard. 115. The method of any of solutions 1 to 114, wherein performing the conversion comprises generating the bitstream representation from the one or more video regions. 116. The method of any of solutions 1 to 114, wherein performing the conversion comprises generating the one or more video regions from the bitstream representation. 117. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of solutions 1 to 116. 118. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of solutions 1 to 116. The disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus. 
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and compact disc, read-only memory (CD-ROM) and digital versatile disc, read-only memory (DVD-ROM) disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular techniques. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments. Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
DETAILED DESCRIPTION Section headings are used in the present disclosure for ease of understanding and do not limit the applicability of embodiments disclosed in each section only to that section. Furthermore, H.266 terminology is used in some description only for ease of understanding and not for limiting scope of the disclosed embodiments. As such, the embodiments described herein are applicable to other video codec protocols and designs also. 1. INTRODUCTION This disclosure is related to video coding technologies. Specifically, it is about defining levels and bitstream conformance for a video codec that supports both single-layer video coding and multi-layer video coding. It may be applied to any video coding standard or non-standard video codec that supports single-layer video coding and multi-layer video coding, e.g., versatile video coding (VVC) that is being developed. 2. ABBREVIATIONS APS Adaptation Parameter SetAU Access UnitAUD Access Unit DelimiterAVC Advanced Video CodingBLA Broken Link AccessCLVS Coded Layer Video SequenceCLVSS Coded Layer Video Sequence StartCPB Coded Picture BufferCRA Clean Random AccessCTU Coding Tree UnitCVS Coded Video SequenceDCI Decoding Capability InformationDPB Decoded Picture BufferEOB End Of BitstreamEOS End Of SequenceGDR Gradual Decoding RefreshHEVC High Efficiency Video CodingHRD Hypothetical Reference DecoderIDR Instantaneous Decoding RefreshILP Inter-Layer PredictionILRP Inter-Layer Reference PictureIRAP Intra Random Access PointsJEM Joint Exploration ModelLSB Least Significant BitLTRP Long-Term Reference PictureMCTS Motion-Constrained Tile SetsMSB Most Significant BitNAL Network Abstraction LayerOLS Output Layer SetPH Picture HeaderPOC Picture Order CountPPS Picture Parameter SetPTL Profile, Tier and LevelPU Picture UnitRAP Random Access PointRBSP Raw Byte Sequence PayloadSEI Supplemental Enhancement InformationSLI Subpicture Level InformationSPS Sequence Parameter SetSTRP Short-Term Reference PictureSVC Scalable Video CodingVCL Video Coding LayerVPS Video Parameter SetVTM VVC Test ModelVUI Video Usability InformationVVC Versatile Video Coding 3. INITIAL DISCUSSION Video coding standards have evolved primarily through the development of the well-known International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standards. The ITU-T produced H.261 and H.263, ISO/IEC produced Moving Picture Experts Group (MPEG)-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore the future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by Video Coding Experts Group (VCEG) and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). The JVET meeting is concurrently held once every quarter, and the new coding standard is targeting a 50% bitrate reduction as compared to HEVC. The new video coding standard was officially named as Versatile Video Coding (VVC) in the April 2018 JVET meeting, and the first version of VVC test model (VTM) was released at that time. 
As continuous efforts contribute to VVC standardization, new coding techniques are being adopted into the VVC standard in every JVET meeting. The VVC working draft and test model VTM are then updated after every meeting. The VVC project is now aiming for technical completion (FDIS) at the July 2020 meeting. 3.1. Random Access and its Supports in HEVC and VVC Random access refers to starting access and decoding a bitstream from a picture that is not the first picture of the bitstream in decoding order. To support tuning in and channel switching in broadcast/multicast and multiparty video conferencing, seeking in local playback and streaming, as well as stream adaptation in streaming, the bitstream needs to include frequent random access points, which are typically intra-coded pictures but may also be inter-coded pictures (e.g., in the case of gradual decoding refresh). HEVC includes signalling of intra random access points (IRAP) pictures in the NAL unit header, through NAL unit types. Three types of IRAP pictures are supported, namely instantaneous decoder refresh (IDR), clean random access (CRA), and broken link access (BLA) pictures. IDR pictures constrain the inter-picture prediction structure to not reference any picture before the current group-of-pictures (GOP), conventionally referred to as closed-GOP random access points. CRA pictures are less restrictive by allowing certain pictures to reference pictures before the current GOP, all of which are discarded in case of a random access. CRA pictures are conventionally referred to as open-GOP random access points. BLA pictures usually originate from splicing of two bitstreams or part thereof at a CRA picture, e.g., during stream switching. To enable better systems usage of IRAP pictures, altogether six different NAL unit types are defined to signal the properties of the IRAP pictures, which can be used to better match the stream access point types as defined in the ISO base media file format (ISOBMFF), which are utilized for random access support in dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH). VVC supports three types of IRAP pictures, two types of IDR pictures (one type with and the other type without associated RADL pictures) and one type of CRA picture. These are basically the same as in HEVC. The BLA picture types in HEVC are not included in VVC, mainly due to two reasons: i) the basic functionality of BLA pictures can be realized by CRA pictures plus the end of sequence NAL unit, the presence of which indicates that the subsequent picture starts a new CVS in a single-layer bitstream; and ii) there was a desire to specify fewer NAL unit types than HEVC during the development of VVC, as indicated by the use of five instead of six bits for the NAL unit type field in the NAL unit header. Another key difference in random access support between VVC and HEVC is the support of GDR in a more normative manner in VVC. In GDR, the decoding of a bitstream can start from an inter-coded picture and, although at the beginning not the entire picture region can be correctly decoded, after a number of pictures the entire picture region becomes correct. AVC and HEVC also support GDR, using the recovery point SEI message for signalling of GDR random access points and the recovery points. In VVC, a new NAL unit type is specified for indication of GDR pictures and the recovery point is signalled in the picture header syntax structure. A CVS and a bitstream are allowed to start with a GDR picture.
This means that it is allowed for an entire bitstream to contain only inter-coded pictures without a single intra-coded picture. The main benefit of specifying GDR support this way is to provide a conforming behavior for GDR. GDR enables encoders to smooth the bit rate of a bitstream by distributing intra-coded slices or blocks in multiple pictures as opposed to intra coding entire pictures, thus allowing significant end-to-end delay reduction, which is considered more important nowadays than before as ultralow delay applications like wireless display, online gaming, and drone-based applications become more popular. Another GDR-related feature in VVC is virtual boundary signalling. The boundary between the refreshed region (i.e., the correctly decoded region) and the unrefreshed region at a picture between a GDR picture and its recovery point can be signalled as a virtual boundary and, when signalled, in-loop filtering across the boundary would not be applied, thus a decoding mismatch for some samples at or near the boundary would not occur. This can be useful when the application determines to display the correctly decoded regions during the GDR process. IRAP pictures and GDR pictures can be collectively referred to as random access point (RAP) pictures. 3.2. Picture Resolution Change within a Sequence In AVC and HEVC, the spatial resolution of pictures cannot change unless a new sequence using a new SPS starts, with an IRAP picture. VVC enables picture resolution change within a sequence at a position without encoding an IRAP picture, which is always intra-coded. This feature is sometimes referred to as reference picture resampling (RPR), as the feature needs resampling of a reference picture used for inter prediction when that reference picture has a different resolution than the current picture being decoded. The scaling ratio is restricted to be greater than or equal to 1/2 (2 times downsampling from the reference picture to the current picture), and less than or equal to 8 (8 times upsampling). Three sets of resampling filters with different frequency cutoffs are specified to handle various scaling ratios between a reference picture and the current picture. The three sets of resampling filters are applied respectively for the scaling ratio ranging from 1/2 to 1/1.75, from 1/1.75 to 1/1.25, and from 1/1.25 to 8. Each set of resampling filters has 16 phases for luma and 32 phases for chroma, which is the same as the case of motion compensation interpolation filters. Actually, the normal MC interpolation process is a special case of the resampling process with scaling ratio ranging from 1/1.25 to 8. The horizontal and vertical scaling ratios are derived based on picture width and height, and the left, right, top and bottom scaling offsets specified for the reference picture and the current picture. Other aspects of the VVC design for support of this feature that are different from HEVC include: i) the picture resolution and the corresponding conformance window are signalled in the PPS instead of in the SPS, while in the SPS the maximum picture resolution is signalled; and ii) for a single-layer bitstream, each picture store (a slot in the DPB for storage of one decoded picture) occupies the buffer size as required for storing a decoded picture having the maximum picture resolution.
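As a rough illustration of the filter-set selection described above, the following non-normative Python sketch derives a scaling ratio from the current and reference picture widths and maps it onto the three ranges; the function name and the floating-point arithmetic are assumptions made for exposition, since the actual VVC derivation uses 14-bit fixed-point ratios and also accounts for the scaling-window offsets.

def pick_resampling_filter_set(cur_width: int, ref_width: int) -> int:
    """Return 0, 1, or 2 for the three resampling filter sets.

    Following the convention in the text above, the scaling ratio is the
    current-picture size over the reference-picture size, so 1/2 means
    2x downsampling and 8 means 8x upsampling. Each set has 16 luma and
    32 chroma phases, like the normal MC interpolation filters.
    """
    ratio = cur_width / ref_width
    if not (0.5 <= ratio <= 8):
        raise ValueError("RPR scaling ratio must lie within [1/2, 8]")
    if ratio < 1 / 1.75:   # 1/2 .. 1/1.75: strongest low-pass cutoff
        return 0
    if ratio < 1 / 1.25:   # 1/1.75 .. 1/1.25: intermediate cutoff
        return 1
    return 2               # 1/1.25 .. 8: same filters as normal MC interpolation

For instance, downscaling from a 1920-sample-wide reference picture to a 960-sample-wide current picture gives a ratio of 1/2 and selects the first set.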
3.3. Scalable Video Coding (SVC) in General and in VVC Scalable video coding (SVC, sometimes also referred to as scalability in video coding) refers to video coding in which a base layer (BL), sometimes referred to as a reference layer (RL), and one or more scalable enhancement layers (ELs) are used. In SVC, the base layer can carry video data with a base level of quality. The one or more enhancement layers can carry additional video data to support, for example, higher spatial, temporal, and/or signal-to-noise (SNR) levels. Enhancement layers may be defined relative to a previously encoded layer. For example, a bottom layer may serve as a BL, while a top layer may serve as an EL. Middle layers may serve as either ELs or RLs, or both. For example, a middle layer (e.g., a layer that is neither the lowest layer nor the highest layer) may be an EL for the layers below the middle layer, such as the base layer or any intervening enhancement layers, and at the same time serve as a RL for one or more enhancement layers above the middle layer. Similarly, in the multiview or three-dimensional (3D) extension of the HEVC standard, there may be multiple views, and information of one view may be utilized to code (e.g., encode or decode) the information of another view (e.g., motion estimation, motion vector prediction and/or other redundancies). In SVC, the parameters used by the encoder or the decoder are grouped into parameter sets based on the coding level (e.g., video-level, sequence-level, picture-level, slice level, etc.) in which they may be utilized. For example, parameters that may be utilized by one or more coded video sequences of different layers in the bitstream may be included in a video parameter set (VPS), and parameters that are utilized by one or more pictures in a coded video sequence may be included in a sequence parameter set (SPS). Similarly, parameters that are utilized by one or more slices in a picture may be included in a picture parameter set (PPS), and other parameters that are specific to a single slice may be included in a slice header. Similarly, the indication of which parameter set(s) a particular layer is using at a given time may be provided at various coding levels. Thanks to the support of reference picture resampling (RPR) in VVC, support of a bitstream containing multiple layers, e.g., two layers with standard definition (SD) and high definition (HD) resolutions, can be designed in VVC without the need for any additional signal-processing-level coding tool, as the upsampling needed for spatial scalability support can just use the RPR upsampling filter. Nevertheless, high-level syntax changes (compared to not supporting scalability) are needed for scalability support. Scalability support is specified in VVC version 1. Different from the scalability supports in any earlier video coding standards, including in extensions of AVC and HEVC, the design of VVC scalability has been made friendly to single-layer decoder designs as much as possible. The decoding capability for multi-layer bitstreams is specified in a manner as if there were only a single layer in the bitstream. For example, the decoding capability, such as DPB size, is specified in a manner that is independent of the number of layers in the bitstream to be decoded. Basically, a decoder designed for single-layer bitstreams does not need much change to be able to decode multi-layer bitstreams.
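The layer relationships just described (a base layer, enhancement layers, and middle layers that act as both) can be pictured with a small, purely illustrative Python model; the class and helper below are assumptions for exposition and do not correspond to VPS syntax.

from dataclasses import dataclass, field

@dataclass
class Layer:
    layer_id: int
    direct_ref_layer_ids: list[int] = field(default_factory=list)

    @property
    def is_independent(self) -> bool:
        # Analogue of vps_independent_layer_flag[i] being equal to 1.
        return not self.direct_ref_layer_ids

def layers_depending_on(layers: dict[int, Layer], ref_id: int) -> list[int]:
    """Layer ids that use ref_id as a direct reference layer."""
    return [layer.layer_id for layer in layers.values()
            if ref_id in layer.direct_ref_layer_ids]

# Layer 0 is the base layer; layer 1 is a middle layer (EL for 0, RL for 2);
# layer 2 is the top enhancement layer.
layers = {0: Layer(0), 1: Layer(1, [0]), 2: Layer(2, [1])}
assert layers[0].is_independent
assert layers_depending_on(layers, 1) == [2]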
Compared to the designs of multi-layer extensions of AVC and HEVC, the high-level syntax (HLS) aspects have been significantly simplified at the sacrifice of some flexibility. For example, an IRAP AU is required to contain a picture for each of the layers present in the CVS. 3.4. Parameter Sets AVC, HEVC, and VVC specify parameter sets. The types of parameter sets include SPS, PPS, APS, and VPS. SPS and PPS are supported in all of AVC, HEVC, and VVC. VPS was introduced in HEVC and is included in both HEVC and VVC. APS was not included in AVC or HEVC but is included in the latest VVC draft text. SPS was designed to carry sequence-level header information, and PPS was designed to carry infrequently changing picture-level header information. With SPS and PPS, infrequently changing information need not be repeated for each sequence or picture, hence redundant signalling of this information can be avoided. Furthermore, the use of SPS and PPS enables out-of-band transmission of the important header information, thus not only avoiding the need for redundant transmissions but also improving error resilience. VPS was introduced for carrying sequence-level header information that is common for all layers in multi-layer bitstreams. APS was introduced for carrying picture-level or slice-level information that needs quite a few bits to code, can be shared by multiple pictures, and can have quite many different variations in a sequence. 4. TECHNICAL PROBLEMS ADDRESSED BY DISCLOSED TECHNICAL SOLUTIONS The latest designs of POC, GDR, EOS, and still picture profiles in VVC have the following problems:1) It is required that ph_poc_msb_cycle_present_flag shall be equal to 0 when vps_independent_layer_flag[GeneralLayerIdx[nuh_layer_id]] is equal to 0 and there is a picture in the current AU in a reference layer of the current layer. However, such a picture in a reference layer could be removed by the general sub-bitstream extraction process specified in clause C.6. Consequently, the POC derivation will not be correct.2) The value of ph_poc_msb_cycle_present_flag is used in the POC derivation process, while the flag may not be present and there is no value inferred in that case.3) The GDR feature is mainly useful for low end-to-end delay applications. Therefore, it would make sense to disallow its use when the bitstream is encoded in a way that is not suitable for low end-to-end delay applications.4) When an EOS NAL unit for a layer is present in an AU of a multi-layer bitstream, that means there has been a seeking operation to jump to this AU, or this AU is a bitstream splicing point. In either of the two situations, it does not make sense that this layer is not continuous for the same content while, in another layer of the same bitstream, the content is continuous, regardless of whether there is inter-layer dependency between the layers.5) It is possible to have a bitstream that has no picture to output. That should be disallowed, either generally for all profiles, or just for the still picture profiles; simple forms of the checks implied by problems 3 and 5 are sketched below.
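The sketch below (Python, with assumed field names, not VVC syntax) illustrates the two checks just mentioned: if GDR is enabled, the output order of pictures should follow the decoding order, and a bitstream should contain at least one picture that is actually output; the real constraints are expressed on AUs, layers, and profiles in the solutions of the next section.

from dataclasses import dataclass

@dataclass
class Pic:
    decoding_order: int
    output_order: int
    in_output_layer: bool
    pic_output_flag: bool
    sps_gdr_enabled_flag: bool

def gdr_requires_matching_orders(pics: list[Pic]) -> bool:
    """Problem 3: when GDR is enabled, output order should follow decoding order."""
    if not any(p.sps_gdr_enabled_flag for p in pics):
        return True
    ordered = sorted(pics, key=lambda p: p.decoding_order)
    return all(a.output_order <= b.output_order for a, b in zip(ordered, ordered[1:]))

def has_output_picture(pics: list[Pic]) -> bool:
    """Problem 5: at least one picture in an output layer is actually output."""
    return any(p.in_output_layer and p.pic_output_flag for p in pics)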
5. A LISTING OF EMBODIMENTS AND SOLUTIONS To solve the above problems, and others, methods as summarized below are disclosed. The items should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these items can be applied individually or combined in any manner.1) To solve problem 1, instead of requiring ph_poc_msb_cycle_present_flag to be equal to 0 when vps_independent_layer_flag[GeneralLayerIdx[nuh_layer_id]] is equal to 0 and there is a picture in the current AU in a reference layer of the current layer, the value of ph_poc_msb_cycle_present_flag may be required to be equal to 0 under a tighter condition.a. In one example, the value of ph_poc_msb_cycle_present_flag is required to be equal to 0 when vps_independent_layer_flag[GeneralLayerIdx[nuh_layer_id]] is equal to 0 and there is an ILRP entry in RefPicList[0] or RefPicList[1] of a slice of the current picture.b. In one example, the value of ph_poc_msb_cycle_present_flag is required to be equal to 0 when vps_independent_layer_flag[GeneralLayerIdx[nuh_layer_id]] is equal to 0 and there is a picture with nuh_layer_id equal to refpicLayerId that is in the current AU in a reference layer of the current layer and has TemporalId less than or equal to Max(0, vps_max_tid_il_ref_pics_plus1[currLayerIdx][refLayerIdx]−1), where currLayerIdx and refLayerIdx are equal to GeneralLayerIdx[nuh_layer_id] and GeneralLayerIdx[refpicLayerId], respectively.c. In one example, the value of ph_poc_msb_cycle_present_flag is never required to be equal to 0.2) To solve problem 2, instead of using “ph_poc_msb_cycle_present_flag is equal to 1 (0)” in the POC derivation process, use “ph_poc_msb_cycle_val is present (not present)”.3) To solve problem 3, it is assumed that GDR pictures are only used in low end-to-end delay applications, and GDR pictures may be disallowed when the output order and decoding order of AUs are different.a. In one example, it is required that, when sps_gdr_enabled_flag is equal to 1, the decoding order and the output order of all pictures in the CLVS shall be the same. Note that this constraint would also mandate that the decoding order and output order of AUs are the same in multi-layer bitstreams, because all pictures within an AU are required to be contiguous in decoding order, and all pictures within an AU have the same output order.b. In one example, it is required that, when sps_gdr_enabled_flag is equal to 1 for an SPS referenced by a picture in a CVS, the decoding order and the output order of all AUs in the CVS shall be the same.c. In one example, it is required that, when sps_gdr_enabled_flag is equal to 1 for an SPS referenced by a picture, the decoding order and the output order of all AUs in the bitstream shall be the same.d. In one example, it is required that, when sps_gdr_enabled_flag is equal to 1 for an SPS present in the bitstream, the decoding order and the output order of all AUs in the bitstream shall be the same.e. In one example, it is required that, when sps_gdr_enabled_flag is equal to 1 for an SPS for the bitstream (provided by being in the bitstream or through an external means), the decoding order and the output order of all AUs in the bitstream shall be the same.4) To solve problem 4, when an EOS NAL unit for a layer is present in an AU of a multi-layer bitstream, it is required that the next picture in each of all or certain higher layers be a CLVSS picture.a. In one example, it is specified that, when an AU auA contains an EOS NAL unit in a layer layerA, for each layer layerB that is present in the CVS and has layerA as a reference layer, the first picture in layerB in decoding order in an AU following auA in decoding order shall be a CLVSS picture.b.
In one example, alternatively, it is specified that, when an AU auA contains an EOS NAL unit in a layer layerA, for each layer layerB that is present in the CVS and is a higher layer than layerA, the first picture in layerB in decoding order in an AU following auA in decoding order shall be a CLVSS picture.c. In one example, alternatively, it is specified that, when one picture in an AU auA is a CLVSS picture that is a CRA or GDR picture, for each layer layerA present in the CVS, if there is a picture picA for layerA in auA, picA shall be a CLVSS picture, otherwise (there is no picture for layerA in auA), the first picture in decoding order for layerA in an AU following auA in decoding order shall be a CLVSS picture.d. In one example, alternatively, it is specified that, when a picture in a layer layerB in an AU auA is a CLVSS picture that is a CRA or GDR picture, for each layer layerA present in the CVS that is higher than layerB, if there is a picture picA for layerA in auA, picA shall be a CLVSS picture, otherwise (there is no picture for layerA in auA), the first picture in decoding order for layerA in an AU following auA in decoding order shall be a CLVSS picture.e. In one example, alternatively, it is specified that, when a picture in a layer layerB in an AU auA is a CLVSS picture that is a CRA or GDR picture, for each layer layerA present in the CVS that has layerB as a reference layer, if there is a picture picA for layerA in auA, picA shall be a CLVSS picture, otherwise (there is no picture for layerA in auA), the first picture in decoding order for layerA in an AU following auA in decoding order shall be a CLVSS picture.f. In one example, alternatively, it is specified that, when there is an EOS NAL unit in an AU, there shall be an EOS NAL unit in the AU for each layer present in the CVS.g. In one example, alternatively, it is specified that, when there is an EOS NAL unit in layer layerB in an AU, there shall be an EOS NAL unit in the AU for each layer present in the CVS that is higher than layerB.h. In one example, alternatively, it is specified that, when there is an EOS NAL unit in layer layerB in an AU, there shall be an EOS NAL unit in the AU for each layer present in the CVS that has layerB as a reference layer.i. In one example, alternatively, it is specified that, when a picture in an AU is a CLVSS picture that is a CRA or GDR picture, all pictures in the AU shall be CLVSS pictures.j. In one example, alternatively, it is specified that, when a picture in a layer layerB in an AU is a CLVSS picture that is a CRA or GDR picture, the pictures in the AU in all layers that are higher than layerB shall be CLVSS pictures.k. In one example, alternatively, it is specified that, when a picture in a layer layerB in an AU is a CLVSS picture that is a CRA or GDR picture, the pictures in the AU in all layers that have layerB as a reference layer shall be CLVSS pictures.l. In one example, alternatively, it is specified that, when a picture in an AU is a CLVSS picture that is a CRA or GDR picture, the AU shall have a picture for each layer present in the CVS, and all pictures in the AU shall be CLVSS pictures.m. In one example, alternatively, it is specified that, when a picture in a layer layerB in an AU is a CLVSS picture that is a CRA or GDR picture, the AU shall have a picture for each layer higher than layerB present in the CVS, and all pictures in the AU shall be CLVSS pictures.n. 
In one example, alternatively, it is specified that, when a picture in a layer layerB in an AU is a CLVSS picture that is a CRA or GDR picture, the AU shall have a picture for each layer having layerB as a reference layer present in the CVS, and all pictures in the AU shall be CLVSS pictures.5) To solve problem 5, it is specified that a bitstream shall have at least one picture that is output.a. In one example, it is specified that, when a bitstream contains only one picture, the picture shall have ph_pic_output_flag equal to 1.b. In one example, it is specified that a bitstream shall have at least one picture that is in an output layer and has ph_pic_output_flag equal to 1.c. In some examples, either of the above constraints is specified as part of the definition of one or more still picture profiles, e.g., the Main 10 Still Picture profile and the Main 4:4:4 10 Still Picture profile.d. In some examples, either of the above constraints is specified as not being part of the definition of a profile, such that it applies to any profile. 6. EMBODIMENTS Below are some example embodiments for some of the invention aspects summarized above in Section 5, which can be applied to the VVC specification. The changed texts are based on the latest VVC text in JVET-S0152-v5. Most relevant parts that have been added or modified are bolded, underlined, and italicized, e.g., “using A”, and some of the deleted parts are italicized and enclosed with bolded double square brackets, e.g., “based on [[and]] B”. 6.1. First Embodiment This embodiment is for items 1 to 5 and some of their sub-items. 7.4.3.7 Picture Header Structure Semantics . . . ph_poc_msb_cycle_present_flag equal to 1 specifies that the syntax element ph_poc_msb_cycle_val is present in the PH. ph_poc_msb_cycle_present_flag equal to 0 specifies that the syntax element ph_poc_msb_cycle_val is not present in the PH. When vps_independent_layer_flag[GeneralLayerIdx[nuh_layer_id]] is equal to 0 and there is an ILRP entry in RefPicList[0] or RefPicList[1] of a slice of the current picture [[a picture in the current AU in a reference layer of the current layer]], the value of ph_poc_msb_cycle_present_flag shall be equal to 0. . . . ph_pic_output_flag affects the decoded picture output and removal processes as specified in Annex C. When ph_pic_output_flag is not present, it is inferred to be equal to 1. NOTE 5—There is no picture in the bitstream that has ph_non_refpic_flag equal to 1 and ph_pic_output_flag equal to 0. 8.3.1 Decoding Process for Picture Order Count When ph_poc_msb_cycle_val is not present [[ph_poc_msb_cycle_present_flag is equal to 0]] and the current picture is not a CLVSS picture, the variables prevPicOrderCntLsb and prevPicOrderCntMsb are derived as follows: Let prevTid0Pic be the previous picture in decoding order that has nuh_layer_id equal to the nuh_layer_id of the current picture, has TemporalId and ph_non_refpic_flag both equal to 0, and is not a RASL or RADL picture.The variable prevPicOrderCntLsb is set equal to ph_pic_order_cnt_lsb of prevTid0Pic.The variable prevPicOrderCntMsb is set equal to PicOrderCntMsb of prevTid0Pic. The variable PicOrderCntMsb of the current picture is derived as follows:If ph_poc_msb_cycle_val is present [[ph_poc_msb_cycle_present_flag is equal to 1]], PicOrderCntMsb is set equal to ph_poc_msb_cycle_val*MaxPicOrderCntLsb.Otherwise (ph_poc_msb_cycle_val is not present [[ph_poc_msb_cycle_present_flag is equal to 0]]), if the current picture is a CLVSS picture, PicOrderCntMsb is set equal to 0. . . .
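For readability, the following Python sketch restates the derivation above with the modified condition, keying on whether ph_poc_msb_cycle_val is present; the wraparound branches that are elided above are filled in with the usual HEVC/VVC-style LSB comparison, which is an assumption of this sketch rather than quoted specification text, and the function name and defaults are likewise illustrative.

from typing import Optional

def derive_poc(poc_lsb: int, max_poc_lsb: int,
               poc_msb_cycle_val: Optional[int],
               is_clvss: bool,
               prev_poc_lsb: int = 0, prev_poc_msb: int = 0) -> int:
    """Simplified PicOrderCntVal derivation for one picture."""
    if poc_msb_cycle_val is not None:
        # ph_poc_msb_cycle_val is present: the MSB cycle is signalled explicitly.
        poc_msb = poc_msb_cycle_val * max_poc_lsb
    elif is_clvss:
        # ph_poc_msb_cycle_val is not present and the picture starts a CLVS.
        poc_msb = 0
    elif poc_lsb < prev_poc_lsb and prev_poc_lsb - poc_lsb >= max_poc_lsb // 2:
        poc_msb = prev_poc_msb + max_poc_lsb    # LSB wrapped around upwards
    elif poc_lsb > prev_poc_lsb and poc_lsb - prev_poc_lsb > max_poc_lsb // 2:
        poc_msb = prev_poc_msb - max_poc_lsb    # LSB wrapped around downwards
    else:
        poc_msb = prev_poc_msb
    return poc_msb + poc_lsb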
7.4.3.3 Sequence Parameter Set RBSP Semantics . . . sps_gdr_enabled_flag equal to 1 specifies that GDR pictures are enabled and may be present in the CLVS. sps_gdr_enabled_flag equal to 0 specifies that GDR pictures are disabled and not present in the CLVS. . . . 7.4.3.10 End of Sequence RBSP Semantics . . . FIG.1is a block diagram showing an example video processing system1000in which various embodiments disclosed herein may be implemented. Various embodiments may include some or all of the components of the system1000. The system1000may include input1002for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8- or 10-bit multi-component pixel values, or may be in a compressed or encoded format. The input1002may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON), etc., and wireless interfaces such as Wi-Fi or cellular interfaces. The system1000may include a coding component1004that may implement the various coding or encoding methods described in the present disclosure. The coding component1004may reduce the average bitrate of video from the input1002to the output of the coding component1004to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component1004may be either stored, or transmitted via a communication connection, as represented by the component1006. The stored or communicated bitstream (or coded) representation of the video received at the input1002may be used by the component1008for generating pixel values or displayable video that is sent to a display interface1010. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder. Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or DisplayPort, and so on. Examples of storage interfaces include serial advanced technology attachment (SATA), peripheral component interface (PCI), integrated drive electronics (IDE) interface, and the like. The embodiments described in the present disclosure may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display. FIG.2is a block diagram of a video processing apparatus2000. The apparatus2000may be used to implement one or more of the methods described herein. The apparatus2000may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus2000may include one or more processors2002, one or more memories2004and video processing hardware2006. The processor(s)2002may be configured to implement one or more methods described in the present disclosure (e.g., inFIGS.6-9). The memory (memories)2004may be used for storing data and code used for implementing the methods and embodiments described herein. The video processing hardware2006may be used to implement, in hardware circuitry, some embodiments described in the present disclosure.
In some embodiments, the hardware2006may be partly or entirely in the one or more processors2002, e.g., a graphics processor. FIG.3is a block diagram that illustrates an example video coding system100that may utilize the embodiments of this disclosure. As shown inFIG.3, video coding system100may include a source device110and a destination device120. Source device110generates encoded video data, and may be referred to as a video encoding device. Destination device120may decode the encoded video data generated by source device110, and may be referred to as a video decoding device. Source device110may include a video source112, a video encoder114, and an input/output (I/O) interface116. Video source112may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources. The video data may comprise one or more pictures. Video encoder114encodes the video data from video source112to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. The coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. I/O interface116may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to destination device120via I/O interface116through network130a. The encoded video data may also be stored onto a storage medium/server130bfor access by destination device120. Destination device120may include an I/O interface126, a video decoder124, and a display device122. I/O interface126may include a receiver and/or a modem. I/O interface126may acquire encoded video data from the source device110or the storage medium/server130b. Video decoder124may decode the encoded video data. Display device122may display the decoded video data to a user. Display device122may be integrated with the destination device120, or may be external to destination device120, which may be configured to interface with an external display device. Video encoder114and video decoder124may operate according to a video compression standard, such as the HEVC standard, VVC standard, and other current and/or further standards. FIG.4is a block diagram illustrating an example of video encoder200, which may be video encoder114in the system100illustrated inFIG.3. Video encoder200may be configured to perform any or all of the embodiments of this disclosure. In the example ofFIG.4, video encoder200includes a plurality of functional components. The embodiments described in this disclosure may be shared among the various components of video encoder200. In some examples, a processor may be configured to perform any or all of the embodiments described in this disclosure. The functional components of video encoder200may include a partition unit201, a prediction unit202, which may include a mode select unit203, a motion estimation unit204, a motion compensation unit205, and an intra prediction unit206; a residual generation unit207; a transform unit208; a quantization unit209; an inverse quantization unit210; an inverse transform unit211; a reconstruction unit212; a buffer213; and an entropy encoding unit214. In other examples, video encoder200may include more, fewer, or different functional components.
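As a purely illustrative summary of how the listed components cooperate, the Python stub below wires one block through the forward path; the callables passed in are placeholders invented for this sketch and are not part of the described encoder.

def encode_block(block, predict, transform, quantize, entropy_encode):
    """Toy forward path for one video block, mirroring units 201-214 above."""
    prediction = predict(block)                              # prediction unit (202-206)
    residual = [s - p for s, p in zip(block, prediction)]    # residual generation (207)
    coefficients = transform(residual)                       # transform (208)
    levels = quantize(coefficients)                          # quantization (209)
    bits = entropy_encode(levels)                            # entropy encoding (214)
    # The inverse path (units 210-213) would dequantize and inverse-transform
    # the levels and add back the prediction to refill the reference buffer.
    return bits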
In an example, prediction unit202may include an intra block copy (IBC) unit. The IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located. Furthermore, some components, such as motion estimation unit204and motion compensation unit205may be highly integrated, but are represented in the example ofFIG.4separately for purposes of explanation. Partition unit201may partition a picture into one or more video blocks. Video encoder200and video decoder300may support various video block sizes. Mode select unit203may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit207to generate residual block data and to a reconstruction unit212to reconstruct the encoded block for use as a reference picture. In some examples, mode select unit203may select a combination of intra and inter prediction (CIIP) mode in which the prediction is based on an inter prediction signal and an intra prediction signal. Mode select unit203may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter prediction. To perform inter prediction on a current video block, motion estimation unit204may generate motion information for the current video block by comparing one or more reference frames from buffer213to the current video block. Motion compensation unit205may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from buffer213other than the picture associated with the current video block. Motion estimation unit204and motion compensation unit205may perform different operations for a current video block, for example, depending on whether the current video block is in an I slice, a P slice, or a B slice. In some examples, motion estimation unit204may perform uni-directional prediction for the current video block, and motion estimation unit204may search reference pictures of list 0 or list 1 for a reference video block for the current video block. Motion estimation unit204may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. Motion estimation unit204may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. Motion compensation unit205may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block. In other examples, motion estimation unit204may perform bi-directional prediction for the current video block, motion estimation unit204may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. Motion estimation unit204may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. 
Motion estimation unit204may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. Motion compensation unit205may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block. In some examples, motion estimation unit204may output a full set of motion information for decoding processing of a decoder. In some examples, motion estimation unit204may not output a full set of motion information for the current video. Rather, motion estimation unit204may signal the motion information of the current video block with reference to the motion information of another video block. For example, motion estimation unit204may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block. In one example, motion estimation unit204may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder300that the current video block has the same motion information as another video block. In another example, motion estimation unit204may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD). The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. The video decoder300may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block. As discussed above, video encoder200may predictively signal the motion vector. Two examples of predictive signalling techniques that may be implemented by video encoder200include advanced motion vector prediction (AMVP) and merge mode signalling. Intra prediction unit206may perform intra prediction on the current video block. When intra prediction unit206performs intra prediction on the current video block, intra prediction unit206may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a predicted video block and various syntax elements. Residual generation unit207may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block(s) of the current video block from the current video block. The residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block. In other examples, there may be no residual data for the current video block, for example in a skip mode, and residual generation unit207may not perform the subtracting operation. Transform processing unit208may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block. After transform processing unit208generates a transform coefficient video block associated with the current video block, quantization unit209may quantize the transform coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block.
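To make the quantization step concrete (and the inverse operations described next), here is a simplified scalar quantizer in Python; the step-size formula is only the common rule of thumb that the step roughly doubles every 6 QP, not the normative VVC scaling process, and the values used are illustrative.

def quantize(coeffs, qp):
    step = 2 ** ((qp - 4) / 6)          # assumed AVC/HEVC/VVC-style QP-to-step mapping
    return [round(c / step) for c in coeffs]

def dequantize(levels, qp):
    step = 2 ** ((qp - 4) / 6)
    return [level * step for level in levels]

coeffs = [100.0, -37.5, 12.0, 0.4]
levels = quantize(coeffs, qp=27)
recon = dequantize(levels, qp=27)       # reconstruction error grows as QP increases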
Inverse quantization unit210and inverse transform unit211may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block. Reconstruction unit212may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit202to produce a reconstructed video block associated with the current block for storage in the buffer213. After reconstruction unit212reconstructs the video block, a loop filtering operation may be performed to reduce video blocking artifacts in the video block. Entropy encoding unit214may receive data from other functional components of the video encoder200. When entropy encoding unit214receives the data, entropy encoding unit214may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data. FIG.5is a block diagram illustrating an example of video decoder300which may be video decoder124in the system100illustrated inFIG.3. The video decoder300may be configured to perform any or all of the embodiments of this disclosure. In the example ofFIG.5, the video decoder300includes a plurality of functional components. The embodiments described in this disclosure may be shared among the various components of the video decoder300. In some examples, a processor may be configured to perform any or all of the embodiments described in this disclosure. In the example ofFIG.5, video decoder300includes an entropy decoding unit301, a motion compensation unit302, an intra prediction unit303, an inverse quantization unit304, an inverse transform unit305, a reconstruction unit306, and a buffer307. Video decoder300may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to video encoder200(FIG.4). Entropy decoding unit301may retrieve an encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). Entropy decoding unit301may decode the entropy coded video data, and from the entropy decoded video data, motion compensation unit302may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information. Motion compensation unit302may, for example, determine such information by performing the AMVP and merge mode. Motion compensation unit302may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in the syntax elements. Motion compensation unit302may use interpolation filters as used by video encoder200during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit302may determine the interpolation filters used by video encoder200according to received syntax information and use the interpolation filters to produce predictive blocks.
Motion compensation unit302may use some of the syntax information to determine sizes of blocks used to encode frame(s) and/or slice(s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence. Intra prediction unit303may use intra prediction modes for example received in the bitstream to form a prediction block from spatially adjacent blocks. Inverse quantization unit304inverse quantizes, i.e., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit301. Inverse transform unit305applies an inverse transform. Reconstruction unit306may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit302or intra-prediction unit303to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in buffer307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device. FIGS.6-11show example methods that can implement the embodiments described above in and in relation to, for example,FIGS.1-5. FIG.6shows a flowchart for an example method600of video processing. The method600includes, at operation610, performing a conversion between a video comprising one or more pictures and a bitstream of the video, the bitstream conforming to a format rule that specifies a constraint on a value of a first syntax element that specifies whether a second syntax element is present in a picture header syntax structure of a current picture, and the second syntax element specifying a value of a picture order count (POC) most significant bit (MSB) cycle of the current picture. FIG.7shows a flowchart for an example method700of video processing. The method700includes, at operation710, performing a conversion between a video comprising one or more pictures and a bitstream of the video, the bitstream conforming to a format rule that specifies a derivation of a picture order count (POC) in an absence of a syntax element, and the syntax element specifying a value of the POC most significant bit (MSB) cycle of a current picture. FIG.8shows a flowchart for an example method800of video processing. The method800includes, at operation810, performing a conversion between a video and a bitstream of the video, the bitstream comprising access units, AUs, comprising pictures according to a rule that specifies that gradual decode refresh (GDR) pictures are disallowed in the bitstream in response to an output order of the AUs being different from a decoding order of the AUs. FIG.9shows a flowchart for an example method900of video processing. 
The method900includes, at operation910, performing a conversion between a video and a bitstream of the video, the bitstream comprising multiple layers in multiple access units, AUs, comprising one or more pictures according to a format rule that specifies that, responsive to an end of sequence (EOS) network abstraction layer (NAL) unit for a first layer being present in a first access unit (AU) in the bitstream, a subsequent picture of each of one or more higher layers of the first layer in an AU following the first AU in the bitstream is a coded layer video sequence start (CLVSS) picture. FIG.10shows a flowchart for an example method1000of video processing. The method1000includes, at operation1010, performing a conversion between a video and a bitstream of the video, the bitstream comprising multiple layers in multiple access units, AUs, comprising one or more pictures according to a format rule that specifies that, responsive to a first picture in a first access unit being a coded layer video sequence start (CLVSS) picture that is a clean random access (CRA) picture or a gradual decoding refresh (GDR) picture, a second picture is a CLVSS picture. FIG.11shows a flowchart for an example method1100of video processing. The method1100includes, at operation1110, performing a conversion between a video comprising one or more pictures and a bitstream of the video according to a rule that specifies that the bitstream comprises at least a first picture that is output, the first picture being in an output layer, the first picture comprising a syntax element equaling one, and the syntax element affecting a decoded picture output and a removal process associated with a hypothetical reference decoder (HRD). The following solutions show example embodiments discussed in the previous section (e.g., items 1-5). A listing of solutions preferred by some embodiments is provided next. A1. A method of video processing, comprising performing a conversion between a video comprising one or more pictures and a bitstream of the video, wherein the bitstream conforms to a format rule, wherein the format rule specifies a constraint on a value of a first syntax element that specifies whether a second syntax element is present in a picture header syntax structure of a current picture, and wherein the second syntax element specifies a value of a picture order count (POC) most significant bit (MSB) cycle of the current picture. A2. The method of solution A1, wherein the value of the first syntax element is equal to zero in response to a value of a flag equaling zero and an inter-layer reference picture (ILRP) entry being in a reference picture list of a slice of the current picture, and wherein the flag specifies whether an indexed layer uses inter-layer prediction. A3. The method of solution A2, wherein the reference picture list comprises a first reference picture list (RefPicList[0]) or a second reference picture list (RefPicList[1]). A4. The method of solution A2, wherein the value of the first syntax element equaling zero specifies that the second syntax element is not present in the picture header syntax structure. A5. The method of solution A2, wherein the value of the flag equaling zero specifies that the indexed layer is allowed to use the inter-layer prediction. A6. 
The method of solution A1, wherein the value of the first syntax element is equal to zero in response to a value of a flag equaling zero and a picture having (i) a first identifier that is equal to a second identifier that is in a current access unit (AU) in a reference layer of a current layer and (ii) a third identifier that is less than or equal to a threshold, wherein the flag specifies whether an indexed layer uses inter-layer prediction, wherein the first identifier specifies a layer to which a video coding layer (VCL) network abstraction layer (NAL) unit belongs, wherein the second identifier specifies a layer to which a reference picture belongs, wherein the third identifier is a temporal identifier, and wherein the threshold is based on a second syntax element that specifies whether pictures in an indexed layer that are neither intra random access point (IRAP) pictures nor gradual decoding refresh (GDR) pictures are used as an inter-layer reference picture (ILRP) for decoding a picture in the indexed layer. A7. The method of solution A6, wherein the first identifier is nuh_layer_id, the second identifier is refpicLayerId, and the third identifier is TemporalId, and wherein the second syntax element is vps_max_tid_il_ref_pics_plus1. A8. The method of solution A1, wherein the first syntax element is never required to be zero. A9. The method of any of solutions A2 to A8, wherein the first syntax element is ph_poc_msb_cycle_present_flag, the flag is vps_independent_layer_flag, and wherein the second syntax element is ph_poc_msb_cycle_val. A10. A method of video processing, comprising performing a conversion between a video comprising one or more pictures and a bitstream of the video, wherein the bitstream conforms to a format rule, wherein the format rule specifies a derivation of a picture order count (POC) in an absence of a syntax element, and wherein the syntax element specifies a value of the POC most significant bit (MSB) cycle of a current picture. A11. The method of solution A10, wherein the syntax element is ph_poc_msb_cycle_val. A12. A method of video processing, comprising performing a conversion between a video and a bitstream of the video, wherein the bitstream comprises access units (AUs) comprising pictures according to a rule, wherein the rule specifies that gradual decode refresh (GDR) pictures are disallowed in the bitstream in response to an output order of the AUs being different from a decoding order of the AUs. A13. The method of solution A12, wherein an output order and a decoding order of all pictures in a coded layer video sequence (CLVS) are identical in response to a flag being equal to one, and wherein the flag specifies whether GDR pictures are enabled. A14. The method of solution A12, wherein the output order and the decoding order of the AUs are identical in response to a flag being equal to one for a sequence parameter set (SPS) referenced by a picture in a coded video sequence (CVS), and wherein the flag specifies whether GDR pictures are enabled. A15. The method of solution A12, wherein the output order and the decoding order of the AUs are identical in response to a flag being equal to one for a sequence parameter set (SPS) referenced by a picture, and wherein the flag specifies whether GDR pictures are enabled. A16.
The method of solution A12, wherein the output order and the decoding order of the AUs are identical in response to a flag being equal to one for a sequence parameter set (SPS) in the bitstream, and wherein the flag specifies whether GDR pictures are enabled. A17. The method of any of solutions A13 to A16, wherein the flag is sps_gdr_enabled_flag. Another listing of solutions preferred by some embodiments is provided next. B1. A method of video processing, comprising performing a conversion between a video and a bitstream of the video, wherein the bitstream comprises multiple layers in multiple access units (AUs) comprising one or more pictures according to a format rule, wherein the format rule specifies that, responsive to an end of sequence (EOS) network abstraction layer (NAL) unit for a first layer being present in a first access unit (AU) in the bitstream, a subsequent picture of each of one or more higher layers of the first layer in an AU following the first AU in the bitstream is a coded layer video sequence start (CLVSS) picture. B2. The method of solution B1, wherein the format rule further specifies that a first picture in a decoding order for a second layer, which is present in a coded video sequence (CVS) that includes the first layer, that uses the first layer as a reference layer is a CLVSS picture. B3. The method of solution B1, wherein the one or more higher layers comprises all or certain higher layers. B4. The method of solution B1, wherein the format rule further specifies that a first picture in a decoding order for a second layer, which is present in a coded video sequence (CVS) that includes the first layer, that is a higher layer than the first layer is a CLVSS picture. B5. The method of solution B1, wherein the format rule further specifies that the EOS NAL unit is present in each layer of a coded video sequence (CVS) in the bitstream. B6. The method of solution B1, wherein the format rule further specifies that a second layer, which is present in a coded video sequence (CVS) that includes the first layer, that is a higher layer than the first layer comprises the EOS NAL unit. B7. The method of solution B1, wherein the format rule further specifies that a second layer, which is present in a coded video sequence (CVS) that includes the first layer, that uses the first layer as a reference layer comprises the EOS NAL unit. B8. A method of video processing, comprising performing a conversion between a video and a bitstream of the video, wherein the bitstream comprises multiple layers in multiple access units (AUs) comprising one or more pictures according to a format rule, wherein the format rule specifies that, responsive to a first picture in a first access unit being a coded layer video sequence start (CLVSS) picture that is a clean random access (CRA) picture or a gradual decoding refresh (GDR) picture, a second picture is a CLVSS picture. B9. The method of solution B8, wherein the second picture is a picture for a layer in the first access unit. B10. The method of solution B8, wherein a first layer comprises the first picture, and wherein the second picture is a picture in a second layer that is higher than the first layer. B11. The method of solution B8, wherein a first layer comprises the first picture, and wherein the second picture is a picture in a second layer that uses the first layer as a reference layer. B12. 
The method of solution B8, wherein the second picture is a first picture in a decoding order in a second access unit that follows the first access unit. B13. The method of solution B8, wherein the second picture is any picture in the first access unit. B14. The method of any of solutions B1 to B13, wherein the CLVSS picture is a coded picture that is an (IRAP) picture or a (GDR) picture with a flag that is equal to one, wherein the flag equaling one indicates that an associated picture is not output by a decoder upon a determination that the associated picture comprises references to pictures that are not present in the bitstream. Yet another listing of solutions preferred by some embodiments is provided next. C1. A method of video processing, comprising performing a conversion between a video comprising one or more pictures and a bitstream of the video according to a rule, wherein the rule specifies that the bitstream comprises at least a first picture that is output, wherein the first picture is in an output layer, wherein the first picture comprises a syntax element equaling one, and wherein the syntax element affects a decoded picture output and a removal process associated with a hypothetical reference decoder (HRD). C2. The method of solution C1, wherein the rule applies to all profiles and the bitstream is allowed to conform to any profile. C3. The method of solution C2, wherein the syntax element is ph_pic_output_flag. C4. The method of solution C2, wherein the profile is a Main 10 Still Picture profile or a Main 4:4:4 10 Still Picture profile. The following listing of solutions applies to each of the solutions enumerated above. O1. The method of any of the preceding solutions, wherein the conversion comprises decoding the video from the bitstream. O2. The method of any of the preceding solutions, wherein the conversion comprises encoding the video into the bitstream. O3. A method of storing a bitstream representing a video to a computer-readable recording medium, comprising generating the bitstream from the video according to a method described in any one or more of the preceding solutions, and storing the bitstream in the computer-readable recording medium. O4. A video processing apparatus comprising a processor configured to implement a method recited in any one or more of claims A1 to B14. O5. A computer-readable medium having instructions stored thereon, the instructions, when executed, causing a processor to implement a method recited in one or more of the preceding solutions. O6. A computer readable medium that stores the bitstream generated according to any one or more of the preceding solutions. O7. A video processing apparatus for storing a bitstream, wherein the video processing apparatus is configured to implement a method recited in any one or more of the preceding solutions. Yet another listing of solutions preferred by some embodiments is provided next. P1. A video processing method, comprising performing a conversion between a video comprising one or more pictures and a coded representation of the video, wherein the coded representation conforms to a format rule, wherein the format rule specifies a constraint on a value of a syntax element indicative of presence of a most significant bit cycle for a picture order count in a picture of the video. P2. 
The method of solution P1, wherein the format rule specifies that value of syntax element is 0 when an independent value flag is set to a zero value and at least one slice of the picture uses an inter-layer reference picture in a reference list thereof. P3. The method of any of solutions P1 to P2, wherein the format rule specifies that a zero value of the syntax element is indicated by not including the syntax element in the coded representation. P4. A video processing method, comprising performing a conversion between a video comprising one or more pictures and a coded representation of the video, wherein the conversion conforms to a rule that specifies that gradual decode refresh pictures are disallowed in case that an output order of an access unit is different from a decoding order of the access unit. P5. A video processing method, comprising performing a conversion between a video comprising video layers comprising one or more video pictures and a coded representation of the video, wherein the coded representation conforms to a format rule, wherein the format rule specifies that in case that a first network abstraction layer unit (NAL) indicating an end of a video sequence is present in an access unit of a layer, then next pictures of each of higher layers in the coded representation must have a coded layer video sequence start type. P6. The method of solution P5, wherein the format rule further specifies that a first picture in decoding order for a second layer that uses the layer as a reference layer shall have the coded layer video sequence start type. P7. The method of any of solutions P1 to P5, wherein the performing the conversion comprises encoding the video to generate the coded representation. P8. The method of any of solutions P1 to P5, wherein the performing the conversion comprises parsing and decoding the coded representation to generate the video. P9. A video decoding apparatus comprising a processor configured to implement a method recited in one or more of solutions P1 to P8. P10. A video encoding apparatus comprising a processor configured to implement a method recited in one or more of solutions P1 to P8. P11. A computer program product having computer code stored thereon, the code, when executed by a processor, causes the processor to implement a method recited in any of solutions P1 to P8. In the present disclosure, the term “video processing” may refer to video encoding, video decoding, video compression or video decompression. For example, video compression algorithms may be applied during conversion from pixel representation of a video to a corresponding bitstream representation or vice versa. The bitstream representation (or simply, the bitstream) of a current video block may, for example, correspond to bits that are either co-located or spread in different places within the bitstream, as is defined by the syntax. For example, a macroblock may be encoded in terms of transformed and coded error residual values and also using bits in headers and other fields in the bitstream. The disclosed and other solutions, examples, embodiments, modules and the functional operations described in this disclosure can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this disclosure and their structural equivalents, or in combinations of one or more of them. 
The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this disclosure can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. 
Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and compact disc, read-only memory (CD-ROM) and digital versatile disc, read-only memory (DVD-ROM) disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. While the present disclosure contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of the present disclosure. Certain features that are described in the present disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in the present disclosure should not be understood as requiring such separation in all embodiments. Only a few embodiments and examples are described and other embodiments, enhancements and variations can be made based on what is described and illustrated in the present disclosure.
DESCRIPTION OF EMBODIMENTS (Underlying Knowledge Forming Basis of the Present Disclosure) Regarding the image coding method and the image decoding method described in the Background section, the inventors have found the following problem. First, an image coding apparatus and an image decoding apparatus in HEVC will be described. A video signal inputted to an image coding apparatus is a sequence of images called frames (pictures). Each frame includes a two-dimensional matrix of pixels. All the above-mentioned standards based on hybrid video coding include partitioning each individual video frame into smaller blocks including a plurality of pixels. The size of the blocks may vary, for instance, in accordance with the content of the image. The coding method may typically be varied on a per block basis. The largest possible size for such a block, for instance in HEVC, is 64×64 pixels. It is called the largest coding unit (LCU). The LCU can be recursively partitioned into 4 CUs. In H.264/MPEG-4 AVC, a macroblock (usually denoting a block of 16×16 pixels) was the basic image element for which the coding is performed. The macroblock may be further divided into smaller subblocks. The coding steps included in the coding method and/or the decoding steps included in the decoding method are performed on a per subblock basis. [1-1. Hybrid Video Coding] The following briefly describes hybrid video coding. Typically, the coding steps of a hybrid video coding include a spatial and/or a temporal prediction (space prediction and/or time prediction). Accordingly, each block to be coded is first predicted using either the blocks in its spatial neighborhood or blocks in its temporal neighborhood, that is, from previously coded video frames. A residual block, which is the difference between the block to be coded and its prediction result, is then calculated. Next, the residual block is transformed from the spatial (pixel) domain into a frequency domain. The transformation aims at reducing the correlation of the input block. Furthermore, the transform coefficients obtained from the transformation are quantized. This quantization is a lossy (irreversible) compression. Usually, the compressed transform coefficient values are further losslessly compressed by an entropy coding. In addition, auxiliary information necessary for reconstruction of the coded video signal is coded and provided together with the coded video signal. This is, for example, information about the spatial prediction, the temporal prediction, and/or the quantization. [1-2. Configuration of Image Coding Apparatus] FIG. 1 is an example of a typical H.264/MPEG-4 AVC and/or HEVC image coding apparatus (encoder 100). As shown in FIG. 1, the encoder 100 includes a subtractor 105, a transformation unit 110, a quantization unit 120, an inverse transformation unit 130, an adder 140, a deblocking filter 150, an adaptive loop filter 160, a frame memory 170, a predicting unit 180, and an entropy coder 190. The predicting unit 180 derives a prediction signal s2 by temporal prediction or spatial prediction. The type of prediction used in the predicting unit 180 may be varied on a per frame basis or on a per block basis. Temporal prediction is called inter prediction, and spatial prediction is called intra prediction. The coding using a prediction signal s2 derived by temporal prediction is called inter coding, and the coding using a prediction signal s2 derived by spatial prediction is called intra coding.
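By way of a simplified, non-limiting illustration of the hybrid coding steps outlined in section 1-1 above, the following C sketch computes the prediction error of one small block and applies uniform quantization followed by reconstruction. The block contents, the block size N, and the quantization step are arbitrary placeholders, and the transform step is omitted for brevity; the sketch does not correspond to any particular standard.

#include <stdio.h>

#define N 4  /* block size used in this toy example */

/* Illustration of the hybrid-coding steps described above:
 * residual = block - prediction, (transform omitted for brevity),
 * uniform quantization, and reconstruction at the encoder side. */
int main(void) {
    int block[N][N]      = {{52,55,61,66},{70,61,64,73},{63,59,55,90},{67,61,68,104}};
    int prediction[N][N] = {{50,54,60,64},{68,60,62,70},{60,58,54,88},{64,60,66,100}};
    int qstep = 4;  /* quantization step; a larger step means more loss, fewer bits */

    for (int y = 0; y < N; y++) {
        for (int x = 0; x < N; x++) {
            int residual = block[y][x] - prediction[y][x];          /* prediction error e  */
            int level    = (residual >= 0 ? residual + qstep / 2    /* quantized level     */
                                          : residual - qstep / 2) / qstep;
            int recon    = prediction[y][x] + level * qstep;        /* reconstructed pixel */
            printf("e=%3d level=%3d recon=%3d\n", residual, level, recon);
        }
    }
    return 0;
}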
In the derivation of a prediction signal using temporal prediction, coded images stored in a memory are used. In the derivation of a prediction signal using spatial prediction, a boundary pixel value of a coded or decoded neighboring block stored in a memory is used. The number of prediction directions in intra prediction depends on the size of the coding unit (CU). It should be noted that details of prediction will be described later. The subtractor 105 first determines a difference (prediction error signal e) between a current block to be coded of an input image (=input signal s1) and a corresponding prediction block (=prediction signal s2), which is used for the prediction of the current block to be coded. It should be noted that the prediction error signal e is also called a prediction residual signal. The transformation unit 110 transforms a prediction error signal e into coefficients. Generally, the transformation unit 110 uses an orthogonal transformation such as a two-dimensional discrete cosine transformation (DCT) or an integer version thereof. The orthogonal transformation can reduce the correlation of the input signal s1 (the video signal before coding) efficiently. After the transformation, lower frequency components are usually more important for image quality than high frequency components, so that more bits can be spent for coding the low frequency components than the high frequency components. The quantization unit 120 quantizes the coefficients and derives quantized coefficients. The entropy coder 190 performs entropy coding on the quantized coefficients. The quantized coefficients are losslessly compressed by the entropy coding. Furthermore, by the entropy coding, the data volume stored in the memory and the data volume (bitstream) to be transmitted can be further reduced. The entropy coding is performed mainly by applying coding using variable length codewords. The length of a codeword is chosen based on the probability of its occurrence. The entropy coder 190 transforms the two-dimensional matrix of quantized coefficients into a one-dimensional array. Typically, the entropy coder 190 performs this conversion through a so-called zigzag scanning. The zigzag scanning starts with the DC coefficient in the upper left corner of the two-dimensional array and scans the two-dimensional array in a predetermined sequence ending with an AC coefficient in the lower right corner. The energy is typically concentrated in the upper left part of the two-dimensional matrix of coefficients. Generally, coefficients located in the upper left corner are low frequency component coefficients, and coefficients located in the lower right corner are high frequency component coefficients. Therefore, the zigzag scanning results in an array where usually the last values are a consecutive run of zeros or ones. This allows for efficient encoding using run-length codes as a part of, or before, the actual entropy coding. H.264/MPEG-4 AVC and HEVC use different types of entropy coding. Although some syntax elements are coded with fixed length, most of the syntax elements are coded with variable length codes. In particular, context adaptive variable length codes (CAVLC) are used for coding of prediction error signals (prediction residual signals). Generally, various other integer codes different from context adaptive variable length codes are used for coding of other syntax elements. However, context adaptive binary arithmetic coding (CABAC) may be used.
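As a purely illustrative sketch of the zigzag scanning described above (assuming an 8×8 block; the block size and fill values are placeholders), the following C function reorders a two-dimensional coefficient block into a one-dimensional array by walking the anti-diagonals in alternating directions, starting at the DC coefficient in the upper left corner.

#include <stdio.h>

#define N 8  /* block size assumed for this illustration */

/* Reorder a two-dimensional coefficient block into a one-dimensional
 * array by scanning anti-diagonals, alternating direction (zigzag). */
static void zigzag_scan(const int block[N][N], int out[N * N]) {
    int idx = 0;
    for (int d = 0; d < 2 * N - 1; d++) {            /* d = x + y of the diagonal   */
        if (d % 2 == 0) {                            /* even diagonal: bottom-left  */
            for (int y = (d < N ? d : N - 1); y >= 0 && d - y < N; y--)
                out[idx++] = block[y][d - y];        /*   to top-right              */
        } else {                                     /* odd diagonal: top-right     */
            for (int x = (d < N ? d : N - 1); x >= 0 && d - x < N; x--)
                out[idx++] = block[d - x][x];        /*   to bottom-left            */
        }
    }
}

int main(void) {
    int block[N][N], out[N * N];
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            block[y][x] = y * N + x;                 /* fill with raster indices    */
    zigzag_scan(block, out);
    for (int i = 0; i < N * N; i++)
        printf("%d ", out[i]);                       /* prints the zigzag order     */
    printf("\n");
    return 0;
}

In such an order, the high frequency coefficients, which are most often quantized to zero, end up at the tail of the one-dimensional array, which is what makes the subsequent run-length coding efficient.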
Variable length codes allows for lossless compression of the coded bitstream. However, since the codewords have variable length, decoding must be performed sequentially on the codewords. In other words, it is not possible to code or decode codewords before encoding or decoding the previous codewords without restarting (initializing) the entropy coding or without indicating separately a position of the codeword (starting point) to start with when decoding. Arithmetic coding codes a sequence of bits into a single codeword based on a predetermined probability model. The predetermined probability model is determined according to the content of the video sequence in case of CABAC. Arithmetic coding, and thus also CABAC, are more efficient when the length of the bitstream to be coded is larger. In other words, CABAC applied to sequences of bits is efficient for larger blocks. At the beginning of each sequence, CABAC is restarted. In other words, at the beginning of each video sequence, its probability model is initialized with some predefined or predetermined values. The entropy coder109transmits, to a decoder side, a bitstream including coded quantized coefficients (coded video signals) and coded auxiliary information. The H.264/MPEG-4 and H.264/MPEG-4 AVC as well as HEVC include two functional layers, a Video Coding Layer (VCL) and a Network Abstraction Layer (NAL). The VCL provides the coding functionality as described above. The NAL encapsulates information elements into standardized units called NAL units according to their further application such as transmission over a channel or storing in a storage device. The information elements encapsulated by the NAL are, for instance, (1) the encoded prediction error signal (compressed video data) or (2) other information necessary for the decoding of the video signal such as type of prediction, quantization parameter, motion vectors, etc. There are VCL NAL units containing the compressed video data and the related information, as well as non-VCL units encapsulating additional data such as parameter set relating to an entire video sequence, or a Supplemental Enhancement Information (SEI) providing additional information that can be used to improve the decoding performance. Some non-VCL NAL units include, for instance, parameter sets. A parameter set is a set of parameters relating to coding and decoding of a certain portion of the video sequence. For instance, there is a sequence parameter set (SPS) which includes parameter relevant for the coding and decoding of the entire sequence of pictures. In particular, sequence parameter set is a syntax structure including syntax elements. In particular, syntax elements are applied to zero or more entire coded video sequences as determined by the content of a seq_parameter_set_id. The seq_parameter_set_id is a syntax element included in the picture parameter set (described below) referred to by the pic_parameter_set_id. The pic_parameter_set_id is syntax element included in each slice header. The picture parameter set (PPS) is a parameter set which defines parameters applied to coding and decoding of a picture of picture sequence (video sequence). In particular, the PPS is a syntax structure including syntax elements. The syntax elements are applied to zero or more entire coded pictures as determined by the pic_parameter_set_id which is a syntax element found in each slice header. Accordingly, it is simpler to keep track of an SPS than of the PPS. 
This is because PPS changes for every picture, whereas the SPS is constant for the entire video sequence which may be even minutes or hours long. The encoder100includes a reconstruction unit (so called decoding unit) which derives a reconstructed signal (so called a decoded signal) s3. By the reconstruction unit, a reconstructed image obtained by reconstructing (decoding) the coded image is generated and is stored in the frame memory170. The reconstruction unit includes the inverse transformation unit130, the adder140, the deblocking filter150, and the adaptive loop filter160. The inverse transformation unit130, according to the above described coding steps, performs inverse quantization and inverse transformation. It should be noted the prediction error signal e′ derived from the inverse transformation unit130is different from the prediction error signal e due to the quantization error, called also quantization noise. The adder140derives a reconstructed signal s′ by adding a reconstructed prediction error signal e′ reconstructed by the inverse transformation unit130to a prediction signal s2. The deblocking filter150performs deblocking filter processing to reduce quantization noise which superimposes the reconstructed signal s′ due to quantization. Here, since the above described coding steps are performed on a per block basis, there is a case where a block boundary is visible when noise is superimposed (blocking characteristics of noise). The superimposed noise is called a blocking noise. In particular, when strong quantization is performed by the quantization unit120, there are more visible block boundaries in the reconstructed image (decoded image). Such blocking noise has a negative effect upon human visual perception, which means that a person feels that image quality is deteriorated. In order to reduce the blocking noise, the deblocking filter150performs deblocking filter processing on every reconstructed signal s′ (reconstructed block). For instance, in the deblocking filter processing of H.264/MPEG-4 AVC, for each area, a filter processing suitable for the area is selected. In the case of a high degree of blocking noise, a strong (narrow-band) low pass filter is applied, whereas for a low degree of blocking noise, a weaker (broad-band) low pass filter is applied. The strength of the low pass filter is determined by the prediction signal e2and by the prediction error signal e′. Deblocking filter processing generally smoothes the block edges. This leads to an improved subjective image quality of the decoded signals. The filtered image is used for the motion compensated prediction of the next image. Since the filter processing also reduces the prediction errors, coding efficiency can be improved. The adaptive loop filter160applies sample adaptive offset (SAO) processing and/or adaptive loop filter (ALF) processing to the reconstructed image s″ after the deblocking filter processing in the deblocking filter150, to derive a reconstructed signal (decoded signal) s3. The deblocking filter processing in the deblocking filter150is aimed at improving the subjective quality. Meanwhile, the ALF processing and SAO processing in the adaptive loop filter160are aimed at improving the pixel-wise fidelity (“objective” quality). The SAO processing is used to add an offset value to a pixel value for each pixel using a pixel value of the immediately neighboring pixel. The ALF processing is used to compensate image distortion caused by the compression. 
Typically, the filter used in the ALF processing is a Wiener filter with filter coefficients determined such that the mean square error (MSE) between the reconstructed signal s′ and the input signal s1is minimized. The filter coefficients of the ALF processing is calculated and transmitted on a frame-by-frame basis, for example. The ALF processing may be applied to the entire frame (image) or to local areas (blocks). Auxiliary information indicating which areas are to be filtered may be transmitted on a block-by-block basis, a frame-by-frame basis, or a quadtree-by-quadtree basis. The frame memory (frame buffer)170stores part of the coded and reconstructed (decoded) image (reconstructed signal s3). The stored reconstructed image is used for decoding an inter-coded block. The predicting unit180derives a prediction signal s2using the (same) signal that can be used at both the encoder side and the decoder side, in order to maintain compatibility between the encoder side and the decoder side. The signal that can be used at both the encoder side and the decoder side is a reconstructed signal s3(video signal after filter processing by the adaptive loop filter160) on the encoder side that is coded and then reconstructed (decoded), and a reconstructed signal s4(video signal after the filter processing by the adaptive loop filter inFIG.2) on the decoder side that is decoded from a bitstream. The predicting unit180, when generating a prediction signal s2by inter coding, predicts using motion compensation prediction. A motion estimator of the predicting unit180(not illustrated) finds a best matching block for the current block from the blocks within the previously coded and reconstructed video frames. The best matching block then becomes a prediction signal. The relative displacement (motion) between the current block and its best matching block is then signalized as motion data included in the auxiliary information in the form of three-dimensional motion vectors. The signal is transmitted along with the coded video data. The three-dimension motion vector includes two spatial dimension motion vector and one temporal dimension motion vector. In order to optimize the prediction accuracy, motion vectors may be determined with a spatial sub-pixel resolution, for example, half pixel or quarter pixel resolution. A motion vector with spatial sub-pixel resolution may point to a spatial position within an already reconstructed frame where no real pixel value is available, that is, a sub-pixel position. Hence, spatial interpolation of such pixel values is needed in order to perform motion compensated prediction. This may be achieved by an interpolation filter (integrated within predicting unit180inFIG.1). [1-3. Configuration of Image Decoding Apparatus] A configuration of a decoder (image decoding apparatus) will be described with reference toFIG.2. FIG.2is a block diagram showing an example of a decoder200according to the H.264/MPEG-4 AVC or HEVC video coding standard. As shown inFIG.2, the decoder200includes an entropy decoder290, an inverse transformation unit230, an adder240, a deblocking filter250, an adaptive loop filter260, a frame memory270, and a predicting unit280. A bitstream inputted to the decoder200(encoded video signal) is first transmitted to the entropy decoder290. The entropy decoder290extracts the quantized coefficients coded from the bitstream and the coded auxiliary information, and decodes the coded quantized coefficients and the coded auxiliary information. 
The auxiliary information, as described above, includes information necessary for decoding, such as motion data (motion vectors) and the mode of prediction (type of prediction). The entropy decoder 290 transforms the decoded quantized coefficients in a one-dimensional array into those in a two-dimensional array by inverse scanning. The entropy decoder 290 inputs, to the inverse transformation unit 230, the quantized coefficients after being transformed into those in a two-dimensional array. The inverse transformation unit 230 performs inverse quantization and inverse transformation on the quantized coefficients transformed into those in a two-dimensional array, to derive a prediction error signal e′. The prediction error signal e′ corresponds to the differences obtained by subtracting the prediction signal from the signal inputted to the encoder in the case where no quantization noise is introduced and no error occurs. The predicting unit 280 derives a prediction signal s2 by temporal prediction or spatial prediction. The information such as the prediction type included in the auxiliary information is used in the case of intra prediction (spatial prediction). Moreover, the information such as the motion data included in the auxiliary information is used in the case of motion compensated prediction (inter prediction, temporal prediction). The adder 240 adds a prediction error signal e′ obtained from the inverse transformation unit 230 and a prediction signal s2 obtained from the predicting unit 280, to derive a reconstructed signal s′. The deblocking filter 250 performs deblocking filter processing on the reconstructed signal s′. The adaptive loop filter 260 applies the SAO processing and the ALF processing to the reconstructed signal s″ to which the deblocking filter processing is applied by the deblocking filter 250. A decoded signal s4 obtained from the application of the SAO processing and the ALF processing in the adaptive loop filter 260 is stored in the frame memory 270. The decoded signal s4 stored in the frame memory 270 is used, in the predicting unit 280, for predicting the next block to be decoded or the next image to be decoded. [1-4. Processing Efficiency] Generally, parallelization of processing is considered in order to enhance the processing efficiency of the coding processing and the decoding processing. Compared with H.264/MPEG-4 AVC, HEVC has a function for supporting high-level parallel processing (parallelization processing) of the coding and decoding. In HEVC, it is possible to divide a frame into slices, similarly to H.264/MPEG-4 AVC. Here, slices are groups of LCUs in the scan order. In H.264/MPEG-4 AVC, slices are independently decodable, and no spatial prediction is applied between the slices. Therefore, the parallel processing can be performed on a slice-by-slice basis. However, since slices have significantly large headers and there are no dependencies between the slices, the efficiency of the compression is reduced. Moreover, CABAC coding loses efficiency when applied to small data blocks. In order to enable a more efficient parallel processing, wavefront parallel processing (WPP) has been proposed. WPP keeps a constant dependency, which is different from the parallel processing in which each of the slices is independent. The following description will be made by referring to the case where a picture comprises LCUs disposed in a matrix and each LCU row comprises one slice (refer to FIG. 3).
In WPP, among the LCUs comprising the current LCU row32, as the CABAC probability model for resetting the CABAC state of the first LCU (head LCU), the CABAC probability model just after the processing on the second LCU of the previous LCU row31is completed is used. All inter-block dependencies are maintained. This allows for parallelization of decoding of the LCU rows. The timing for starting each LCU-row processing is delayed by two LCUs with respect to the previous one. The information about the starting points for starting the LCU row decoding are included in the slice header. The WPP is described in detail in Non Patent Literature 1. Another approach for improving the parallelization is called tiles. Accordingly, a frame (picture) is partitioned into tiles. Tiles are rectangular groups of LCUs. The boundaries between the tiles are set such that the entire picture is partitioned in matrix. Tiles are processed in the raster scan order. All dependencies are broken at the tile boundaries. The entropy coding such as CABAC is also reset at the beginning of each tile. Only the deblocking filter processing and the sample adaptive offset processing may be applied over the tile boundaries. Thus, tiles can be coded and decoded in parallel. Tiles are described in detail in Non Patent Literature 2 and Non Patent Literature 3. Moreover, in order to improve the concept of slices and make it suitable for parallelization rather than for error resilience which was the original purpose of slices in H.264/MPEG-4 AVC, the concept of dependent slices and entropy slices has been proposed. In other words, in HEVC, there are three types of slices supported: (1) normal slices; (2) entropy slices; and (3) dependent slices. The normal slices denote slices known already from H.264/MPEG-4 AVC. No spatial prediction is allowed between the normal slices. In other words, prediction over the slice boundaries is not allowed. This means that a normal slice is coded without referring to any other slice. In order to enable independent decoding of such slices, the CABAC is restarted at the beginning of each slice. When the slice to be processed is a normal slice, the restart of the CABAC includes end processing (terminate processing) arithmetic coding processing or arithmetic decoding processing in the precedent slice end, and processing of initializing the context table (probability table) to a default value at the beginning of the normal slice. Normal slices are used at the beginning of each frame. In other words, every frame has to start with a normal slice. A normal slice has a header including parameters necessary for decoding the slice data. The term “entropy slices” denotes slices in which spatial prediction is allowed between the parent slice and the entropy slice. The parsing of the parent slice and the entropy slice is independently performed. However, the parent slice is, for instance, be a normal slice just before the entropy slice. The parent slice is required for the reconstruction of the pixel values of the entropy slice. In order to enable independent parsing of the entropy slices, the CABAC is also restarted at the beginning of the slice. As the slice header of entropy slices, it is possible to use a slice header which is shorter than the slice header of the normal slice. The slice header of the entropy slices includes a subset of coding parameters with respect to the information transmitted within the header of a normal slice. 
The missing elements in the header of the entropy slice are copied from the header of the parent slice. When the slice to be processed is an entropy slice, the restart of the CABAC, similarly to the normal slice, includes end processing (terminate processing) at the end of the precedent slice, and processing of initializing the context table (probability table) to a default value at the beginning of the current slice. (3) The dependent slice is similar to an entropy slice, but is partially different in the processing in which the CABAC is restarted. When the slice to be processed is a dependent slice and WPP is not effective, the restart of the CABAC includes end processing in the precedent slice (terminate processing) and processing of initializing the context table to a state value of the end of the precedent slice. When the slice to be processed is a dependent slice and WPP is effective, the restart of the CABAC includes end processing in the precedent slice (terminate processing) and processing of initializing the context table, at the beginning of the current slice, to a state value obtained after the processing of the LCU which belongs to the precedent slice and is the second LCU from the left end of the previous LCU row. As described above, the restart of the CABAC always includes the terminate processing. In contrast, in the restart of the CABAC for a dependent slice, the state of the CABAC is carried over. The dependent slices cannot be parsed without a parent slice. Therefore, the dependent slices cannot be decoded when the parent slice is not received. The parent slice is usually a precedent slice of the dependent slices in a coding order, and a slice which includes a complete slice header. This is the same for the parent slice of the entropy slice. As described above, dependent and entropy slices use the slice header (in particular, the information of the slice header which is missing in the dependent slice's header) of the immediately preceding slice according to the coding order of the slices. This rule is applied recursively. The parent slice on which the current dependent slice depends is recognized as available for reference. Reference includes use of spatial prediction between the slices, sharing of CABAC states, and the like. A dependent slice uses the CABAC context tables that are generated at the end of the immediately preceding slice. Thus, a dependent slice does not initialize the CABAC tables to the default values, but instead keeps on using the already developed context tables. Further details regarding the entropy and dependent slices can be found in Non Patent Literature 3. The HEVC provides several profiles. A profile includes some settings of the image coding apparatus and the image decoding apparatus suitable for a particular application. For instance, the "main profile" only includes normal and dependent slices, but not entropy slices. As described above, the coded slices are further encapsulated into NAL units, which are further encapsulated, for instance, into a Real Time Protocol (RTP) and finally into Internet Protocol (IP) packets. Either this, or other protocol stacks, enables transmitting of the coded video in packet-oriented networks, such as the Internet or some proprietary networks. Networks typically include at least one or more routers, which employ special hardware operating very fast. The function of the router is to receive IP packets, to analyze their IP packet headers and, accordingly, to forward the IP packets to their respective destinations.
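The CABAC restart behavior of the three slice types described above can be summarized by the following illustrative C sketch; the enumeration and the integer "state" values are placeholders for the actual CABAC context tables and are not part of any standard syntax.

#include <stdio.h>

/* Illustrative selection of the CABAC context initialization at the start of
 * a slice, following the three slice types discussed above. */
typedef enum { NORMAL_SLICE, ENTROPY_SLICE, DEPENDENT_SLICE } SliceType;

static int init_cabac_state(SliceType type, int wpp_enabled,
                            int default_state, int end_of_prev_slice_state,
                            int after_second_lcu_of_prev_row_state) {
    switch (type) {
    case NORMAL_SLICE:
    case ENTROPY_SLICE:
        return default_state;                        /* context tables reset to defaults  */
    case DEPENDENT_SLICE:
        return wpp_enabled ? after_second_lcu_of_prev_row_state  /* WPP: state after 2nd  */
                           : end_of_prev_slice_state;            /* LCU of the row above  */
    }
    return default_state;
}

int main(void) {
    printf("%d\n", init_cabac_state(DEPENDENT_SLICE, 1, 0, 11, 22)); /* prints 22 */
    printf("%d\n", init_cabac_state(DEPENDENT_SLICE, 0, 0, 11, 22)); /* prints 11 */
    return 0;
}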
Since the routers need to handle traffic from many sources, the packet handling logic needs to be as simple as possible. The minimum requirement for the router is checking the destination address field in the IP header in order to determine which route to take for forwarding them. In order to further provide support for quality of service (QoS), smart (media-aware) routers additionally check specialized fields in the network protocol headers, such as IP header, RTP header, and even the header of a NALU. As can be seen from the above description of the video coding, the different types of slices defined for the purpose of parallel processing, such as dependent slices and entropy slices, are of different importance with respect to the quality distortion upon their damage. In particular, the dependent slices cannot be parsed and decoded without a parent slice. This is because at their beginning of the dependent slice, the entropy coder or decoder cannot be restarted. Accordingly, the parent slice is more important for the reconstructing of the image or video. In HEVC, the dependent and entropy slices introduce an additional dimension of dependency, namely, the inter-slice dependency (a dependency within the frame). This kind of dependency is not considered by the routers. The above described dependencies and, in particular, the inter-slice dependency is not considered at the network level. However, it would be desirable to take the above described dependency into account at the network level in order to provide a better support for quality of service. Accordingly, it is necessary to improve the flexibility of packet handling at the network level by considering the dependencies of the slices. (Details of Problem) [1-5. WPP and Dependent Slice] The dependent slices can be used together with parallel processing tools such as waveform parallel processing (WPP) and tiles. In particular, dependent slices make wavefront (substream) capable of decreasing the transmission delay without causing a coding loss. Moreover, dependent slices serve as starting points for CABAC substreams since the CABAC is not restarted at the dependent slices. Moreover, the information indicating the starting points may be transmitted in the bitstream in order to provide the starting points for possibly independent parsing. In particular, if more than two CABAC substreams are encapsulated in a normal or dependent slice, starting points are signaled explicitly in the form of the number of bytes per substream. Here, the substream denotes a portion of the stream which is parseable independently thanks to the starting points. Additionally, dependent slices can be used as starting point “markers”, since every dependent slice needs to have a NAL unit header. This means that the starting points can be signaled with respect to such markers. The two approaches, namely the explicit starting point signaling and the marking of the starting points via dependent slices are used jointly. As a rule, the starting point of every NAL unit (beginning of every NAL header) has to be identifiable. There is no requirement about the exact identification operation. For example, the following two methods may be applied. The first method is a method of putting a start code (for instance, 3 bytes long) at the beginning of each NAL header. The second method is a method of putting every NAL unit in a separate packet. Due to the dependency of the slices, the slice header size may be reduced. 
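As a minimal illustration of the first method mentioned above (a three-byte start code preceding each NAL header), the following C sketch scans a buffer for the byte pattern 0x00 0x00 0x01; the buffer contents are arbitrary example values, and emulation prevention is not considered here.

#include <stdio.h>

/* Illustrative scan for three-byte start codes (0x00 0x00 0x01) that mark
 * the beginning of each NAL unit when the first method above is used. */
static int find_start_codes(const unsigned char *buf, int len,
                            int positions[], int max_positions) {
    int count = 0;
    for (int i = 0; i + 2 < len && count < max_positions; i++)
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01)
            positions[count++] = i;      /* NAL header begins right after the code */
    return count;
}

int main(void) {
    unsigned char bitstream[] = { 0x00, 0x00, 0x01, 0x40, 0x01,   /* NAL unit 1 */
                                  0x00, 0x00, 0x01, 0x42, 0x01 }; /* NAL unit 2 */
    int positions[8];
    int n = find_start_codes(bitstream, (int)sizeof bitstream, positions, 8);
    for (int i = 0; i < n; i++)
        printf("start code at byte %d\n", positions[i]);  /* prints 0 and 5 */
    return 0;
}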
Regarding entropy slices, the method enable parallel CABAC parsing. This is because CABAC is truly restarted at the beginning of the entropy slices. In the case of parallel processing of CABAC, CABAC represents a bottleneck which can be overcome by parallel CABAC parsing followed by sequential pixel reconstruction operations. In particular, the WPP parallelization tool enables decoding of each LCU row by one processing core (intellectual property code (IP core), a function block). It should be noted that the assignment of the LCU rows to the cores may be different. For instance, two rows may be assigned to one core, and one row may be assigned to two cores. FIG.3is a diagram showing an example of a configuration of a picture300. InFIG.3, a picture300is subdivided into31to3m(m is the ordinal number of LCU) rows of largest coding units (LCU). Each of the LCU rows3i(I=1 to m) comprises LCUs3i1to3in(n is the ordinal number of LCU column) that are disposed in a row. The LCU row3icorresponds to “Wavefront i”. Parallel processing can be performed for wavefronts. The arrow of the CABAC state inFIG.3denotes a relationship between LCU referring to the CABAC state and the reference destination. Specifically, inFIG.3, first, among the LCUs included in the LCU row31, the processing (coding or decoding) starts for the head LCU311. The processing on the LCUs is performed in an order from the LCU311to31n. After the processing on the first two LCUs311and312in the LCU row31is performed, the processing on the LCU row32is started. In the processing of the first LCU row321of the LCU column32, as shown in the arrow of the CABAC state inFIG.3, the CABAC state just after the processing on the LCU312in the LCU row31in the first row is used as the initial state of the CABAC state. In other words, there is a delay of two LCUs between the two parallel processings. FIG.4is a diagram showing an example of the case where a dependent slice using WPP is used. The LCU rows41to43correspond to “Wavefront1”, “Wavefront2” and Wavefront3″, respectively. The LCU rows41to43are processed by their respective independent cores. InFIG.4, the LCU row41is a normal slice, and the LCU rows42to4mare dependent slices. The dependent slices make WPP capable of reducing the delay. The dependent slices have no complete slice header. Moreover, the dependent slices can be decoded independently of the other slices as long as the starting points (or the starting point of dependent slices, which is known as a rule as described above) are known. In particular, the dependent slices can make WPP suitable also for the low-delay applications without incurring a coding loss. In the usual case of encapsulating the substreams (LCU rows) into slices, it is mandatory to insert the explicit starting points into the slice header for ensuring the parallel entropy coding and decoding. As a result a slice is ready for transmission only after the last substream of the slice is encoded completely. The slice header is completed only after the coding of all of the substreams in the slice is completed. This means that the transmitting of the beginning of a slice cannot be started via packet fragmentation in the RTP/IP layer until the whole slice is finished. However since the dependent slices can be used as starting point markers, explicit starting point signaling is not required. Therefore, it is possible to split a normal slice into many dependent slices without coding loss. 
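The two-LCU delay between neighboring wavefronts described with reference to FIG. 3 and FIG. 4 can be expressed by a simple scheduling check such as the following C sketch; the bookkeeping array and the function name are illustrative only.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative WPP scheduling check: with wavefront parallel processing the
 * first LCU of a row may start only after the first two LCUs of the row
 * above have been processed (a per-row delay of two LCUs). */
static bool wpp_lcu_ready(int row, int col, const int processed_cols[]) {
    /* processed_cols[r] = number of LCUs already finished in row r */
    if (col > 0 && processed_cols[row] < col)
        return false;                 /* left neighbor in the same row not done */
    if (row > 0 && processed_cols[row - 1] < col + 2)
        return false;                 /* need a two-LCU lead in the row above   */
    return true;
}

int main(void) {
    int processed_cols[3] = { 2, 0, 0 };   /* row 0: two LCUs finished           */
    printf("row 1, col 0 ready: %d\n", wpp_lcu_ready(1, 0, processed_cols)); /* 1 */
    printf("row 1, col 1 ready: %d\n", wpp_lcu_ready(1, 1, processed_cols)); /* 0 */
    return 0;
}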
Dependent slices can be transmitted as soon as the encoding of the encapsulated sub-stream is completed (or even earlier in the case of packet fragmentation). The dependent slices do not break the spatial prediction dependency. The dependent slices do not even break the parsing dependency. This is because ordinarily the parsing of the current dependent slice requires the CABAC states from the previous slice. When dependent slices were not allowed, then each LCU row can be configured to be a slice. Such a configuration lowers the transmission delay, but at the same time, leads to a rather high coding loss as discussed in the Background section above. Alternatively, the entire frame (picture) is encapsulated into a single slice. In this case, the starting points for the substreams (LCU rows) needs to be signaled in the slice header in order to enable their parallel parsing. As a result, there is a transmission delay at the frame level. In other words, the header needs to be modified after the entire frame is coded. Having an entire picture encapsulated in a single slice by itself does not increase the transmission delay. For example, the transmission of some parts of the slice may start already before the whole coding is finished. However, if WPP is used, then slice header needs to be modified later in order to write the starting points. Therefore, the entire slice needs to be delayed for transmission. The use of dependent slices thus enables a reduction of the delay. As shown inFIG.4, a picture400is divided into a LCU row41that is a normal slice, and the LCU rows42to4mthat are dependent slices. When each LCU row is one dependent slice, a transmission delay of one LCU row can be achieved without any coding loss. This is caused by the fact that the dependent slices do not break any spatial dependency and do not restart the CABAC engine. [1-6. Configuration of Packet] As described above, the network routers have to analyze headers of packets in order to enable the providing of quality of service. The quality of service is different according to the type of application and/or priority of the service and/or of the relevance of the packet for the distortion caused by its packet loss. FIG.5is a diagram showing an example of an encapsulation (packetization) of a bitstream. Generally, real time protocol (RTP) is used for packetization. The RTP is usually used for real time media transmission. The header lengths of the respective involved protocols are basically fixed. The protocol headers have extension fields. The extension fields can extend the length of the headers by 4 bytes. For example, the IP header can be extended by up to 20 bytes. The syntax elements in the headers of IP, User Datagram Protocol (UDP) and RTP are also fixed in their length. FIG.5shows a packet header500included in an IP packet. The packet header shown inFIG.5includes an IP header510, a UDP header530, a RTP header540, a RTP H264 payload header560, and a NAL header570. The IP header510is a header with a length of 20 bytes with an extension field520of 4 bytes. The payload of the IP packet is a UDP packet. The UDP packet includes a UDP header530with a length of 8 bytes and the UDP payload. The UDP payload is formed by the RTP packet. The RTP packet includes a RTP header540with a length of head 12 bytes and an extension field550of 4 bytes. The RTP packet can be selectively extended by the extension field. 
The payload of the RTP packet includes a special RTP H264 payload header 560 with a length of 0 to 3 bytes, followed by a NAL header 570 of the HEVC which is 2 bytes in length. The payload of the NALU including the coded video packet follows the packet headers 500 (not shown in FIG. 5). The routers which are capable of providing an enhanced quality of service are called Media Aware Network Elements (MANE). The Media Aware Network Elements check some of the fields of the packet headers shown in FIG. 5. For example, the syntax element called "temporal_id" included in the NAL header 570, or the decoding order number included in the RTP header 540, may be checked by the MANE in order to detect losses and the presentation order of the received packet contents. The routers (network elements) handle the packets as fast as possible in order to enable a high throughput in the network. The logic is required to access the fields in the packet headers rapidly and simply, in order to keep the complexity of the network element processing low. The NALU is encapsulated by the header 500. The NALU may include slice data when a slice header is present. FIG. 6 is a diagram showing an example of a slice header syntax 600. The syntax element dependent_slice_flag 601 is a syntax element indicating whether or not a slice is a dependent slice. This syntax element can be used to identify the inter-slice dependency. However, the slice header is the content of a NALU. Parsing of the syntax elements before the dependent_slice_flag 601 requires rather complicated logic. This is at a level which cannot be efficiently handled by ordinary routers, as will be shown below. As described above, a NALU includes information common for a plurality of slices such as parameter sets, or includes directly coded slices with information necessary for decoding included in the slice header. Syntax of a slice header used for an entropy or a dependent slice is exemplified in FIG. 6. FIG. 6 shows a table with a slice header structure. When the syntax element "dependent_slice_flag" is set to 1, all of the slices up to the first normal slice (a slice which is not an entropy slice and not a dependent slice) preceding the current slice in the decoding order are required. When these slices are not decoded, in general, the current dependent slice cannot be decoded. In some special cases, for example, the dependent slice may still be decodable when some other side information, signaled or derived, is available. The syntax element dependent_slice_flag 601 is included approximately in the middle of the slice header. Moreover, the slice header includes the number of CABAC substreams within the current slice, signaled by the information element num_entry_point_offsets 602, and the number of bytes in a substream 603, signaled by the syntax element entry_point_offset[i]. Here, the information element num_entry_point_offsets 602 corresponds to the number of entry points. Furthermore, i is an integer and an index denoting the particular entry points (offsets of the entry points). The number of bytes in a substream denoted by entry_point_offset[i] 603 enables an easy navigation within the bitstream.
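As a purely illustrative example of such navigation, the starting byte of each substream within the slice data can be obtained by accumulating the signaled byte counts, as in the following C sketch; the offset values below are arbitrary, and the exact coding of these syntax elements in HEVC differs in detail.

#include <stdio.h>

/* Illustrative navigation using the signaled entry points:
 * entry_point_offset[i] gives the number of bytes of a substream, so the
 * starting byte of each substream inside the slice data follows by accumulation. */
int main(void) {
    int num_entry_point_offsets = 3;               /* signaled in the slice header  */
    int entry_point_offset[3]   = { 120, 95, 88 }; /* bytes per preceding substream */

    int start = 0;                                 /* substream 0 starts at byte 0  */
    printf("substream 0 starts at byte %d\n", start);
    for (int i = 0; i < num_entry_point_offsets; i++) {
        start += entry_point_offset[i];            /* skip the previous substream   */
        printf("substream %d starts at byte %d\n", i + 1, start);
    }
    return 0;
}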
The picture710is a base layer picture carried in two VCL NALUs, namely VCL NAL Unit1and VCL NAL Unit2. POC indicates the order in which the pictures are to be rendered. VCL NALU includes a syntax element indicating whether a picture belongs to a base layer or to an enhancement layer, and a syntax element temporal_id. The syntax element indicating whether a picture belongs to a base layer or to an enhancement layer is transmitted under a state of being within the NAL header570of the packet header500shown inFIG.5. The syntax element “temporal_id” is also transmitted under a state of being within the NAL header570. The syntax element “temporal_id” indicates the degree of dependency of the other pictures. For instance, pictures or slices coded with temporal_id=0 are decodable independently of other pictures/slices which have a higher temporal_id. It should be noted that in HEVC, temporal_id is signaled in the NAL header as nuh_temporal_id_plus1 (refer toFIG.9A). In particular, the following Expression 1 can be applied to the relationship between the temporal_id used in these examples and the syntax element nuh_temporal_id_plus1. [Math. 1] temporal_id=nuh_temporal_id_plus1−1  (Expression 1) Slices with temporal_id=1 depend on slices of temporal_id with a lower value. In other words, the value of temporal_id in this case is 0. In particular, the temporal_id syntax element refers to the prediction structure of the picture. In general, slices with a particular value of temporal_id depend only on slices with a lower or equal value of temporal_id. Accordingly, a picture710inFIG.7can be decoded first. A picture720is an enhancement layer to the base layer of the picture710. Thus, there is a dependency which requires picture720to be decoded after decoding picture710. The picture720includes two NALUs, namely VCL NAL Unit3and VCL NAL Unit4. Both pictures710and720have their POC value of 0. This means that the pictures710and720belong to the same image to be displayed at once. The images comprise the base and the enhancement layer. The picture730is a base layer which includes two NALUs, namely VCL NAL Unit5and VCL NAL Unit6. The picture730has the POC value of 1. This means that picture (portion)730is to be displayed after the pictures720and710. Moreover, the picture730has the value of temporal_id=1. This means that the picture730temporally depends on a picture with temporal_id=0. Accordingly, based on the dependency signaled in the NAL header, the picture730depends on the picture710. FIG.8is a diagram showing the dependencies (degree of dependency) and their signaling in the case in which dependent and no entropy slices are used.FIG.8shows three pictures810,820, and830.FIG.8differs fromFIG.7described above in that dependencies of the dependent and entropy slices signaled within the slice header are added. InFIG.7, the inter-layer dependency is shown with the example of the pictures710and720. Moreover, the temporal dependency is shown in the example of the pictures710and730. These dependencies are both signaled in the NAL header. The inter-slice dependency as shown inFIG.8is inherent to dependent and entropy slices. In particular, the base layer frame810and the enhancement layer frame820both have two slices. Of the two slices, one is a parent slice (normal slice) and the other is a child slice (dependent slice). In frame810, VCL NAL Unit1slice is the parent slice of the VCL NAL Unit2. In frame820, VCL NAL Unit3slice is the parent slice of the VCL NAL Unit4. 
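As a minimal illustration of how a network element may exploit Expression 1 together with the rule that a picture with a particular value of temporal_id depends only on pictures with a lower or equal value, the following C sketch decides whether to forward a NAL unit for a chosen target temporal layer; the function name and the notion of a single "target" layer are simplifications.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative sub-bitstream extraction rule based on Expression 1:
 * temporal_id = nuh_temporal_id_plus1 - 1, and a picture depends only on
 * pictures with a lower or equal temporal_id, so NAL units above a target
 * temporal layer can be dropped safely. */
static bool forward_nal_unit(int nuh_temporal_id_plus1, int target_temporal_id) {
    int temporal_id = nuh_temporal_id_plus1 - 1;   /* Expression 1 */
    return temporal_id <= target_temporal_id;
}

int main(void) {
    /* keep only the base temporal layer (target temporal_id = 0) */
    printf("%d\n", forward_nal_unit(1, 0));  /* temporal_id 0 -> forwarded (1) */
    printf("%d\n", forward_nal_unit(2, 0));  /* temporal_id 1 -> dropped   (0) */
    return 0;
}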
As described above, the term "parent slice" of a dependent slice refers to a slice on which the dependent slice depends, that is, the slice of which the slice header information is used by the dependent slice. As a rule, the parent slice is the first preceding slice that has a complete header. The slice that has a complete header is a normal slice, not another dependent slice, for example. The corresponding syntax of the NAL unit header and the slice header as currently used in the HEVC, and in particular in HM8.0, will be described with reference to FIG. 9A. FIG. 9A is a diagram showing the syntax of a NAL unit header 910 and the syntax of the slice header 920. In particular, the inter-layer dependencies are planned (in the current standardization) to be signaled within the NAL unit header using the syntax element nuh_reserved_zero_6bits. The temporal dependencies are signaled using the syntax element nuh_temporal_id_plus1. The slice header 920 includes an inter-slice dependency indicator, which is signaled by the syntax element dependent_slice_flag. In other words, unlike the inter-layer and temporal dependencies, the inter-slice dependency is signaled somewhere within the slice header. In order to parse this syntax element, all the syntax elements preceding dependent_slice_flag must be parsed, as well as the parameter set syntax elements necessary for parsing the slice header syntax elements preceding the dependent_slice_flag. [1-8. Processing in Router] As described above, in a traffic shaping determination, it is desirable to take the dependencies introduced by the dependent and entropy slices into account, in addition to the dependencies signaled in the NAL header. For instance, a router may be implemented as a media aware mobile base station. The bandwidth in the downlink is very limited and needs to be managed very carefully. Let us assume the following example case. Assume that a packet is randomly dropped in the upstream by a normal router. In this case, a media aware network element (MANE) discovers the packet loss by checking the packet number. After detecting the packet loss, the MANE drops all the packets which are dependent on the dropped packet and which follow. This is a feature desirable for media aware network elements. In this way, packets can be dropped more intelligently. When a router determines to drop a NAL unit, it can immediately deduce that the following dependent slices need to be dropped as well. In the current syntax introduced in FIG. 9A, the accessing of the dependent_slice_flag requires parsing of a considerable amount of information. This is not essential for the packet routing or traffic shaping operations in the routers. All of the information that is necessary for discovering the inter-layer and inter-temporal relations is present in the video parameter set. The video parameter set is the highest set in the parameter set hierarchy. Accordingly, the above described information is signaled within the NAL header 570. However, in the case of the NAL header and the slice header shown in FIG. 9A, accessing the slice dependency information requires keeping track of additional parameter sets such as the PPS and the SPS. This, on the other hand, exceeds the usual capability of media-aware gateways or routers. As seen from FIG. 9A, the slice header 920 has to be parsed up to the dependent_slice_flag, and the parsed parameters are useless for the network operation.
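A simplified, hypothetical sketch of this drop propagation is given below; real network elements would, of course, also have to consider the layer and temporal dependencies discussed above, and the per-slice flags here are placeholder data.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative drop propagation in a media-aware network element: once a
 * slice packet is dropped, the following dependent slices are dropped as
 * well, until the next normal slice (which carries a complete header). */
int main(void) {
    /* 1 = dependent slice, 0 = normal slice, in decoding order */
    bool dependent[] = { 0, 1, 1, 0, 1 };
    bool dropped[]   = { 0, 0, 0, 0, 0 };
    int  n = 5;

    dropped[1] = true;                         /* assume packet 1 was lost/dropped   */
    for (int i = 2; i < n; i++)
        if (dependent[i] && dropped[i - 1])    /* parent (or chain) unavailable      */
            dropped[i] = true;                 /* so this dependent slice is useless */

    for (int i = 0; i < n; i++)
        printf("slice %d: %s\n", i, dropped[i] ? "drop" : "forward");
    return 0;
}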
In order to be able to parse the slice address which precedes the dependent_slice_flag, the following syntax elements are required from the syntax elements included in the SPS 930 as shown in FIG. 9B. FIG. 9B is a diagram showing an example of syntax included in the SPS.
pic_width_in_luma_samples (reference sign 931 in FIG. 9B)
pic_height_in_luma_samples (reference sign 932 in FIG. 9B)
log2_min_coding_block_size_minus3 (reference sign 933 in FIG. 9B)
log2_diff_max_min_coding_block_size (reference sign 934 in FIG. 9B)
These parameters are shown in the right table of FIG. 9B and are necessary to obtain the slice_address parameter. The syntax element slice_address is variable length coded (as can be seen when looking at the length "v" in the descriptor, second column, of slice_address in the slice header 920 in FIG. 9A). In order to know the length of this variable length coded parameter, those syntax elements from the SPS are needed. In fact, in order to be able to parse the dependent_slice_flag, the actual value of the slice_address syntax element is not necessary. Only the length of this syntax element, which is variable, must be known so that the parsing process may continue. Therefore, the SPS needs to be parsed up to the point 935 among the syntax elements within the SPS 930 shown in FIG. 9B. The four syntax elements are required to be stored. They are later used in a formula for calculating the length of the slice_address syntax element. Moreover, in order to access the dependent_slice_enabled_flag, which also precedes the dependent_slice_flag, the PPS needs to be parsed up to the point 945 among the syntax elements within the PPS shown in FIG. 9C. FIG. 9C is a diagram showing an example of syntax included in the PPS. It should be noted that the syntax elements whose parsing methods are described with reference to FIGS. 9A to 9C and which are located within the slice header and the SPS and the PPS are not required for common router operations. Moreover, some of the syntax elements cannot simply be skipped since some of the syntax elements are coded with variable length codes. Accordingly, even if jumping is performed in the bit stream by a predefined amount of bits, jumping until the dependent_slice_enabled_flag is not possible. In other words, in order to read the dependent_slice_flag (dependency indication), the MANE needs to go further into the slice header (refer to the slice header 920), whose parsing is rather complicated. Specifically, the flag first_slice_in_pic_flag has to be parsed. The flag first_slice_in_pic_flag is a flag indicating whether or not a slice is the first slice within the picture. Then, no_output_of_prior_pics_flag, whose presence is conditional on the NALU type, has to be parsed. Moreover, the variable length coded pic_parameter_set_id has to be decoded. The syntax element pic_parameter_set_id is a syntax element indicating which of the parameter sets is used (a syntax element which identifies the parameter set). By parsing pic_parameter_set_id, the parameter set to be used can be identified. Finally, the slice_address syntax element is required. The syntax element slice_address is a syntax element indicating the starting position of the slice. Parsing this syntax element further requires parsing the PPS and the SPS as well as additional computation. As the last step, the value of dependent_slice_enabled_flag (dependent slice enabled flag) has to be obtained from the PPS, in order to know whether the dependent_slice_flag is present in the bitstream or not.
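To illustrate why the four SPS syntax elements listed above have to be stored, the following Python sketch derives the length of the variable length coded slice_address from them. It assumes that the length equals the ceiling of the base-2 logarithm of the picture size in coding tree blocks, which is how the listed fields enter the computation; the exact HM8.0 formula may differ in detail, so this is a sketch rather than a normative derivation.

import math

def slice_address_length_bits(pic_width_in_luma_samples,
                              pic_height_in_luma_samples,
                              log2_min_coding_block_size_minus3,
                              log2_diff_max_min_coding_block_size):
    # Derive the coding tree block (CTB) size from the two SPS log2 fields.
    log2_ctb_size = (log2_min_coding_block_size_minus3 + 3
                     + log2_diff_max_min_coding_block_size)
    ctb_size = 1 << log2_ctb_size
    pic_width_in_ctbs = math.ceil(pic_width_in_luma_samples / ctb_size)
    pic_height_in_ctbs = math.ceil(pic_height_in_luma_samples / ctb_size)
    pic_size_in_ctbs = pic_width_in_ctbs * pic_height_in_ctbs
    # Assumed length rule: Ceil(Log2(picture size in CTBs)), at least one bit.
    return max(1, math.ceil(math.log2(pic_size_in_ctbs)))

# Example: a 1920x1080 picture with 64x64 CTBs has 510 CTBs,
# so slice_address would be 9 bits long.
print(slice_address_length_bits(1920, 1080, 0, 3))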
When dependent_slice_enabled_flag=0, it means that the current slice is a normal slice, since the dependent slices are not enabled. In order to obtain the value of dependent_slice_enabled_flag, the PPS is required to be parsed up to approximately its middle. Unfortunately, the syntax elements before dependent_slice_flag cannot be skipped and need to be parsed, unlike the case of RTP and NAL header data in which the position of the data is predefined. This is caused by the fact that the syntax elements in the slice header are variable length coded. Therefore, the presence and length of the elements need to be computed for every VCL NAL unit. In addition, the additional session data needs to be stored because it is needed later (refer to the PPS and the SPS). Moreover, the presence of some syntax elements depends on the presence or value of other syntax elements possibly included in other parameter structures (the syntax elements are conditionally coded). In the current standardization, there is a proposal to signal the dependency structure of the video sequence in the Video Parameter Set (VPS), which describes how many layers are contained in the bit stream and the dependency indicators to indicate the various inter-layer dependencies. The VPS is signaled at the very beginning of the video, before the first SPS. Multiple SPSs can refer to a single VPS. This means that a VPS carries information that is valid for multiple video sequences. The main goal of the VPS is to inform a router or a decoder about the content of the video, including information on how many video sequences are included and how they are interrelated. The SPS is valid only within a video sequence, whereas the VPS carries information related to multiple video sequences. Moreover, the character of the information carried in the VPS makes it informative especially for routers. For example, the VPS might carry information that is required for the setup of a streaming session (the exact content is still open, since the design is not finalized). The router parses the information in the VPS. The router, without the need to parse other parameter sets (by just looking at the NAL headers), can determine which data packets to forward to the decoder and which packets to drop. However, in order to discover the currently active VPS, the following ordered steps need to be performed: parsing the PPS_id in the slice header; parsing the SPS_id in the active PPS determined by the PPS_id; and parsing the VPS_id in the active SPS determined by the SPS_id. In order to solve the above described problem, an image coding method according to an aspect of the present disclosure is an image coding method of performing coding processing by partitioning a picture into a plurality of slices, the image coding method comprising transmitting a bitstream which includes: a dependent slice enabling flag indicating whether or not the picture includes a dependent slice on which the coding processing is performed depending on a result of the coding processing on a slice different from a current slice; a slice address indicating a starting position of the current slice; and a dependency indication (dependent_slice_flag) indicating whether or not the current slice is the dependent slice, wherein the dependent slice enabling flag is disposed in a parameter set common to the slices, the slice address is disposed in a slice header of the current slice, and the dependency indication is disposed in the slice header, and is disposed before the slice address and after a syntax element (pic_parameter_set_id) identifying the parameter set.
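The three ordered look-ups described above for discovering the currently active VPS can be sketched as follows; the dictionaries standing in for already-parsed parameter sets and the simplified field names of the id references are assumptions made for illustration.

def find_active_vps(slice_header, pps_by_id, sps_by_id, vps_by_id):
    # Step 1: the slice header carries the id of the active PPS.
    pps = pps_by_id[slice_header["pic_parameter_set_id"]]
    # Step 2: the active PPS carries the id of the active SPS.
    sps = sps_by_id[pps["seq_parameter_set_id"]]
    # Step 3: the active SPS carries the id of the active VPS.
    return vps_by_id[sps["video_parameter_set_id"]]

# Example with one parameter set of each kind.
vps = {"id": 0, "dependency_info": "..."}
sps = {"id": 0, "video_parameter_set_id": 0}
pps = {"id": 0, "seq_parameter_set_id": 0}
print(find_active_vps({"pic_parameter_set_id": 0},
                      {0: pps}, {0: sps}, {0: vps})["dependency_info"])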
In the above described image coding method, a dependency indication of inter-slice dependency is located in a position suitable for parsing by the router. With this, it is possible to code the dependency indication the syntax independently, in other words, unconditionally, from other syntax elements. For example, the dependency indication may be included in the bitstream when the dependent slice enabling flag indicates inclusion of the dependent slice. For example, the dependent slice enabling flag may be disposed at a beginning of the parameter set. For example, each of the slices may include a plurality of macroblocks, and the coding processing on the current slice may be started after the coding processing is performed on two of the macroblocks included in an immediately preceding current slice. For example, the dependency indication may not be included in a slice header of a slice which is processed first for the picture, among the slices. In order to solve the above described problem, an image decoding method according to an aspect of the present disclosure is an image decoding method of performing decoding processing by partitioning a picture into a plurality of slices, the image decoding method comprising extracting, from a coded bitstream, a dependent slice enabling flag indicating whether or not the picture includes a dependent slice on which the decoding processing is performed depending on a result of the decoding processing on a slice different from a current slice, a slice address indicating a starting position of the current slice, and a dependency indication indicating whether or not the current slice is the dependent slice, wherein the dependent slice enabling flag is disposed in a parameter set common to the slices, the slice address is disposed in a slice header of the current slice, and the dependency indication is disposed in the slice header, and is disposed before the slice address and after a syntax element identifying the parameter set. For example, the dependency indication may be extracted from the bitstream when the dependent slice enabling flag indicates inclusion of the dependent slice. For example, the dependent slice enabling flag may be disposed at a beginning of the parameter set. For example, each of the slices may include a plurality of macroblocks, and the decoding processing on the current slice may be started after the decoding processing is performed on two of the macroblocks included in an immediately preceding current slice. For example, the dependency indication may not be included in a slice header of a slice which is processed first for the picture, among the slices. 
In order to solve the problem, an image coding apparatus according to an aspect of the present disclosure is an image coding apparatus which performs coding processing by partitioning a picture into a plurality of slices, the image coding apparatus comprising a coder which transmits a bitstream which includes: a dependent slice enabling flag indicating whether or not the picture includes a dependent slice on which the coding processing is performed depending on a result of the coding processing on a slice different from a current slice; a slice address indicating a starting position of the current slice; and a dependency indication indicating whether or not the current slice is the dependent slice, wherein the dependent slice enabling flag is disposed in a parameter set common to the slices, the slice address is disposed in a slice header of the current slice, and the dependency indication is disposed in the slice header, and is disposed before the slice address and after a syntax element identifying the parameter set. In order to solve the problem, an image decoding apparatus according to an aspect of the present disclosure is an image decoding apparatus which performs decoding processing by partitioning a picture into a plurality of slices, the image decoding apparatus comprising a decoder which extracts, from a coded bitstream, a dependent slice enabling flag indicating whether or not the picture includes a dependent slice on which the decoding processing is performed depending on a result of the decoding processing on a slice different from a current slice, a slice address indicating a starting position of the current slice, and a dependency indication indicating whether or not the current slice is the dependent slice, wherein the dependent slice enabling flag is disposed in a parameter set common to the slices, the slice address is disposed in a slice header of the current slice, and the dependency indication is disposed in the slice header, and is disposed before the slice address and after a syntax element identifying the parameter set. In order to solve the above described problem, an image coding and decoding apparatus according to an aspect of the pre sent disclosure includes the above described image coding apparatus and the above described image decoding apparatus. According to the image coding method, the image decoding method, and the like that are configured above, an indication of inter-slice dependency is located within the syntax of the bitstream related to a slice independently from other elements. The dependency indication is located, without unnecessarily parsing other elements, separately from the other elements. In the above HEVC examples, the indicator of the inter-slice dependency dependent_slice_flag is signaled at a location in which it is not necessary to parse syntax elements irrelevant for the network operation. Specifically, the present disclosure provides an apparatus for parsing a bitstream of a video sequence of images encoded at least partially with a variable length code and including data units carrying coded slices of video sequence. The apparatus comprises a parser for extracting from the bitstream a dependency indication which is a syntax element indicating for a slice whether or not the variable length decoding or parsing of the slice depends on other slices, wherein the dependency indication is extracted from the bitstream independently of and without need for extracting other syntax elements beforehand. 
Such apparatus may be included, for instance, within the entropy decoder290inFIG.2. When referring to the extracting from the bitstream, the extraction and, where necessary for the extraction, an entropy decoding is meant. The entropy coding is a variable length coding, for instance, the arithmetic coding such as CABAC. This is, in HEVC, applied to coding of the image data. Data units here refer, for instance, to NAL units or access units. The expression “without need for extracting other syntax elements” refers to a situation in which the dependency indication is preceded only by elements, of which the length is known and of which the presence is known or conditioned on elements already parsed or not conditionally coded at all. The present disclosure further provides an apparatus for generating a bitstream of a video sequence encoded at least partially with a variable length code and including data units carrying coded slices of video images The apparatus comprises a bitstream generator for embedding into the bitstream a dependency indicator which is a syntax element indicating for a slice whether or not the variable length decoding of the slice depends on other slices, wherein the dependency indicator is embedded into the bitstream independently of and without need for embedding other syntax elements beforehand. Such apparatus may be included, for instance, within the entropy coder190inFIG.1. According to the image coding method, the image decoding method, and the like that are configured above, the bitstream includes coded slice data and header data regarding the slice, and the dependency indicator is located within the bitstream at the beginning of the slice header. This means that the slice header begins with the syntax elements indicating the slice dependency. It should be noted that the dependency indication does not have to be located at the very beginning of the slice header. However, it is advantageous when no other conditionally coded and/or no variable length coded syntax element precedes the dependency indicator within the slice header. For instance, the current position of the dependent_slice_flag is changed with respect to the prior art described above so as to be located at the beginning of the slice header. With this change, the reduction of the amount of syntax elements that need to be parsed is achieved. Complicated parsing operations of the routers are avoided, such as variable length decoding and parsing of information that requires additional computations and/or storing of additional parameters for future use and/or parsing of other parameter sets. Moreover, the number of parameter sets that are required to be tracked is reduced. These general and specific aspects may be implemented using a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or any combination of systems, methods, integrated circuits, computer programs, or computer-readable recording media. Hereinafter, embodiments are specifically described with reference to the Drawings. Each of the embodiments described below shows a general or specific example. The numerical values, shapes, materials, structural elements, the arrangement and connection of the structural elements, steps, the processing order of the steps etc. shown in the following embodiments are mere examples, and therefore do not limit the scope of the Claims. 
Therefore, among the structural elements in the following embodiments, structural elements not recited in any one of the independent claims are described as arbitrary structural elements. Embodiment 1 FIG. 10 shows an example of bitstream syntax according to the present embodiment. A NAL header 1010 shown in FIG. 10 is the same as the NAL header 910 shown in FIG. 9A. In other words, there is no change. However, the syntax structure of the slice header 1020 is different from the syntax structure of the slice header 920 in FIG. 9A. In particular, in the slice header 1020, the dependent_slice_flag is moved up within the slice header in such a way that the dependent_slice_flag is not preceded by any syntax element that is conditionally coded, coded using a variable length code, or requires parsing involving additional computation. The syntax elements first_slice_in_pic_flag and dependent_slice_flag actually both determine the spatial dependencies. The syntax elements are coded immediately after the NAL header in such a way that no other syntax element needs to be parsed. Since the first_slice_in_pic_flag also carries information which is related to inter-slice dependencies, it may precede dependent_slice_flag. The first_slice_in_pic_flag syntax element is a flag which is set according to the rule that every frame has to start with a normal slice. Accordingly, when the flag first_slice_in_pic_flag is set, it means that the slice is a normal slice and thus independent. Thus, the dependent_slice_flag and the first_slice_in_pic_flag can both be jointly seen as an indicator of inter-slice dependencies. In other words, the dependency indicator can be defined to include a first slice indication indicating whether or not the slice is a first slice in a picture and a dependent slice flag indicating whether or not the variable length decoding of the slice depends on other slices. The first slice in a picture is always a slice for which the variable length decoding does not depend on other slices. Advantageously, the bitstream includes a dependent slice enabling flag indicating whether or not dependent slices can be included within the bitstream. The dependency indication is included in the bitstream only when the dependent slice enabling flag indicates that dependent slices can be included in the bitstream. The dependent slice enabling flag is located within the bitstream in a parameter set common for a plurality of slices and located at the beginning of the parameter set. The parameter set may be, for instance, the picture parameter set which carries parameters for a single picture. Alternatively, the dependent slice enabling flag is located within a sequence parameter set which carries parameters for the entire image (video) sequence. However, in the present disclosure, the dependent_slice_flag (dependency indication) is coded without being conditioned on the syntax element dependent_slice_enabled_flag (dependent slice enabling flag). In the present embodiment, since the picture parameter set id is located after the dependency indication, this is advantageous in that a possible parsing error is avoided in the case where the picture parameter set id is signaled within the slice header. This change may also be seen as, and/or complemented by, changing the position of the other required syntax elements in the parameter sets or headers in order to reduce the amount of syntax elements that are required to be parsed for determining the dependencies between slices.
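A minimal Python sketch of this parsing order is given below: under the arrangement of FIG. 10, a network element can classify a slice from at most the first two bits of its slice header. The bit_iter() helper and the exact conditioning of dependent_slice_flag on first_slice_in_pic_flag and on the enabling flag are illustrative assumptions.

def bit_iter(data: bytes):
    # Yield the bits of a byte string, most significant bit first.
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def parse_dependency_flags(bits, dependent_slice_enabled_flag):
    first_slice_in_pic_flag = next(bits)
    dependent_slice_flag = 0
    if dependent_slice_enabled_flag and not first_slice_in_pic_flag:
        dependent_slice_flag = next(bits)
    # The first slice of a picture is, by rule, a normal (independent) slice.
    return first_slice_in_pic_flag, bool(dependent_slice_flag)

# Example: a slice header starting with the bits "01" is a dependent slice.
print(parse_dependency_flags(bit_iter(bytes([0b01000000])), True))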
For instance, the syntax element dependent_slice_flag in the slice header of the present syntax of HM8.0 is only present when the value of the syntax element "dependent_slice_enabled_flag" indicates that usage of dependent slices within the bitstream is enabled. The enabling of the dependent slices, and thus also the syntax element "dependent_slice_enabled_flag", is included in the PPS as shown in FIG. 9C. Accordingly, the syntax element "dependent_slice_enabled_flag" in the PPS is moved up within the syntax of the PPS (for example, to the beginning of the parameter set) in order to simplify the parsing necessary to parse the dependent_slice_flag. This can also be useful when the dependent_slice_flag is coded after the pic_parameter_set_id (the syntax element identifying the parameter set). This is because, by doing so, a parsing error is avoided even when the dependent slice enabling flag conditions the presence of the dependency indication. Instead of moving the "dependent_slice_enabled_flag" up within the PPS, the "dependent_slice_enabled_flag" may be moved from the PPS to the SPS and/or the VPS so that the parameter sets which are lower in the hierarchy are not required to be tracked. In other words, according to the present embodiment, the position of the required syntax elements is changed in order to reduce the amount of parameter sets that need to be kept track of. This also reduces the parsing complexity. The "required parameters" in this context means the parameters which contribute to determining whether or not a slice is an inter-dependent slice. A first possibility applicable directly to HEVC is to provide the dependency indication at the beginning of the dependent slice header, unconditioned on the dependent slice enabling flag which is included in a parameter set different from the slice header. A second possibility applicable directly to HEVC is to provide the dependency indication in the dependent slice header after the parameter set indication identifying the parameter set in which the dependent slice enabling flag is included. The dependency indication may be conditioned on the dependent slice enabling flag. Moving the dependent slice enabling flag up within the PPS or moving the dependent slice enabling flag into the SPS may be beneficial for any of these possibilities. In particular, this is beneficial for the second possibility, in which the dependent slice enabling flag is needed to parse the dependency indication. As can be seen in FIG. 10, the NAL unit header, together with the relevant portion of the slice header, has 18 bits (16 bits of the NALU header and 2 bits of the slice header). According to this example, a media aware network element may operate for a current slice packet as follows. If a previous slice, which may be a normal, an entropy, or a dependent slice, is dropped, the network element checks the first two bits of the current slice header, which are the first_slice_in_pic_flag and (in the case where the dependent slices are allowed for the bitstream) the dependent_slice_flag. When the NAL unit type is a VCL NAL unit type and the last two bits of the 18 bits checked are "01", the NAL unit is dropped. In particular, when the first bit of the slice header is "1", then it is the first slice in the picture, which is (according to the rules) not a dependent slice. When the first bit of the slice header is "0" and the next bit of the slice header is also "0", the slice is not dependent.
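The resulting drop rule can be sketched in Python as follows; the packet model (an RTP-like sequence number together with the two pre-extracted slice header bits) and the function name are assumptions made for illustration, and only VCL NAL units are considered.

def shape_vcl_packets(packets, dependent_slices_enabled=True):
    kept, expected_seq, drop_dependents = [], None, False
    for p in packets:
        if expected_seq is not None and p["seq"] != expected_seq:
            drop_dependents = True        # a predecessor was lost upstream
        expected_seq = p["seq"] + 1
        is_dependent = (dependent_slices_enabled
                        and p["first_slice_in_pic_flag"] == 0
                        and p["dependent_slice_flag"] == 1)
        if drop_dependents and is_dependent:
            continue                      # its parent is unavailable, drop it
        drop_dependents = False           # a decodable slice resets the state
        kept.append(p)
    return kept

# Example: packet 1 is lost, so the dependent slice in packet 2 is dropped,
# while the normal slice in packet 3 is forwarded.
stream = [
    {"seq": 0, "first_slice_in_pic_flag": 1, "dependent_slice_flag": 0},
    {"seq": 2, "first_slice_in_pic_flag": 0, "dependent_slice_flag": 1},
    {"seq": 3, "first_slice_in_pic_flag": 0, "dependent_slice_flag": 0},
]
print([p["seq"] for p in shape_vcl_packets(stream)])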
Accordingly, only when the two first bits of the slice header are “01”, the slice is dependent. Furthermore, the slice should be dropped since it cannot be decoded when the parent slice was already dropped. Accordingly, the flags first_slice_in_pic_flag and dependent_slice_flag can be seen as an extension of the NAL header, even if they belong to the slice header syntax. Accordingly, the present embodiment also provides as one of its aspects a network router for receiving, analyzing and forwarding network packets to their destinations. The router includes a receiving unit for receiving a network packet including a packet destination address and a bitstream portion with coded video data; a parser including the apparatus for parsing a bitstream of an encoded video sequence according to any of the above and below cited embodiments, in order to determine dependency of the coded video data from other packets; and a packet analyzer for analyzing the received packet destination address and the determined dependency and for judging how to handle the network packet. Embodiment 2 According to Embodiment 2, dependent_slice_enabled_flag is dropped from PPS. It should be noted that dependent_slice_enabled_flag may be moved to SPS, instead of being dropped. FIG.11shows an example in which the dependent_slice_enabled_flag does not need to be parsed before accessing first_slice_in_pic_flag and dependent_slice_flag In this example, dependent_slice_enabled_flag is not used because it is not conditioned on the presence of the dependency indication. This example provides the possibility of having the dependency indication at the beginning of the slice header without causing parsing problems due to unknown identification of the current PPS set. Effect of Embodiment 2, etc. In Embodiment 1, in order to parse the dependent_slice_flag, the dependent_slice_enabled_flag has to be parsed. The dependent_slice_enabled_flag is signaled in a PPS. This may cause some parsing overhead as discussed above, when the dependent_slice_enabled_flag is located far from the PPS start and the preceding syntax elements are coded conditionally. Moreover, signaling of the dependent_slice_flag syntax element before the syntax element pic_parameter_set_id in the PPS is parsed, can create parsing errors as follows. The presence of the dependent_slice_flag depends on the dependent_slice_enabled_flag which is signaled in the PPS. However, the id of the currently active PPS is signaled after the dependent_slice_flag. Therefore, it is not possible to parse the dependent_slice_flag before accessing the previous elements. Accordingly, it is advantageous to remove the parsing condition on the dependent_slice_enabled_flag. It may be more beneficial, when the following restriction is applied. Namely, if dependent_slice_enabled_flag in PPS is zero, then dependent_slice_flag shall be equal to zero. However, these advantageous implementations are not to limit the scope of the present disclosure. (Modification 1 of Embodiments 1 and 2) As an alternative or additionally to removing the conditioning on the dependent_slice_enabled_flag, the dependent_slice_enabled_flag may be moved from PPS to either SPS and/or VPS. Moreover, instead of just moving the dependent_slice_enabled_flag, the dependent_slice_enabled_flag may be duplicated in the SPS. In this case, the indicator in the SPS and PPS might be forced to have the same value. Or, the PPS might be allowed to overwrite the indicator in the SPS. 
For instance, when sps_dependent_slice_enabled_flag is equal to 1, then the pps_dependent_slice_enabled_flag can be 0 or 1. Here, sps_dependent_slice_enabled_flag is an indication of enabling the dependent slices for a sequence of pictures, signaled in the SPS, and pps_dependent_slice_enabled_flag is an indication of enabling the dependent slices for a picture, signaled in the PPS. However, when the value of the dependent_slice_enabled_flag can change in the PPS, this means that parsing of the PPS is still needed, and the advantage of less frequent tracking and parsing of the PPS is prevented. These modifications provide the advantage that the VPS and the SPS carry dependency structures. The carrying of dependency structures by the VPS and the SPS enables network elements to shape the bit streams, that is, to decide to discard the dependent packets which cannot be decoded anyhow, or to discard dependent slices rather than independent slices. Therefore, a dependent_slice_enabled_flag in the VPS would trigger the router to check the slice header additionally, or not. It is noted that these modifications do not further reduce the parsing complexity if the example of FIGS. 10 and 11 is applied. However, they provide a more beneficial structure of the syntax for carrying the dependency structures. Summarizing, according to this example, an indicator for indicating whether or not dependent slices are enabled for the bitstream is signaled in a video parameter set. The video parameter set is a parameter set applying to more than one slice in more than one picture. There are two different advantages of signaling dependent_slice_enabled_flag in the VPS and/or the SPS. When the flag is just moved or duplicated, the PPS is not required to be parsed, reducing the parsing overhead. The other benefit is letting the routers know about the prediction structure of the video sequence. This advantage is present all the time. Usually, a router may check the content of a VPS/SPS in order to know what it will receive. The VPS is the highest parameter set in the hierarchy. The VPS can include information about multiple video sequences, whereas the SPS and the PPS are specific to a single video sequence and a single picture, respectively. The information in the VPS includes the bitrate, the temporal layering structure of the video sequences, and the like. It also includes information about the inter-layer dependencies (dependencies between different video sequences). Accordingly, the VPS can be seen as a container for multiple video sequences, and it gives a general overview about each sequence. In the current HEVC version, the dependency between slices in a frame is established by both dependent_slice_flag and first_slice_in_pic_flag. According to the current specifications, network entities cannot use inter-slice dependencies without applying a highly complex parsing. A straightforward solution would be, if there is a packet loss discovered via a missing packet number, to drop all packets until a first_slice_in_pic_flag which is equal to 1 is encountered. This is because the first slice in a picture is always a normal slice. However, this solution leads to a reduction in the coding efficiency. Therefore, as described above, an inter-slice dependency signaling enabling efficient parsing may be used. This is achieved by signaling dependent_slice_flag and first_slice_in_pic_flag within the slice header immediately after the NAL header.
Alternatively or in addition, the syntax elements relating to the inter-slice dependencies are coded unconditionally, that is, independently of the other syntax elements which may be in the slice header or in the PPS. (Modification 2 of Embodiments 1 and 2) FIG.12illustrates Modification 2 alternative to Modification 1 discussed above. In particular, the NAL unit header1210is the same as the NAL unit header shown inFIG.10(NAL unit header910shown inFIG.9A). However, the slice header1220and the slice header1020shown inFIG.10are different in that the slice header syntax elements dependent_slice_flag and first_slice_in_pic_flag are reversed in their order. In particular, the slice header1220includes the dependent_slice_flag as a first syntax element, and the syntax element first_slice_in_pic_flag as a second syntax element, conditioned on dependent_slice_flag presence. As can be seen from this example, a first slice indication indicating whether or not the slice is a first slice in a picture is included in the syntax. A first slice in a picture is always a slice for which the variable length decoding does not depend on other slices. Moreover, the dependent slice flag is included in the bitstream in front of the first slice indication. The first slice indication is included in the bitstream only when the dependent slice flag does not indicate a dependent slice. This arrangement provides the same advantages as the conditioning. In other words, the dependency flag is conditioned on the first slice indication. As can be seen inFIG.12, both elements may be understood as the dependency indication and are included at the beginning of the slice header. Embodiment 3 In Embodiment 3, compared with Embodiments 1 and 2, the arranging method of the syntax elements is changed in order to reduce parsing of unnecessary syntax elements. In the above described embodiments, dependent_slice_flag is described in the case where first_slice_in_pic_flag is included as condition for the presence of dependent_slice_flag. However, the first_slice_in_pic_flag and dependent_slice_flag may be both included in the bitstream without being conditioned one on the presence of the other. For instance, the coding method of the dependent_slice_flag is changed in order to be independent of the syntax element dependent_slice_enabled_flag according to one of the modifications described above. FIG.13is a diagram showing an example of a slice header according to the present embodiment.FIG.13illustrates the case of still including the conditioning of the dependency indication on the dependent slice enabling flag. Specifically, in the slice header according to the present embodiment, the dependent_slice_flag is disposed before the slice_address compared with the existing slice header shown inFIG.6. Furthermore, in the slice header according to the present embodiment, compared with the examples inFIGS.10to12, dependent_slice_flag is disposed after pic_parameter_set_id. In the present embodiment, since dependent_slice_flag is disposed before the slice_address, at least the SPS does not need to be parsed for the parsing of dependent_slice_flag. As described above, the slice_address is a syntax element indicating the start of a slice. Furthermore, the slice_address can only be parsed with the help of syntax elements signaled within the SPS (pic_parameter_set_id). Alternatively or in addition, the dependent_slice_enabled_flag is either moved upward within the PPS or it is moved to the SPS and/or VPS. 
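A Python sketch of the parse order of FIG. 13 is given below: the dependency indication follows the Exp-Golomb coded pic_parameter_set_id but precedes slice_address, so only the PPS (for dependent_slice_enabled_flag) is needed, and neither the SPS nor the slice_address length computation. The bit_iter() helper, the dictionary standing in for parsed PPSs, and the additional conditioning on first_slice_in_pic_flag are illustrative assumptions.

def bit_iter(data: bytes):
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def read_ue(bits):
    # Exp-Golomb ue(v): count leading zero bits, then read as many suffix bits.
    leading_zeros = 0
    while next(bits) == 0:
        leading_zeros += 1
    suffix = 0
    for _ in range(leading_zeros):
        suffix = (suffix << 1) | next(bits)
    return (1 << leading_zeros) - 1 + suffix

def is_dependent_slice(bits, is_rap_nalu, pps_by_id):
    first_slice_in_pic_flag = next(bits)
    if is_rap_nalu:
        next(bits)                              # no_output_of_prior_pics_flag
    pps = pps_by_id[read_ue(bits)]              # pic_parameter_set_id, ue(v)
    dependent_slice_flag = 0
    if pps["dependent_slice_enabled_flag"] and not first_slice_in_pic_flag:
        dependent_slice_flag = next(bits)
    return bool(dependent_slice_flag)           # slice_address would follow here

# Example: bits "0 1 1 ..." -> not the first slice, PPS id 0, dependent slice.
print(is_dependent_slice(bit_iter(bytes([0b01100000])), False,
                         {0: {"dependent_slice_enabled_flag": 1}}))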
If the enabled flag is in the VPS and/or in the SPS, it may not be required to parse and keep track of the PPS and the SPS. Modification of Embodiment 3, Effect, and the Like (1) The example ofFIG.13may lead to providing an apparatus for parsing a bitstream of a video sequence coded at least partially with a variable length code and including data units carrying coded slices of video images. In this case, the apparatus is configured to include a parser which extracts from the bitstream the following syntax elements:a dependency indication which is a syntax element indicating for a slice in the slice header whether or not the variable length decoding of the slice depends on other slices;a dependent slice enabling flag included within a parameter set for a plurality of slices and indicating whether or not dependent slices can be included within the bitstream; anda slice address indicating the position within the bitstream at which the slice starts. (2) Moreover, in the present embodiment, the dependency indication is signaled within the slice header before the slice address and after a syntax element identifying the parameter set. With this embodiment, it is possible without causing parsing errors to configure that the dependency indication is included in the bitstream only when the dependent slice enabling flag indicates that dependent slices can be included in the bitstream. (3) In the present embodiment, the dependent slice enabling flag is located within the bitstream in a parameter set (PPS) common for a plurality of slices forming the same picture frame and located at the beginning of the parameter set. However, it is not limited to such. Alternatively (or in addition), the dependent slice enabling flag is located within the bitstream in a parameter set (SPS) common for a plurality of slices forming the same sequence of pictures. Still alternatively (or in addition), the dependent slice enabling flag is located within the bitstream in a parameter set (VPS) common for a plurality of slices forming a plurality of sequences of picture frames. (4) Moreover, in the present embodiment, the VPS_id and the SPS_id may be signaled explicitly in a SEI message. When the dependent_slice_enabled_flag is signaled in the SPS, the dependent_slice_flag must still follow pic_parameter_set_id. Otherwise, parsing dependency is introduced because the SPS_id is signaled in the PPS. With signaling the identification of the current SPS or VPS which carry the dependent_slice_enabled_flag, the dependency indication may be included also before the pic_parameter_set_id since then the picture parameter set parsing is not necessary. Moreover, such SEI message, carrying the VPS_id or SPS_id is not necessary for the decoding operation since these IDs are also determined by parsing the PPS. The SEI message can thus be discarded without affecting the decoding operation after being used by the network elements Embodiment 4 In Embodiment 4, the inter-slice dependency information is duplicated (supplementary to the information signaled in the slice header and/or in a parameter set) in another NAL unit such as an SEI message. For instance, an SEI message may be defined which conveys the inter-slice dependency information in every access unit or before each dependent slice. The term “access unit” refers to a data unit which is made up of a set of NAL units. An access unit includes coded picture slices, that is, VCL NALUs. 
In particular, the access units may define points for random access and may include NALUs of a single picture. However, the access unit is not necessarily a random access point. In the current HEVC specifications, the access unit is defined as a set of NAL units that are consecutive in decoding order and contain exactly one coded picture. In addition to the coded slice NAL units of the coded picture, the access unit may also contain other NAL units not containing slices of the coded picture. The decoding of an access unit always results in a decoded picture. However in a future extension of the HEVC (like Multi-View Coding, (MVC) or Scalable Video Coding, (SVC)), the definition of the access unit may be relaxed/modified. In accordance with the current specifications, the access unit is formed by an access unit delimiter, SEI messages, and VCL NALUs. According to the present embodiment, the dependency indication is located within the bitstream out of the header of a slice to which the dependency indication relates. Moreover, it may be beneficial when the dependency indication is located within the bitstream in a supplementary enhancement information message included in the bitstream before the dependent slice or once per access unit. Embodiment 5 According to Embodiment 5, the inter-slice dependency information is signaled in the NAL header as a flag or implicitly as a NAL unit type with which it is associated. As a rule, the parsing of syntax elements in the NAL header does not depend on any other syntax elements. Every NAL unit header can be parsed independently. The NAL header is the usual place for signaling dependency information. Accordingly, in accordance with the present embodiment, also the inter-slice dependency is signaled therewithin. In other words, the parsing apparatus may be adopted in a router or in a decoder. The parsing apparatus further includes a network adaptation layer unit for adding to a slice of coded video data and to the header of the slice a network adaptation layer, and NAL header. Advantageously, the dependency indication is located within the bitstream in the NAL header and is coded independently of the other syntax elements. The dependency indicator may be placed within the NAL header since the NAL header in current HEVC specifications envisages some reserved bits which can be used therefor. A single bit would be enough to signal the dependency indication. Alternatively, the dependency indication is indicated by a NAL unit type and a predefined NAL unit type is reserved to carry dependency information. Embodiment 6 It is noted that the above five embodiments may be arbitrarily combined in order to enable an efficient parsing of the dependency information in the network elements. Even when their usage is redundant, the embodiments are combinable. Accordingly, the duplicating of the dependency indication can be applied even when the dependency indication is also signaled at the beginning of the slice header. FIG.14shows an example of the NAL unit header1410in which the NAL unit header910shown inFIG.9Ais modified. The NAL unit header1410includes dependent_slice_flag. Moreover, in order to move the dependent_slice_flag into the NAL header and to keep the size of the NAL header fixed due to backward compatibility, the one bit necessary for the dependent_slice_flag is taken from the syntax element nuh_reserved_zero_6 bits of the NAL unit header. Accordingly, the syntax element nuh_reserved_zero_6 bits now only has 5 bits. 
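A sketch of parsing the modified NAL unit header 1410 of FIG. 14 is given below. The exact position of the reused bit, taken here as the first bit of the former reserved field, is an assumption for illustration; the text only states that one bit is taken from nuh_reserved_zero_6bits, which therefore shrinks to 5 bits.

def parse_modified_nal_header(data: bytes) -> dict:
    b0, b1 = data[0], data[1]
    return {
        "forbidden_zero_bit":      (b0 >> 7) & 0x01,
        "nal_unit_type":           (b0 >> 1) & 0x3F,
        "dependent_slice_flag":    b0 & 0x01,             # bit taken from reserved
        "nuh_reserved_zero_5bits": (b1 >> 3) & 0x1F,
        "nuh_temporal_id_plus1":   b1 & 0x07,
    }

# A router now learns the inter-slice dependency from the fixed-length header
# alone, without touching the slice header at all.
print(parse_modified_nal_header(bytes([0x03, 0x01]))["dependent_slice_flag"])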
The syntax element nuh_reserved_zero_6 bits includes bits reserved for future use so that the reduction does not cause any problems and does not require any further modifications. In general, a current VCL NAL unit depends on the previous VCL NAL unit which has the same temporal_layer_id. When the dependent_slice_flag is signalled in the NAL header, one bit will be wasted for both VCL and non-VCL NAL units since every data unit such as picture slice or parameter set has the same NAL header. Accordingly, although it seems that the dependent_slice_flag would also be signaled for parameter sets or for SEI messages, this is unnecessary. Moreover, dependent_slice_flag always needs to be signaled even if the dependent slices are disabled in the sequence parameter set. This leads to an unnecessary overhead. In all above embodiments, the dependency indication may be a one-bit flag. Embodiment 7 According to Embodiment 7, the dependency indication is indicated by a NAL unit type and a predefined NAL unit type is reserved to carry dependency information. Accordingly, a new (separate) VCL NAL type is defined with a similar semiotics as the existing VCL NAL units. For instance, when NAL_unit_type is equal to 15 (or to another predefined type or NALU which is not reserved for another particular type of NALU), then the current VCL NAL unit depends on the previous VCL NAL unit that has the same temporal_layer_id. The dependency relates to the dependency of the current slice on the slice header of a preceding slice, as described above, that is, dependency in parsing. It may be advantageous in these cases to include the bit in the NAL header to the additional NAL unit types. This can be used to indicate whether or not the current slice is a dependent slice. When the dependency information is signaled in the slice header in addition to the NAL header, the signaling in the NAL header becomes optional. Specifically, when the NAL_unit_type field in the NAL header is configured to signal that the current slice is a dependent slice, then it is not possible to signal any other “type” information. For instance, in some cases it might be more beneficial to convey the information that a current slice is a “first picture in the sequence” (NAL_unit_type equal to 10 or 11). When the inter-slice dependency information in the NAL header is optional (since it is duplicated in the slice header), it may be chosen to signal the more valuable information. It may moreover be advantageous to add two or more VCL NAL unit types, such as “dependent slice RAP picture” (required for parsing) or “dependent slice not RAP picture”. “RAP” denotes the random access picture. Random access picture is a picture coded independently (in terms of prediction) of other pictures so that such picture may be used as a starting point for coding and decoding. With this, it is thus suitable as a random access point. In the dependent slice header, the syntax element RapPicFlag is used in the parsing process. Specifically, the syntax element RapPicFlag is an indication indicating whether or not the current picture is a random access picture. The value of the RAPPicFlag depends on the NAL unit type like the following Expression 2. [Math. 2] RapPicFlag=(nal_unit_type ≥7 && nal_unit_type ≤12)  (Expression 2) In other words, in the example shown inFIG.15, the random access pictures are carried by NALUs with NALU type between 7 and 12. 
In order to enable correct parsing and to provide a possibility of slice dependency for the random access pictures, therefore, in the present disclosure, two different NAL unit types are defined in order to guarantee correct parsing of the slice header. As a general rule, even when a new VCL NAL unit type is defined, the parsing of the slice header should still be possible without any problem. Either of multiple NAL types is defined as above or the dependent slice header is changed in such a way that there is no parsing problem. When a new VCL NAL unit type is defined to indicate the dependent slice, the slice header syntax structure can be changed as follows. In the example above the NAL unit type “DS_NUT” is used to indicate that the current VCL nal unit is a dependent slice. Compared to the state-of-the-art slice header syntax structure that is described in Non Patent Literature 3, the following two changes are introduced in the present embodiment. (1) no_output_of_prior_pics_flag is not signaled in the dependent slice header. In other words the presence of no_output_of_prior_pics_flag is based on the condition that the current slice is not a dependent slice. (no_output_of_prior_pics_flag can be present in the slice header when the current slice is not a dependent slice). (2) first_slice_in_pic_flag is signaled conditionally on the value of the nal_unit_type. When the value of the nal_unit_type indicates that the current slice is a dependent slice, the syntax element first_slice_in_pic_flag is not signaled explicitly and inferred to be 0. This saves bit rate at the same quality. According to the example, no_output_of_prior_pics_flag is not signaled when the current slice is a dependent slice. Accordingly the value of the RapPicFlag is not required to be evaluated when the current slice is a dependent slice. Therefore the slice header of a dependent slice can be parsed without a problem. More specifically, the slice header of the dependent slice can be parsed without referring to the NAL unit header of a preceding nal unit header. A problem occurs when the preceding nal unit header is not present at the time of decoding. Secondly, the first_slice_in_pic_flag is signaled based on the value of the NAL_unit_type. This change is the same as that of the example described inFIG.12. InFIG.12, first_slice_in_pic_flag is signaled in the slice header only when the current slice is not a dependent slice (which is indicated by the dependent_slice_flag). Similarly in the above example first_slice_in_pic_flag is signaled only when the nal_unit_type is not equal to “DS_NUT”, which means that the current slice is not a dependent slice. The two changes that are presented above are not required to be done together. It is also possible to perform only one of the changes in the slice header. The benefit of each change is associated with the cost to check whether or not a slice is a dependent slice. However, when the two changes are performed together, the benefits of both changes can come both for the same costs as the benefit of each of the individual changes in the case where the two syntax elements first_slice_in_pic_flag and no_output_of_prior_pics_flag are coded consecutively. Thus, the application of both changes in combination with a consecutive coding of the mentioned two syntax elements gives an advantage over the straight forward application of each of the changes individually. 
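The two slice header changes described above can be sketched as follows, assuming a hypothetical dependent-slice NAL unit type value DS_NUT: for such NAL units neither no_output_of_prior_pics_flag nor first_slice_in_pic_flag is read, and the latter is inferred to be 0. The bit_iter() helper mirrors the earlier sketches, and RAP_TYPES follows Expression 2 (types 7 to 12).

DS_NUT = 15                      # example value only; any unused type would do
RAP_TYPES = range(7, 13)         # RapPicFlag per Expression 2

def bit_iter(data: bytes):
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def parse_slice_header_start(nal_unit_type, bits):
    is_dependent = (nal_unit_type == DS_NUT)
    if is_dependent:
        first_slice_in_pic_flag = 0          # inferred, not signaled
    else:
        first_slice_in_pic_flag = next(bits)
        if nal_unit_type in RAP_TYPES:
            next(bits)                       # no_output_of_prior_pics_flag
    return is_dependent, first_slice_in_pic_flag

# Example: a DS_NUT slice needs no slice-header bits to be classified.
print(parse_slice_header_start(DS_NUT, bit_iter(b"\x00")))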
In all of the explanation in the embodiments, it is also possible to remove dependent_slice_enabled_flag from the bitstream when the dependent slice indication is not coded conditionally on it. In other words, when, for instance, a new NAL unit type is used to indicate that the current slice is a dependent slice, then the dependent_slice_enabled_flag can be removed from the bitstream. FIG. 15 shows a NAL unit header 1510 that is the same as the NAL unit header 910 shown in FIG. 9A and a slice header 1520 which is changed from the slice header 920 shown in FIG. 9A. The slice header 1520 includes the determination of the dependent_slice_flag value in accordance with the type of the NALU. In particular, the values 15 and 16 of the NAL_unit_type syntax element define dependent slices. When NAL_unit_type is equal to 15, the type of the slice is a dependent slice of a random access picture. If, on the other hand, NAL_unit_type is equal to 16, the slice is a dependent slice of a non-random access picture. Therefore, a relationship of the following Expression 3 is established. [Math. 3] RapPicFlag=(nal_unit_type ≥7 && nal_unit_type ≤12 || nal_unit_type==15)  (Expression 3) It is noted that the values 15 and 16 were selected only as an example. As is clear to those skilled in the art, any predefined numbers may be adopted which are not used otherwise. Specifically, a first type of NALU is to be defined for identifying a dependent slice content of a random access picture, and a second type of NALU is to be defined for identifying a dependent slice content of a non-random access picture. Moreover, a restriction may be applied that the dependent slices are only used for RAPs or only used for non-RAPs. In such cases, only one new NALU type is necessary. Embodiment 8 FIG. 16 is a diagram showing an alternative solution. A NAL unit header 1610 is the same as the NAL unit header 910. The slice header 1620 assumes the definition of NAL_unit_type with the values 15 and 16 signaling dependent slices as described above. However, the NAL unit type is not used in the parsing of the dependent slice flag. This enables the usage of the new NAL_unit_type values to be optional for the encoder. Accordingly, the advantage of the present embodiment is only achieved when the encoder determines to adopt the new NALU types. Then, the router only needs to look into the NALU type. However, when the encoder does not use the new NALU types, the router would treat the dependent slices as in the state of the art. Summarizing, the dependency indication may be indicated by a NAL unit type. A predefined NAL unit type may be reserved to carry coded slices the slice header of which depends on the slice header of a preceding slice. Advantageously, a separate NAL unit type indicating the dependency is provided for random access pictures and for non-random access pictures. Summarizing, the above described embodiments relate to syntax of a bitstream carrying encoded video sequences. In particular, the above described embodiments relate to syntax related to dependent and entropy slices, of which the slice header depends on the slice header of a preceding slice. In order to allow a media-aware network element to consider this kind of dependence without essentially increasing its complexity and delay due to parsing, the indication of dependency is signaled at the beginning of the packets, or in other words in the proximity of the headers or parameters which are to be parsed.
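The NAL unit type based classification described above can be sketched as follows; the values 15 and 16 are used purely as examples, as stated in the text.

DS_RAP_NUT = 15          # dependent slice of a random access picture
DS_NON_RAP_NUT = 16      # dependent slice of a non-random access picture

def is_dependent_slice_nalu(nal_unit_type: int) -> bool:
    return nal_unit_type in (DS_RAP_NUT, DS_NON_RAP_NUT)

def rap_pic_flag(nal_unit_type: int) -> bool:
    # Expression 3: RAP pictures are types 7..12 plus the new type 15.
    return (7 <= nal_unit_type <= 12) or nal_unit_type == DS_RAP_NUT

for t in (1, 8, 15, 16):
    print(t, is_dependent_slice_nalu(t), rap_pic_flag(t))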
Such early signaling is achieved, for instance, by including the dependency indication at the beginning of the slice header (FIGS. 10 to 12), possibly after the parameter set identifier and before the slice address, or by including the dependency indication before the slice address (FIGS. 10 and 11), or by providing the dependency indication in a NALU header (FIG. 14), in a separate message, or by a special NALU type for NALUs carrying dependent slices (FIGS. 15 and 16). Modifications of Embodiments 1 to 8, Effect, and the Like Various modifications and corrections are possible for the embodiments according to the present disclosure. Each of the structural elements in each of the above-described embodiments may be configured in the form of an exclusive hardware product (processing circuit), or may be realized by executing a software program suitable for the structural element. Each of the structural elements may be realized by means of a program executing unit, such as a CPU and a processor, reading and executing the software program recorded on a recording medium such as a hard disk or a semiconductor memory. Although in Embodiments 1 to 8 the description assumes wavefront, it is not limited to such. However, in the case of wavefront, all substreams cannot be started at the same time. As described above, regarding each of the substreams except for the substream at the beginning, the start of processing (coding or decoding) is delayed by two LCUs from the preceding substream. Therefore, in wavefront, a further shortening of the processing is required. In the present embodiment, by locating the dependency indication (dependent_slice_flag) after the syntax element which identifies the PPS and before the slice address, the number of syntax elements to be parsed can be reduced and thus the processing is reduced. Moreover, in the above described Embodiments 1 to 8, by arranging the dependency indication upward within the slice header (notably at the beginning), it is possible, for example, to check whether or not each of the slices is a dependent slice at an early stage of the picture processing. In other words, by performing, at the time of the start of processing on a picture (coding or decoding), a step of checking whether or not each of the slices is a dependent slice, it is possible to extract a starting point of the parallel processing at the time of the start of processing on the picture. In other words, when the picture includes a plurality of normal slices, it is possible to extract a starting point of the parallel processing at the time of the processing on a picture or at an early stage of the processing. Here, conventionally, when the dependency indication is disposed after the slice address, it is not possible to check whether the slice is a dependent slice or a normal slice until the parsing of the slice address is completed. In this case, the start of the processing on a normal slice in the middle of the picture is significantly delayed from the start of the processing on the normal slice at the beginning of the picture. Conversely, in the above described Embodiments 1 to 8, since it is possible to check whether or not each of the slices is a dependent slice at an earlier stage of the processing on a picture, it is possible to expedite the start of the processing on a normal slice in the middle of the picture. In other words, it is possible to start the processing on the normal slice in the middle of a picture at the same time as the normal slice at the beginning of the picture.
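The effect described above can be illustrated with the following short sketch: because the dependency indication is readable at the very start of each slice header, the entry points for parallel processing (the normal slices) can be identified before any slice data is decoded. The slice list with pre-read flags is an assumption made for illustration.

def parallel_entry_points(slices):
    """Return indices of slices whose decoding can start independently."""
    return [i for i, s in enumerate(slices) if not s["dependent_slice_flag"]]

picture = [
    {"dependent_slice_flag": 0},   # normal slice at the picture start
    {"dependent_slice_flag": 1},   # dependent slice, waits for slice 0
    {"dependent_slice_flag": 0},   # normal slice in the middle of the picture
    {"dependent_slice_flag": 1},
]
# Slices 0 and 2 can be handed to separate decoding threads immediately.
print(parallel_entry_points(picture))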
Embodiment 9 The processing described in each of embodiments can be simply implemented in an independent computer system, by recording, in a recording medium, a program for implementing the configurations of the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each of embodiments. The recording media may be any recording media as long as the program can be recorded, such as a magnetic disk, an optical disk, a magnetic optical disk, an IC card, and a semiconductor memory. Hereinafter, the applications to the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each of embodiments and systems using thereof will be described. The system has a feature of having an image coding and decoding apparatus that includes an image coding apparatus using the image coding method and an image decoding apparatus using the image decoding method. Other configurations in the system can be changed as appropriate depending on the cases. FIG.17illustrates an overall configuration of a content providing system ex100for implementing content distribution services. The area for providing communication services is divided into cells of desired size, and base stations ex106, ex107, ex108, ex109, and ex110which are fixed wireless stations are placed in each of the cells. The content providing system ex100is connected to devices, such as a computer ex111, a personal digital assistant (PDA) ex112, a camera ex113, a cellular phone ex114and a game machine ex115, via the Internet ex101, an Internet service provider ex102, a telephone network ex104, as well as the base stations ex106to ex110, respectively. However, the configuration of the content providing system ex100is not limited to the configuration shown inFIG.17, and a combination in which any of the elements are connected is acceptable. In addition, each device may be directly connected to the telephone network ex104, rather than via the base stations ex106to ex110which are the fixed wireless stations. Furthermore, the devices may be interconnected to each other via a short distance wireless communication and others. The camera ex113, such as a digital video camera, is capable of capturing video. A camera ex116, such as a digital camera, is capable of capturing both still images and video. Furthermore, the cellular phone ex114may be the one that meets any of the standards such as Global System for Mobile Communications (GSM) (registered trademark), Code Division Multiple Access (CDMA), Wideband-Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and High Speed Packet Access (HSPA). Alternatively, the cellular phone ex114may be a Personal Handyphone System (PHS). In the content providing system ex100, a streaming server ex103is connected to the camera ex113and others via the telephone network ex104and the base station ex109, which enables distribution of images of a live show and others. In such a distribution, a content (for example, video of a music live show) captured by the user using the camera ex113is coded as described above in each of embodiments (i.e., the camera functions as the image coding apparatus according to an aspect of the present disclosure), and the coded content is transmitted to the streaming server ex103. On the other hand, the streaming server ex103carries out stream distribution of the transmitted content data to the clients upon their requests. 
The clients include the computer ex111, the PDA ex112, the camera ex113, the cellular phone ex114, and the game machine ex115that are capable of decoding the above-mentioned coded data. Each of the devices that have received the distributed data decodes and reproduces the coded data (i.e., functions as the image decoding apparatus according to an aspect of the present disclosure). The captured data may be coded by the camera ex113or the streaming server ex103that transmits the data, or the coding processes may be shared between the camera ex113and the streaming server ex103. Similarly, the distributed data may be decoded by the clients or the streaming server ex103, or the decoding processes may be shared between the clients and the streaming server ex103. Furthermore, the data of the still images and video captured by not only the camera ex113but also the camera ex116may be transmitted to the streaming server ex103through the computer ex111. The coding processes may be performed by the camera ex116, the computer ex111, or the streaming server ex103, or shared among them. Furthermore, the coding and decoding processes may be performed by an LSI ex500generally included in each of the computer ex111and the devices. The LSI ex500may be configured of a single chip or a plurality of chips. Software for coding and decoding video may be integrated into some type of a recording medium (such as a CD-ROM, a flexible disk, and a hard disk) that is readable by the computer ex111and others, and the coding and decoding processes may be performed using the software. Furthermore, when the cellular phone ex114is equipped with a camera, the video data obtained by the camera may be transmitted. The video data is data coded by the LSI ex500included in the cellular phone ex114. Furthermore, the streaming server ex103may be composed of servers and computers, and may decentralize data and process the decentralized data, record, or distribute data. As described above, the clients may receive and reproduce the coded data in the content providing system ex100. In other words, the clients can receive and decode information transmitted by the user, and reproduce the decoded data in real time in the content providing system ex100, so that the user who does not have any particular right and equipment can implement personal broadcasting. Aside from the example of the content providing system ex100, at least one of the moving picture coding apparatus (image coding apparatus) and the moving picture decoding apparatus (image decoding apparatus) described in each of embodiments may be implemented in a digital broadcasting system ex200illustrated inFIG.18. More specifically, a broadcast station ex201communicates or transmits, via radio waves to a broadcast satellite ex202, multiplexed data obtained by multiplexing audio data and others onto video data. The video data is data coded by the moving picture coding method described in each of embodiments (i.e., data coded by the image coding apparatus according to an aspect of the present disclosure). Upon receipt of the multiplexed data, the broadcast satellite ex202transmits radio waves for broadcasting. Then, a home-use antenna ex204with a satellite broadcast reception function receives the radio waves. Next, a device such as a television (receiver) ex300and a set top box (STB) ex217decodes the received multiplexed data, and reproduces the decoded data (i.e., functions as the image decoding apparatus according to an aspect of the present disclosure). 
Furthermore, a reader/recorder ex218(i) reads and decodes the multiplexed data recorded on a recording medium ex215, such as a DVD and a BD, or (ii) codes video signals in the recording medium ex215, and in some cases, writes data obtained by multiplexing an audio signal on the coded data. The reader/recorder ex218can include the moving picture decoding apparatus or the moving picture coding apparatus as shown in each of embodiments. In this case, the reproduced video signals are displayed on the monitor ex219, and can be reproduced by another device or system using the recording medium ex215on which the multiplexed data is recorded. It is also possible to implement the moving picture decoding apparatus in the set top box ex217connected to the cable ex203for a cable television or to the antenna ex204for satellite and/or terrestrial broadcasting, so as to display the video signals on the monitor ex219of the television ex300. The moving picture decoding apparatus may be implemented not in the set top box but in the television ex300. FIG.19illustrates the television (receiver) ex300that uses the moving picture coding method and the moving picture decoding method described in each of embodiments. The television ex300includes: a tuner ex301that obtains or provides multiplexed data obtained by multiplexing audio data onto video data, through the antenna ex204or the cable ex203, etc. that receives a broadcast; a modulation/demodulation unit ex302that demodulates the received multiplexed data or modulates data into multiplexed data to be supplied outside; and a multiplexing/demultiplexing unit ex303that demultiplexes the modulated multiplexed data into video data and audio data, or multiplexes video data and audio data coded by a signal processing unit ex306into data. The television ex300further includes: a signal processing unit ex306including an audio signal processing unit ex304and a video signal processing unit ex305that decode audio data and video data and code audio data and video data, respectively (which function as the image coding apparatus and the image decoding apparatus according to the aspects of the present disclosure); and an output unit ex309including a speaker ex307that provides the decoded audio signal, and a display unit ex308that displays the decoded video signal, such as a display. Furthermore, the television ex300includes an interface unit ex317including an operation input unit ex312that receives an input of a user operation. Furthermore, the television ex300includes a control unit ex310that controls overall each constituent element of the television ex300, and a power supply circuit unit ex311that supplies power to each of the elements. Other than the operation input unit ex312, the interface unit ex317may include: a bridge ex313that is connected to an external device, such as the reader/recorder ex218; a slot unit ex314for enabling attachment of the recording medium ex216, such as an SD card; a driver ex315to be connected to an external recording medium, such as a hard disk; and a modem ex316to be connected to a telephone network. Here, the recording medium ex216can electrically record information using a non-volatile/volatile semiconductor memory element for storage. The constituent elements of the television ex300are connected to each other through a synchronous bus. First, the configuration in which the television ex300decodes multiplexed data obtained from outside through the antenna ex204and others and reproduces the decoded data will be described.
In the television ex300, upon a user operation through a remote controller ex220and others, the multiplexing/demultiplexing unit ex303demultiplexes the multiplexed data demodulated by the modulation/demodulation unit ex302, under control of the control unit ex310including a CPU. Furthermore, the audio signal processing unit ex304decodes the demultiplexed audio data, and the video signal processing unit ex305decodes the demultiplexed video data, using the decoding method described in each of embodiments, in the television ex300. The output unit ex309provides the decoded video signal and audio signal outside, respectively. When the output unit ex309provides the video signal and the audio signal, the signals may be temporarily stored in buffers ex318and ex319, and others so that the signals are reproduced in synchronization with each other. Furthermore, the television ex300may read multiplexed data not through a broadcast and others but from the recording media ex215and ex216, such as a magnetic disk, an optical disk, and a SD card. Next, a configuration in which the television ex300codes an audio signal and a video signal, and transmits the data outside or writes the data on a recording medium will be described. In the television ex300, upon a user operation through the remote controller ex220and others, the audio signal processing unit ex304codes an audio signal, and the video signal processing unit ex305codes a video signal, under control of the control unit ex310using the coding method described in each of embodiments. The multiplexing/demultiplexing unit ex303multiplexes the coded video signal and audio signal, and provides the resulting signal outside. When the multiplexing/demultiplexing unit ex303multiplexes the video signal and the audio signal, the signals may be temporarily stored in the buffers ex320and ex321, and others so that the signals are reproduced in synchronization with each other. Here, the buffers ex318, ex319, ex320, and ex321may be plural as illustrated, or at least one buffer may be shared in the television ex300. Furthermore, data may be stored in a buffer so that the system overflow and underflow may be avoided between the modulation/demodulation unit ex302and the multiplexing/demultiplexing unit ex303, for example. Furthermore, the television ex300may include a configuration for receiving an AV input from a microphone or a camera other than the configuration for obtaining audio and video data from a broadcast or a recording medium, and may code the obtained data. Although the television ex300can code, multiplex, and provide outside data in the description, it may be capable of only receiving, decoding, and providing outside data but not the coding, multiplexing, and providing outside data. Furthermore, when the reader/recorder ex218reads or writes multiplexed data from or on a recording medium, one of the television ex300and the reader/recorder ex218may decode or code the multiplexed data, and the television ex300and the reader/recorder ex218may share the decoding or coding. As an example,FIG.20illustrates a configuration of an information reproducing/recording unit ex400when data is read or written from or on an optical disk. The information reproducing/recording unit ex400includes constituent elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407to be described hereinafter. 
The optical head ex401irradiates a laser spot in a recording surface of the recording medium ex215that is an optical disk to write information, and detects reflected light from the recording surface of the recording medium ex215to read the information. The modulation recording unit ex402electrically drives a semiconductor laser included in the optical head ex401, and modulates the laser light according to recorded data. The reproduction demodulating unit ex403amplifies a reproduction signal obtained by electrically detecting the reflected light from the recording surface using a photo detector included in the optical head ex401, and demodulates the reproduction signal by separating a signal component recorded on the recording medium ex215to reproduce the necessary information. The buffer ex404temporarily holds the information to be recorded on the recording medium ex215and the information reproduced from the recording medium ex215. The disk motor ex405rotates the recording medium ex215. The servo control unit ex406moves the optical head ex401to a predetermined information track while controlling the rotation drive of the disk motor ex405so as to follow the laser spot. The system control unit ex407controls overall the information reproducing/recording unit ex400. The reading and writing processes can be implemented by the system control unit ex407using various information stored in the buffer ex404and generating and adding new information as necessary, and by the modulation recording unit ex402, the reproduction demodulating unit ex403, and the servo control unit ex406that record and reproduce information through the optical head ex401while being operated in a coordinated manner. The system control unit ex407includes, for example, a microprocessor, and executes processing by causing a computer to execute a program for read and write. Although the optical head ex401irradiates a laser spot in the description, it may perform high-density recording using near field light. FIG.21illustrates the recording medium ex215that is the optical disk. On the recording surface of the recording medium ex215, guide grooves are spirally formed, and an information track ex230records, in advance, address information indicating an absolute position on the disk according to change in a shape of the guide grooves. The address information includes information for determining positions of recording blocks ex231that are a unit for recording data. Reproducing the information track ex230and reading the address information in an apparatus that records and reproduces data can lead to determination of the positions of the recording blocks. Furthermore, the recording medium ex215includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The data recording area ex233is an area for use in recording the user data. The inner circumference area ex232and the outer circumference area ex234that are inside and outside of the data recording area ex233, respectively are for specific use except for recording the user data. The information reproducing/recording unit400reads and writes coded audio, coded video data, or multiplexed data obtained by multiplexing the coded audio and video data, from and on the data recording area ex233of the recording medium ex215. 
Although an optical disk having a layer, such as a DVD and a BD is described as an example in the description, the optical disk is not limited to such, and may be an optical disk having a multilayer structure and capable of being recorded on a part other than the surface. Furthermore, the optical disk may have a structure for multidimensional recording/reproduction, such as recording of information using light of colors with different wavelengths in the same portion of the optical disk and for recording information having different layers from various angles. Furthermore, a car ex210having an antenna ex205can receive data from the satellite ex202and others, and reproduce video on a display device such as a car navigation system ex211set in the car ex210, in the digital broadcasting system ex200. Here, a configuration of the car navigation system ex211will be a configuration, for example, including a GPS receiving unit from the configuration illustrated inFIG.19. The same will be true for the configuration of the computer ex111, the cellular phone ex114, and others. FIG.22Aillustrates the cellular phone ex114that uses the moving picture coding method and the moving picture decoding method described in embodiments. The cellular phone ex114includes: an antenna ex350for transmitting and receiving radio waves through the base station ex110; a camera unit ex365capable of capturing moving and still images; and a display unit ex358such as a liquid crystal display for displaying the data such as decoded video captured by the camera unit ex365or received by the antenna ex350. The cellular phone ex114further includes: a main body unit including an operation key unit ex366; an audio output unit ex357such as a speaker for output of audio; an audio input unit ex356such as a microphone for input of audio; a memory unit ex367for storing captured video or still pictures, recorded audio, coded or decoded data of the received video, the still pictures, e-mails, or others; and a slot unit ex364that is an interface unit for a recording medium that stores data in the same manner as the memory unit ex367. Next, an example of a configuration of the cellular phone ex114will be described with reference toFIG.22B. In the cellular phone ex114, a main control unit ex360designed to control overall each unit of the main body including the display unit ex358as well as the operation key unit ex366is connected mutually, via a synchronous bus ex370, to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, a liquid crystal display (LCD) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367. When a call-end key or a power key is turned ON by a user's operation, the power supply circuit unit ex361supplies the respective units with power from a battery pack so as to activate the cell phone ex114. In the cellular phone ex114, the audio signal processing unit ex354converts the audio signals collected by the audio input unit ex356in voice conversation mode into digital audio signals under the control of the main control unit ex360including a CPU, ROM, and RAM. 
Then, the modulation/demodulation unit ex352performs spread spectrum processing on the digital audio signals, and the transmitting and receiving unit ex351performs digital-to-analog conversion and frequency conversion on the data, so as to transmit the resulting data via the antenna ex350. Also, in the cellular phone ex114, the transmitting and receiving unit ex351amplifies the data received by the antenna ex350in voice conversation mode and performs frequency conversion and the analog-to-digital conversion on the data. Then, the modulation/demodulation unit ex352performs inverse spread spectrum processing on the data, and the audio signal processing unit ex354converts it into analog audio signals, so as to output them via the audio output unit ex357. Furthermore, when an e-mail in data communication mode is transmitted, text data of the e-mail inputted by operating the operation key unit ex366and others of the main body is sent out to the main control unit ex360via the operation input control unit ex362. The main control unit ex360causes the modulation/demodulation unit ex352to perform spread spectrum processing on the text data, and the transmitting and receiving unit ex351performs the digital-to-analog conversion and the frequency conversion on the resulting data to transmit the data to the base station ex110via the antenna ex350. When an e-mail is received, processing that is approximately inverse to the processing for transmitting an e-mail is performed on the received data, and the resulting data is provided to the display unit ex358. When video, still images, or video and audio in data communication mode is or are transmitted, the video signal processing unit ex355compresses and codes video signals supplied from the camera unit ex365using the moving picture coding method shown in each of embodiments (i.e., functions as the image coding apparatus according to the aspect of the present disclosure), and transmits the coded video data to the multiplexing/demultiplexing unit ex353. In contrast, during when the camera unit ex365captures video, still images, and others, the audio signal processing unit ex354codes audio signals collected by the audio input unit ex356, and transmits the coded audio data to the multiplexing/demultiplexing unit ex353. The multiplexing/demultiplexing unit ex353multiplexes the coded video data supplied from the video signal processing unit ex355and the coded audio data supplied from the audio signal processing unit ex354, using a predetermined method. Then, the modulation/demodulation unit (modulation/demodulation circuit unit) ex352performs spread spectrum processing on the multiplexed data, and the transmitting and receiving unit ex351performs digital-to-analog conversion and frequency conversion on the data so as to transmit the resulting data via the antenna ex350. When receiving data of a video file which is linked to a Web page and others in data communication mode or when receiving an e-mail with video and/or audio attached, in order to decode the multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353demultiplexes the multiplexed data into a video data bit stream and an audio data bit stream, and supplies the video signal processing unit ex355with the coded video data and the audio signal processing unit ex354with the coded audio data, through the synchronous bus ex370. 
The video signal processing unit ex355decodes the video signal using a moving picture decoding method corresponding to the moving picture coding method shown in each of embodiments (i.e., functions as the image decoding apparatus according to the aspect of the present disclosure), and then the display unit ex358displays, for instance, the video and still images included in the video file linked to the Web page via the LCD control unit ex359. Furthermore, the audio signal processing unit ex354decodes the audio signal, and the audio output unit ex357provides the audio. Furthermore, similarly to the television ex300, it is possible for a terminal such as the cellular phone ex114to have 3 types of implementation configurations including not only (i) a transmitting and receiving terminal including both a coding apparatus and a decoding apparatus, but also (ii) a transmitting terminal including only a coding apparatus and (iii) a receiving terminal including only a decoding apparatus. Although the digital broadcasting system ex200receives and transmits the multiplexed data obtained by multiplexing audio data onto video data in the description, the multiplexed data may be data obtained by multiplexing not audio data but character data related to video onto video data, and may be not multiplexed data but video data itself. As such, the moving picture coding method and the moving picture decoding method in each of embodiments can be used in any of the devices and systems described. Thus, the advantages described in each of embodiments can be obtained. Furthermore, the present disclosure is not limited to embodiments, and various modifications and revisions are possible without departing from the scope of the present disclosure. Embodiment 10 Video data can be generated by switching, as necessary, between (i) the moving picture coding method or the moving picture coding apparatus shown in each of embodiments and (ii) a moving picture coding method or a moving picture coding apparatus in conformity with a different standard, such as MPEG-2, MPEG-4 AVC, and VC-1. Here, when a plurality of video data that conforms to the different standards is generated and is then decoded, the decoding methods need to be selected to conform to the different standards. However, since the standard to which each of the plurality of the video data to be decoded conforms cannot be detected, there is a problem that an appropriate decoding method cannot be selected. In order to solve the problem, multiplexed data obtained by multiplexing audio data and others onto video data has a structure including identification information indicating to which standard the video data conforms. The specific structure of the multiplexed data including the video data generated in the moving picture coding method and by the moving picture coding apparatus shown in each of embodiments will be hereinafter described. The multiplexed data is a digital stream in the MPEG-2 Transport Stream format. FIG.23illustrates a structure of multiplexed data. As illustrated inFIG.23, the multiplexed data can be obtained by multiplexing at least one of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream. The video stream represents primary video and secondary video of a movie, the audio stream (IG) represents a primary audio part and a secondary audio part to be mixed with the primary audio part, and the presentation graphics stream represents subtitles of the movie. 
Here, the primary video is normal video to be displayed on a screen, and the secondary video is video to be displayed on a smaller window in the primary video. Furthermore, the interactive graphics stream represents an interactive screen to be generated by arranging the GUI components on a screen. The video stream is coded in the moving picture coding method or by the moving picture coding apparatus shown in each of embodiments, or in a moving picture coding method or by a moving picture coding apparatus in conformity with a conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1. The audio stream is coded in accordance with a standard, such as Dolby-AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, and linear PCM. Each stream included in the multiplexed data is identified by PID. For example, 0x1011 is allocated to the video stream to be used for video of a movie, 0x1100 to 0x111F are allocated to the audio streams, 0x1200 to 0x121F are allocated to the presentation graphics streams, 0x1400 to 0x141F are allocated to the interactive graphics streams, 0x1B00 to 0x1B1F are allocated to the video streams to be used for secondary video of the movie, and 0x1A00 to 0x1A1F are allocated to the audio streams to be used for the secondary audio to be mixed with the primary audio. FIG.24schematically illustrates how data is multiplexed. First, a video stream ex235composed of video frames and an audio stream ex238composed of audio frames are transformed into a stream of PES packets ex236and a stream of PES packets ex239, and further into TS packets ex237and TS packets ex240, respectively. Similarly, data of a presentation graphics stream ex241and data of an interactive graphics stream ex244are transformed into a stream of PES packets ex242and a stream of PES packets ex245, and further into TS packets ex243and TS packets ex246, respectively. These TS packets are multiplexed into a stream to obtain multiplexed data ex247. FIG.25illustrates how a video stream is stored in a stream of PES packets in more detail. The first bar inFIG.25shows a video frame stream in a video stream. The second bar shows the stream of PES packets. As indicated by arrows denoted as yy1, yy2, yy3, and yy4inFIG.25, the video stream is divided into pictures as I pictures, B pictures, and P pictures each of which is a video presentation unit, and the pictures are stored in a payload of each of the PES packets. Each of the PES packets has a PES header, and the PES header stores a Presentation Time-Stamp (PTS) indicating a display time of the picture, and a Decoding Time-Stamp (DTS) indicating a decoding time of the picture. FIG.26illustrates a format of TS packets to be finally written on the multiplexed data. Each of the TS packets is a 188-byte fixed length packet including a 4-byte TS header having information, such as a PID for identifying a stream and a 184-byte TS payload for storing data. The PES packets are divided, and stored in the TS payloads, respectively. When a BD ROM is used, each of the TS packets is given a 4-byte TP_Extra_Header, thus resulting in 192-byte source packets. The source packets are written on the multiplexed data. The TP_Extra_Header stores information such as an Arrival_Time_Stamp (ATS). The ATS shows a transfer start time at which each of the TS packets is to be transferred to a PID filter. The source packets are disposed in the multiplexed data as shown at the bottom ofFIG.26. The numbers incrementing from the head of the multiplexed data are called source packet numbers (SPNs). 
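To make the packet layout described above more concrete, the following sketch parses the 4-byte TS header of a 188-byte packet and groups payloads by PID. It follows the standard MPEG-2 TS header fields; the demultiplexing loop is simplified (it ignores adaptation fields), and the function names are ours.

```python
# Rough sketch of reading the 4-byte TS packet header (188-byte packets:
# 4-byte header + 184-byte payload) and grouping payload bytes by PID.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes):
    assert len(packet) == TS_PACKET_SIZE and packet[0] == SYNC_BYTE
    pid = ((packet[1] & 0x1F) << 8) | packet[2]       # 13-bit PID identifying the stream
    payload_unit_start = (packet[1] >> 6) & 0x1       # set at the start of a PES packet/section
    adaptation_field_control = (packet[3] >> 4) & 0x3
    continuity_counter = packet[3] & 0xF
    return pid, payload_unit_start, adaptation_field_control, continuity_counter

def demux_by_pid(multiplexed: bytes):
    """Group TS payloads by PID, e.g. 0x1011 for the primary video stream."""
    streams = {}
    for off in range(0, len(multiplexed) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = multiplexed[off:off + TS_PACKET_SIZE]
        pid, *_ = parse_ts_header(packet)
        streams.setdefault(pid, bytearray()).extend(packet[4:])   # adaptation fields ignored
    return streams
```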
Each of the TS packets included in the multiplexed data includes not only streams of audio, video, subtitles and others, but also a Program Association Table (PAT), a Program Map Table (PMT), and a Program Clock Reference (PCR). The PAT shows what a PID in a PMT used in the multiplexed data indicates, and a PID of the PAT itself is registered as zero. The PMT stores PIDs of the streams of video, audio, subtitles and others included in the multiplexed data, and attribute information of the streams corresponding to the PIDs. The PMT also has various descriptors relating to the multiplexed data. The descriptors have information such as copy control information showing whether copying of the multiplexed data is permitted or not. The PCR stores STC time information corresponding to an ATS showing when the PCR packet is transferred to a decoder, in order to achieve synchronization between an Arrival Time Clock (ATC) that is a time axis of ATSs, and an System Time Clock (STC) that is a time axis of PTSs and DTSs. FIG.27illustrates the data structure of the PMT in detail. A PMT header is disposed at the top of the PMT. The PMT header describes the length of data included in the PMT and others. A plurality of descriptors relating to the multiplexed data is disposed after the PMT header. Information such as the copy control information is described in the descriptors. After the descriptors, a plurality of pieces of stream information relating to the streams included in the multiplexed data is disposed. Each piece of stream information includes stream descriptors each describing information, such as a stream type for identifying a compression codec of a stream, a stream PID, and stream attribute information (such as a frame rate or an aspect ratio). The stream descriptors are equal in number to the number of streams in the multiplexed data. When the multiplexed data is recorded on a recording medium and others, it is recorded together with multiplexed data information files. Each of the multiplexed data information files is management information of the multiplexed data as shown inFIG.28. The multiplexed data information files are in one to one correspondence with the multiplexed data, and each of the files includes multiplexed data information, stream attribute information, and an entry map. As illustrated inFIG.28, the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time. The system rate indicates the maximum transfer rate at which a system target decoder to be described later transfers the multiplexed data to a PID filter. The intervals of the ATSs included in the multiplexed data are set to not higher than a system rate. The reproduction start time indicates a PTS in a video frame at the head of the multiplexed data. An interval of one frame is added to a PTS in a video frame at the end of the multiplexed data, and the PTS is set to the reproduction end time. As shown inFIG.29, a piece of attribute information is registered in the stream attribute information, for each PID of each stream included in the multiplexed data. Each piece of attribute information has different information depending on whether the corresponding stream is a video stream, an audio stream, a presentation graphics stream, or an interactive graphics stream. 
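As a small illustration of how the multiplexed data information described above can be derived, the sketch below computes the reproduction start and end times from the video PTS values, with the end time equal to the last PTS plus one frame interval. The function name and the 90 kHz clock assumption are ours, not stated in the text.

```python
# Hedged sketch: derive the multiplexed data information from video PTS values.

def multiplexed_data_info(video_pts, frame_rate, system_rate_bps):
    clock_hz = 90_000                        # assumed PTS tick rate
    frame_interval = int(clock_hz / frame_rate)
    return {
        "system_rate": system_rate_bps,      # maximum transfer rate to the PID filter
        "reproduction_start_time": video_pts[0],
        "reproduction_end_time": video_pts[-1] + frame_interval,
    }

info = multiplexed_data_info(video_pts=[0, 3753, 7506], frame_rate=23.976,
                             system_rate_bps=48_000_000)
```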
Each piece of video stream attribute information carries information including what kind of compression codec is used for compressing the video stream, and the resolution, aspect ratio and frame rate of the pieces of picture data that is included in the video stream. Each piece of audio stream attribute information carries information including what kind of compression codec is used for compressing the audio stream, how many channels are included in the audio stream, which language the audio stream supports, and how high the sampling frequency is. The video stream attribute information and the audio stream attribute information are used for initialization of a decoder before the player plays back the information. In the present embodiment, the multiplexed data to be used is of a stream type included in the PMT. Furthermore, when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving picture coding method or the moving picture coding apparatus described in each of embodiments includes a step or a unit for allocating unique information indicating video data generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments, to the stream type included in the PMT or the video stream attribute information. With the configuration, the video data generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments can be distinguished from video data that conforms to another standard. Furthermore,FIG.30illustrates steps of the moving picture decoding method according to the present embodiment. In Step exS100, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is obtained from the multiplexed data. Next, in Step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments. When it is determined that the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving picture coding method or the moving picture coding apparatus in each of embodiments, in Step exS102, decoding is performed by the moving picture decoding method in each of embodiments. Furthermore, when the stream type or the video stream attribute information indicates conformance to the conventional standards, such as MPEG-2, MPEG-4 AVC, and VC-1, in Step exS103, decoding is performed by a moving picture decoding method in conformity with the conventional standards. As such, allocating a new unique value to the stream type or the video stream attribute information enables determination whether or not the moving picture decoding method or the moving picture decoding apparatus that is described in each of embodiments can perform decoding. Even when multiplexed data that conforms to a different standard is input, an appropriate decoding method or apparatus can be selected. Thus, it becomes possible to decode information without any error. Furthermore, the moving picture coding method or apparatus, or the moving picture decoding method or apparatus in the present embodiment can be used in the devices and systems described above. 
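The selection logic of FIG.30 (steps exS100 to exS103) can be summarized by the following hedged sketch. The stream type is assumed to have been read from the PMT (step exS100); the constant 0x24 stands in for the unique value allocated to the new coding method, the conventional stream-type values are illustrative, and the decoder objects are placeholders with a decode method.

```python
# Sketch of dispatching to a decoder based on the identification information.

NEW_FORMAT_STREAM_TYPE = 0x24        # illustrative value for the new coding method
CONVENTIONAL_STANDARD_BY_TYPE = {0x02: "MPEG-2", 0x1B: "MPEG-4 AVC", 0xEA: "VC-1"}

def select_and_decode(stream_type, multiplexed_data, new_decoder, legacy_decoders):
    if stream_type == NEW_FORMAT_STREAM_TYPE:                 # step exS101
        return new_decoder.decode(multiplexed_data)           # step exS102
    standard = CONVENTIONAL_STANDARD_BY_TYPE.get(stream_type)
    if standard is None:
        raise ValueError(f"unrecognized stream type 0x{stream_type:02X}")
    return legacy_decoders[standard].decode(multiplexed_data) # step exS103
```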
Embodiment 11 Each of the moving picture coding method, the moving picture coding apparatus, the moving picture decoding method, and the moving picture decoding apparatus in each of embodiments is typically achieved in the form of an integrated circuit or a Large Scale Integrated (LSI) circuit. As an example of the LSI,FIG.31illustrates a configuration of the LSI ex500that is made into one chip. The LSI ex500includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509to be described below, and the elements are connected to each other through a bus ex510. The power supply circuit unit ex505is activated by supplying each of the elements with power when the power supply circuit unit ex505is turned on. For example, when coding is performed, the LSI ex500receives an AV signal from a microphone ex117, a camera ex113, and others through an AV IO ex509under control of a control unit ex501including a CPU ex502, a memory controller ex503, a stream controller ex504, and a driving frequency control unit ex512. The received AV signal is temporarily stored in an external memory ex511, such as an SDRAM. Under control of the control unit ex501, the stored data is segmented into data portions according to the processing amount and speed to be transmitted to a signal processing unit ex507. Then, the signal processing unit ex507codes an audio signal and/or a video signal. Here, the coding of the video signal is the coding described in each of embodiments. Furthermore, the signal processing unit ex507sometimes multiplexes the coded audio data and the coded video data, and a stream IO ex506provides the multiplexed data outside. The provided multiplexed data is transmitted to the base station ex107, or written on the recording medium ex215. When data sets are multiplexed, the data should be temporarily stored in the buffer ex508so that the data sets are synchronized with each other. Although the memory ex511is an element outside the LSI ex500, it may be included in the LSI ex500. The buffer ex508is not limited to one buffer, but may be composed of buffers. Furthermore, the LSI ex500may be made into one chip or a plurality of chips. Furthermore, although the control unit ex501includes the CPU ex502, the memory controller ex503, the stream controller ex504, the driving frequency control unit ex512, the configuration of the control unit ex501is not limited to such. For example, the signal processing unit ex507may further include a CPU. Inclusion of another CPU in the signal processing unit ex507can improve the processing speed. Furthermore, as another example, the CPU ex502may serve as or be a part of the signal processing unit ex507, and, for example, may include an audio signal processing unit. In such a case, the control unit ex501includes the signal processing unit ex507or the CPU ex502including a part of the signal processing unit ex507. The name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration. Moreover, ways to achieve integration are not limited to the LSI, and a special circuit or a general purpose processor and so forth can also achieve the integration. Field Programmable Gate Array (FPGA) that can be programmed after manufacturing LSIs or a reconfigurable processor that allows re-configuration of the connection or configuration of an LSI can be used for the same purpose. 
Such a programmable logic device can typically execute the moving picture coding method and/or the moving picture decoding method according to any of the above embodiments, by loading or reading from a memory or the like one or more programs that are included in software or firmware. In the future, with advancement in semiconductor technology, a brand-new technology may replace LSI. The functional blocks can be integrated using such a technology. The possibility is that the present disclosure is applied to biotechnology. Embodiment 12 When video data generated in the moving picture coding method or by the moving picture coding apparatus described in each of embodiments is decoded, it is possible for the processing amount to increase compared to when video data that conforms to a conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1 is decoded. Thus, the LSI ex500needs to be set to a driving frequency higher than that of the CPU ex502to be used when video data in conformity with the conventional standard is decoded. However, when the driving frequency is set higher, there is a problem that the power consumption increases. In order to solve the problem, the moving picture decoding apparatus, such as the television ex300and the LSI ex500is configured to determine to which standard the video data conforms, and switch between the driving frequencies according to the determined standard.FIG.32illustrates a configuration ex800in the present embodiment. A driving frequency switching unit ex803sets a driving frequency to a higher driving frequency when video data is generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments. Then, the driving frequency switching unit ex803instructs a decoding processing unit ex801that executes the moving picture decoding method described in each of embodiments to decode the video data. When the video data conforms to the conventional standard, the driving frequency switching unit ex803sets a driving frequency to a lower driving frequency than that of the video data generated by the moving picture coding method or the moving picture coding apparatus described in each of embodiments. Then, the driving frequency switching unit ex803instructs the decoding processing unit ex802that conforms to the conventional standard to decode the video data. More specifically, the driving frequency switching unit ex803includes the CPU ex502and the driving frequency control unit ex512inFIG.31. Here, each of the decoding processing unit ex801that executes the moving picture decoding method described in each of embodiments and the decoding processing unit ex802that conforms to the conventional standard corresponds to the signal processing unit ex507inFIG.31. The CPU ex502determines to which standard the video data conforms. Then, the driving frequency control unit ex512determines a driving frequency based on a signal from the CPU ex502. Furthermore, the signal processing unit ex507decodes the video data based on the signal from the CPU ex502. For example, it is possible that the identification information described in Embodiment 10 is used for identifying the video data. The identification information is not limited to the one described in Embodiment 10 but may be any information as long as the information indicates to which standard the video data conforms. 
For example, when which standard video data conforms to can be determined based on an external signal for determining that the video data is used for a television or a disk, etc., the determination may be made based on such an external signal. Furthermore, the CPU ex502selects a driving frequency based on, for example, a look-up table in which the standards of the video data are associated with the driving frequencies as shown inFIG.34. The driving frequency can be selected by storing the look-up table in the buffer ex508and in an internal memory of an LSI, and with reference to the look-up table by the CPU ex502. FIG.33illustrates steps for executing a method in the present embodiment. First, in Step exS200, the signal processing unit ex507obtains identification information from the multiplexed data. Next, in Step exS201, the CPU ex502determines whether or not the video data is generated by the coding method and the coding apparatus described in each of embodiments, based on the identification information. When the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, in Step exS202, the CPU ex502transmits a signal for setting the driving frequency to a higher driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512sets the driving frequency to the higher driving frequency. On the other hand, when the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, in Step exS203, the CPU ex502transmits a signal for setting the driving frequency to a lower driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512sets the driving frequency to the lower driving frequency than that in the case where the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiment. Furthermore, along with the switching of the driving frequencies, the power conservation effect can be improved by changing the voltage to be applied to the LSI ex500or an apparatus including the LSI ex500. For example, when the driving frequency is set lower, it is possible that the voltage to be applied to the LSI ex500or the apparatus including the LSI ex500is set to a voltage lower than that in the case where the driving frequency is set higher. Furthermore, when the processing amount for decoding is larger, the driving frequency may be set higher, and when the processing amount for decoding is smaller, the driving frequency may be set lower as the method for setting the driving frequency. Thus, the setting method is not limited to the ones described above. For example, when the processing amount for decoding video data in conformity with MPEG-4 AVC is larger than the processing amount for decoding video data generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, it is possible that the driving frequency is set in reverse order to the setting described above. Furthermore, the method for setting the driving frequency is not limited to the method for setting the driving frequency lower. 
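A minimal sketch of the control flow of FIG.33 (steps exS200 to exS203) is given below: a look-up table in the spirit of FIG.34 maps the identified standard to a driving frequency, and the voltage may be lowered together with the frequency as noted above. The numeric values and the frequency_control interface are invented for the example.

```python
# Hedged sketch of driving-frequency switching based on the identification information.

DRIVING_FREQUENCY_TABLE_MHZ = {
    "new-format": 500,       # video coded by the method of the embodiments
    "MPEG-2": 350,
    "MPEG-4 AVC": 350,
    "VC-1": 350,
}

def set_driving_frequency(identified_standard, frequency_control):
    freq = DRIVING_FREQUENCY_TABLE_MHZ[identified_standard]   # exS202 / exS203
    frequency_control.set_frequency_mhz(freq)
    # Lowering the supply voltage together with the frequency improves power saving.
    frequency_control.set_voltage_mv(1100 if freq >= 500 else 900)
```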
For example, when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, it is possible that the voltage to be applied to the LSI ex500or the apparatus including the LSI ex500is set higher. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, it is possible that the voltage to be applied to the LSI ex500or the apparatus including the LSI ex500is set lower. As another example, it is possible that, when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, the driving of the CPU ex502is not suspended, and when the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1, the driving of the CPU ex502is suspended at a given time because the CPU ex502has extra processing capacity. It is possible that, even when the identification information indicates that the video data is generated by the moving picture coding method and the moving picture coding apparatus described in each of embodiments, in the case where the CPU ex502has extra processing capacity, the driving of the CPU ex502is suspended at a given time. In such a case, it is possible that the suspending time is set shorter than that in the case where the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1. Accordingly, the power conservation effect can be improved by switching between the driving frequencies in accordance with the standard to which the video data conforms. Furthermore, when the LSI ex500or the apparatus including the LSI ex500is driven using a battery, the battery life can be extended with the power conservation effect.
Embodiment 13
There are cases where a plurality of video data that conforms to different standards is provided to the devices and systems, such as a television and a cellular phone. In order to enable decoding the plurality of video data that conforms to the different standards, the signal processing unit ex507of the LSI ex500needs to conform to the different standards. However, the problems of increase in the scale of the circuit of the LSI ex500and increase in the cost arise with the individual use of the signal processing units ex507that conform to the respective standards. In order to solve the problem, what is conceived is a configuration in which the decoding processing unit for implementing the moving picture decoding method described in each of embodiments and the decoding processing unit that conforms to the conventional standard, such as MPEG-2, MPEG-4 AVC, and VC-1 are partly shared. Ex900inFIG.35Ashows an example of the configuration. For example, the moving picture decoding method described in each of embodiments and the moving picture decoding method that conforms to MPEG-4 AVC have, partly in common, the details of processing, such as entropy coding, inverse quantization, deblocking filtering, and motion compensated prediction.
It is possible for a decoding processing unit ex902that conforms to MPEG-4 AVC to be shared by common processing operations, and for a dedicated decoding processing unit ex901to be used for processing which is unique to an aspect of the present disclosure and does not conform to MPEG-4 AVC. The decoding processing unit for implementing the moving picture decoding method described in each of embodiments may be shared for the processing to be shared, and a dedicated decoding processing unit may be used for processing unique to that of MPEG-4 AVC. Furthermore, ex1000inFIG.35Bshows another example in which processing is partly shared. This example uses a configuration including a dedicated decoding processing unit ex1001that supports the processing unique to an aspect of the present disclosure, a dedicated decoding processing unit ex1002that supports the processing unique to another conventional standard, and a decoding processing unit ex1003that supports processing to be shared between the moving picture decoding method according to the aspect of the present disclosure and the conventional moving picture decoding method. Here, the dedicated decoding processing units ex1001and ex1002are not necessarily specialized for the processing according to the aspect of the present disclosure and the processing of the conventional standard, respectively, and may be the ones capable of implementing general processing. Furthermore, the configuration of the present embodiment can be implemented by the LSI ex500. As such, reducing the scale of the circuit of an LSI and reducing the cost are possible by sharing the decoding processing unit for the processing to be shared between the moving picture decoding method according to the aspect of the present disclosure and the moving picture decoding method in conformity with the conventional standard. Each of the structural elements in each of the above-described embodiments may be configured in the form of an exclusive hardware product, or may be realized by executing a software program suitable for the structural element. Each of the structural elements may be realized by means of a program executing unit, such as a CPU and a processor, reading and executing the software program recorded on a recording medium such as a hard disk or a semiconductor memory. Here, the software program for realizing an image coding apparatus and an image decoding apparatus according to each of the embodiments is a program described below. Although only some exemplary embodiments have been described above, the scope of the Claims of the present application is not limited to these embodiments. Those skilled in the art will readily appreciate that various modifications may be made in these exemplary embodiments and that other embodiments may be obtained by arbitrarily combining the structural elements of the embodiments without materially departing from the novel teachings and advantages of the subject matter recited in the appended Claims. Accordingly, all such modifications and other embodiments are included in the present disclosure.
INDUSTRIAL APPLICABILITY
An image coding method and an image decoding method according to the present disclosure can be applied to various multimedia data. The image coding method and the image decoding method according to the present disclosure are useful as an image coding method and an image decoding method in storage, transmission, communication, and the like using a mobile phone, a DVD device, a personal computer, and the like.
DETAILED DESCRIPTION OF THE ENCODING PART An embodiment of the invention will now be described, in which the encoding method according to the invention is used to encode a sequence of images according to a binary stream close to that obtained by an encoding according to the H.264/MPEG-4 AVC standard. In this embodiment, the encoding method according to the invention is for example implemented in software or hardware form by modifications of an encoder initially compliant with the H.264/MPEG-4 AVC standard. The encoding method according to the invention is represented in the form of an algorithm including steps C1to C40, represented inFIG.1. According to the embodiment of the invention, the encoding method according to the invention is implemented in an encoding device or encoder CO, an embodiment of which is represented inFIG.2. In accordance with the invention, prior to the actual encoding step, an image IE of a sequence of images to be encoded in a predetermined order is split into a plurality Z of partitions B1, B2, . . . , Bi, . . . , BZ, as represented inFIG.2. It is appropriate to note that in the sense of the invention, the term “partition” means coding unit. This latter terminology is notably used in the HEVC/H.265 standard being drafted, for example in the document accessible at the following Internet address: http://phenix.int-evry.fr/jct/doc_end_user/current_document.php?id=3286 In particular, such a coding unit groups together sets of rectangular or square shape pixels, also called blocks, macroblocks, or sets of pixels exhibiting other geometric shapes. In the example represented inFIG.2, said partitions are blocks which have a square shape and are all the same size. Depending on the size of the image, which is not necessarily a multiple of the size of the blocks, the last blocks to the left and the last blocks at the bottom are able to not be square-shaped. In an alternative embodiment, the blocks can be for example of rectangular size and/or not aligned with one another. Each block or macroblock can moreover be itself divided into subblocks which are themselves subdividable. Such splitting is performed by a partitioning module PCO represented inFIG.2which uses for example a partitioning algorithm that is well known as such. Following said splitting step, each of the current partitions Bi(where i is an integer such that 1≤i≤Z) of said image IE is encoded. In the example represented inFIG.2, such an encoding is applied successively to each of the blocks B1to BZof the current image IE. The blocks are encoded for example according to a scan such as the raster scan, which is well known to the person skilled in the art. The encoding according to the invention is implemented in an encoding software module MC_CO of the encoder CO, as represented inFIG.2. During a step C1represented inFIG.1, the encoding module MC_CO ofFIG.2selects as current block Bithe first block B1to be encoded of the current image IE. As represented inFIG.2, this is the first lefthand block of the image IE. During a step C2represented inFIG.1, the predictive encoding of the current block B1by known intra and/or inter prediction techniques is carried out, during which predictive encoding the block B1is predicted with respect to at least one previously encoded and decoded block. Such a prediction is carried out by a prediction software module PRED_CO as represented inFIG.2. Needless to say other intra prediction modes as proposed in the H.264 standard are possible. 
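As an aside on the partitioning step above (module PCO and the raster-scan encoding order of step C1), the following minimal sketch splits an image into square blocks scanned in raster order. Block size and the handling of the right and bottom borders are simplifying assumptions, not taken from the figures.

```python
# Sketch of splitting an image into blocks B1..BZ visited in raster-scan order.

def split_into_blocks(height, width, block_size):
    """Yield (top, left, h, w) for each partition in raster-scan order;
    border blocks may be rectangular when the image size is not a multiple
    of the block size."""
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            h = min(block_size, height - top)
            w = min(block_size, width - left)
            yield (top, left, h, w)

blocks = list(split_into_blocks(height=1080, width=1920, block_size=64))
# blocks[0] corresponds to B1, the top-left block, which is encoded first.
```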
The current block B1can also be subjected to a predictive encoding in inter mode, during which the current block is predicted with respect to a block from a previously encoded and decoded image. Other types of prediction can of course be envisaged. Among the predictions possible for a current block, the optimal prediction is chosen according to a rate distortion criterion that is well known to the person skilled in the art. Said abovementioned predictive encoding step provides for constructing a predicted block Bp1which is an approximation of the current block B1. The information items relating to this predictive encoding are intended to be included in a signal to be transmitted to the decoder. Such information items comprise notably the type of prediction (inter or intra), and if necessary, the intra prediction mode, the type of partitioning of a block or macroblock if the latter has been subdivided, the reference image index and the motion vector which are used in the inter prediction mode. These information items are compressed by the encoder CO. During a next step C3represented inFIG.1, the prediction module PRED_CO compares the data items relating to the current block B1with the data items of the predicted block Bp1. More specifically, during this step, conventionally the predicted block Bp1is subtracted from the current block B1to produce a residual block Br1. During a next step C4represented inFIG.1, the residual block Br1is transformed according to a conventional direct transform operation such as for example a DCT type discrete cosine transform, to produce a transformed block Bt1. Such an operation is executed by a transform software module MT_CO, as represented inFIG.2. During a next step C5represented inFIG.1, the transformed block Bt1is quantized according to a conventional quantization operation, such as for example a scalar quantization. A block Bq1of quantized coefficients is then obtained. Such a step is executed by means of a quantization software module MQ_CO, as represented inFIG.2. During a next step C6represented inFIG.1, the quantized coefficients of the block Bq1are scanned in a predefined order. In the example represented, this is a conventional zigzag scan. Such a step is executed by a read software module ML_CO, as represented inFIG.2. At the end of step C6, a one-dimensional list E1=(ε1, ε2, . . . , εL) of coefficients is obtained, more commonly known as “quantized residue”, where L is an integer greater than or equal to 1. Each of the coefficients in the list E1is associated with different digital information items which are intended to undergo an entropy encoding. Such digital information items are described below by way of example. Assume that in the example represented, L=16 and that the list E1contains the following sixteen coefficients: E1=(0, +9, −7, 0, 0, +1, 0, −1, +2, 0, 0, +1, 0, 0, 0, 0). 
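To illustrate the read-out of step C6, the sketch below applies a conventional zigzag scan to a 4x4 block of quantized coefficients. The block contents are chosen so that the resulting one-dimensional list reproduces exactly the example list E1 given above; the block itself is not taken from the figures, and the DCT and quantization of steps C4 and C5 are not shown.

```python
# Sketch of the zigzag scan producing the "quantized residue" list E1.

def zigzag_order(n):
    """Visit the coefficients of an n x n block along anti-diagonals (zigzag scan)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def quantized_residue(quantized_block):
    n = len(quantized_block)
    return [quantized_block[r][c] for r, c in zigzag_order(n)]

Bq1 = [[ 0,  9,  1, 0],
       [-7,  0, -1, 0],
       [ 0,  2,  1, 0],
       [ 0,  0,  0, 0]]
E1 = quantized_residue(Bq1)
# E1 == [0, 9, -7, 0, 0, 1, 0, -1, 2, 0, 0, 1, 0, 0, 0, 0], matching the example above.
```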
In this particular case:
- for each coefficient located before the last non-zero coefficient in the list E1, a digital information item, such as a bit, is intended to be entropically encoded to indicate whether or not the coefficient is zero: if the coefficient is zero, it is for example the bit of value 0 which will be encoded, while if the coefficient is not zero, it is the bit of value 1 which will be encoded;
- for each non-zero coefficient +9, −7, +1, −1, +2, +1, a digital information item, such as a bit, is intended to be entropically encoded to indicate whether or not the absolute value of the coefficient is equal to one: if it is equal to 1, it is for example the bit of value 1 which will be encoded, while if it is not equal to 1, it is the bit of value 0 which will be encoded;
- for each non-zero coefficient whose absolute value is not equal to one and which is located before the last non-zero coefficient, such as the coefficients of value +9, −7, +2, an amplitude information item (the absolute value of the coefficient from which the value two is subtracted) is entropically encoded;
- for each non-zero coefficient, the sign assigned to it is encoded by a digital information item, such as a bit, for example set to '0' (for the + sign) or set to '1' (for the − sign).
With reference to FIG. 1, the specific encoding steps according to the invention will now be described. In accordance with the invention, it is decided to avoid entropically encoding at least one of the abovementioned information items. For the reasons explained earlier in the description, in a preferred embodiment, it is decided to not entropically encode at least one sign of one of said coefficients in the list E1. By way of alternative example, it could notably be decided to not entropically encode the least significant bit of the binary representation of the amplitude of the first non-zero coefficient in said list E1. To this end, during a step C7 represented in FIG. 1, the number of signs to hide during the later entropy encoding step is chosen. Such a step is executed by a processing software module MTR_CO, as represented in FIG. 2. In the preferred embodiment, the number of signs to be hidden is one or zero. Additionally, in accordance with said preferred embodiment, it is the sign of the first non-zero coefficient which is intended to be hidden. In the example represented, it is therefore the sign of the coefficient ε2 = +9 that is hidden. In an alternative embodiment, the number of signs to be hidden is either zero, one, two, three or more. In accordance with the preferred embodiment of step C7, during a first substep C71 represented in FIG. 1, a sublist SE1 containing coefficients suitable for being modified, ε′1, ε′2, . . . , ε′M where M < L, is determined from said list E1. Such coefficients will be called modifiable coefficients hereafter in the description. According to the invention, a coefficient is modifiable if the modification of its quantized value does not cause desynchronization at the decoder, once this modified coefficient is processed by the decoder. Thus, the processing module MTR_CO is initially configured not to modify:
- the zero coefficient or coefficients located before the first non-zero coefficient, so that the decoder does not assign the value of the hidden sign to this or these zero coefficients,
- and, for reasons of computation complexity, the zero coefficient or coefficients located after the last non-zero coefficient.
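Purely by way of illustration, and not as part of the encoder CO itself, the following Python sketch shows one way such a sublist of modifiable coefficients could be derived from a quantized residue; the function name modifiable_sublist is a hypothetical choice of this sketch.

```python
def modifiable_sublist(coeffs):
    """Keep the coefficients between the first and the last non-zero
    coefficients (inclusive); the zero coefficients outside this range
    are the ones the processing module never modifies."""
    nonzero = [i for i, c in enumerate(coeffs) if c != 0]
    if not nonzero:
        return []          # no non-zero coefficient, nothing to hide a sign in
    first, last = nonzero[0], nonzero[-1]
    return coeffs[first:last + 1]

# The quantized residue E1 used as example in the description:
E1 = [0, +9, -7, 0, 0, +1, 0, -1, +2, 0, 0, +1, 0, 0, 0, 0]
SE1 = modifiable_sublist(E1)
print(len(SE1), SE1)   # 11 modifiable coefficients: [9, -7, 0, 0, 1, 0, -1, 2, 0, 0, 1]
```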
In the example represented, at the end of substep C71, the sublist SE1 obtained is such that SE1 = (+9, −7, 0, 0, +1, 0, −1, +2, 0, 0, +1). Consequently, eleven modifiable coefficients are obtained. During a next substep C72 represented in FIG. 1, the processing module MTR_CO proceeds with the comparison of the number of modifiable coefficients with a predetermined threshold TSIG. In the preferred embodiment, TSIG has the value 4. If the number of modifiable coefficients is less than the threshold TSIG, then during a step C20 represented in FIG. 1, a conventional entropy encoding of the coefficients in the list E1 is carried out, such as that performed for example in a CABAC encoder, denoted by the reference CE_CO in FIG. 2. To this end, the sign of each non-zero coefficient in the list E1 is entropically encoded. If the number of modifiable coefficients is greater than the threshold TSIG, then during a step C8 represented in FIG. 1, the processing module MTR_CO calculates the value of a function f which is representative of the coefficients in the sublist SE1. In the preferred embodiment in which only one sign is intended to be hidden in the signal to be transmitted to the decoder, the function f is the parity of the sum of the coefficients in the sublist SE1. During a step C9 represented in FIG. 1, the processing module MTR_CO checks whether the parity of the value of the sign to be hidden corresponds to the parity of the sum of the coefficients in the sublist SE1, according to a convention defined beforehand at the encoder CO. In the example proposed, said convention is such that a positive sign is associated with a bit of value equal to zero, while a negative sign is associated with a bit of value equal to one. If, in accordance with the convention adopted in the encoder CO according to the invention, the sign is positive, which corresponds to an encoding bit value of zero, and if the sum of the coefficients in the sublist SE1 is even, then step C20 for the entropy encoding of the coefficients in the aforementioned list E1 is carried out, with the exception of the sign of the coefficient ε2. If, still in accordance with the convention adopted in the encoder CO according to the invention, the sign is negative, which corresponds to an encoding bit value of one, and if the sum of the coefficients in the sublist SE1 is odd, then also step C20 for the entropy encoding of the coefficients in the aforementioned list E1 is carried out, with the exception of the sign of the coefficient ε2. If, in accordance with the convention adopted in the encoder CO according to the invention, the sign is positive, which corresponds to an encoding bit value of zero, and if the sum of the coefficients in the sublist SE1 is odd, then during a step C10 represented in FIG. 1, at least one modifiable coefficient in the sublist SE1 is modified. If, still in accordance with the convention adopted in the encoder CO according to the invention, the sign is negative, which corresponds to an encoding bit value of one, and if the sum of the coefficients in the sublist SE1 is even, then also at step C10, at least one modifiable coefficient in the sublist SE1 is modified. Such a modification operation is carried out by the processing module MTR_CO in FIG. 2. In the example embodiment in which SE1 = (+9, −7, 0, 0, +1, 0, −1, +2, 0, 0, +1), the total sum f of the coefficients is equal to 5, and is therefore odd.
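Again purely as an illustration of steps C72, C8 and C9, and with hypothetical helper names, the parity test of the preferred embodiment could be sketched as follows, using the stated convention that a positive sign corresponds to the bit 0 and a negative sign to the bit 1.

```python
TSIG = 4   # threshold of the preferred embodiment

def sign_bit(coefficient):
    """Convention of the description: a positive sign maps to bit 0,
    a negative sign to bit 1."""
    return 0 if coefficient > 0 else 1

def parity_already_matches(sublist, coefficient_whose_sign_is_hidden):
    """Step C9: does the parity of the sum of the modifiable coefficients
    already carry the value of the sign to be hidden?"""
    return sum(sublist) % 2 == sign_bit(coefficient_whose_sign_is_hidden)

SE1 = [+9, -7, 0, 0, +1, 0, -1, +2, 0, 0, +1]
if len(SE1) > TSIG:                                   # step C72
    print(parity_already_matches(SE1, +9))            # False: the sum is 5 (odd), the sign is '+'
```

In the example of the description, this check fails, which is precisely why a coefficient of the sublist is then modified at step C10.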
In order that the decoder can reconstruct the positive sign assigned to the first non-zero coefficient ε2 = +9, without the encoder CO having to transmit this sign to the decoder, the parity of the sum must become even. Consequently, the processing module MTR_CO tests, during said step C10, various modifications of coefficients in the sublist SE1, all aiming to change the parity of the sum of the coefficients. In the preferred embodiment, +1 or −1 is added to each modifiable coefficient and a modification is selected from among those which are carried out. In the preferred embodiment, such a selection is made according to a performance criterion, for example the rate distortion criterion that is well known to the person skilled in the art. Such a criterion is expressed by equation (1) below:

J = D + λR   (1)

where D represents the distortion between the original macroblock and the reconstructed macroblock, R represents the encoding cost in bits of the encoding information items and λ represents a Lagrange multiplier, the value of which can be fixed prior to the encoding. In the example proposed, the modification which is optimal according to the abovementioned rate distortion criterion is the addition of the value 1 to the second coefficient −7 in the sublist SE1. At the end of step C10, a modified sublist is hence obtained, SEm1 = (+9, −6, 0, 0, +1, 0, −1, +2, 0, 0, +1). It is appropriate to note that during this step, certain modifications are prohibited. Thus, in the case in which the first non-zero coefficient ε2 had the value +1, it would not have been possible to add −1 to it, since it would have become zero, and it would then have lost its characteristic of first non-zero coefficient in the list E1. The decoder would then have later attributed the decoded sign (obtained by calculation of the parity of the sum of the coefficients) to another coefficient, and there would then have been a decoding error. During a step C11 represented in FIG. 1, the processing module MTR_CO carries out a corresponding modification of the list E1. The following modified list Em1 = (0, +9, −6, 0, 0, +1, 0, −1, +2, 0, 0, +1, 0, 0, 0, 0) is then obtained. Then step C20 for the entropy encoding of the coefficients in the aforementioned list Em1 is carried out, with the exception of the sign of the coefficient ε2, which is the + sign of the coefficient +9 in the proposed example, which sign is hidden in the parity of the sum of the coefficients. It is appropriate to note that the set of amplitudes of the coefficients in the list E1 or in the modified list Em1 is encoded before the set of signs, with the exclusion of the sign of the first non-zero coefficient ε2, which is not encoded, as has been explained above. During a next step C30 represented in FIG. 1, the encoding module MC_CO in FIG. 2 tests whether the current encoded block is the last block of the image IE. If the current block is the last block of the image IE, then during a step C40 represented in FIG. 1, the encoding method is ended. If this is not the case, the next block Bi is selected, which is then encoded in accordance with the order of the previously mentioned raster scan, by repeating steps C1 to C20, for 1 ≤ i ≤ Z. Once the entropy encoding of all the blocks B1 to BZ is carried out, a signal F is constructed, representing, in binary form, said encoded blocks. The construction of the binary signal F is implemented in a stream construction software module CF, as represented in FIG. 2.
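The search for the modification at step C10 could be sketched as follows; this is only an illustrative outline in which rd_cost is a placeholder standing for a real rate-distortion evaluation (reconstruction for D, entropy-encoding cost for R), not the encoder's actual implementation, and the function names are hypothetical.

```python
def allowed_plus_minus_one_modifications(sublist):
    """Step C10 candidates: add +1 or -1 to one modifiable coefficient.
    The modification that would turn the first non-zero coefficient into
    zero is prohibited, as explained in the description."""
    first_nz = next(i for i, c in enumerate(sublist) if c != 0)
    for i, c in enumerate(sublist):
        for delta in (+1, -1):
            if i == first_nz and c + delta == 0:
                continue                       # prohibited modification
            candidate = list(sublist)
            candidate[i] = c + delta
            yield candidate

def select_modification(sublist, rd_cost, lam):
    """Among the candidates (each of which flips the parity of the sum,
    since exactly one coefficient changes by 1), keep the one minimizing
    J = D + lam * R, where rd_cost returns the pair (D, R)."""
    best, best_j = None, float("inf")
    for candidate in allowed_plus_minus_one_modifications(sublist):
        distortion, rate = rd_cost(candidate)
        j = distortion + lam * rate
        if j < best_j:
            best, best_j = candidate, j
    return best
```

With a real rate-distortion evaluation, the example of the description selects the addition of 1 to the second coefficient −7, yielding SEm1 = (+9, −6, 0, 0, +1, 0, −1, +2, 0, 0, +1).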
The stream F is then transmitted via a communication network (not represented) to a remote terminal. The latter includes a decoder which will be described in further detail later in the description. There will now be described, mainly with reference to FIG. 1, another embodiment of the invention. This other embodiment is distinguished from the previous one only by the number of signs to be hidden, which is either 0 or N, where N is an integer such that N ≥ 2. To this end, the previously mentioned comparison substep C72 is replaced by substep C72a, represented in dotted lines in FIG. 1, during which the number of modifiable coefficients is compared with several predetermined thresholds 0 < TSIG_1 < TSIG_2 < TSIG_3 < . . . , in such a way that if the number of modifiable coefficients is between TSIG_N and TSIG_N+1, N signs are intended to be hidden. If the number of modifiable coefficients is less than the first threshold TSIG_1, then during abovementioned step C20, conventional entropy encoding of the coefficients in the list E1 is carried out. To this end, the sign of each non-zero coefficient in the list E1 is entropically encoded. If the number of modifiable coefficients is between the threshold TSIG_N and TSIG_N+1, then during a step C8 represented in FIG. 1, the processing module MTR_CO calculates the value of a function f which is representative of the coefficients in the sublist SE1. In this other embodiment, since the decision at the encoder is to hide N signs, the function f is the modulo 2^N remainder of the sum of the coefficients in the sublist SE1. It is assumed in the proposed example that N = 2, the two signs to be hidden being those of the first two non-zero coefficients, i.e. ε2 and ε3 respectively. During next step C9 represented in FIG. 1, the processing module MTR_CO verifies whether the configuration of the N signs, among the 2^N possible configurations, corresponds to the value of the modulo 2^N remainder of the sum of the coefficients in the sublist SE1. In the example proposed where N = 2, there are 2^2 = 4 different configurations of signs. These four configurations comply with a convention at the encoder CO, which convention is for example determined as follows:
- a remainder equal to zero corresponds to two consecutive positive signs: +, +;
- a remainder equal to one corresponds to, consecutively, a positive sign and a negative sign: +, −;
- a remainder equal to two corresponds to, consecutively, a negative sign and a positive sign: −, +;
- a remainder equal to three corresponds to two consecutive negative signs: −, −.
If the configuration of the N signs corresponds to the value of the modulo 2^N remainder of the sum of the coefficients in the sublist SE1, then step C20 for the entropy encoding of the coefficients in the abovementioned list E1 is carried out, with the exception of the sign of the coefficient ε2 and of the coefficient ε3, which signs are hidden in the modulo 2^N remainder of the sum of the coefficients. If this is not the case, then step C10 for modifying at least one modifiable coefficient in the sublist SE1 is carried out. Such a modification is executed by the processing module MTR_CO in FIG. 2 in such a way that the modulo 2^N remainder of the sum of the modifiable coefficients in the sublist SE1 attains the value corresponding to the configuration of the two signs to be hidden. During previously mentioned step C11, the processing module MTR_CO carries out a corresponding modification of the list E1. A modified list Em1 is hence obtained.
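For the variant with N hidden signs, the convention given above for N = 2 can be pictured by the following hedged sketch, in which the mapping from a sign configuration to the expected modulo 2^N remainder is a straightforward binary reading of the signs; the helper names are hypothetical and this is not the encoder's actual code.

```python
def signs_to_remainder(signs):
    """Encoder-side convention: '+' contributes bit 0 and '-' bit 1, most
    significant bit first. For N = 2 this reproduces the table of the
    description (0: '+,+', 1: '+,-', 2: '-,+', 3: '-,-')."""
    value = 0
    for s in signs:
        value = (value << 1) | (0 if s == '+' else 1)
    return value

def configuration_already_matches(sublist, signs):
    """Step C9 of the N-sign variant: compare the modulo-2**N remainder of
    the sum of the modifiable coefficients with the expected remainder."""
    return sum(sublist) % (2 ** len(signs)) == signs_to_remainder(signs)

print(signs_to_remainder(['+', '-']))   # 1
print(signs_to_remainder(['-', '+']))   # 2
```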
Then step C20for the entropy encoding of the coefficients in the aforementioned list Em1is carried out, with the exception of the sign of the coefficient ε2and the sign of the coefficient ε3, which signs are hidden in the parity of the modulo 2Nsum of the coefficients. Detailed Description of the Decoding Part An embodiment of the decoding method according to the invention will now be described, in which the decoding method is implemented in software or hardware form by modifications of a decoder initially compliant with the H.264/MPEG-4 AVC standard. The decoding method according to the invention is represented in the form of an algorithm including steps D1to D12, represented inFIG.3. According to the embodiment of the invention, the decoding method according to the invention is implemented in a decoding device or decoder DO, as represented inFIG.4. During a preliminary step not represented inFIG.3, in the received data signal F, the partitions B1to BZwhich have been encoded previously by the encoder CO, are identified. In the preferred embodiment, said partitions are blocks which have a square shape and are all the same size. Depending on the size of the image, which is not necessarily a multiple of the size of the blocks, the last blocks to the left and the last blocks at the bottom are able to not be square-shaped. In an alternative embodiment, the blocks can be for example of rectangular size and/or not aligned with one another. Each block or macroblock can moreover be itself divided into subblocks which are themselves subdividable. Such an identification is executed by a stream analysis software module EX_DO, as represented inFIG.4. During a step D1represented inFIG.3, the module EX_DO inFIG.4selects as current block Bithe first block B1to be decoded. Such a selection consists for example in placing a read pointer in the signal F at the start of the data items of the first block B1. Then the decoding of each of the selected encoded blocks is carried out. In the example represented inFIG.3, such a decoding is applied successively to each of the encoded blocks B1to BZ. The blocks are decoded for example according to a raster scan, which is well known to the person skilled in the art. The decoding according to the invention is implemented in a decoding software module MD_DO of the decoder DO, as represented inFIG.4. During a step D2represented inFIG.3, first the entropy decoding of the first current block B1which has been selected is carried out. Such an operation is carried out by an entropy decoding module DE_DO represented inFIG.4, for example of the CABAC type. During this step, the module DE_DO carries out an entropy decoding of the digital information items corresponding to the amplitude of each of the encoded coefficients in the list E1or in the modified list Em1. At this stage, only the signs of the coefficients in the list E1or in the modified list Em1are not decoded. During a step D3represented inFIG.3, the number of signs capable of having been hidden during previous entropy encoding step C20is determined. Such a step D3is executed by a processing software module MTR_DO, as represented inFIG.4. Step D3is similar to previously mentioned step C7for determining the number of signs to be hidden. In the preferred embodiment, the number of hidden signs is one or zero. Additionally, in accordance with said preferred embodiment, it is the sign of the first non-zero coefficient which is hidden. In the example represented, it is therefore the positive sign of the coefficient ε2=+9. 
In an alternative embodiment, the number of hidden signs is either zero, one, two, three or more. In accordance with the preferred embodiment of step D3, during a first substep D31represented inFIG.3, a sublist containing coefficients ε′1, ε′2, . . . , ε′M where M<L which are capable of having been modified at the encoding is determined from said list E1or from the modified list Em1. Such a determination is performed the same way as in previously mentioned encoding step C7. Like the previously mentioned processing module MTR_CO, the processing module MTR_DO is initially configured to not modify:the zero coefficient or coefficients located before the first non-zero coefficient,and for reasons of computation complexity, the zero coefficient or coefficients located after the last non-zero coefficient. In the example represented, at the end of substep D31, there is the sublist SEm1such that SEm1=(9, −6, 0, 0, 1, 0, −1, 2, 0, 0, 1). Consequently, eleven coefficients capable of having been modified are obtained. During a next substep D32represented inFIG.3, the processing module MTR_DO proceeds with the comparison of the number of coefficients capable of having been modified with a predetermined threshold TSIG. In the preferred embodiment, TSIG has the value 4. If the number of coefficients capable of having been modified is less than the threshold TSIG, then during a step D4represented inFIG.3, a conventional entropy decoding of all the signs of the coefficients in the list E1is carried out. Such a decoding is executed by the CABAC decoder, denoted by the reference DE_DO inFIG.4. To this end, the sign of each non-zero coefficient in the list E1is entropically decoded. If the number of coefficients capable of having been modified is greater than the threshold TSIG, then during said step D4, a conventional entropy decoding of all the signs of the coefficients in the list Em1is carried out, with the exception of the sign of the first non-zero coefficient ε2. During a step D5represented inFIG.3, the processing module MTR_DO calculates the value of a function f which is representative of the coefficients in the sublist SEm1so as to determine whether the calculated value is even or odd. In the preferred embodiment where only one sign is hidden in the signal F, the function f is the parity of the sum of the coefficients in the sublist SEm1. In accordance with the convention used at the encoder CO, which is the same at the decoder DO, an even value of the sum of the coefficients in the sublist SEm1means that the sign of the first non-zero coefficient in the modified list Em1is positive, while an odd value of the sum of the coefficients in the sublist SEm1means that the sign of the first non-zero coefficient in the modified list Em1is negative. In the example embodiment in which SEm1=(+9, −6, 0, 0, +1, 0, −1, +2, 0, 0, +1), the total sum of the coefficients is equal to 6, and is therefore even. Consequently, at the end of step D5, the processing module MTR_DO deduces therefrom that the hidden sign of the first non-zero coefficient ε2is positive. During a step D6represented inFIG.3, and with the aid of all the reconstructed digital information items during steps D2, D4and D5, the quantized coefficients of the block Bq1are reconstructed in a predefined order. In the example represented, this is an inverse zigzag scan with respect to the zigzag scan carried out during previously mentioned encoding step C6. Such a step is executed by a read software module ML_DO, as represented inFIG.4. 
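As an illustration of decoding substeps D31 and D32 and of step D5 only, and mirroring the encoder-side sketches with hypothetical names, the recovery of a single hidden sign could look as follows; the description only specifies the strictly-less-than and strictly-greater-than cases of the threshold test, and this sketch hides a sign only in the strictly-greater case.

```python
TSIG = 4

def sublist_capable_of_having_been_modified(coeffs):
    """Same rule as at the encoder: keep everything between the first and
    the last non-zero coefficients, inclusive."""
    nonzero = [i for i, c in enumerate(coeffs) if c != 0]
    return coeffs[nonzero[0]:nonzero[-1] + 1] if nonzero else []

def recover_hidden_sign(decoded_coeffs, tsig=TSIG):
    """Steps D31/D32/D5: rebuild the sublist, apply the threshold test and,
    when a sign was hidden, deduce it from the parity of the sum
    ('+' for an even sum, '-' for an odd sum). Returns None when no sign
    was hidden and the sign must be read from the stream as usual."""
    sub = sublist_capable_of_having_been_modified(decoded_coeffs)
    if not len(sub) > tsig:
        return None
    return '+' if sum(sub) % 2 == 0 else '-'

Em1 = [0, +9, -6, 0, 0, +1, 0, -1, +2, 0, 0, +1, 0, 0, 0, 0]
print(recover_hidden_sign(Em1))     # '+' because the sum of the sublist is 6, an even value
```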
More specifically, the module ML_DO proceeds to include the coefficients of the list E1(one-dimensional) in the block Bq1(two-dimensional), using said inverse zigzag scan order. During a step D7represented inFIG.3, the quantized residual block Bq1is dequantized according to a conventional dequantization operation which is the inverse operation of the quantization performed at previously mentioned encoding step C5, in order to produce a decoded dequantized block BDq1. Such a step is executed by means of a dequantization software module MDQ_DO, as represented inFIG.4. During a step D8represented inFIG.3, the inverse transformation of the dequantized block BDq1is carried out, which is the inverse operation of the direct transformation performed at the encoding at previously mentioned step C4. A decoded residual block BDr1is hence obtained. Such an operation is executed by an inverse-transform software module MTI_DO, as represented inFIG.4. During a step D9represented inFIG.3, the predictive decoding of the current block B1is carried out. Such a predictive decoding is conventionally carried out by known intra and/or inter prediction techniques, during which the block B1is predicted with respect to the at least one previously decoded block. Such an operation is carried out by a predictive decoding module PRED_DO as represented inFIG.4. Needless to say other intra prediction modes as proposed in the H.264 standard are possible. During this step, the predictive decoding is carried out using decoded syntax elements at the previous step and notably comprising the type of prediction (inter or intra), and if necessary, the intra prediction mode, the type of partitioning of a block or macroblock if the latter has been subdivided, the reference image index and the motion vector which are used in the inter prediction mode. Said abovementioned predictive decoding step provides for constructing a predicted block Bp1. During a step D10represented inFIG.3, the decoded block BD1is constructed by adding the decoded residual block BDr1to the predicted block Bp1. Such an operation is executed by a reconstruction software module MR_DO represented inFIG.4. During a step D11represented inFIG.3, the decoding module MD_DO tests whether the current decoded block is the last block identified in the signal F. If the current block is the last block in the signal F, then during a step D12represented inFIG.3, the decoding method is ended. If this is not the case, the next block Biis selected, to be decoded in accordance with the order of the previously mentioned raster scan, by repeating steps D1to D10, for 1≤i≤Z. There will now be described, mainly with reference toFIG.3, another embodiment of the invention. This other embodiment is distinguished from the previous one only by the number of hidden coefficients which is either 0, or N, where N is an integer such that N≥2. To this end, previously mentioned comparison substep D32is replaced by substep D32arepresented in dotted-line inFIG.3, during which the number of coefficients capable of having been modified is compared with several predetermined thresholds 0<TSIG_1<TSIG_2<TSIG_3. . . , in such a way that if the number of said coefficients is between TSIG_N and TSIG_N+1, N signs have been hidden. If the number of said coefficients is less than the first threshold TSIG_1, then during previously mentioned step D4, the conventional entropy decoding of all the signs of the coefficients in the list E1is carried out. 
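By way of illustration of step D6, a possible zigzag and inverse-zigzag mapping for a 4 x 4 block is sketched below; the exact scan order of a given codec may differ, so this is only one common convention, and the function names are hypothetical.

```python
def zigzag_order(n):
    """Visiting order of one common n x n zigzag scan (the exact order used
    by a given codec may differ)."""
    order = []
    for s in range(2 * n - 1):
        diagonal = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diagonal if s % 2 else reversed(diagonal))
    return order

def inverse_zigzag(values, n):
    """Step D6: place the one-dimensional list of n*n quantized coefficients
    back into an n x n block such as Bq1."""
    block = [[0] * n for _ in range(n)]
    for (row, col), value in zip(zigzag_order(n), values):
        block[row][col] = value
    return block

Em1 = [0, +9, -6, 0, 0, +1, 0, -1, +2, 0, 0, +1, 0, 0, 0, 0]
for row in inverse_zigzag(Em1, 4):
    print(row)
```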
To this end, the sign of each non-zero coefficient in the list E1is entropically decoded. If the number of said coefficients is between the threshold TSIG_N and TSIG_N+1, then during previously mentioned step D4, the conventional entropy decoding of all the signs of the coefficients in the list E1is carried out, with the exception of the N respective signs of the first non-zero coefficients in said modified list Em1, said N signs being hidden. In this other embodiment, the processing module MTR_DO calculates, during step D5, the value of the function f which is the modulo 2Nremainder of the sum of the coefficients in the sublist SEm1. It is assumed in the proposed example that N=2. The processing module MTR_DO hence deduces therefrom the configuration of the two hidden signs which are assigned to each of the two first non-zero coefficients ε2and ε3respectively, according to the convention used at the encoding. Once these two signs have been reconstructed, steps D6to D12described above are carried out. It goes without saying that the embodiments which have been described above have been given purely by way of indication and are not at all limiting, and that a number of modifications can easily be brought about by the person skilled in the art without thereby departing from the scope of the invention. Thus for example, according to a simplified embodiment with respect to that represented inFIG.1, the encoder CO could be configured to hide at least N′ predetermined signs, where N′≥1, instead of either zero, one or N predetermined signs. In that case, comparison step C72or C72awould be removed. In a corresponding way, according to a simplified embodiment with respect to that represented inFIG.3, the decoder DO would be configured to reconstruct N′ predetermined signs instead of either zero, one or N predetermined signs. In that case, comparison step D32or D32awould be removed. Additionally, the decision criterion applied at encoding step C72and at decoding step D32could be replaced by another type of criterion. To this end, instead of comparing the number of modifiable coefficients or the number of coefficients capable of having been modified with a threshold, the processing module MTR_CO or MTR_DO could apply a decision criterion which is a function of the sum of the amplitudes of the coefficients that are modifiable or capable of having been modified, respectively, or of the number of zeros present among the coefficients that are modifiable or capable of having been modified, respectively.
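Finally, the decoder-side counterpart of the N-sign variant can be sketched as follows, with hypothetical names and under the sign convention given above for the encoder; this is an illustration of step D5 of that variant, not the decoder's actual implementation.

```python
def remainder_to_signs(remainder, n):
    """Inverse of the encoder-side convention: bit 0 gives '+', bit 1 gives
    '-', most significant bit first."""
    return ['+' if (remainder >> (n - 1 - k)) & 1 == 0 else '-' for k in range(n)]

def recover_n_hidden_signs(sublist, n):
    """The modulo-2**n remainder of the sum of the coefficients capable of
    having been modified selects one of the 2**n sign configurations."""
    return remainder_to_signs(sum(sublist) % (2 ** n), n)

print(recover_n_hidden_signs([5, -3, 0, 2], 2))   # ['+', '+']: the sum 4 has remainder 0 modulo 4
```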
DESCRIPTION OF EMBODIMENTS Referring to the drawings, same components are represented by same component symbols. The principle of the present disclosure is illustrated by an application in a suitable computing environment. The following description is based on the illustrated specific embodiment of the present disclosure, which should not be construed as limiting other specific embodiments not described in detail herein. In the description below, the specific embodiments of the present disclosure will be described with reference to steps and signs of operations that are performed by one or more computers, unless indicated otherwise. Therefore, it will be understood that such steps and operations, which are at times referred to as being computer-executed, include the manipulation by the processing unit of the computer of electrical signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in the memory system of the computer, which reconfigures or otherwise alters the operation of the computer in a manner well understood by a person skilled in the art. The data structures where data is maintained are physical locations of the memory that have particular properties defined by the format of the data. However, while the principle of the present disclosure is being described in the foregoing text, it is not meant to be limiting as a person skilled in the art will appreciate that the various steps and operations described hereinafter may also be implemented in hardware. The present disclosure provides a live video broadcast method and apparatus. The live video broadcast method and apparatus in the present disclosure may be applied to various mobile terminals, so that the mobile terminal performs live video broadcast on the mobile terminal. The mobile terminal may be a terminal device such as a mobile terminal having an IOS Apple system. A mobile terminal in the present disclosure may perform live video broadcast on the mobile terminal anytime at any place, and present barrage comments (e.g., also referred to as comments) or a comment for a video on another mobile terminal in a timely manner. In some embodiments, barrage comments of a video (e.g., a live video) refer to live comments that are posted by one or more or all viewers of the video in real time as the viewers watch the video on their respective terminal devices. In some embodiments, barrage comments include a plurality of comments from multiple viewers that are displayed in real time while the live video is displayed on a first client device. For example, the plurality of comments are displayed to overlay the video. In some embodiments, the barrage comments are displayed with any type of suitable animation, such as scrolling left or right, sliding up and down, flying up and down, etc. In some embodiments, the barrage comments are displayed simultaneously as the corresponding video content are being displayed on the first client device. For example, comments about a certain scene of a live video posted by multiple viewers from their respective terminal devices are displayed on the first client device while image frames associated with the certain scene are being broadcasted on the first client device. The terminal device includes a memory and a processor. The memory stores an instruction that can be executed by the processor. The processor implements the live video broadcast method in the following embodiments by executing the instruction. 
Referring toFIG.1,FIG.1is a flowchart of an embodiment of a live video broadcast method according to the present disclosure. The live video broadcast method in this embodiment may be implemented by using the foregoing mobile terminal. The live video broadcast method in this embodiment includes: Step S101. Receive a live broadcast command, and create a video buffer based on the live broadcast command. Step S102. Bind the video buffer with a picture drawing environment, the picture drawing environment being configured to detect and extract a video picture frame. Step S103. Detect and extract the video picture frame by using the picture drawing environment (that is, detect and extract a video picture by using a picture drawing module), and perform a storage operation on all video picture frames by using the video buffer. Step S104. Collect an external voice by using a microphone, and synchronously synthesize the external voice and the video picture frames into a video streaming media file. Step S105. Upload the video streaming media file to a live broadcast server, so that the live broadcast server performs live broadcasting. A specific procedure of each step of the live video broadcast method in this embodiment is described in detail below. In step S101, a live video broadcast apparatus (a mobile terminal) receives a live broadcast command of a user. The live broadcast command is a command that the user requests to upload a current video picture of the live video broadcast apparatus, for example, a game picture of a currently running game, to the live broadcast server, to perform live broadcasting of the video picture. Subsequently, the live video broadcast apparatus creates a video buffer based on the live broadcast command. The video buffer is used to perform a storage operation on a video picture frame of a current video. Subsequently, step S102is performed. In step S102, the live video broadcast apparatus binds the video buffer created in step S101and the picture drawing environment. The picture drawing environment is configured to detect and extract the video picture frame of the current video. The picture drawing environment may be a game drawing environment such as an OpenGL (Open Graphics Library) context. After binding the video buffer with the picture drawing environment, the live video broadcast apparatus may extract the video picture frame detected by the picture drawing environment into the video buffer. Subsequently, step S103is performed. It may be understood that the picture drawing environment a software system having a picture drawing function, an applet, or the like. Therefore, the picture drawing environment may alternatively be referred to as “the picture drawing module”. Therefore, step S102may alternatively be described as “Bind the video buffer with the picture drawing module”. In step S103, the live video broadcast apparatus detects and extracts the video picture frame by using the picture drawing environment, and subsequently, the live video broadcast apparatus performs the storage operation on all the video picture frames by using the video buffer created in step S101. Subsequently, step S104is performed. It may be understood that, that the video buffer is used to perform the storage operation on all the video picture frames is actually that all the video picture frames extracted by the picture drawing environment are stored in the video buffer. 
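Purely as a conceptual outline of steps S101 to S105, and not as the disclosure's actual implementation, the flow could be sketched in Python as follows; LiveBroadcastSession, drawing_environment, microphone, uploader and synthesize are hypothetical stand-ins for the platform components and for the synchronous synthesis.

```python
import queue
import threading

def synthesize(frame, audio_chunk):
    """Placeholder for the synchronous synthesis of step S104; a real
    implementation would mux the frame and the audio into, e.g., FLV."""
    return {"frame": frame, "audio": audio_chunk}

class LiveBroadcastSession:
    """Skeleton of steps S101 to S105; the injected objects stand in for the
    picture drawing environment, the microphone and the upload channel."""

    def __init__(self, drawing_environment, microphone, uploader):
        self.drawing_environment = drawing_environment
        self.microphone = microphone
        self.uploader = uploader
        self.video_buffer = None

    def start(self):
        self.video_buffer = queue.Queue()                    # S101: create the video buffer
        self.drawing_environment.bind(self.video_buffer)     # S102: bind it with the drawing environment
        threading.Thread(target=self._pump, daemon=True).start()

    def _pump(self):
        while True:
            frame = self.video_buffer.get()                  # S103: stored video picture frame
            audio = self.microphone.read_chunk()             # S104: external voice
            self.uploader.send(synthesize(frame, audio))     # S104/S105: synthesize and upload
```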
In step S104, the live video broadcast apparatus collects an external voice of the live video broadcast apparatus by using a microphone of the live video broadcast apparatus. The external voice may include a sound given by a horn of the live video broadcast apparatus, a voice of a video user, and the like. Such external voice may both include a video sound, and may also include an explanation of a video by a video user, or even surrounding environment music of the video user. Certainly, the video user herein may alternatively disable the video sound as required, and only reserve the voice of the video user and the like. Subsequently, the live video broadcast apparatus synchronously synthesizes the collected external voice and the video picture frame collected in step S103, to generate the video streaming media file, such as an FLV (Flash Video) streaming media file. The video user may control the size of the video streaming media file by controlling a frame rate of the generated video streaming media file. In this way, video picture live broadcast content including both video information and audio information is obtained. Subsequently, step S105is performed. In step S105, the live video broadcast apparatus uploads the video streaming media file synthesized in step S104to the live broadcast server, so that the live broadcast server may use the video streaming media file to perform live broadcasting of the video. In this way, a live video broadcast process of the live video broadcast method in this embodiment is completed. According to the live video broadcast method in this embodiment, the video picture frame and the external voice are synthesized by configuring an additional video buffer, so that the live broadcast server processed a synthesized video streaming media file. The video user may perform live broadcasting of a mobile video anytime. Referring toFIG.2,FIG.2is a flowchart of an embodiment of a live video broadcast method according to the present disclosure. The live video broadcast method in this embodiment may be implemented by using the foregoing mobile terminal. The live video broadcast method in this embodiment includes: Step S201. Receive a live broadcast command, and create a video buffer based on the live broadcast command. Step S202. Bind the video buffer with a picture drawing environment, the picture drawing environment being configured to detect and extract a video picture frame. Step S203. Detect and extract the video picture frame by using the picture drawing environment, and perform a storage operation on all video picture frames by using the video buffer. Step S204. Collect an external voice by using a microphone, and synchronously synthesize the external voice and the video picture frames into a video streaming media file. Step S205. Upload the video streaming media file to a live broadcast server, so that the live broadcast server performs live broadcasting. Step S206. Update the video picture frame to a frame buffer. Step S207. Perform broadcasting of the video picture frame in the frame buffer by using a screen of the live video broadcast apparatus. A specific procedure of each step of the live video broadcast method in this embodiment is described in detail below. In step S201, the live video broadcast apparatus receives the live broadcast command, and creates the video buffer based on the live broadcast command. 
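The remark that the size of the streaming media file can be controlled through its frame rate could be illustrated by a simple governor that skips frames arriving faster than a target rate; this is a hypothetical sketch, not part of the disclosure.

```python
import time

class FrameRateGovernor:
    """Frames arriving faster than the target rate are simply not muxed into
    the stream, which bounds the size of the synthesized media file."""

    def __init__(self, target_fps):
        self.min_interval = 1.0 / target_fps
        self._last_emit = float("-inf")

    def should_emit(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last_emit >= self.min_interval:
            self._last_emit = now
            return True
        return False

governor = FrameRateGovernor(target_fps=30)
print(governor.should_emit(now=0.00))   # True
print(governor.should_emit(now=0.01))   # False, too soon after the previously emitted frame
print(governor.should_emit(now=0.05))   # True, at least 1/30 s has elapsed
```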
Specifically, referring to FIG. 3, FIG. 3 is a flowchart of step S201 in this embodiment of the live video broadcast method according to the present disclosure. Step S201 may include the following steps. Step S301: The live video broadcast apparatus invokes a pre-configured mounting component based on the live broadcast command. On existing live video broadcast apparatuses, especially on a live video broadcast apparatus of an IOS Apple system, a bottom layer of a game engine uses both OpenGL and Metal. Therefore, the pre-configured mounting component may be used to detect game picture loading of the game engine. Subsequently, step S302 is performed. Step S302. When the live video broadcast apparatus detects game picture loading of a game engine, replace a picture loading method by using the mounting component invoked in step S301. Specifically, the live video broadcast apparatus detects whether the picture loading method is invoked, and if the picture loading method is invoked, replaces a picture drawing method in the picture loading method. For example, an occasion of invoking a static method may be triggered by loading a Class of the game into memory, to replace a picture drawing method, for example, presentRenderbuffer, in the picture loading method, so that an OpenGL (Open Graphics Library) context of a game drawing environment may detect updating of a video picture frame. In this way, the live video broadcast apparatus may detect each video picture frame of a current video by using the replaced picture loading method. Subsequently, step S303 is performed. Step S303. The live video broadcast apparatus creates a corresponding video buffer based on the picture loading method replaced in step S302, to perform a storage operation on a video picture frame extracted by the picture drawing environment. Subsequently, step S202 is performed. In step S202, the live video broadcast apparatus binds the video buffer created in step S201 with the picture drawing environment. After binding the video buffer with the picture drawing environment, the live video broadcast apparatus can extract the video picture frame detected by the picture drawing environment into the video buffer. Subsequently, step S203 is performed. In step S203, the live video broadcast apparatus detects and extracts the video picture frame by using the picture drawing environment, and subsequently, the live video broadcast apparatus performs the storage operation on all the video picture frames by using the video buffer created in step S201. Subsequently, step S204 and step S206 are performed. In step S204, the live video broadcast apparatus collects the external voice of the live video broadcast apparatus by using the microphone of the live video broadcast apparatus. Subsequently, the live video broadcast apparatus synchronously synthesizes the collected external voice and the video picture frame collected in step S203, to generate the video streaming media file, such as an FLV (Flash Video) streaming media file. A user may control the size of the video streaming media file by controlling a frame rate of the generated video streaming media file. In this way, video picture live broadcast content including both video information and audio information is obtained. Subsequently, step S205 is performed. In step S205, the live video broadcast apparatus uploads the video streaming media file synthesized in step S204 to the live broadcast server, so that the live broadcast server may use the video streaming media file to perform live broadcasting of the video.
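On an IOS Apple system the replacement described above would typically be performed by swizzling an Objective-C drawing call such as presentRenderbuffer; the following Python sketch only mirrors the idea with a monkey patch, and all names in it are hypothetical rather than part of the disclosure.

```python
import queue

class GameDrawingEnvironment:
    """Stand-in for the game drawing environment; present_render_buffer plays
    the role of the picture drawing method that the mounting component replaces."""
    def present_render_buffer(self, frame):
        return frame                      # normally: actually draw the frame

def install_mounting_component(environment_class, video_buffer):
    """Analogue of steps S301 to S303: keep the original drawing method and
    substitute a wrapper that also stores every drawn frame into the newly
    created video buffer."""
    original = environment_class.present_render_buffer

    def replacement(self, frame):
        video_buffer.put(frame)           # extract the video picture frame
        return original(self, frame)      # preserve the normal drawing behaviour

    environment_class.present_render_buffer = replacement

video_buffer = queue.Queue()              # the video buffer of step S303
install_mounting_component(GameDrawingEnvironment, video_buffer)
GameDrawingEnvironment().present_render_buffer("frame-1")
print(video_buffer.get())                 # "frame-1" was captured as a side effect of drawing
```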
Specifically, when performing live broadcasting of the video streaming media file, the live broadcast server may receive comment information of another terminal; and subsequently synthesize the comment information with the video streaming media file, to obtain a live streaming media file, and finally the live broadcast server performs a playback operation on the live streaming media file. In this way, comments of others users may be presented in a timely manner during live broadcasting of the mobile video. In addition, the live broadcast server may further receive barrage comments information from another terminal. Subsequently, when performing the playback operation on the video streaming media file, the live broadcast server synchronously superimpose the comments information on a playback screen. In this way, barrage comments information of others users may be presented in a timely manner during live broadcasting of the mobile video. In step S206, in addition to uploading the video picture to the live broadcast server, presentation of the video picture on the live video broadcast apparatus also needs to be ensured continuously. Therefore, the live video broadcast apparatus further needs to update the video picture frame extracted in step S203from the video buffer to the frame buffer. The frame buffer is configured to perform the storage operation on the video picture frame presented on the screen of the live video broadcast apparatus. Subsequently, step S207is performed. It may be understood that, in fact, S206is that the video picture frame in the video buffer is extracted and stored in the frame buffer, so that the video picture frame in the frame buffer may be drawn onto a display interface. That is, the video picture frame in the frame buffer is displayed on a terminal screen, to implement playback of the video picture. In step S207, the live video broadcast apparatus performs playback of the video picture frame in the frame buffer by using the screen. In this way, normal display of the video picture on the live video broadcast apparatus is also ensured. In this way, a live video broadcast process of the live video broadcast method in this embodiment is completed. Based on the previous embodiment, in the live video broadcast method in this embodiment, the mounting component is disposed, to further improve stability of obtaining the video picture frame; setting of the video buffer and the frame buffer ensures normal display of a live broadcast picture and the video picture; and the live broadcast server may further present barrage comments and a comment of another user in a timely manner. The present disclosure further provides a live video broadcast apparatus. Referring toFIG.4,FIG.4is a schematic structural diagram of a live video broadcast apparatus according to an embodiment of the present disclosure. The live video broadcast apparatus in this embodiment may use the live video broadcast method in the foregoing first embodiment. The live video broadcast apparatus40in this embodiment includes: one or more memories; and one or more processors, the one or more memories storing one or more command modules, configured to be executed by the one or more processors, the one or more command modules including a buffer creation module41, a buffer binding module42, a picture frame storage module43, a synthesizing module44, and an uploading module45. The buffer creation module41is configured to: receive a live broadcast command, and create a video buffer based on the live broadcast command. 
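The superimposition of barrage comments on the playback screen could be pictured, in a purely illustrative way and with hypothetical names, as comments grouped by the playback time at which they should appear and attached to the frame being played back at that time.

```python
from collections import defaultdict

class BarrageOverlay:
    """Sketch of superimposing barrage comments received from other terminals
    onto the frames of the live stream during playback."""

    def __init__(self):
        self._comments_by_second = defaultdict(list)

    def add_comment(self, playback_second, text):
        self._comments_by_second[playback_second].append(text)

    def render(self, frame, playback_second):
        # A real player would draw the text over the frame, possibly with a
        # scrolling animation; here the overlay is simply returned with the frame.
        return {"frame": frame, "overlay": self._comments_by_second.get(playback_second, [])}

overlay = BarrageOverlay()
overlay.add_comment(12, "nice move!")
print(overlay.render("frame@12s", 12))    # the comment appears together with that frame
```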
The buffer binding module42is configured to bind the video buffer with a picture drawing environment, where the picture drawing environment is configured to detect and extract a video picture frame. The picture frame storage module43is configured to: detect and extract the video picture frame by using the picture drawing environment, and perform a storage operation on all video picture frames by using the video buffer. The synthesizing module44is configured to: collect an external voice by using a microphone, and synchronously synthesize the external voice and the video picture frames into a video streaming media file. The uploading module45is configured to upload the video streaming media file to a live broadcast server, so that the live broadcast server performs live broadcasting. When the live video broadcast apparatus40is used, the buffer creation module41receives a live broadcast command of a user. The live broadcast command is a command that the user requests to upload a current video picture of the live video broadcast apparatus, for example, a game picture of a currently running game, to the live broadcast server, to perform live broadcasting of the video picture. Subsequently, the buffer creation module41creates a video buffer based on the live broadcast command. The video buffer is used to perform a storage operation on a video picture frame of a current video. Subsequently, the buffer binding module42binds the video buffer created by the buffer creation module41with a picture drawing environment. The picture drawing environment is configured to detect and extract the video picture frame of the current video. The picture drawing environment may be a game drawing environment such as an OpenGL (Open Graphics Library) context. After binding the video buffer with the picture drawing environment, the live video broadcast apparatus may extract the video picture frame detected by the picture drawing environment into the video buffer. Subsequently, the picture frame storage module43detects and extracts the video picture frame by using the picture drawing environment. Subsequently, the picture frame storage module43performs a storage operation on all video picture frames by using the video buffer created by the buffer creation module. Subsequently, the synthesizing module44collects an external voice of the live video broadcast apparatus by using the microphone of the live video broadcast apparatus40. The external voice may include a sound given by a horn of the live video broadcast apparatus, a voice of a video user, and the like. Such external voice may both include a video sound, and may also include an explanation of a video by a video user, or even surrounding environment music of the video user. Certainly, the video user herein may alternatively disable the video sound as required, and only reserve the voice of the video user and the like. Subsequently, the synthesizing module44synchronously synthesizes the collected external voice and the video picture frame collected by the picture frame storage module, to generate the video streaming media file, such as an FLV (Flash Video) streaming media file. The video user may control the size of the video streaming media file by controlling a frame rate of the generated video streaming media file. In this way, video picture live broadcast content including both video information and audio information is obtained. 
Finally, the uploading module45uploads the video streaming media file synthesized by the synthesizing module44to the live broadcast server, so that the live broadcast server may perform live broadcasting of the video by using the video streaming media file. In this way, a live video broadcast process of the live video broadcast apparatus40in this embodiment is completed. According to the live video broadcast apparatus in this embodiment, the video picture frame and the external voice are synthesized by configuring an additional video buffer, so that the live broadcast server processed a synthesized video streaming media file. The video user may perform live broadcasting of a mobile video anytime. Referring toFIG.5,FIG.5is a schematic structural diagram of a live video broadcast apparatus according to an embodiment of the present disclosure. The live video broadcast apparatus in this embodiment may use the live video broadcast method in the foregoing second embodiment. The live video broadcast apparatus50in this embodiment includes:one or more memories; andone or more processors, the one or more memories storing one or more command modules, configured to be executed by the one or more processors,the one or more command modules including a buffer creation module51, a buffer binding module52, a picture frame storage module53, a synthesizing module54, an uploading module55, a frame buffer updating module56, and a terminal playback module57. The buffer creation module51is configured to: receive a live broadcast command, and create a video buffer based on the live broadcast command. The buffer binding module52is configured to bind the video buffer with a picture drawing environment, where the picture drawing environment is configured to detect and extract a video picture frame. The picture frame storage module53is configured to: detect and extract the video picture frame by using the picture drawing environment, and perform a storage operation on all video picture frames by using the video buffer. The synthesizing module54is configured to: collect an external voice by using a microphone, and synchronously synthesize the external voice and the video picture frames into a video streaming media file. The uploading module55is configured to upload the video streaming media file to a live broadcast server, so that the live broadcast server performs live broadcasting. The frame buffer updating module56is configured to update the video picture frame to a frame buffer. The terminal playback module57is configured to perform playback of the video picture frame in the frame buffer by using a screen of a mobile terminal. Referring toFIG.6,FIG.6is a schematic structural diagram of a buffer creation module of a live video broadcast apparatus according to an embodiment of the present disclosure. The buffer creation module51includes a mounting component invoking unit61, a loading method replacement unit62, and a buffer creation unit63. The mounting component invoking unit61is configured to invoke a pre-configured mounting component based on a live broadcast command. The loading method replacement unit62is configured to replace a picture loading method by using the mounting component. The buffer creation unit63creates a corresponding video buffer based on a replaced picture loading method. Referring toFIG.7,FIG.7is a schematic structural diagram of a loading method replacement unit of a buffer creation module of a live video broadcast apparatus according to an embodiment of the present disclosure. 
The loading method replacement unit62includes a detection subunit71and a loading method replacement subunit72. The detection subunit71is configured to detect whether a picture loading method is invoked. The loading method replacement subunit72is configured to: if the picture loading method is invoked, replace a picture drawing method in the picture loading method, so that updating of a video picture frame is detected by using a picture drawing environment. When the live video broadcast apparatus50in this embodiment is used, the buffer creation module51first receives the live broadcast command, and creates the video buffer based on the live broadcast command. Specifically, the mounting component invoking unit61of the buffer creation module51invokes the mounting component based on the live broadcast command. On existing live video broadcast apparatuses, especially on a live video broadcast apparatus of an IOS Apple system, a bottom layer of a game engine uses both OpenGL and Metal. Therefore, the pre-configured mounting component may be used to detect game picture loading of the game engine. When the live video broadcast apparatus50detects game picture loading of the game engine, the loading method replacement unit62of the buffer creation module51immediately uses the mounting component invoked by the mounting component invoking unit to replace the picture loading method. Specifically, the detection subunit71of the loading method replacement unit62detects whether the picture loading method is invoked. If the picture loading method is invoked, the loading method replacement subunit72of the loading method replacement unit62replaces the picture drawing method in the picture loading method. For example, an occasion of invoking a static method may be triggered by loading Class of a game to a memory, to replace the picture drawing method, for example, presentRenderbuffer, in the picture loading method, so that an OpenGL (Open Graphics Library) context of a game drawing environment may detect updating of a video picture frame. In this way, the live video broadcast apparatus50may detect each video picture frame of a current video by using the replaced picture loading method. The buffer creation unit63of the buffer creation module51creates the corresponding video buffer based on the picture loading method replaced by the loading method replacement unit62, so as to perform the storage operation on the video picture frame extracted by the picture drawing environment. Subsequently, the buffer binding module52binds the video buffer created by the buffer creation module51with the picture drawing environment. After binding the video buffer with the picture drawing environment, the live video broadcast apparatus50may extract the video picture frame detected by the picture drawing environment into the video buffer. Subsequently, the picture frame storage module53detects and extracts the video picture frame by using the picture drawing environment. Subsequently, the picture frame storage module53performs the storage operation on all video picture frames by using the video buffer created by the buffer creation module. Subsequently, the synthesizing module54collects the external voice of the live video broadcast apparatus by using the microphone of the live video broadcast apparatus50. 
Subsequently, the synthesizing module54synchronously synthesizes the collected external voice and the video picture frame collected by the picture frame storage module, to generate the video streaming media file, such as an FLV (Flash Video) streaming media file. A user may control the size of the video streaming media file by controlling a frame rate of the generated video streaming media file. In this way, video picture live broadcast content including both video information and audio information is obtained. Finally, the uploading module55uploads the video streaming media file synthesized by the synthesizing module54to the live broadcast server, so that the live broadcast server may perform live broadcasting of the video by using the video streaming media file. Specifically, referring toFIG.8,FIG.8is a schematic structural diagram1of a live broadcast server corresponding to a live video broadcast apparatus according to an embodiment of the present disclosure. The live broadcast server80includes a comment receiving module81, a comment synthesizing module82, and a playback module83. The comment receiving module81is configured to receive comment information of another terminal. The comment synthesizing module82is configured to synthesize the comment information and the video streaming media file, to obtain a live streaming media file. The playback module83is configured to perform a playback operation on the live streaming media file. When the live broadcast server80performs live broadcasting of the video streaming media file, the comment receiving module81may receive the comment information of the another terminal. Subsequently, the comment synthesizing module82synthesizes the comment information with the video streaming media file, to obtain the live streaming media file. Finally, the playback module83performs the playback operation on the live streaming media file. In this way, comments of others users may be presented in a timely manner during live broadcasting of the mobile video. In addition, further referring toFIG.9,FIG.9is a schematic structural diagram2of a live broadcast server corresponding to a live video broadcast apparatus according to an embodiment of the present disclosure. The live broadcast server90includes a barrage comments receiving module91and a playback module92. The barrage comments receiving module91is configured to receive barrage comments information of another terminal. The playback module92is configured to perform a playback operation on the video streaming media file, and synchronously superimpose the barrage comments information on a playback screen. The barrage comments receiving module91receives the barrage comments information of the another terminal. Subsequently, the playback module92performs the playback operation on the video streaming media file, and synchronously superimposes the barrage comments information on the playback screen. In this way, barrage comments information of others users may be presented in a timely manner during live broadcasting of the mobile video. Meanwhile, in addition to uploading the video picture to the live broadcast server, presentation of the video picture on the live video broadcast apparatus also needs to be ensured continuously. Therefore, the frame buffer updating module56of the live video broadcast apparatus50further needs to update the video picture frame extracted by the picture frame storage module from the video buffer to the frame buffer. 
The frame buffer is configured to perform a storage operation on the video picture frame presented on the screen of the live video broadcast apparatus. Finally, the terminal playback module57performs playback of the video picture frame in the frame buffer by using the screen. In this way, normal display of the video picture on the live video broadcast apparatus is also ensured. In this way, a live video broadcast process of the live video broadcast apparatus50in this embodiment is completed. Based on the foregoing first embodiment, in the live video broadcast apparatus in this embodiment, the mounting component is disposed, to further improve stability of obtaining the video picture frame; setting of the video buffer and the frame buffer ensures normal display of a live broadcast picture and the video picture; and the live broadcast server may further present barrage comments and a comment of another user in a timely manner. A specific working principle of a live video broadcast method and apparatus of the present disclosure is described below by using a specific embodiment. Referring toFIG.10,FIG.10is a sequence diagram of a working process of the specific embodiment of the live video broadcast method and apparatus according to the present disclosure. The live video broadcast apparatus may be disposed in a corresponding mobile terminal having an IOS Apple system. The mobile terminal includes: a game drawing environment configured to detect and extract a game picture frame, a mounting component invoking the game drawing environment, a video buffer storing a game picture frame for live broadcasting, a frame buffer storing a game picture frame displayed on the mobile terminal, a display screen that is on the mobile terminal and on which the game picture frame is displayed, a microphone collecting an external voice, and a synchronous synthesizing thread synthesizing the external voice and the game picture frame in the video buffer. A live video broadcast process of this specific embodiment includes:
Step S1001. After receiving a game live broadcast command of a user, the mobile terminal invokes the corresponding mounting component.
Step S1002. The mounting component creates the video buffer by replacing a game picture loading method.
Step S1003. Bind the video buffer with the game drawing environment.
Step S1004. The game drawing environment detects the game picture frame, and stores the game picture frame in the video buffer.
Step S1005. The video buffer sends the game picture frame to the synchronous synthesizing thread, and the microphone also sends the collected external voice to the synchronous synthesizing thread at the same time.
Step S1006. The synchronous synthesizing thread synchronously synthesizes the external voice and the game picture frame, to form an FLV streaming media file; and sends the FLV streaming media file to the live broadcast server to perform live broadcasting of a game video.
Step S1007. The video buffer sends the game picture frame to the frame buffer.
Step S1008. The display screen displays the game picture frame in the frame buffer, thereby ensuring normal display of a game picture on the mobile terminal.
In this way, according to the live video broadcast method and apparatus in this specific embodiment, live broadcast display of the game picture on the live broadcast server and local display of the game picture on the mobile terminal are completed. 
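By way of a further non-limiting illustration, steps S1001 to S1008 may be summarized in the following Python sketch. Every class and function here is a simplified, hypothetical stand-in for the corresponding component of the mobile terminal (drawing environment, video buffer, frame buffer, microphone, and synchronous synthesizing thread); the actual capture, FLV synthesis, and upload logic is elided.

    # Conceptual, runnable sketch of steps S1001-S1008; all classes are
    # hypothetical stand-ins, not the mobile terminal's real components.

    class VideoBuffer:
        def __init__(self):
            self.frames = []

        def store(self, frame):
            self.frames.append(frame)

    class FrameBuffer:
        def __init__(self):
            self.frame = None

        def update(self, frame):
            self.frame = frame

    class DrawingEnvironment:
        def __init__(self):
            self.video_buffer = None

        def bind(self, video_buffer):          # S1003
            self.video_buffer = video_buffer

        def detect_frames(self, count=3):      # S1004 (simulated frame source)
            for index in range(count):
                yield {"index": index}

    def synthesize(voice, frame):              # S1006 (stand-in for FLV synthesis)
        return {"audio": voice, "video": frame}

    def live_broadcast():
        video_buffer = VideoBuffer()           # S1001/S1002: buffer created via the mounting component
        frame_buffer = FrameBuffer()
        drawing_env = DrawingEnvironment()
        drawing_env.bind(video_buffer)         # S1003
        flv_stream = []
        for frame in drawing_env.detect_frames():
            video_buffer.store(frame)                          # S1004
            voice = "mic sample %d" % frame["index"]           # S1005
            flv_stream.append(synthesize(voice, frame))        # S1006: pushed to the live broadcast server
            frame_buffer.update(frame)                         # S1007
            print("display screen shows frame", frame_buffer.frame["index"])  # S1008
        return flv_stream

    print(len(live_broadcast()), "synthesized chunks pushed to the live broadcast server")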
According to the live video broadcast method and apparatus of the present disclosure, the video picture frame and the external voice are synthesized by configuring an additional video buffer, so that the live broadcast server processes a synthesized video streaming media file. A video user may perform live broadcasting of a mobile terminal any time, and the live broadcast server may present barrage comments and a comment of another user in a timely manner, to resolve technical problems that live broadcasting of the mobile video cannot be performed any time according to an existing live video broadcast method and apparatus, and the barrage comments and the comments of other users cannot be presented in a timely manner during live broadcasting of the mobile video. The terms, such as “component”, “module”, “system”, “interface”, and “process”, used in the present disclosure generally indicate a computer-related entity: hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable application, an executed thread, a program, and/or a computer. With reference to the drawings, an application running on a controller and the controller may both be components. One or more components may be in an executed process and/or thread and the components may be located on one computer and/or distributed between or among two or more computers. FIG.11and the following discussions provide short and summary descriptions of a working environment of an electronic device in which the live video broadcast apparatus of the present disclosure is located. The working environment inFIG.11is only an instance of a suitable working environment, and is not intended to suggest any limitation to a scope of a purpose or a function of the working environment. The instance of an electronic device1112includes, but is not limited to, a wearable device, a head mounted device, a medical health platform, a personal computer, a server computer, a handheld or laptop device, a mobile device (for example, a mobile phone, a personal digital assistant (PDA), and a media player), a multiprocessor system, a consumption-based electronic device, a minicomputer, a mainframe computer, a distributed computing environment including the foregoing any system or device, and the like. Although not required, this embodiment is described under a general background that “a computer-readable instruction” is executed by one or more electronic devices. The computer-readable instruction may be distributed by a computer-readable medium (discussed below). The computer readable instruction may be implemented as a program module, for example, a function, an object, an application programming interface (API), or a data structure for performing a specific task or implementing a specific abstract data type. Typically, functions of the computer readable instruction may be randomly combined or distributed in various environments. FIG.11shows the instance of the electronic device1112including one or more embodiments of the live video broadcast apparatus of the present disclosure. In a configuration, the electronic device1112includes at least one processing unit1116and a memory1118. Based on an exact configuration and type of the electronic device, the memory1118may be a volatile memory (for example, a RAM), a non-volatile memory (for example, a ROM or a flash memory), or a combination thereof. 
The configuration is represented by using a dashed line1114inFIG.11. In another embodiment, the electronic device1112may include an additional feature and/or function. For example, the device1112may further include an additional storage apparatus (for example, a removable/or non-removable storage apparatus), and includes, but is not limited to, a magnetic storage apparatus, an optical storage apparatus, and the like. The additional storage apparatus is represented by using a storage apparatus1120inFIG.11. In an embodiment, a computer-readable instruction used to implement one or more embodiments provided in the present disclosure may be stored in the storage apparatus1120. The storage apparatus1120may further be configured to store other computer-readable instructions for implementing an operating system, an application program, and the like. For example, the computer-readable instruction may be loaded to the memory1118, and executed by the processing unit1116. The term “computer-readable media” used in the present disclosure includes a computer storage medium. The computer storage media includes volatile and non-volatile media, and removable and non-removable media implemented by using any method or technology used for storing information such as a computer-readable instruction or other data. The memory1118and the storage apparatus1120are instances of the computer storage media. The computer storage media includes, but is not limited to a RAM, a ROM, an EEPROM, a flash memory or another storage technology, a CD-ROM, a digital versatile disc (DVD) or another optical storage apparatus, a cassette, a magnetic tape, a magnetic disk storage device, or any other media configured to store desired information and accessed by the electronic device1112. Such a computer storage medium may be a part of the electronic device1112. The electronic device1112may further include a communications connection1126allowing communication between the electronic device1112and another device. The communications connection1126may include, but is not limited to, a modem, a network interface card (NIC), an integrated network interface, RF transmitter/receiver, infrared port, a USB connection, or another interface configured to connect the electronic device1112to another electronic device. The communications connection1126may include a wired connection or a wireless connection. The communications connection1126may transmit and/or receive a communications medium. The term “computer-readable media” may include the communications medium. Typically, the communications medium includes a computer-readable instruction or other data in a “modulated data signal”, for example, a carrier or another transmission mechanism, and includes any information transmission medium. The term “modulated data signal” may include such a signal: One or more features of the signal are set or changed by encoding information into the signal. The electronic device1112may include an input device1124, for example, a keyboard, a mouse, a stylus, a voice input device, a touch input device, an infrared camera, a video input device, and/or any other input device. The device1112may further include an output device1122, for example, one or more displays, a speaker, a printer, and/or any other output device. The input device1124and the output device1122may be connected to the electronic device1112through a wired connection, a wireless connection, or any combination thereof. 
In an embodiment, an input device or an output device of another electronic device may be used as the input device1124or the output device1122of the electronic device1112. The components of the electronic device1112may be connected by using various interconnects (for example, a bus). Such interconnect may include a peripheral component interconnect (PCI) (for example, a PCI express), a universal serial bus (USB), a live line (IEEE 1394), an optical bus structure, and the like. In another embodiment, the components of the electronic device1112may be interconnected by using a network. For example, the memory1118may include a plurality of physical memory units located at different physical positions and interconnected by using the network. A person skilled in the art may be aware that a storage device configured to store the computer-readable instruction may be distributed across the network. For example, the electronic device1130that may be accessed by using a network1128may store a computer-readable instruction used to implement one or more embodiments of the present disclosure. The electronic device1112may access the electronic device1130and download a part or all of the computer-readable instruction for execution. Alternatively, the electronic device1112may download a plurality of computer-readable instructions as required, or some instructions may be executed by the electronic device1112and some instructions may be executed by the electronic device1130. The present disclosure provides various operations of embodiments. In an embodiment, the one or more operations may constitute one or more computer-readable instructions stored on a computer-readable medium, and the computer-readable instructions enable a computing device to perform the operations when the computer-readable instructions are executed by an electronic device. Describing a sequence of some or all operations shall not be interpreted as implying that the operations must be sequentially related. A person skilled in the art will understand an alternative sequence having the benefits of the present specification. Moreover, it should be understood that not all operations necessarily exist in each embodiment provided in the present disclosure. Furthermore, although the present disclosure is shown and described by using one or more implementation manners, a person skilled in the art may conceive of equivalent variations and modifications based on reading and understanding on the specification and the accompany drawings. The present disclosure includes all such variations and modifications, which is only limited by the scope of the appended claims. In particular regard to the various functions performed by the foregoing components (such as elements and resources), terms used to describe such components are intended to correspond to any component (unless indicated otherwise) performing specified functions of the components (for example, the components are equivalent in functions), even though structures of the functions are not equivalent to the disclosed structures of functions in the exemplary implementation manners in the present disclosure shown in the specification. In addition, although specific features of the specification are disclosed with respect to only one of several implementation manners, the features may be combined with one or more other features of other implementation manners that are desirable for and advantageous to a given or specific application. 
Moreover, when the terms “include”, “contain” and any variants thereof are used in a specific implementation or the claims, the terms are intended to be inclusive in a manner similar to “include”. Functional units according to the embodiments of the present disclosure may be integrated in one processing module or exist as separate physical units, or two or more units may be integrated in one module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. If implemented in the form of software functional modules and sold or used as an independent product, the integrated modules may also be stored in a computer-readable storage medium. The aforementioned storage medium may be a read-only memory, a magnetic disk, or an optical disc. The foregoing apparatuses or systems can execute the methods in the corresponding method embodiments. An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the foregoing method. In conclusion, although the present disclosure is disclosed above by using the embodiments, the sequence numbers such as “first” and “second” in the embodiments are used merely for ease of description and do not limit a sequence of the embodiments of the present disclosure. Moreover, the foregoing embodiments are not used to limit the present disclosure. A person of ordinary skill in the art may make various modifications and refinements without departing from the spirit of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the scope defined by the claims.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Encoding for video distribution has traditionally followed a constant bitrate (CBR) approach. The CBR approach holds bitrate substantially constant within a sliding window of the encoded asset. For example, an 8 Mbps encoding with a sliding window of one second implies that each one-second sequence within the content asset has the size of 1 Mbyte. This model can be formalized in terms of a hypothetical reference decoder (HRD) defined in ISO/IEC 14496-10 and ISO/IEC 23008-2. A coded picture buffer (CPB) size may be a limitation on the aggregate picture sizes at any given time. The example above can be expressed as a CPB of 1.0 seconds. Constant perceptual visual quality has been shown to improve the quality of a user experience. However, one weakness of the CBR approach is that it allocates similar bit budgets to both relatively simple scenes, such as a talking head, and more complex scenes, such as a battle scene with various moving parts, which may result in significant variation in video quality. Variable bitrate encoding (VBR) is an alternative approach in which bitrate is allowed to vary over time. Benefits of VBR encoding include improved quality constancy and bitrate savings by only allocating bits when they are needed. Traditionally, the focus of VBR has been concentrated on statistical multiplexing. This approach is useful in a “fixed pipe” such as a quadrature amplitude modulation (QAM) infrastructure where 38.810 Mbit/sec channels carry multiplexes of several linear channels. While individual bitrates of these channels may vary, the aggregate bitrate of the multiplex is constant. A statistical multiplexer is a device which accepts information regarding expected content properties for all channels within the multiplex, and explicitly assigns a bit budget to each channel in the multiplex for a time window. The statistical multiplexing approach leads to a constrained VBR where the limit on bitrate of a single channel is determined by the overall capacity of the “fixed pipe” and the relative complexities of the multiplexed linear channels within a given time window. Adaptive streaming over HTTP can be viewed as a VBR approach where video is multiplexed with all other types of IP traffic. In this case, the streaming client takes the role of the statistical multiplexer and requests the video bitrate it can sustain given the bandwidth available to it. This may allow loosely constrained VBR and results in significant bandwidth and storage cost savings. Adaptive streaming is a widely accepted approach to video distribution over best-effort IP networks. Adaptive streaming solves two major problems: dynamic adaptation to changing network conditions and heterogeneity of device capabilities. In cases of high-quality encoding of long sequences, bitrate may vary significantly and “bitrate peaks” may be higher than the average. When media (e.g., video) content is prepared for distribution using systems such as DASH, it may be encoded in multiple representations. Representations can differ by properties such as bitrate, frame rate, resolution, number of channels, sampling rate, etc. For each representation, its media file may be partitioned into media segments, which are small playable chunks of media that are typically about two to ten seconds in length. A client device may parse the media presentation description (MPD) and select a representation that it has the ability to download and present. 
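As a numerical illustration of the CBR/CPB constraint described above, the following Python sketch checks that no one-second window of an encoding exceeds the budget implied by an 8 Mbps target. The frame sizes are fabricated for illustration and do not represent measured data.

    # Sliding-window bitrate check illustrating the CBR/CPB constraint above.
    # Frame sizes are illustrative, not taken from any real encoding.

    FPS = 30
    WINDOW_SECONDS = 1.0
    MAX_BITS_PER_WINDOW = 8_000_000 * WINDOW_SECONDS   # 8 Mbps with a 1-second window

    frame_sizes_bits = [8_000_000 // FPS] * 120         # a perfectly CBR 4-second clip
    window = int(FPS * WINDOW_SECONDS)

    def max_window_bits(sizes, window_len):
        # largest total size observed over any contiguous one-second span
        return max(sum(sizes[i:i + window_len]) for i in range(len(sizes) - window_len + 1))

    peak = max_window_bits(frame_sizes_bits, window)
    print("worst one-second window:", peak, "bits")
    print("within the CBR budget:", MAX_BITS_PER_WINDOW >= peak)

Replacing the uniform frame sizes with sizes from a constant-quality encode would reveal the bitrate peaks discussed below.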
The client device may then start requesting and downloading media segments and may continuously re-evaluate which representation offers the best quality and is sustainable under current network conditions. Significant bitrate peaks are expected with constant quality variable bitrate encoding. These may interfere with client rate adaptation (e.g., using DASH, HLS, etc.) as the client may be unaware of the size of an upcoming segment. Methods and systems are disclosed herein for improving delivery and playback of video content. One example of improving delivery and playback of video content for the VBR approach is to provide advance information about an upcoming media segment in a live or linear media transmission. While the examples discussed throughout this disclosure refer to the use of live or linear media transmission, it is understood that the examples may apply to any type of media transmission, including but not limited to video on demand and IP video transmission. An example system100is shown inFIG.1. The system100may comprise a server102and a device120. The server102may comprise a decoder104, a pre-filter106, a segment analyzer108, an encoder110, and a packager112. One or more of the components104,106,108,110or112may be configured to insert one or more encoding characteristics of a content segment into a portion (e.g., a header) of one or more other content segments of a content asset. While each of the components104,106,108,110and112is shown inFIG.1as being part of the server102, it is understood that one or more of the components may be located externally to the server102. An input to the server102(e.g., at the decoder104) may comprise compressed or uncompressed media (e.g., video). In an example that the input comprises compressed video, the video may be compressed with codecs such as JPEG2000 or Apple® ProRes, or codecs such as MPEG-2 (ITU-T H.262, ISO/IEC 13818-2), MPEG-4 AVC (ITU-T H.264, ISO/IEC 14496-10), and HEVC (ITU-T H.265, ISO/IEC 23008-2), or any other similar type of media content. The decoder104may be configured as a decoder for one or more such standards. In an example that the input comprises uncompressed video input, the decoder104may be configured to accept video over a serial digital interface (SDI) or an Ethernet hardware interface. The pre-filter106may be configured to receive uncompressed video from the decoder104. The pre-filter106may be an optional component of the server102. The pre-filter106may be configured to receive input video characteristics (e.g., statistics) and compressed domain characteristics (e.g., frame types, frame and block-level quantizers, motion vectors, and coefficients). The decoder104and the pre-filter106may be configured to pass both the input video and the content characteristics to the segment analyzer108. The segment analyzer108may be configured to determine one or more encoding characteristics of a content segment in advance of the content segment's transmission time or playback time. The one or more encoding characteristics of the content segment may comprise an estimated bitrate required for transmission of the content segment or the existence of a significant change in content properties (e.g., an upcoming change in content complexity as seen in the pixel-domain and compressed-domain properties of the input content). The segment analyzer108may be configured to pass the estimated characteristics of the content segment to one or more of the encoder110and the packager112. 
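The data flow between these server-side components may be illustrated with the following Python sketch. Each class is a simplified, hypothetical stand-in for the corresponding component (decoder104, pre-filter106, segment analyzer108, encoder110, packager112), and the bitrate rule inside the analyzer is an arbitrary toy heuristic rather than the estimation techniques described in this disclosure.

    # Conceptual wiring of the server-side components of FIG. 1.
    # Every class below is a simplified stand-in with made-up logic.

    class Decoder:
        def decode(self, compressed_input):
            # produces uncompressed video plus compressed-domain statistics
            return {"pixels": compressed_input, "stats": {"motion": 0.4, "variance": 0.6}}

    class PreFilter:
        def characteristics(self, decoded):
            return decoded["stats"]

    class SegmentAnalyzer:
        def estimate(self, characteristics):
            # toy rule: more motion/variance means a higher estimated segment bitrate
            score = characteristics["motion"] + characteristics["variance"]
            return {"estimated_bitrate_bps": int(1_000_000 + 3_000_000 * score)}

    class Encoder:
        def encode(self, decoded, estimate):
            # a first (lookahead) pass would refine the estimate; a second pass encodes
            return {"payload": b"coded segment", "characteristics": estimate}

    class Packager:
        def package(self, prior_segment, next_characteristics):
            # the characteristics of the upcoming segment go into the header of the
            # segment that is transmitted before it
            prior_segment["header"] = {"next_segment": next_characteristics}
            return prior_segment

    decoder, prefilter, analyzer, encoder, packager = Decoder(), PreFilter(), SegmentAnalyzer(), Encoder(), Packager()
    decoded = decoder.decode(b"input video")
    estimate = analyzer.estimate(prefilter.characteristics(decoded))
    segment = encoder.encode(decoded, estimate)
    print(packager.package({"payload": b"prior segment"}, segment["characteristics"]))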
The encoder110may perform a lookahead operation to determine the one or more encoding characteristics of the content segment. Using the lookahead operation, the encoder110may determine one or more of a size of the content segment, a quality of the content segment, and a resolution of the content segment, and may use the information associated with the content segment to determine the one or more encoding characteristics of the content segment. The encoder110may be configured to encode multiple versions of the content segment, such as a version of the content segment at a plurality of resolutions (e.g., 480p, 720p, 1080p, etc.) which may be packaged by the packager112. The encoder110may be a multi-pass encoder. A first stage (lookahead) encoder may be configured to examine the content segment and to determine one or more encoding characteristics for the content segment. The first stage encoder may comprise a real-time encoder, and may provide a second stage encoder with frame encoding parameters such as a type, a quantizer, and a maximum number of bits. The second stage encoder may receive the information from the first encoder and produce a compressed frame or segment. The encoder110may be configured to pass compressed frame to the packager112, which may generate media segments and manifests such as MPD (DASH) or m3u8 (HLS). The packager112may be configured to insert the one or more encoding characteristics of the content segment into a portion of one or more other content segments using mechanisms such as, for example, the DASH inband message or an ID3v2 tag. The packager112may insert the one or more encoding characteristics of the content segment into a header of one or more other content segments. The one or more other content segments may be configured for transmission to the device120prior to the content segment. The packager112may embed the estimated characteristics in the media content and transmit it to the device120. WhileFIG.1shows an example where the packager112inserts that one or more encoding characteristics into the portion of the content segment, it is understood that this functionality may be performed by any other of the components shown inFIG.1or a combination of those components. The device120may be configured to receive, in a header of another content segment, one or more encoding characteristics of a content segment, and to determine one or more playback characteristics of the content segment. The device120may comprise a display122and a speaker124. The display122may be configured to display one or more content segments of the content asset. The display22may include any device capable of displaying video or image content to a user, such as a tablet, a computer monitor, or a television screen. The display122may be part of the device120such as in the example that the device120is a tablet or a computer. The display122may be separate from the device120such as in an example that the device120is a set top box and the display122is a television screen in electrical communication with the set top box. The speaker124may be configured to output audio associated with the content segments. The speaker124may be any device capable of outputting audio associated with the media file. The speaker124may be part of the device120such as in the example that the device120is streaming player or a tablet or a computer. 
The speaker124may be separate from the device120such as in an example that the device120is a set top box and the speaker124is a television or other external speaker in electrical communication with the set top box. FIG.2shows a flow chart of an example method200. At step202, one or more encoding characteristics of a first content segment may be determined. The first content segment may be associated with a content asset. The content asset may be any type of media asset capable of being played by a device, such as a television show, a movie, streaming media content, etc., or a portion thereof. The content asset may comprise a plurality of content segments. Each of the content segments may correspond to a portion of the content asset, such as a two-second portion of the content asset or a ten-second portion of the content asset. The one or more encoding characteristics of the first content segment may comprise an estimated bitrate required for transmission of the first content segment over a network. The one or more encoding characteristics may additionally or alternatively comprise a segment duration, a non-reference visual quality metric of the segment, a full-reference visual quality metric of the segment, etc. The visual quality metrics of the content segment may indicate a quality of the segment in cases of different pixel densities of the display device (e.g., pixel density of a 1080p phone vs. a 1080p 40-inch TV). The one or more encoding characteristics of the first content segment may be determined based on information associated with the first content segment, such as a size of a prior segment within the content asset, properties of the content segment representation such as a resolution, a bit depth, a dynamic range, and a frame rate of the content segment, and/or one or more other content characteristics including but not limited to a non-reference indication of visual quality of the contribution (source) version of the first content segment (including an indication of artefacts like blockiness). Such content features may be statistical properties of the media picture (variance, histograms, etc.), properties of a plurality of consecutive frames (scene boundaries and their types, motion field characteristics such as quality and uniformity, etc.), as well as the properties of the contribution (source) video format (e.g., percentage of skip modes, quantizer values, etc., of the incoming video), an indication of future change of content transmitted as an SCTE 35 message, etc. The determination of the estimated encoding characteristics may be performed using machine learning approaches including but not limited to convolutional neural networks (CNN) and recurrent neural networks (RNN). In cases of machine learning techniques accepting feedback, the differences between the actual encoding characteristics of the first content segment and its estimate may be used as such feedback. It is understood that the encoding characteristics are not limited to these examples and may include any type of encoding characteristic of the first content segment. It is further understood that the characteristics may additionally or alternatively include other characteristics of the first content segment not including encoding characteristics. 
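The feedback-assisted estimation described above may be illustrated with the following simplified Python sketch. A running bias correction stands in for the machine learning approaches (e.g., CNN or RNN) named above, and the feature names and coefficients are illustrative assumptions only.

    # Simplified stand-in for the learned estimator described above: a running
    # bias correction plays the role of the feedback loop; the CNN/RNN approaches
    # of this disclosure are not reproduced here.

    class SegmentBitrateEstimator:
        def __init__(self):
            self.bias = 0.0  # updated from (actual - estimate) feedback

        def estimate(self, features):
            # toy linear model over a few of the content features named above
            base = (1_500_000
                    + 4_000_000 * features["motion_uniformity"]
                    + 2_000_000 * features["variance"]
                    + 1_000_000 * features["scene_change"])
            return base + self.bias

        def feedback(self, estimated, actual):
            # difference between the actual characteristic and its estimate
            self.bias += 0.2 * (actual - estimated)

    estimator = SegmentBitrateEstimator()
    features = {"motion_uniformity": 0.3, "variance": 0.5, "scene_change": 0.0}
    guess = estimator.estimate(features)
    estimator.feedback(guess, actual=4_100_000)
    print("first estimate:", int(guess), "next estimate:", int(estimator.estimate(features)))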
Determining the one or more encoding characteristics of the first content segment may comprise performing a lookahead operation in the content asset. Using the lookahead operation, the server may be configured to determine the information associated with the first content segment in advance of the first content segment's transmission time or playback time. The server may use the information from the lookahead operation to determine an estimated bitrate required for transmission of the first content segment over the network. For example, the server may determine that a bitrate of 4 Mbps may be required for a 1080p (e.g., 1920×1080, 2.07 megapixels) version of the first content segment and that a bitrate of 2 Mbps may be required for a 720p (1280×720, 0.92 megapixels) version of the first content segment. The lookahead operation may comprise a trial encode of the first content segment in order to determine information such as a bit allocation and a slice type for the first content segment. This information may then be used as a starting point for encoding the first content segment. A segment encoding function may accept this information and may return a final encoded first content segment where the media content within the first content segment is encoded based on this information. The server may be able to determine the one or more playback characteristics of the content segment without referring to a manifest file associated with the content asset. At step204, an indication of the one or more encoding characteristics of the first content segment may be inserted into a portion of a second content segment. The second content segment may be associated with the same content asset as the first content segment. The portion of the second content segment may be a header of the second content segment. Inserting the one or more encoding characteristics of the first content segment into the portion of the second content segment may comprise inserting the one or more characteristics of the first content segment into the header of the second content segment. The first content segment and the second content segment may be configured for linear transmission. For example, the first content segment and the second content segment may be configured for adaptive bitrate (ABR) streaming. The one or more encoding characteristics of the first content segment may be inserted into the portion of the second content segment by embedding a table comprising the encoding characteristics of one or more representations of the first content segment into the second segment. This can be done, for example, by using an ‘emsg’ box contained in segment formats of the second content segment such as ISO-BMFF (ISO Base Media File Format) and MPEG-2 TS, and prepending the ‘emsg’ box to the start of the second content segment. In an example that uses MPEG DASH, the encoding characteristics of the first content segment may be carried in one or more XML element(s) of the AdaptationSet or Representation elements within the MPD, as discussed above. In an example that uses HLS, the table may be embedded in a tag within a second-level (rendition) m3u8 file or in an ID3v2 private frame. An Encoder Boundary Point (EBP) may be used in an example where the table would be transmitted at the end of the message. At step206, the second content segment may be sent to a device. The second content segment may comprise an indication of the one or more encoding characteristics of the first content segment. The one or more encoding characteristics of the first content segment may be contained in the header of the second content segment. 
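The table embedding of step204may be illustrated with the following Python sketch, in which a per-representation characteristics table for the upcoming (first) content segment is serialized and prepended to the second content segment. The box layout shown here is a simplified illustration with a made-up four-character code and is not a specification-conformant ISO-BMFF ‘emsg’ box or ID3v2 frame.

    # Sketch of step 204: serialize the characteristics table of the upcoming
    # (first) segment and prepend it to the segment that is sent earlier.
    # The "nxtc" box is a simplified illustration, not a real 'emsg' box.

    import json
    import struct

    def prepend_characteristics(second_segment_bytes, table):
        payload = json.dumps(table).encode("utf-8")
        header = struct.pack(">I4s", 8 + len(payload), b"nxtc")  # size + four-character code
        return header + payload + second_segment_bytes

    table = {
        "next_segment": [
            {"representation": "1080p", "bitrate_bps": 4_000_000, "duration_s": 2.0},
            {"representation": "720p",  "bitrate_bps": 2_000_000, "duration_s": 2.0},
        ]
    }
    segment = prepend_characteristics(b"media data", table)
    print("segment now starts with:", segment[:8])

A receiving client that understands this prepended structure can read the advertised bitrates before deciding which version of the upcoming segment to request.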
The second content segment may be sent to the device prior to the first content segment. The second content segment may be configured for playback by the device prior to the first content segment. The first content segment may be sent to the device after the second content segment. The first content segment may comprise information associated with one or more other content segments, such as one or more encoding characteristics of one or more other content segments that follow the first content segment. The device may be configured to determine one or more playback characteristics of the first content segment based on the received indication of the one or more encoding characteristics of the first content segment contained in the portion of the second content segment. For example, the second content segment (which is played prior to the first content segment) may require a bitrate of 2 Mbps for a 720p resolution of the content segment. The device may determine that a 1080p version of the first content segment requires a bitrate of 4 Mbps and that the 720p version of the first content segment requires a bitrate of 2 Mbps. The device may determine to request playback of one of the 1080p version of the first content segment or the 720p version of the first content segment based on a current network bandwidth available to the device. Multi-pass encoders may be used for providing advance information about an upcoming media segment in live media transmission. A first stage (“lookahead”) encoder may be configured to examine the content segment and to determine one or more encoding characteristics for the content segment, such as an estimated bitrate required for transmission of the content segment. The first stage encoder may comprise a real-time multi-rate adaptive streaming encoder, and may be used for estimation purposes. The first stage encoder may have a lookahead delay of D1 seconds while an estimator may have a delay of D2 seconds (which may be same or longer than a segment duration) to estimate the bitrates and quality parameters (QP) for the second stage encode. As a result of the estimation, a good approximation of size and quality of the upcoming segment may be available before the second encode. The first stage encoder may store information associated with the one or more encoding characteristics for the content segment. The second stage encoder may receive the information from the first encoder and may determine a bitrate for the content segment based on the information received from the first encoder. The second stage encoder may be used for outputting the final bitstreams which, may be packaged by a packager into media segments and further delivered to consumers using protocols such as MPEG DASH and Apple® HLS. The information output by the second encoder may be organized into a table. The information may be indexed by representation order in an adaptation set, a hash of representation ID, or by any other unique per-representation sequence of bytes. When the results of final encodes for each resolution are packaged into segments at the end of the second stage encode, the table may be updated to include information for one or more future segments. Thus, the size and quality estimates for the upcoming segments can be communicated to the adaptive streaming clients. On reception of the information, the client may check whether it can sustain the bitrate given the information known about the upcoming segments. 
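The client-side check just described may be sketched as follows; the advertised segment sizes, nominal bitrates, and bandwidth figures are illustrative assumptions rather than values defined by this disclosure.

    # Sketch of the client-side check: keep only the representations whose
    # advertised upcoming segment sizes the current bandwidth can sustain,
    # and request the best of those. All numbers are illustrative.

    def sustainable(representation, bandwidth_bps, segment_duration_s):
        budget_bits = bandwidth_bps * segment_duration_s
        return all(budget_bits >= size for size in representation["upcoming_sizes_bits"])

    def choose_representation(representations, bandwidth_bps, segment_duration_s=2.0):
        candidates = [r for r in representations
                      if sustainable(r, bandwidth_bps, segment_duration_s)]
        if not candidates:
            # fall back to the representation with the smallest upcoming peak
            return min(representations, key=lambda r: max(r["upcoming_sizes_bits"]))
        return max(candidates, key=lambda r: r["nominal_bitrate_bps"])

    representations = [
        {"name": "1080p", "nominal_bitrate_bps": 4_000_000, "upcoming_sizes_bits": [7_500_000, 9_000_000]},
        {"name": "720p",  "nominal_bitrate_bps": 2_000_000, "upcoming_sizes_bits": [3_800_000, 4_200_000]},
    ]
    print(choose_representation(representations, bandwidth_bps=6_000_000)["name"])  # 1080p
    print(choose_representation(representations, bandwidth_bps=3_000_000)["name"])  # 720p

The same advertised information can also drive the quality-based switch described next.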
It may also estimate the quality of the next two segments and switch to a lower representation in case the quality of this representation is sufficient. The client may be located at the CPE (such as a set-top box, a TV, or a mobile device), at the server-side edge location such as the location of CMTS, or the point at which the traffic is modulated (e.g., QAM). In the case of server-side client, this client may be acting as a statistical multiplexer and may take into account the overall bandwidth available to video services. The overall customer satisfaction may be improved if a predictive model of the top channels (e.g., 20-50 channels) can be built, and higher visual quality rates can be assigned to more popular channels. The approach can be applied both to IP and QAM services, as well as to wireless and UDP-based distribution services, and reliable streaming using protocols such as SRT, WebRTC, ROUTE/DASH, etc. FIG.3shows an example of a content asset300comprising content segments304,308and312. While the content asset300is shown as having three content segments, it is understood that the content asset may have any number of segments. Each of the content segments may have a defined duration, such as two seconds or ten seconds of content. An example thirty-minute content asset may comprise900content segments each being two seconds in duration. Each of the content segments may comprise a header.FIG.3shows a header302associated with content segment304, a header306associated with content segment308, and a header310associated with content segment312. The header associated with a particular content segment may comprise information associated with the content segment, such as a size of the content segment, a quality of the content segment, and a resolution of the content segment. The header may additionally or alternatively comprise information associated with one or more other content segments of the content asset, such as one or more encoding characteristics associated with one or more of the other content segments of the content asset. As disclosed herein, a header of a particular content segment may comprise one or more encoding characteristics of one or more other content segments. Header302associated with content segment304may comprise one or more encoding characteristics associated with content segment308and/or content segment312. Content segments308and312may be configured for transmission to the playback device after content segment304. As shown inFIG.3, a playback device may be configured to receive and cause playback of content segment304, content segment308, and content segment312in that order. The header302associated with content segment304may comprise an estimated bitrate required for transmission of content segment308and/or content segment312over a network. Similarly, header306associated with content segment308may comprise an estimated bitrate required for transmission of content segment312over the network. Upon receipt of the content segment304and the content header302, a playback device may be configured to analyze information contained in the header302to determine one or more playback characteristics of content segment308based on information contained in the header302. The playback device may determine based on the information in the header302that there are multiple versions of the content segment308available for playback. 
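The arrangement ofFIG.3may be illustrated with the following Python sketch, in which each segment's header (e.g., header302for segment304) advertises the estimated per-version bitrates of the segment that follows it. The values shown for segment308mirror the 1080p/720p example used in this description; the values shown for segment312are purely illustrative.

    # Sketch of the FIG. 3 layout: the "header" of each segment stands in for
    # headers 302, 306, and 310, and carries the advertised bitrates of the
    # segment that follows. Bitrate values are illustrative.

    def make_segment(name, media, next_segment_info):
        return {"name": name, "header": {"next": next_segment_info}, "media": media}

    segment_312 = make_segment("segment 312", "media 312", {})
    segment_308 = make_segment("segment 308", "media 308",
                               {"segment 312": {"1080p": 5_000_000, "720p": 2_500_000}})
    segment_304 = make_segment("segment 304", "media 304",
                               {"segment 308": {"1080p": 4_000_000, "720p": 2_000_000}})

    content_asset = [segment_304, segment_308, segment_312]
    for seg in content_asset:
        print(seg["name"], "header advertises:", seg["header"]["next"])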
A first version of the content segment308may have a resolution of 1080p and require a bitrate of 4 Mbps for transmission while a second version of the content segment308may have a resolution of 720p and require a bitrate of 2 Mbps for transmission. The playback device may analyze a network connection or bandwidth available to the device in order to determine which version of the content segment to request. Based on determining that the playback device has a download capacity of 6 Mbps, the playback device may decide to request the 1080p version of the content segment308from the content server. Based on determining that the playback device has a download capacity of 3 Mbps, the playback device may decide to request the 720p version of the content segment308from the content server. FIG.4shows an example method400. At step402, a first content segment may be received. The first content segment may be associated with a content asset. The content asset may be any type of media asset capable of being played by a device, such as a television show, a movie, streaming media content, etc., or a portion thereof. The content asset may comprise a plurality of content segments. Each of the content segments may correspond to a portion of the content asset, such as a two-second portion of the content asset or a ten-second portion of the content asset. The first content segment may be received by a device such as a playback device. A portion of the first content segment may comprise an indication of one or more encoding characteristics of a second content segment of the content asset. The portion of the first content segment may be a header of the first content segment. The one or more encoding characteristics of the second content segment may comprise an estimated bitrate required for transmission of the second content segment over a network. The one or more encoding characteristics of the second content segment may be based on one or more of a size of the second content segment, a quality of the second content segment, and a resolution of the second content segment. The first content segment and the second content segment may be configured for linear transmission. For example, the first content segment and the second content segment may be configured for adaptive bitrate (ABR) streaming. While the examples discussed throughout this disclosure refer to the use of live or linear media transmission, it is understood that the examples may apply to any type of media transmission, including but not limited to video on demand and IP video transmission. The one or more encoding characteristics of the second content segment may be determined based on a lookahead operation in the content asset performed by the server. Using the lookahead operation, the server may be configured to determine the information associated with the second content segment in advance of the second content segment's transmission time or playback time. The server may use the information from the lookahead operation to determine an estimated bitrate required for transmission of the second content segment over the network. For example, the server may determine that a bitrate of 4 Mbps may be required for a 1080p (e.g., 1920×1080, 2.07 megapixels) version of the second content segment and that a bitrate of 2 Mbps may be required for a 720p (1280×720, 0.92 megapixels) version of the second content segment. At step404, one or more playback characteristics of the second content segment may be determined. 
The device may be configured to determine one or more playback characteristics of the second content segment based on the received indication of the one or more encoding characteristics of the second content segment contained in the portion of the first content segment. For example, the first content segment (which is played prior to the second content segment) may require a bitrate of 2 Mbps for a 720p resolution of the content segment. The device may determine that a 1080p version of the second content segment requires a bitrate of 4 Mbps and that the 720p version of the second content segment requires a bitrate of 2 Mbps. At step406, the device may determine an appropriate version of the second content segment for playback. For example, the device may determine to request playback of one of the 1080p version of the second content segment or the 720p version of the second content segment based on a current network bandwidth available to the device. The device may additionally or alternatively send to the server a request for the determined or selected version of the second content segment. At step408, the second content segment may be received. The second content segment may be received at the device after the first content segment. The second content segment may be configured for playback by the device after playback of the first content segment. The first content segment may be received at the device before the second content segment. The second content segment may comprise another segment of the content asset. For example, the first content segment may correspond to a first two-second fragment of the content asset and the second content segment may correspond to a second two-second fragment of the content asset. A header of the second content segment may comprise information associated with one or more encoding characteristics of one or more other segments of the content asset. At step410, playback of the second content segment may be caused. Playback of the second content segment may be based on the one or more playback characteristics of the second content segment. Based on determining that the playback device has a download capacity of 6 Mbps, the playback device may decide to request the 1080p version of the second content segment from the content server. Based on determining that the playback device has a download capacity of 3 Mbps, the playback device may decide to request the 720p version of the second content segment from the content server. FIG.5shows a flow chart of an example method500. At step502, one or more encoding characteristics of a content segment may be determined. The content segment may be associated with a content asset. The content asset may be any type of media asset capable of being played by a device, such as a television show, a movie, streaming media content, etc., or a portion thereof. The content asset may comprise a plurality of content segments. Each of the content segments may correspond to a portion of the content asset, such as a two-second portion of the content asset or a ten-second portion of the content asset. The one or more encoding characteristics of the content segment may comprise an estimated bitrate required for transmission of the content segment over a network. The one or more encoding characteristics of the content segment may be determined based on information associated with the content segment, such as a size of the content segment, a quality of the content segment, and a resolution of the content segment. 
It is understood that the encoding characteristics are not limited to these examples and may include any type of encoding characteristic of the content segment. It is further understood that the characteristics may additionally or alternatively include other characteristics of the content segment not including encoding characteristics. Determining the one or more encoding characteristics of the content segment may comprise performing a lookahead operation in the content asset. Using the lookahead operation, the server may be configured to determine the information associated with content segment in advance of the content segment's transmission time or playback time. The server may use the information from the lookahead operation to determine an estimated bitrate required for transmission of the content segment over the network. For example, the server may determine that a bitrate of 4 Mbps may be required for a 1080p version of the content segment and that a bitrate of 2 Mbps may be required for 720p version of the content segment. At step504, an indication of the one or more encoding characteristics of the content segment may be inserted into a portion of an other content segment. The other content segment may be associated with the same content asset as the content segment. The portion of the other content segment may be a header of the other content segment. Inserting the one or more encoding characteristics of the content segment into the portion of the other content segment may comprise inserting the one or more characteristics of the content segment into the header of the other content segment. The content segment and the other content segment may be configured for linear transmission. For example, the content segment and the other content segment may be configured for adaptive bitrate (ABR) streaming. At step506, the other content segment may be sent to a device. The other content segment may comprise an indication of the one or more encoding characteristics of the content segment. The one or more encoding characteristics of the content segment may be contained in the header of the other content segment. The other content segment may be sent to the device prior to the content segment. The other content segment may be configured for playback by the device prior to the content segment. The content segment may be sent to the device after the other content segment. The content segment may comprise information associated with one or more other content segments, such as one or more encoding characteristics of one or more other content segments that follow the content segment. The device may be configured to determine one or more playback characteristics of the content segment based on the received indication of the one or more encoding characteristics of the content segment contained in the portion of the other content segment. For example, the other content segment (which is played prior to the content segment) may require a bitrate of 2 Mbps for a 720p resolution of the content segment. The device may determine that a 1080p version of the content segment requires a bitrate of 4 Mbps and that the 720p version of the content segment requires a bitrate of 2 Mbps. The device may determine to request playback of one of the 1080p version of the content segment or the 720p version of the content segment based on a current network bandwidth available to the device. FIG.6depicts a computing device that may be used in various aspects, such as the servers, modules, and/or devices depicted inFIG.1. 
With regard to the example architecture ofFIG.1, the server102, the media file processor104, the encoder106, the database108, the device120, the processor112, the display114, and/or the speaker116may each be implemented in an instance of a computing device600ofFIG.6. The computer architecture shown inFIG.6shows a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, PDA, e-reader, digital cellular phone, or other computing node, and may be utilized to execute any aspects of the computers described herein, such as to implement the methods described in relation toFIGS.2,4and5. The computing device600may include a baseboard, or “motherboard,” which is a printed circuit board to which a multitude of components or devices may be connected by way of a system bus or other electrical communication paths. One or more central processing units (CPUs)604may operate in conjunction with a chipset606. The CPU(s)604may be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computing device600. The CPU(s)604may perform the necessary operations by transitioning from one discrete physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements may generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements may be combined to create more complex logic circuits including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The CPU(s)604may be augmented with or replaced by other processing units, such as GPU(s)605. The GPU(s)605may comprise processing units specialized for but not necessarily limited to highly parallel computations, such as graphics and other visualization-related processing. A user interface may be provided between the CPU(s)604and the remainder of the components and devices on the baseboard. The interface may be used to access a random access memory (RAM)608used as the main memory in the computing device600. The interface may be used to access a computer-readable storage medium, such as a read-only memory (ROM)620or non-volatile RAM (NVRAM) (not shown), for storing basic routines that may help to start up the computing device600and to transfer information between the various components and devices. ROM620or NVRAM may also store other software components necessary for the operation of the computing device600in accordance with the aspects described herein. The user interface may be provided by a one or more electrical components such as the chipset606. The computing device600may operate in a networked environment using logical connections to remote computing nodes and computer systems through local area network (LAN)616. The chipset606may include functionality for providing network connectivity through a network interface controller (NIC)622, such as a gigabit Ethernet adapter. A NIC622may be capable of connecting the computing device600to other computing nodes over a network616. It should be appreciated that multiple NICs622may be present in the computing device600, connecting the computing device to other types of networks and remote computer systems. 
The computing device600may be connected to a storage device628that provides non-volatile storage for the computer. The storage device628may store system programs, application programs, other program modules, and data, which have been described in greater detail herein. The storage device628may be connected to the computing device600through a storage controller624connected to the chipset606. The storage device628may consist of one or more physical storage units. A storage controller624may interface with the physical storage units through a serial attached SCSI (SAS) interface, a serial advanced technology attachment (SATA) interface, a fiber channel (FC) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computing device600may store data on a storage device628by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of a physical state may depend on various factors and on different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the physical storage units and whether the storage device628is characterized as primary or secondary storage and the like. For example, the computing device600may store information to the storage device628by issuing instructions through a storage controller624to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computing device600may read information from the storage device628by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the storage device628described herein, the computing device600may have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media may be any available media that provides for the storage of non-transitory data and that may be accessed by the computing device600. By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, transitory computer-readable storage media and non-transitory computer-readable storage media, and removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, or any other medium that may be used to store the desired information in a non-transitory fashion. 
A storage device, such as the storage device628depicted inFIG.6, may store an operating system utilized to control the operation of the computing device600. The operating system may comprise a version of the LINUX operating system. The operating system may comprise a version of the WINDOWS SERVER operating system from the MICROSOFT Corporation. According to additional aspects, the operating system may comprise a version of the UNIX operating system. Various mobile phone operating systems, such as IOS and ANDROID, may also be utilized. It should be appreciated that other operating systems may also be utilized. The storage device628may store other system or application programs and data utilized by the computing device600. The storage device628or other computer-readable storage media may also be encoded with computer-executable instructions, which, when loaded into the computing device600, transforms the computing device from a general-purpose computing system into a special-purpose computer capable of implementing the aspects described herein. These computer-executable instructions transform the computing device600by specifying how the CPU(s)604transition between states, as described herein. The computing device600may have access to computer-readable storage media storing computer-executable instructions, which, when executed by the computing device600, may perform the methods described in relation toFIGS.2,4and5. A computing device, such as the computing device600depicted inFIG.6, may also include an input/output controller632for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller632may provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, a plotter, or other type of output device. It will be appreciated that the computing device600may not include all of the components shown inFIG.6, may include other components that are not explicitly shown inFIG.6, or may utilize an architecture completely different than that shown inFIG.6. As described herein, a computing device may be a physical computing device, such as the computing device600ofFIG.6. A computing node may also include a virtual machine host process and one or more virtual machine instances. Computer-executable instructions may be executed by the physical hardware of a computing device indirectly through interpretation and/or execution of instructions stored and executed in the context of a virtual machine. It is to be understood that the methods and systems are not limited to specific methods, specific components, or to particular implementations. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another embodiment includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another embodiment. 
It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not. Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude, for example, other components, integers or steps. “Exemplary” means “an example of” and is not intended to convey an indication of a preferred or ideal embodiment. “Such as” is not used in a restrictive sense, but for explanatory purposes. Components are described that may be used to perform the described methods and systems. When combinations, subsets, interactions, groups, etc., of these components are described, it is understood that while specific references to each of the various individual and collective combinations and permutations of these may not be explicitly described, each is specifically contemplated and described herein, for all methods and systems. This applies to all aspects of this application including, but not limited to, operations in described methods. Thus, if there are a variety of additional operations that may be performed it is understood that each of these additional operations may be performed with any specific embodiment or combination of embodiments of the described methods. The present methods and systems may be understood more readily by reference to the following detailed description of preferred embodiments and the examples included therein and to the Figures and their descriptions. As will be appreciated by one skilled in the art, the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices. Embodiments of the methods and systems are described below with reference to block diagrams and flowchart illustrations of methods, systems, apparatuses and computer program products. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, may be implemented by computer program instructions. These computer program instructions may be loaded on a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks. 
These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. The various features and processes described herein may be used independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain methods or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto may be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically described, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the described example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the described example embodiments. It will also be appreciated that various items are illustrated as being stored in memory or on storage while being used, and that these items or portions thereof may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments, some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Furthermore, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (“ASICs”), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (“FPGAs”), complex programmable logic devices (“CPLDs”), etc. Some or all of the modules, systems, and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network, or a portable media article to be read by an appropriate device or via an appropriate connection. 
The systems, modules, and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, the present disclosure may be practiced with other computer system configurations. While the methods and systems have been described in connection with preferred embodiments and specific examples, it is not intended that the scope be limited to the particular embodiments set forth, as the embodiments herein are intended in all respects to be illustrative rather than restrictive. Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its operations be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its operations or it is not otherwise specifically stated in the claims or descriptions that the operations are to be limited to a specific order, it is no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; and the number or type of embodiments described in the specification. It will be apparent to those skilled in the art that various modifications and variations may be made without departing from the scope or spirit of the present disclosure. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practices described herein. It is intended that the specification and example figures be considered as exemplary only, with a true scope and spirit being indicated by the following claims.
DETAILED DESCRIPTION In the following description, certain specific details are set forth in order to provide a thorough understanding of various disclosed implementations. However, one skilled in the relevant art will recognize that implementations may be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures associated with computer systems, server computers, and/or communications networks have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the implementations. Unless the context requires otherwise, throughout the specification and claims that follow, the word “comprising” is synonymous with “including,” and is inclusive or open-ended (i.e., does not exclude additional, unrecited elements or method acts). Reference throughout this specification to “one implementation” or “an implementation” means that a particular feature, structure or characteristic described in connection with the implementation is included in at least one implementation. Thus, the appearances of the phrases “in one implementation” or “in an implementation” in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations. As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. It should also be noted that the term “or” is generally employed in its sense including “and/or” unless the context clearly dictates otherwise. The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the implementations. One or more implementations of the present disclosure are directed to computer-implemented systems and methods of automating and optimizing the placement of advertisements within advertisement placement opportunities offered by multiple media content providers. Buying and selling television, radio or digital advertising has traditionally been a highly manual process and requires many participants to execute orders. Layering in audience and pricing data adds another level of complexity to the campaign execution workflow. Furthermore, buying advertisement spots from multiple media content providers for similar advertisement placement opportunities adds yet another layer of complexity. Typically, each buyer must interface and negotiate with each media content provider individually, which requires the buyer to interact with a different computer system, user interface, etc., for each media content provider if the buyer wishes to place their advertisement in similar advertisement placement opportunities for multiple media content providers. Additionally, buying advertisement spots from multiple media content providers for similar advertisement placement opportunities allows a buyer to ensure that their advertisement receives a significant portion of the total number of impressions available across advertisement placement opportunities offered by different media content providers in the same time period (referred to as “unduplicated reach”). 
As mentioned above, obtaining this unduplicated reach manually requires a buyer to interface with many different media content providers to individually negotiate and place their advertisement in the correct advertisement placement opportunities. Furthermore, media content providers may not be able to provide the correct advertisement slots or advertisement placement opportunities to allow a buyer to obtain the unduplicated reach. Thus, in order for the buyer and the multiple media content providers to reach an agreement which results in unduplicated reach, each media content provider must spend large amounts of computing resources and time to determine which advertisement slots are available, which assigned advertisements can be moved, etc., until the buyer is able to find a time period with advertisement placement opportunities offered by each media content provider to which the buyer's advertisement can be assigned. Furthermore, the buyer must also use a large amount of their own computing resources and time to work with each of the media content providers individually in order to achieve unduplicated reach for their advertisement. In conventional workflows, an advertiser, or "buyer", contacts multiple networks, or "media content providers," to negotiate and purchase "advertisement placement opportunities." The advertisement placement opportunities include one or more breaks within media content. Each break includes one or more "slots," which each represent a portion of the time allotted to the break. In some embodiments, a break is referred to as a "pod." A media content provider may represent an individual media outlet (such as a single television station) or multiple media outlets (such as a network of multiple television stations). When an advertisement is assigned to a break, the advertisement is assigned to one or more slots included in the break. The one or more slots within a break to which the advertisement is assigned are collectively referred to as a "spot." Thus, a spot and an advertisement together are referred to as an "advertisement spot." For example, a break may be 90 seconds long with six 15-second slots. A first advertisement may be assigned to a first 15-second slot within the break, and thus the first slot would be referred to as a spot. A second advertisement may be assigned to a second and a third 15-second slot, thus occupying 30 seconds within the break, and the second slot and third slot together may also be referred to as a spot. While the example provided uses uniform lengths of time for each slot included in a break, embodiments are not so limited, and the length of time allocated to at least one slot within a break may be shorter or longer than the length of time allocated to at least one other slot within the break. Furthermore, assigning an "advertisement spot" includes assigning an advertisement to one or more slots, placing an advertisement at one or more spots, etc. An "advertisement spot" may refer to an advertisement and one or more slots to which the advertisement is assigned. For example, the advertisement placement opportunity may represent an opportunity to air the advertisement during a television show, and each break may represent a commercial break within the television show. Once the buyer has purchased the advertisement placement opportunity, the media content provider assigns content provided by the buyer to a break included in the advertisement placement opportunity. 
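To make the break, slot, and spot terminology above concrete, the following is a minimal illustrative sketch, in Python, of how an advertisement placement opportunity, its breaks, its slots, and the assigned advertisement spots might be modeled. The class and field names (Slot, Break, PlacementOpportunity, and so on) are assumptions chosen only for illustration and are not part of the disclosed implementation.

    # Illustrative data model only; names and structure are assumptions,
    # not the disclosed implementation.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Slot:
        length_seconds: int                      # e.g., a 15-second slot
        assigned_ad_id: Optional[str] = None     # advertisement currently occupying the slot
        priority_score: Optional[float] = None   # priority of the assigned advertisement spot

    @dataclass
    class Break:
        slots: List[Slot] = field(default_factory=list)  # a 90-second break might hold six 15-second slots

        def open_slots(self) -> List[Slot]:
            # Slots with no advertisement assigned are available.
            return [s for s in self.slots if s.assigned_ad_id is None]

    @dataclass
    class PlacementOpportunity:
        provider_id: str          # media content provider offering the opportunity
        start_time: str           # e.g., "2024-06-01T16:00-04:00"
        breaks: List[Break] = field(default_factory=list)

    # Example: a single 90-second break with six 15-second slots.
    brk = Break(slots=[Slot(length_seconds=15) for _ in range(6)])

    # A first advertisement occupies one slot (a one-slot "spot") ...
    brk.slots[0].assigned_ad_id = "ad-A"
    # ... and a second advertisement occupies two consecutive slots (a 30-second "spot").
    brk.slots[1].assigned_ad_id = "ad-B"
    brk.slots[2].assigned_ad_id = "ad-B"

    print(len(brk.open_slots()))  # 3 slots remain open

Slots of non-uniform length could be represented simply by varying length_seconds, consistent with the paragraph above.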
The buyer may negotiate a price with each of these providers, only to find out later on that some media content providers are unable to provide the advertisement placement opportunity which the buyer needed to achieve unduplicated reach, resulting in wasted time, effort, and, in the case of negotiations and purchases that use computer systems or automation, computer resources. An advertisement spot may be associated with content which will be displayed, aired, shown, presented, etc., by a media content provider during the time represented by the one or more slots to which the advertisement is assigned. The content may be referred to as material, a creative, a creative asset, creative material, a copy, or other terms used to describe content associated with an advertisement. An advertisement is "cleared" when a buyer's offer to buy an advertisement placement opportunity is accepted by a seller, or media content provider, and the buyer's advertisement is published in the advertisement placement opportunity. However, if a buyer would like to buy multiple advertisement placement opportunities from multiple media content providers, such as, for example, advertisement placement opportunities which occur during the same half-hour in a geographic region, they must individually offer to buy the advertisement placement opportunity from each of the media content providers. Additionally, the buyer may not be able to ensure that the advertisement is shown in the same, or similar, break during the advertisement placement opportunities. Furthermore, the buyer may be unable to ensure that each of the offers to buy advertisement placement opportunities is accepted by the seller. As a result, buyers are unable to reliably ensure that their advertisement appears in similar advertisement placement opportunities across multiple media content providers. Furthermore, the buyer and each of the sellers must use additional resources, such as computing resources, processing power, memory, data storage, etc., in order for a buyer to buy similar advertisement placement opportunities from each seller. This results in wasted time, effort, and computing resources when the buyer is unable to buy all of the similar advertisement placement opportunities. Implementations of the present disclosure are directed to computer-implemented systems and methods for placement of content, such as advertisements, within similar content-breaks for multiple media content providers. Thus, the aforementioned inefficient and unreliable processes are improved to provide optimization that was previously not possible using conventional workflows. Such implementations are thus able to improve the functioning of computer or other hardware, such as by reducing the dynamic display area, processing, storage, and/or data transmission resources needed to perform a certain task, thereby enabling the task to be performed by less capable, capacious, and/or expensive hardware devices, and/or be performed with lesser latency, and/or preserving more of the conserved resources for use in performing other tasks. 
For example, by allowing buyers to specify a time period, or other attribute of an advertisement placement opportunity, and to automatically place their advertisement within advertisement placement opportunities with the specified attribute and which are offered by multiple media content providers, the buyers do not need to expend computing resources, such as those used to generate user interfaces, present data related to advertisement placement opportunities, assist the buyer in offering to buy the advertisement placement opportunities, send a copy or creative to the seller, etc., for interacting with each media content provider's systems. Furthermore, the media content providers do not need to expend similar computing resources for interacting with the buyer. In at least some implementations, buyers and sellers can trade mediacast (e.g., broadcast, Webcast) advertisement inventory (e.g., directly or programmatically) at local, national, and/or worldwide levels. The types of media traded via such implementations may simultaneously include numerous types of media, including TV, cable, satellite, radio, outdoor, display, digital, print, etc. Such programmatic advertising implements data-driven automation of audience-based advertising operations which inverts the industry standard in which marketers rely on show ratings to determine desirable audiences for the marketers' advertisements. In at least some implementations, the cross provider content assignment systems disclosed herein interface with demand side platforms (DSPs) that optimize offers to guarantee that an advertisement is included in multiple advertisement placement opportunities across multiple media content providers. Sellers enjoy a seamless transaction workflow for getting advertisements from proposal, to publishing, and to billing that delivers a significant reduction in time spent on reconciliation and "make-goods" and streamlines processes for creative management and revenue management across direct and programmatic sales channels. The advertiser-facing interface facilitates creative placement and reviewing for the buy side, and may have transcoding and approval tools for the sell side. For example, in some implementations, once an advertisement transaction is approved, the advertiser-facing interface sends the advertisement directly to a broadcaster's traffic system. In some implementations, the advertiser-facing interface facilitates a buyer in specifying a time or attribute for advertisement placement opportunities, such that the cross provider content assignment system is able to assign their advertisement to advertisement placement opportunities which occur at the specified time or have the specified attribute. The cross provider content assignment system may receive a creative or copy for the advertisement, and transmit the creative or copy to each of the media content providers associated with an advertisement placement opportunity to which the advertisement is assigned. In some implementations, the cross provider content assignment systems disclosed herein automate aspects of billing, reconciliation, and creative execution. In some implementations, the cross provider content assignment systems may be integrated with advertisement management software and sales and traffic management systems. FIG.1shows an example networked environment100according to one illustrated implementation in which various apparatus, methods and articles described herein may operate. 
The environment100includes a cross provider content assignment system102, an advertiser-facing interface (AFI)103, a media content provider-facing interface (PFI)105, a number of sellers or content providers104A-104N (collectively104), a number of seller side platforms (SSPs)108A-108N (collectively108), a number of demand side platforms (DSPs)110A-110N (collectively110) and122, a number of buyers112B-112N (collectively112), such as advertisers or agencies, and a roadblock buyer120, such as an advertiser or agency, all communicatively coupled by one or more networks or other communications channels. The various components of the environment may be distributed or integrated in any number of ways. For example, in at least some implementations, two or more of the DSPs110, DSP122, AFI103, cross provider content assignment system102, and PFI105may be integrated into a single platform provided by one or more entities. The sellers104may take a variety of forms, for example, radio stations or broadcasters, television stations or broadcasters, other terrestrial or satellite broadcasters or multicasters (not shown), Webcasters, printed content (e.g., print media) providers, outdoor content (e.g., billboards) providers, etc. The sellers104may, or may not, own the content that they provide. The sellers104utilize the PFI105to access the cross provider content assignment system102. On the buy side, the buyers112(e.g., advertisers, agencies) and roadblock buyer120may interface with the system102via the AFI103through the buyers' respective DSPs110and122. The roadblock buyer120is a buyer which wishes to buy advertisement spots within multiple advertisement placement opportunities which have a similar attribute and which are offered by multiple media content providers. For example, the roadblock buyer120may wish to place an advertisement spot with multiple media content providers at a certain time, such as, for example, between 3:00 PM and 3:15 PM. In this example, the roadblock buyer120may specify the time period during which the advertisement spot should air, and the cross provider content assignment system102interfaces with multiple sellers in order to ensure that the roadblock buyer's120advertisement spot airs during the specified time period. In some embodiments, the roadblock buyer120provides additional information to the cross provider assignment system102, such as: a goal of the roadblock buyer120, such as an impression goal, a minimum number of media content providers associated with advertisement placement opportunities to which the advertisement spot is assigned, a minimum number of advertisement placement opportunities to which the advertisement spot is assigned, a CPM goal, or other goals the roadblock buyer120may have; one or more creatives for an advertisement spot; instructions, rules, or guidelines for the cross provider assignment system102to use to un-assign the roadblock buyer's120advertisement spot from advertisement placement opportunities (such as un-assigning the advertisement spot if the advertisement spot does not meet a goal of the roadblock buyer120); a type or class of media content within which the roadblock buyer120wishes their advertisement spot to be placed (such as sports shows, reality television shows, situational comedy shows, news programs, etc.); or other information usable to assign an advertisement spot to a plurality of advertisement placement opportunities which are offered by multiple media content providers. 
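As a purely illustrative sketch of the additional information a roadblock buyer120might supply, such a request could be captured in a structure along the following lines. Every field name here is a hypothetical placeholder chosen for the sketch, not a defined interface of the cross provider content assignment system102.

    # Hypothetical structure for a roadblock request; field names are assumptions.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class RoadblockRequest:
        buyer_id: str
        window_start: str                                     # e.g., "15:00" in the buyer's reference time zone
        window_end: str                                       # e.g., "15:15"
        creatives: List[str] = field(default_factory=list)    # identifiers of supplied creatives
        impression_goal: Optional[int] = None                 # minimum total impressions sought
        cpm_goal: Optional[float] = None                      # cost-per-mille target
        min_providers: Optional[int] = None                   # minimum number of media content providers
        min_opportunities: Optional[int] = None               # minimum number of placement opportunities
        content_types: List[str] = field(default_factory=list)  # e.g., ["sports", "news"]
        unassign_if_goal_missed: bool = False                 # rule for un-assigning the spot

    # Example request: air between 3:00 PM and 3:15 PM on at least 5 providers.
    request = RoadblockRequest(
        buyer_id="roadblock-120",
        window_start="15:00",
        window_end="15:15",
        min_providers=5,
        impression_goal=1_000_000,
    )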
With presently available systems, it is difficult for advertisers to ensure that their advertisement appears in advertisement placement opportunities associated with multiple media content providers and which occur during a certain time, or share a similar attribute, without manually identifying each potential advertisement placement opportunity and interacting with each media content provider associated with each advertisement placement opportunity to buy time during the advertisement placement opportunity. Further, advertisers may not want to place their advertisement in the advertisement placement opportunities if they are not able to buy a certain number of the advertisement placement opportunities. Thus, they may take back their offer, causing a waste of effort and resources, such as computing resources, for both the buyers and the sellers. To address these and other issues, one or more implementations of the present disclosure identify advertisement placement opportunities associated with multiple sellers and which occur during a certain time, or share a similar attribute, and place advertisements within the advertisement placement opportunities. The one or more implementations of the present disclosure are additionally able to re-assign advertisement spots already assigned to a slot within the advertisement placement opportunity in order to ensure that the advertisements are placed in as many of the identified advertisement placement opportunities as possible. In an example implementation, a roadblock buyer120may specify that an advertisement spot should be placed within multiple advertisement placement opportunities defined by a plurality of the sellers104a-104n. Each advertisement placement opportunity may include a plurality of breaks within which an advertisement may be placed. Each seller104a-104nalso negotiates with other buyers, to place an advertisement within the same advertisement placement opportunity. The cross provider content assignment system102ensures that the advertisement spot is placed within a plurality of the advertisement placement opportunities, and re-assigns other advertisement spots already assigned to the advertisement placement opportunities as needed. Each seller104a-104nmay additionally provide the cross provider content assignment system102with one or more priority scores for at least one spot included in the advertisement placement opportunities associated with the respective seller. The cross provider content assignment system102may use the priority score to determine whether an advertisement can be re-assigned, determine which advertisement spot to re-assign, etc. In some implementations, the priority score is based on one or more of: a priority code, a spot rate, or other data useful for calculating a priority score for an advertisement spot within an advertisement placement opportunity. In some implementations, the cross provider content assignment system102generates the priority score. Furthermore, each seller104a-104nmay additionally provide the cross provider content assignment system102with data indicating one or more advertisement placement opportunities that the cross provider content assignment system102can use to assign advertisement spots across multiple media content providers. For example, a seller104amay specify that an advertisement placement opportunity occurring at 5:00 pm is eligible for the cross provider content assignment system102to assign advertisement spots across multiple media content providers. 
Likewise, a seller104bmay also specify that an advertisement placement opportunity occurring at the same time is eligible for the cross provider content assignment system102. The cross provider content assignment system102may then assign an advertisement to both advertisement placement opportunities such that the advertisement spot appears at the same, or nearly the same time, during content provided by each of the sellers104aand104b. While this example uses advertisement placement opportunities specified by sellers for use by the cross provider content assignment system, one of ordinary skill in the art will realize that the cross provider content assignment system may find advertisement placement opportunities which are offered by different sellers and which match the criteria specified by the buyer, such as a time, and assign an advertisement spot to each of the advertisement placement opportunities without each of the sellers specifying that certain advertisement placement opportunities may be used by the cross provider content assignment system. FIG.2is a block diagram showing sample elements of a cross provider content assignment system102, according to one illustrated implementation. The cross provider content assignment system102includes a cross provider content assignment engine201, seller data203, and buyer advertisement data205. In some implementations, the cross provider content assignment system additionally includes content distribution rules207. The cross provider content assignment engine201uses the seller data203and buyer advertisement data205to identify advertisement placement opportunities from multiple media content providers which share a common attribute, and assign an advertisement spot associated with a buyer to a plurality of the identified advertisement placement opportunities. The cross provider content assignment engine201may also use the seller data203to re-assign advertisement spots already assigned to advertisement placement opportunities in order to assign the advertisement spot associated with the buyer. The seller data203includes advertisement placement data213. The seller data203may additionally include advertisement delivery data215. The advertisement placement data213includes data related to advertisement placement opportunities, such as: various attributes of the advertisement placement opportunity, such as an identifier for the media content provider which provided the advertisement placement opportunity, the time of the advertisement placement opportunity, the number of breaks within the advertisement placement opportunity, the advertisements already assigned to the advertisement placement opportunity, at least one product code for the advertisements assigned to the advertisement placement opportunity, a number of impressions for an advertisement placement opportunity, a media or content type for content provided during the advertisement placement opportunity, or other attributes related to an advertisement placement opportunity. The advertisement placement data213may additionally include break data related to each of the content breaks (“breaks”) included within each advertisement placement opportunity. 
The break data may include: the number of breaks in the advertisement placement opportunity; an indication of which advertisement spots are currently assigned to each break, or each slot in the respective break, in the advertisement placement opportunity; the number of slots included in each break; a priority score for at least one advertisement spot in a respective break; or other data used to describe the breaks included in an advertisement placement opportunity. In some implementations, the seller data203includes advertisement delivery data215. The advertisement delivery data215may include data related to the "delivery" of an advertisement, i.e., the appearance of the advertisement during the content provided by the seller. The advertisement delivery data215may include data related to one or more of: the number of impressions that the advertisement received, website traffic generated by the advertisement, data related to the buyer's social media, the impact of customer bookings, and other data related to the delivery or airing of advertisements. The buyer advertisement data205includes data related to the advertisement which is to be assigned to multiple advertisement placement opportunities across multiple media content providers. The buyer advertisement data205may include: data regarding the buyer, such as the type of industry the buyer operates in, data regarding rules for distribution of the buyer's content, or other data used to describe the buyer; data regarding the advertisement, such as a code identifying the type of good, industry, or service that the advertisement is for, a creative or copy for the advertisement, the length of the advertisement; data regarding a time or attribute for identifying advertisement placement opportunities across media content providers; or other data related to the buyer or the advertisement. In some implementations, the cross provider content assignment system102includes content distribution rules207. The content distribution rules207include rules used by the cross provider content assignment system to distribute advertisement content within an advertisement placement opportunity. The content distribution rules207may be used to determine how many advertisements of a certain product type, company, etc., may appear in each break, how much separation there is between advertisements for similar products or services, etc. FIG.3is a flow diagram showing a method300of operating a cross provider content assignment system to assign content, such as advertisement spots, to advertisement placement opportunities which share a common attribute, such as a time for the advertisement placement opportunities, according to one non-limiting illustrated implementation. After a start block, the method300begins at301where the cross provider content assignment system receives data indicating a plurality of media content providers. The data indicating a plurality of media content providers may include data regarding one or more advertisement placement opportunities. Each advertisement placement opportunity may include one or more slots. At303, for each media content provider, the cross provider content assignment system receives data indicating one or more advertisement spots assigned to an advertisement placement opportunity of the respective media content provider. At least one advertisement spot assigned to an advertisement placement opportunity may be assigned to at least one slot included in the advertisement placement opportunity. 
The advertisement placement opportunities may include one or more breaks. Each break may include one or more slots for placing or assigning advertisement spots. The data indicating the one or more advertisement spots may include advertisement information for each of the one or more advertisement spots. The advertisement information may include one or more content distribution rules for the advertisement spot. At305, the cross provider content assignment system receives an indication that an advertisement spot is to be assigned to a plurality of advertisement placement opportunities. The indication that an advertisement spot is to be assigned to a plurality of advertisement placement opportunities may include one or more of: an indication of an attribute of an advertisement placement opportunity, an indication of a period of time within which the advertisement placement opportunities should occur, an indication of a minimum number of advertisement placement opportunities to place the advertisement, an indication of one or more geographic regions for the advertisement placement opportunities, an indication of one or more time zones for the advertisement placement opportunities, and an indication of one or more media content providers. The indication that the advertisement spot is to be assigned to a plurality of advertisement placement opportunities may be used to identify one or more advertisement placement opportunities to which the advertisement spot should be assigned. The one or more advertisement placement opportunities may include advertisement placement opportunities for two or more media content providers. At307, the cross provider assignment system assigns the indicated advertisement spot to two or more advertisement placement opportunities of the plurality of advertisement placement opportunities. In some implementations, the cross provider assignment system uses the method400described inFIG.4for assigning the indicated advertisement spot to two or more of the plurality of advertisement placement opportunities. After307, the method300ends. FIG.4is a flow diagram showing a method400of operating a cross provider assignment system to assign an indicated advertisement spot to a plurality of advertisement placement opportunities for a plurality of media content providers, according to one non-limiting illustrated implementation. After a start block, the method400begins at401, where the cross provider assignment system determines whether there are open slots in the advertisement placement opportunity. If there are open slots in the advertisement placement opportunity, the method400proceeds to403. If there are no open slots in the advertisement placement opportunity, the method400proceeds to405. In some embodiments,401is performed with respect to one or more breaks within the advertisement placement opportunity. The one or more breaks may be identified based on the indication that the advertisement spot is to be assigned to a plurality of advertisement placement opportunities obtained in the method300. For example, the indication that the advertisement spot is to be assigned to a plurality of advertisement placement opportunities may indicate that the advertisement must appear in a break between 1:00 pm and 1:15 pm. 
The cross provider assignment system may identify advertisement placement opportunities which include a break between 1:00 pm and 1:15 pm, and will consider breaks scheduled to occur within that time period at401when determining if there are open slots in the advertisement placement opportunity. At403, the cross provider assignment system assigns the indicated advertisement spot to an open slot in the advertisement placement opportunity. After403, the method400ends. At405, the cross provider assignment system determines which advertisement spot in the advertisement placement opportunity has the lowest priority score. The priority score for each advertisement spot may be included in the data indicating the advertisement placement opportunities received in the method300. The priority score for each advertisement spot may be calculated by the cross provider assignment system. At407, the cross provider assignment system un-assigns the advertisement spot with the lowest priority score. The un-assigned advertisement spot may be referred to as the "previously assigned advertisement spot." At409, the cross provider assignment system assigns the indicated advertisement spot to the slot that the previously assigned advertisement was un-assigned from. At411, the cross provider assignment system re-assigns the previously assigned advertisement spot to a slot within a new advertisement placement opportunity. After411, the method400ends. FIG.5is a flow diagram showing a method500of operating a cross provider assignment system to identify advertisement placement opportunities from multiple media content providers based on an indicated time period, according to one non-limiting illustrated implementation. The method500begins, after a start block, at501where the cross provider assignment system receives an indication of a time period. The indication of the time period may be received via user input, such as via a user interface, for example the AFI103, PFI105, or another user interface. At503, the cross provider assignment system identifies the plurality of advertisement placement opportunities based on the indicated time period. In some implementations, each of the advertisement placement opportunities in the plurality of advertisement placement opportunities is offered by a different media content provider. For example, the cross provider assignment system may receive an indication that the time period for the advertisement is between 4:00 pm and 4:30 pm. The cross provider assignment system may identify which media content providers have an advertisement placement opportunity between 4:00 pm and 4:30 pm and include the advertisement placement opportunities for the identified media content providers in the plurality of advertisement placement opportunities. In some implementations, at least two of the different media content providers are in different time zones. For example, if the indicated time period is between 4:00 pm and 4:30 pm in the Eastern Time Zone, the cross provider assignment system may also identify advertisement placement opportunities belonging to media content providers in the Central Time Zone between 3:00 pm and 3:30 pm, in the Mountain Time Zone between 2:00 pm and 2:30 pm, and in the Pacific Time Zone between 1:00 pm and 1:30 pm. Thus, the cross provider assignment system is able to identify advertisement placement opportunities belonging to media content providers which operate in different time zones. 
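The assignment flow ofFIG.4(use an open slot if one exists, otherwise un-assign and later re-assign the lowest-priority spot) and the time-zone handling just described forFIG.5might be sketched as follows. This is a simplified illustration that assumes the Slot/Break/PlacementOpportunity model sketched earlier; it is not the claimed implementation, and tie-breaking, re-assignment of the bumped spot (act411), and content distribution rules are omitted.

    # Illustrative sketch only; assumes the Slot/Break/PlacementOpportunity model above.
    from datetime import datetime, time
    from zoneinfo import ZoneInfo
    from typing import Optional, Tuple

    def local_window(window_start: time, window_end: time,
                     buyer_tz: str, provider_tz: str, day: datetime) -> Tuple[datetime, datetime]:
        """Convert the buyer's requested window (e.g., 4:00-4:30 pm Eastern)
        into the provider's local time zone, as in the FIG.5 example."""
        start = datetime.combine(day.date(), window_start, tzinfo=ZoneInfo(buyer_tz))
        end = datetime.combine(day.date(), window_end, tzinfo=ZoneInfo(buyer_tz))
        return start.astimezone(ZoneInfo(provider_tz)), end.astimezone(ZoneInfo(provider_tz))

    def assign_with_bumping(opportunity, ad_id: str, priority: float) -> Optional[str]:
        """Assign ad_id to an open slot; if none is open, bump the lowest-priority
        assigned spot (FIG.4, acts 401-409). Returns the bumped ad id, if any."""
        # Acts 401/403: use an open slot if one exists.
        for brk in opportunity.breaks:
            for slot in brk.slots:
                if slot.assigned_ad_id is None:
                    slot.assigned_ad_id = ad_id
                    slot.priority_score = priority
                    return None
        # Act 405: find the assigned spot with the lowest priority score.
        assigned = [s for brk in opportunity.breaks for s in brk.slots
                    if s.assigned_ad_id is not None]
        if not assigned:
            return None  # no slots at all; nothing to bump
        lowest = min(assigned, key=lambda s: s.priority_score or 0.0)
        bumped_ad = lowest.assigned_ad_id
        # Acts 407/409: un-assign it and assign the indicated spot in its place.
        lowest.assigned_ad_id = ad_id
        lowest.priority_score = priority
        # Act 411 (re-assigning the bumped spot to a new opportunity) is omitted here.
        return bumped_ad

For instance, local_window(time(16, 0), time(16, 30), "America/New_York", "America/Los_Angeles", datetime(2024, 6, 1)) yields a 1:00 pm to 1:30 pm Pacific window, matching the example in the paragraph above.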
In some implementations, at least two of the different media content providers are in, or provide content to, a different geographic region. For example, if the indicated time period is between 4:00 pm and 4:30 pm, the cross provider assignment system may identify media content providers with advertisement placement opportunities during that time period in the northeastern and southeastern regions of the United States. In such implementations, the cross provider assignment system may adjust the indicated time period to the time for the time zones corresponding to the identified geographic regions. Alternatively, the cross provider assignment system may not adjust the indicated time period to the time for the time zones corresponding to the identified geographic regions, such that the local time of each region is used (i.e. in the example above, if New York state and California were identified as regions, searching for advertisements between 4:00 pm and 4:30 pm local time in New York state and between 4:00 pm and 4:30 pm local time in California). In some implementations, the cross provider assignment system receives an indication of one or more attributes of an advertisement placement opportunity at501. In such implementations, at503, the cross provider assignment system may identify the plurality of advertisement placement opportunities based on the received time period and the one or more attributes of the advertisement placement opportunity. For example, a buyer may intend for their advertisement to appear during as many sporting events in the United States as possible between 4:00 pm and 4:30 pm on a Saturday afternoon. The buyer would indicate the time period, 4:00 pm and 4:30 pm on Saturday, and that they would like their advertisement to appear during a sporting event to the cross provider assignment system. The cross provider assignment system identifies advertisement placement opportunities offered by media content providers in the United States which occur between 4:00 pm and 4:30 pm and which occur during sporting events. The cross provider assignment system would then be able to attempt to assign the buyer's advertisement to as many of the identified advertisement placement opportunities as possible. In some implementations, the cross provider assignment system receives an indication of which media content providers have already received a creative for the indicated advertisement spot. The cross provider assignment system may use the indication of which media content providers have already received the creative to identify advertisement placement opportunities for the indicated advertisement spot. After503, the method500ends. FIG.6is a flow diagram showing a method600of operating a cross provider assignment system to extend an indicated time period, according to one non-limiting illustrated implementation. After a start block, the method600begins at601, where the cross provider assignment system determines whether at least one advertisement placement opportunity of the identified plurality of advertisement placement opportunities does not have a slot to which the advertisement spot can be assigned. In some implementations, the cross provider assignment system indicates to the user, via a user interface such as the AFI103, that the at least one advertisement placement opportunity does not have a slot to which the advertisement spot can be assigned. At603, the cross provider assignment system receives an indication that the indicated time period should be extended. 
The indication that the indicated time period should be extended may be obtained via user input, such as via the AFI103. At605, the cross provider assignment system extends the time period based on the indication that the time period should be extended. At607, the cross provider assignment system identifies the plurality of advertisement placement opportunities based on the extended time period. In some implementations, the cross provider assignment system receives an indication of one or more attributes of an advertisement placement opportunity in addition to, or instead of, the indicated time period. The indication that the one or more attributes should be changed may be obtained via the AFI103. The cross provider assignment system may then change the one or more attributes and identify the plurality of advertisement placement opportunities based on at least one of: the changed one or more attributes, the time period, and the extended time period. After607, the method600ends. FIG.7is a flow diagram showing a method700of operating a cross provider assignment system to assign the indicated advertisement spot to a slot based on content distribution rules, according to one non-limiting illustrated implementation. After a start block, the method700begins at701where the cross provider assignment system identifies one or more rules for placing advertisement spots in slots for the advertisement placement opportunity. The identified rules may be content distribution rules207included in the cross provider content assignment system. At703, the cross provider assignment system determines whether an advertisement spot can be assigned to a slot based on the identified rules. At705, the cross provider assignment system prevents the indicated advertisement spot from being assigned to the slot based on the determination of whether the advertisement spot can be assigned to the slot. In some implementations, in response to a determination that the advertisement spot cannot be assigned to the slot, the cross provider assignment system shifts, or switches, advertisement spots placed in slots in the advertisement placement opportunity (“previously assigned advertisements”) to other slots in the same advertisement placement opportunity based on the identified rules in order to identify a new slot for the advertisement. In such implementations, the cross provider assignment system assigns the advertisement spot to the new slot. In some implementations, when a new slot which follows the identified rules cannot be identified for the advertisement spot, the cross provider assignment system prevents the identified advertisement spot from being assigned to a slot in the advertisement placement opportunity. In some implementations, when a new slot which follows the identified rules cannot be identified for the advertisement spot, the cross provider assignment system un-assigns an advertisement from a slot to which the identified advertisement spot could be assigned according to the identified rules. After705, the method700ends. FIG.8is a flow diagram showing a method800of operating a cross provider assignment system to use user input to un-assign an advertisement spot from a slot, according to one non-limiting illustrated implementation. 
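Before turning to the details ofFIG.8, the rule check described above forFIG.7can be illustrated with a brief sketch. A content distribution rule such as a per-break cap for a product type, or a minimum separation between advertisements with the same product code, might be evaluated along the following lines; the rule encoding shown is an assumption made for the sketch (again using the illustrative Break/Slot model from earlier), not the disclosed content distribution rules207.

    # Illustrative check of whether an ad may be assigned to a slot under simple
    # distribution rules (assumed encodings; not the disclosed rules 207).
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class DistributionRules:
        max_per_break_by_product: Dict[str, int]   # e.g., {"auto": 1}
        min_slot_separation_same_product: int = 1  # slots required between same-product ads

    def can_assign(brk, slot_index: int, product_code: str,
                   product_of: Dict[str, str], rules: DistributionRules) -> bool:
        """Return True if assigning an ad with product_code to brk.slots[slot_index]
        would honor the per-break cap and the separation rule (FIG.7, act 703)."""
        same_product_positions: List[int] = [
            i for i, s in enumerate(brk.slots)
            if s.assigned_ad_id is not None
            and product_of.get(s.assigned_ad_id) == product_code
        ]
        # Per-break cap for this product type.
        cap = rules.max_per_break_by_product.get(product_code)
        if cap is not None and len(same_product_positions) >= cap:
            return False
        # Separation: no same-product ad within the required number of slots.
        for pos in same_product_positions:
            if abs(pos - slot_index) <= rules.min_slot_separation_same_product:
                return False
        return True

Act705would then prevent the assignment when a check like can_assign returns False, and the shifting of previously assigned spots described above would amount to searching for a slot index at which it returns True.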
After a start block, the method800begins at801, where the cross provider assignment system determines whether the indicated advertisement spot should be un-assigned from at least one slot included in the advertisement placement opportunities to which the advertisement spot is assigned based on pre-configured rules. In some embodiments, before determining whether the indicated advertisement spot should be un-assigned, the cross provider assignment system may present the slots to which the indicated advertisement spot is assigned via a user interface, such as an AFI103. In some embodiments, the cross provider assignment system presents the slots to which the indicated advertisement spot is assigned to a computing system, algorithm, or other computing device or software. In some embodiments, the cross provider assignment system receives the pre-configured rules associated with un-assigning the indicated advertisement spot from an entity such as: the roadblock buyer120, one or more advertisers, or another entity associated with the cross provider assignment system. The pre-configured rules may include one or more rules which instruct the cross provider assignment system to un-assign the indicated advertisement spot from one or more slots. The pre-configured rules may include rules based on: one or more goals of the roadblock buyer; one or more goals of an advertiser; the acceptance of aspects of the advertisement, such as the creative, by one or more advertisers; or other aspects, properties, or requirements of the advertisement, buyer, advertiser, or other entity associated with the cross provider assignment system related to un-assigning the indicated advertisement from one or more slots. At803, the cross provider assignment system un-assigns the indicated advertisement spot from the at least one assigned slot based on a determination that the advertisement spot should be un-assigned from the at least one slot. After803, the method800ends. FIG.9is a flow diagram showing a method900of operating a cross provider assignment system to prevent an advertisement spot from being un-assigned from a slot, according to one non-limiting illustrated implementation. After a start block, the method900begins at901, where the cross provider assignment system receives an indication that the indicated advertisement spot is to be re-assigned to another slot. In some embodiments, at901, the cross provider assignment system receives an indication that the indicated advertisement spot is to be un-assigned from a slot. At903, the cross provider assignment system determines whether a creative has been received for the indicated advertisement spot. At905, the cross provider assignment system prevents the advertisement spot from being re-assigned based on the determination of whether the creative has been received. After905, the method900ends. FIG.10is a flow diagram showing a method1000of operating a cross provider assignment system to transmit a creative to a plurality of media content providers, according to one non-limiting illustrated implementation. After a start block, the method1000begins at1001, where the cross provider assignment system receives an indication of a creative for the indicated advertisement spot. In some implementations, the cross provider assignment system receives the indication of the creative via an AFI103. 
At1003, the cross provider assignment system identifies which media content providers are associated with at least one advertisement placement opportunity which includes a slot to which the indicated advertisement spot is assigned. At1005, the cross provider assignment system transmits the creative to each of the identified media content providers. After1005, the method1000ends. FIG.11is a flow diagram showing a method1100of operating a cross provider assignment system to allow a user to un-assign the indicated advertisement spot from one or more slots, according to one non-limiting illustrated implementation. After a start block, the method1100begins at1101, where the cross provider assignment system determines whether the number of indicated advertisement placement opportunities which have a slot to which the advertisement spot is assigned exceeds a predetermined threshold. In some implementations, the predetermined threshold is obtained via user input, such as via an AFI103. In some implementations, the cross provider assignment system determines the predetermined threshold based on one or more attributes of the advertisement placement opportunities to which the indicated advertisement spot is assigned. For example, the buyer may indicate that they wish their advertisement to receive a certain number of impressions. The cross provider assignment system may determine how many impressions each advertisement placement opportunity could provide, and use that determination to determine whether the buyer's impression goal is met by the advertisement placement opportunities. If the cross provider assignment system determines that the number of indicated advertisement placement opportunities which have a slot to which the advertisement spot is assigned does exceed the predetermined threshold, the method1100ends. If the cross provider assignment system determines that the number of indicated advertisement placement opportunities which have a slot to which the advertisement spot is assigned does not exceed the predetermined threshold, the method continues to1103. At1103, the cross provider assignment system presents an option to un-assign the indicated advertisement spot from one or more slots included in the indicated advertisement placement opportunities. In some implementations, the option to un-assign the indicated advertisement spot is presented to a user via a user interface, such as an AFI103. At1105, the cross provider assignment system receives an indication that the indicated advertisement spot should be un-assigned from one or more slots. The indication that the indicated advertisement spot should be un-assigned from one or more slots may be received via a user interface, such as an AFI103. At1107, the cross provider assignment system un-assigns the indicated advertisement spot from the one or more indicated slots. In some embodiments, the buyer may provide to the cross provider assignment system: an impression goal of the buyer, a minimum number of advertisement placement opportunities to which the advertisement spot is assigned, a cost-per-mille ("CPM") goal, a minimum number of media content providers associated with advertisement placement opportunities to which the indicated advertisement spot is assigned, and other goals a buyer may have for an advertisement or advertisement campaign. In such embodiments, the cross provider assignment system may determine a score goal based on the goals provided by the buyer. 
The cross provider assignment system may determine a score for the indicated advertisement based on the advertisement placement opportunities to which the indicated advertisement is assigned and the goals provided by the buyer. The cross provider assignment system may then perform acts 1103-1107 based on a determination of whether the determined score exceeds the score goal. In some implementations, at 1103, the cross provider assignment system presents an option to change one or more goals of the buyer. In some implementations, at 1103, the cross provider assignment system presents an option to change one or more attributes of the advertisement placement opportunities to which the buyer would like the advertisement spot to be assigned. The cross provider assignment system may attempt to find new advertisement placement opportunities for the advertisement spot based on the changed goals, changed attributes of advertisement placement opportunities, etc. After 1107, the method 1100 ends. FIG. 12 is a flow diagram showing a method 1200 of operating a cross provider assignment system to receive permission from a media content provider to use an advertisement placement opportunity in the cross provider assignment system, according to one non-limiting illustrated implementation. After a start block, the method 1200 begins at 1201, where the cross provider assignment system receives an indication from one or more media content providers of at least one slot in an advertisement placement opportunity associated with the respective media content provider. In some implementations, the cross provider assignment system receives the indication via user input, such as via a PFI 105. In some implementations, a media content provider indicates that the cross provider assignment system can only use slots it has indicated in 1201. At 1203, the cross provider assignment system prevents advertisement spots other than the indicated advertisement spot from being assigned to the indicated slots. At 1205, the cross provider assignment system determines that the indicated advertisement spot is to be assigned to at least one of the indicated slots. The cross provider assignment system may determine the indicated slots to which the advertisement spot is to be assigned in a similar manner to the method described in FIG. 5. In some implementations, the cross provider assignment system attempts to assign the indicated advertisement spot to the indicated slots before assigning the advertisement spot to slots which were not indicated by the one or more media content providers. In some implementations, the cross provider assignment system only assigns the indicated advertisement spot to the indicated slots. At 1207, the cross provider assignment system assigns the indicated advertisement spot to at least one of the indicated slots. After 1207, the method 1200 ends. FIG. 13 is a flow diagram showing a method 1300 of operating a cross provider assignment system to re-assign advertisement spots to different slots based on content distribution rules, according to one non-limiting illustrated implementation. After a start block, the method 1300 begins at 1301, where the cross provider assignment system receives an indication that the indicated advertisement spot is to be assigned to a particular break within the advertisement placement opportunity. At 1303, the cross provider assignment system determines whether there is an available slot in the particular break. 
If there is an available slot in the particular break, the method continues to1309, otherwise the method continues to1305. At1305, the cross provider assignment system identifies one or more content distribution rules applicable to advertisement spots assigned to slots in the advertisement placement opportunity. The content distribution rules may include the content distribution rules207. At1307, the cross provider assignment system re-assigns each of the advertisement spots assigned to slots in the advertisement placement opportunity based on the content distribution rules to make a slot in the particular break available for the indicated advertisement spot. In some implementations, the advertisement spots are assigned to slots in other breaks included in the advertisement placement opportunity. At1309, the cross provider assignment system assigns the indicated advertisement spot to the available slot in the particular break. After1309, the method1300ends. In some implementations, the cross provider assignment system is able to use the method1300to ensure that the indicated advertisement spot is assigned to a break which occurs within an indicated time period, while continuing to honor the content distribution rules set by the media content provider, other buyers, etc. FIG.14is a flow diagram showing a method1400of operating a cross provider assignment system to collect advertisement delivery data from one or more media content providers, according to one non-limiting illustrated implementation. After a start block, the method1400begins at1401, where the cross provider assignment system identifies one or more media content providers associated with advertisement placement opportunities which include a slot to which the indicated advertisement spot was assigned. At1403, the cross provider assignment system receives an indication from at least one media content provider of the one or more media content providers of delivery data for the indicated advertisement spot. In some implementations, the cross provider assignment system automatically receives the delivery data from the media content provider by connecting to, or communicating with, a computing system which has access to the delivery data. In some implementations, the cross provider assignment system receives the delivery data via user input, such as via a PFI105. At1405, the cross provider assignment system aggregates at least a portion of the delivery data received from each media content provider. In some implementations, the aggregation includes: summing a portion of the delivery data; obtaining a mean, median, or mode for a portion of the delivery data; or other forms of aggregating data. For example, the cross provider assignment system may use the delivery data to determine the average number of impressions obtained from the advertisement placement opportunities, the total number of impressions obtained, etc. At1407, the cross provider assignment system presents the aggregated delivery data to a user, such as by using an AFI103. After1407, the method1400ends. FIG.15shows a processor-based device1504suitable for implementing the various functionality described herein. Although not required, some portion of the implementations will be described in the general context of processor-executable instructions or logic, such as program application modules, objects, or macros being executed by one or more processors. 
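Returning briefly to the aggregation performed at 1405 of method 1400 described above, a minimal Python sketch is given below. The data layout, the field names, and the comparison against a buyer-supplied impression goal are assumptions made for illustration only and are not part of the described system.

from statistics import mean, median

# Hypothetical per-provider delivery reports for the indicated advertisement spot.
delivery_reports = [
    {"provider": "provider_a", "impressions": 120_000},
    {"provider": "provider_b", "impressions": 95_000},
    {"provider": "provider_c", "impressions": 143_500},
]

impressions = [report["impressions"] for report in delivery_reports]

# Aggregate at least a portion of the delivery data, e.g., a sum and averages.
aggregated = {
    "total_impressions": sum(impressions),
    "mean_impressions": mean(impressions),
    "median_impressions": median(impressions),
}

# An assumed buyer impression goal can then be checked against the aggregated
# total before the data is presented, e.g., via an AFI.
impression_goal = 300_000
aggregated["goal_met"] = aggregated["total_impressions"] >= impression_goal

print(aggregated)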
Those skilled in the relevant art will appreciate that the described implementations, as well as other implementations, can be practiced with various processor-based system configurations, including handheld devices, such as smartphones and tablet computers, wearable devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, personal computers ("PCs"), network PCs, minicomputers, mainframe computers, and the like. The processor-based device 1504 may include one or more processors 1506, a system memory 1508 and a system bus 1510 that couples various system components including the system memory 1508 to the processor(s) 1506. The processor-based device 1504 will at times be referred to in the singular herein, but this is not intended to limit the implementations to a single system, since in certain implementations, there will be more than one system or other networked computing device involved. Non-limiting examples of commercially available systems include ARM processors from a variety of manufacturers, Core microprocessors from Intel Corporation, U.S.A., PowerPC microprocessors from IBM, Sparc microprocessors from Sun Microsystems, Inc., PA-RISC series microprocessors from Hewlett-Packard Company, and 68xxx series microprocessors from Motorola Corporation. The processor(s) 1506 may be any logic processing unit, such as one or more central processing units (CPUs), microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), etc. Unless described otherwise, the construction and operation of the various blocks shown in FIG. 15 are of conventional design. As a result, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art. The system bus 1510 can employ any known bus structures or architectures, including a memory bus with memory controller, a peripheral bus, and a local bus. The system memory 1508 includes read-only memory ("ROM") 1512 and random access memory ("RAM") 1514. A basic input/output system ("BIOS") 1516, which can form part of the ROM 1512, contains basic routines that help transfer information between elements within the processor-based device 1504, such as during start-up. Some implementations may employ separate buses for data, instructions and power. The processor-based device 1504 may also include one or more solid state memories, for instance Flash memory or a solid state drive (SSD) 1518, which provides nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the processor-based device 1504. Although not depicted, the processor-based device 1504 can employ other nontransitory computer- or processor-readable media, for example a hard disk drive, an optical disk drive, or a memory card media drive. Program modules can be stored in the system memory 1508, such as an operating system 1530, one or more application programs 1532, other programs or modules 1534, drivers 1536 and program data 1538. The application programs 1532 may, for example, include panning/scrolling logic 1532a. Such panning/scrolling logic may include, but is not limited to, logic that determines when and/or where a pointer (e.g., finger, stylus, cursor) enters a user interface element that includes a region having a central portion and at least one margin. 
Such panning/scrolling logic may include, but is not limited to, logic that determines a direction and a rate at which at least one element of the user interface element should appear to move, and causes updating of a display to cause the at least one element to appear to move in the determined direction at the determined rate. The panning/scrolling logic 1532a may, for example, be stored as one or more executable instructions. The panning/scrolling logic 1532a may include processor and/or machine executable logic or instructions to generate user interface objects using data that characterizes movement of a pointer, for example data from a touch-sensitive display or from a computer mouse or trackball, or other user interface device. The system memory 1508 may also include communications programs 1540, for example a server and/or a Web client or browser for permitting the processor-based device 1504 to access and exchange data with other systems such as user computing systems, Web sites on the Internet, corporate intranets, or other networks as described below. The communications programs 1540 in the depicted implementation are markup language based, such as Hypertext Markup Language (HTML), Extensible Markup Language (XML) or Wireless Markup Language (WML), and operate with markup languages that use syntactically delimited characters added to the data of a document to represent the structure of the document. A number of servers and/or Web clients or browsers are commercially available, such as those from Mozilla Corporation of California and Microsoft of Washington. While shown in FIG. 15 as being stored in the system memory 1508, the operating system 1530, application programs 1532, other programs/modules 1534, drivers 1536, program data 1538 and server and/or browser 1540 can be stored on any other of a large variety of nontransitory processor-readable media (e.g., hard disk drive, optical disk drive, SSD and/or flash memory). A user can enter commands and information via a pointer, for example through input devices such as a touch screen 1548 via a finger 1544a, stylus 1544b, or via a computer mouse or trackball 1544c which controls a cursor. Other input devices can include a microphone, joystick, game pad, tablet, scanner, biometric scanning device, etc. These and other input devices (i.e., "I/O devices") are connected to the processor(s) 1506 through an interface 1546 such as a touch-screen controller and/or a universal serial bus ("USB") interface that couples user input to the system bus 1510, although other interfaces such as a parallel port, a game port or a wireless interface or a serial port may be used. The touch screen 1548 can be coupled to the system bus 1510 via a video interface 1550, such as a video adapter, to receive image data or image information for display via the touch screen 1548. Although not shown, the processor-based device 1504 can include other output devices, such as speakers, vibrator, haptic actuator, etc. The processor-based device 1504 may operate in a networked environment using one or more of the logical connections to communicate with one or more remote computers, servers and/or devices via one or more communications channels, for example, one or more networks 1514a, 1514b. These logical connections may facilitate any known method of permitting computers to communicate, such as through one or more LANs and/or WANs, such as the Internet, and/or cellular communications networks. 
Such networking environments are well known in wired and wireless enterprise-wide computer networks, intranets, extranets, the Internet, and other types of communication networks including telecommunications networks, cellular networks, paging networks, and other mobile networks. When used in a networking environment, the processor-based device1504may include one or more wired or wireless communications interfaces1514a,1514b(e.g., cellular radios, WI-FI radios, Bluetooth radios) for establishing communications over the network, for instance the Internet1514aor cellular network. In a networked environment, program modules, application programs, or data, or portions thereof, can be stored in a server computing system (not shown). Those skilled in the relevant art will recognize that the network connections shown inFIG.15are only some examples of ways of establishing communications between computers, and other connections may be used, including wirelessly. For convenience, the processor(s)1506, system memory1508, network and communications interfaces1514a,1514bare illustrated as communicably coupled to each other via the system bus1510, thereby providing connectivity between the above-described components. In alternative implementations of the processor-based device1504, the above-described components may be communicably coupled in a different manner than illustrated inFIG.15. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other, via intermediary components (not shown). In some implementations, system bus1510is omitted and the components are coupled directly to each other using suitable connections. The foregoing detailed description has set forth various implementations of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one implementation, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the implementations disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers) as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure. Those of skill in the art will recognize that many of the methods or algorithms set out herein may employ additional acts, may omit some acts, and/or may execute acts in a different order than specified. 
In addition, those skilled in the art will appreciate that the mechanisms taught herein are capable of being distributed as a program product in a variety of forms, and that an illustrative implementation applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory. The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
11943489
DETAILED DESCRIPTION OF AT LEAST ONE EMBODIMENT OF THE INVENTION Introductory Overview: The present invention, in some embodiments thereof, relates to processing of high definition video data and, more particularly, but not exclusively, to apparatus and methods for modifying one or more high definition video data streams on a frame-by-frame basis to add content thereto by extracting color mapping information from high-resolution frames and performing frame modification at the level of the extracted color maps, to simultaneously create multiple different synchronized linear views from the same original video stream source(s), while keeping the source(s) intact for future use. The invention also relates to non-volatile machine-readable media containing software for implementing the other aspects of the invention. The invention has a variety of applications but will be described by way of example in the specific contexts of systems and methods for adding additional content, for example, targeted advertising to high definition video data streams on a frame-by-frame basis in real time, i.e., within the presentation duration of the individual frames. The method according to the invention can also be used for improved real-time modification of video data on a frame by frame basis where high definition as defined above is not needed. In simplest terms, the present invention uses an application software program to modify the individual pixels in a video data stream within the individual frames based on feature segmentation. According to an aspect of the invention, the feature upon which the segmentation is based is pixel color. According to some embodiments, the segmentation is performed by a GPU under control of the programming and the CPU of a conventional microprocessor-controlled computer. More particularly, the present invention addresses the problem that processing of increasingly higher resolution images requires an exponential increase of computing power, so it is impossible to perform the described graphic modifications in real-time on UHD frames on a CPU alone, such that multiple different synchronized views are created simultaneously, based on the same source with different added graphical elements. While it is possible to perform graphic modifications in real time on HD frames, even this requires a very powerful CPU not typically found in most computers. According to an aspect of the invention, the different modifications are done simultaneously to create multiple different views. According to some embodiments of the invention, the different modified video views can be immediately broadcasted to different viewers. According to some embodiments of the invention, the different modified video views can be saved as files. According to an aspect of the invention, the original video source(s) data remains intact and can be reused. According to an aspect of the invention, there is provided a programming model for a general-purpose computer which significantly improves its functionality in real-time processing of a video stream and frame-by-frame modification of the video stream according to selectable criteria. According to some embodiments, the criteria comprise the identity of the intended recipient of the modified video stream. According to some embodiments, the modification to the video data comprises targeted advertising. 
Such targeted advertising can be based on collected demographic information about the recipient and/or derived from a computer readable medium or information stored in the host computer. According to some embodiments, the modification instructions can be derived from what is known in the art as a "storyboard script", i.e., a program written prior to processing and read during processing, whose commands include the definition of the added graphical content. According to some embodiments, the content of the advertising is determined by a human user, as part of the initial configuration of the system or in real-time during processing and broadcasting. According to other embodiments, the added content may be determined by an external system or organization, which makes such determinations by processing available data about the intended recipient. The content of the advertising can also be determined by non-human, business-intelligence and artificial-intelligence systems, which analyze available recipient data and make decisions based on their algorithms. According to some embodiments, the programming operates the computer to automatically create color-based segmentation of a high-resolution video stream in real time, whereby the video stream can be modified according to one or more predetermined criteria to add desired modifications to each frame at the level of the color-based segments, within the timeframe limits of a real-time video stream. According to some embodiments, the program includes a plurality of definitions of frame modifications, a list of mathematical functions which implement the aforementioned modifications, rules for optimization of the mathematical functions, and rules for application of the mathematical functions to assemble the respective frame modifications. The rules for assembly of the mathematical functions are configured to assemble all relevant mathematical functions, in a cascading manner, into a single composite process. As used herein, "cascading manner into a single composite process" means that the mathematical functions are arranged so that the input of consecutive functions is the output of their respective predecessors, so that the assembly of mathematical functions becomes a single calculation which is easily optimized by a common routine performed by a GPU, as opposed to a list of discrete functions to be applied one after another. According to a further aspect of the invention, there is provided a non-volatile machine-readable medium, containing programming instructions for operating a computer to modify a video data stream in real time to insert selected content therein on a frame by frame basis. According to some embodiments, the programming instructions implement the image processing functions described above. According to a further aspect of the invention, there is provided a method of modifying a video data stream in real time to add selected content thereto on a frame by frame basis. According to some embodiments, the method implements the image processing functions described above. According to yet a further aspect of the invention, there is provided a method of synchronizing the creation of different views which are based on the same source, such that while the views differ in file format, resolution, and/or graphic modifications, they do not differ in timeline with respect to the original source. 
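The cascading assembly of frame-modification functions described above can be pictured with a short Python sketch. The particular modifications shown (a brightness adjustment and a marker overlay) and their names are assumptions chosen for illustration only, not part of the claimed programming model.

from functools import reduce

def compose(functions):
    # Cascade the functions so each one consumes the output of its predecessor,
    # yielding a single composite callable that can be applied to a frame.
    return reduce(lambda f, g: (lambda frame: g(f(frame))), functions)

# Hypothetical per-frame modifications, standing in for the mathematical
# functions assembled by the rules described above.
def adjust_brightness(frame):
    return [[min(255, pixel + 10) for pixel in row] for row in frame]

def overlay_marker(frame):
    frame[0][0] = 255  # e.g., stamp an added graphical element
    return frame

composite = compose([adjust_brightness, overlay_marker])

frame = [[100, 120], [130, 140]]  # a tiny stand-in for a decoded frame
print(composite(frame))

Arranged this way, the whole chain becomes one calculation that a GPU-side routine could optimize as a unit, rather than a list of discrete passes applied one after another.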
DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT Referring to FIG. 1, there is shown a block diagram of a preferred system, generally denoted at 10, for implementing some embodiments of the invention. The system includes one or more input sources 12 for one or more high resolution (HD or higher) video files, a microprocessor-controlled computer system generally designated at 14, and an output interface, generally indicated at 16. Input sources 12 can be a digital camera, a recorded video file, a streaming video file obtained over the internet or a commercial video program obtained directly off-the-air. Computer system 14 may be of conventional architecture and comprised of conventional components but is programmed according to the invention to provide a technical solution resulting in enhanced functionality as described herein not achievable by previously known methods and systems. One particularly advantageous application of the invention is the processing of HD or higher resolution video files, in real time, to embed into a succession of frames thereof additional information, e.g., an advertising message, that can be targeted specifically toward individual recipients or groups of recipients sharing common interests or other characteristics. Accordingly, output interface 16 provides multiple synchronized targeted output video streams, by way of example, four being shown at 16a-16d. Structurally, computer system 14 may be of any type suitable for high-speed video processing. In some exemplary embodiments, such a system may include an incoming video input interface 18, such as network interface circuitry or a video card, which receives the incoming video signal from source 12, a graphics processing unit (GPU) 20, a central processing unit (CPU) 22, a shared high-speed memory 24 which will typically include mass storage capability, and an outgoing video output interface unit 26 which provides the multiple outgoing video streams 16a, 16b, etc. The video stream may be derived from any known or hereafter developed source, for example, a server, a digital camera, mobile hand-held devices (smartphones), phablets, tablets, off-the-air television broadcasts, streaming video provided over the internet or video programming stored on a user's personal computer. The modified output video stream may be directed to any of the kinds of destinations described above as sources of the original video data, as will be understood by those skilled in the art. FIG. 2 illustrates an internal process flow diagram for a system such as shown in FIG. 1 according to some embodiments of the invention. The process begins at 30 with input of a high resolution (HD) video data stream, sometimes referred to as a high definition video file or camera stream, for example, from any of the possible sources mentioned above. At 32, the video file is transcoded from its native format to the internal processing format employed by the system, for example, "raw" video, or alternatively YUV420 or RGBA 32 bit. At 34, GPU 20 is operated to layer each graphical frame into separate preliminary layers according to color. The GPU optionally executes a self-feedback routine by accessing succeeding video frame(s) to tune GPU layer sensitivity. This helps eliminate false detection (light changes, stream quality, etc.). Thus, it is preferably employed if the incoming video is a raw stream from a camera (as opposed to video which was post-processed with color calibration and smoothing). In such raw data streams, light artifacts may be present. 
Weighted average of several consecutive frames enables elimination of such artifacts and subsequent calculation errors. The program can offer the option to be turned on/off manually by the user, or the setting can be deduced automatically from the selection of input (on for a live camera and off for video from a file). By calculating a weighted average per pixel of colors in a current frame and colors in previous frames, unwanted artifacts such as specks of dust disappear from view. At 36, two processes proceed in parallel. At 36a, GPU 20 is operated to perform pixel color modifications and, at 36b, if an external computer program requires this, the GPU creates a duplicate of the original video stream, performs initial preprocessing on the duplicate and sends the result to the CPU. An important feature of some embodiments of the invention is the manner in which the color layer data is provided (between 36b and 40b) from GPU 20 to CPU 22 such that the data is optimized for analysis. Referring still to FIG. 2, at 38, GPU 20 renders additional graphical elements by instructions generated from CPU 22 for one or more of the preliminary color layers; the instructions are created by the CPU from a predefined program or from data received from an external computer program. The data added to the individual frames is provided by a data storage source 40a containing a database of pre-defined graphical components and user related instructions for creating the desired recipient-specific views, or by an external computer program which combines user-intelligence with a database of graphical components 40b. At 42, the programming operates GPU 20 to merge the color layers of the composite frame (comprised of the original frame and added graphical components, as described in 38) into a single-layer color frame with uniform resolution. The merging is done with respect to the optional transparency of each layer. The act of merging several graphical layers into one, also called "flattening", is known to users of graphical editing software that enables layering, such as Photoshop® and Gimp. The result of the merge done by this invention matches the expectations of those users. Then, at 44, GPU 20 can apply an additional frame color palette modification to create a unified frame by modifying the color of the frame created in 42. As a result, even though the graphics components come from different sources, i.e., the main video stream and the composed elements, such as other video streams and added graphics (e.g., CGI), they undergo the same specific modifications, and the outcome seems to the observer to have come from a single source. In computers having sufficiently powerful CPUs and enough active memory, instead of utilizing the GPU to perform parallel video stream processing, the program can be designed such that the CPU operates in a continuous loop over all the pixels within the frame. This option may be useful where real-time video stream processing is desired, but the incoming video stream has less than HD resolution. Finally, at 46, GPU 20 renders multiple outputs of new video component streams with respect to intended recipient device resolution. The practice of outputting several resolution streams per video is known in the art. Each of the video component streams is encapsulated into a different file container (three of which are indicated at 46a, 46b, and 46c), which are then made available to intended recipients 48a, 48b, and 48c. 
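A minimal sketch of the per-pixel temporal weighted average described above is given below in Python using NumPy. The particular weights, frame sizes, and array types are assumptions chosen for illustration, not values prescribed by the invention.

import numpy as np

def temporal_weighted_average(frames, weights):
    # Blend consecutive frames per pixel; the weights control how strongly
    # older frames contribute, which suppresses transient artifacts such as
    # specks of dust or brief light changes.
    weights = np.asarray(weights, dtype=np.float32)
    stack = np.stack([f.astype(np.float32) for f in frames])
    blended = np.tensordot(weights / weights.sum(), stack, axes=1)
    return blended.astype(np.uint8)

# Three hypothetical 2x2 grayscale frames; the middle one carries a bright speck.
frames = [
    np.array([[100, 100], [100, 100]], dtype=np.uint8),
    np.array([[100, 255], [100, 100]], dtype=np.uint8),  # transient artifact
    np.array([[100, 100], [100, 100]], dtype=np.uint8),
]
print(temporal_weighted_average(frames, weights=[0.2, 0.3, 0.5]))

In this toy example the speck is pulled back toward the surrounding values, which is the effect the self-feedback routine at 34 relies on for raw camera streams.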
An important feature of some embodiments of the invention is the manner in which different views of the same source video stream are made available synchronously. The software created according to the present invention that provides the real-time synchronization of video streams (whether original or preprocessed) is described in connection with FIG. 3 below. FIG. 3 is a program flow diagram illustrating implementation of the invention according to some embodiments, to synchronize video stream creation, such that any number of video streams which are created from the same source by application of different pixel modifications will display the same base frame at the same time, regardless of the performed modifications or processing start time. In considering the following description, it is to be understood that the code defining the program may vary greatly without departing from the concepts of the invention, and that numerous variations of such code will be apparent to those skilled in the art. Turning now specifically to FIG. 3, at 60, the high-resolution video data is received from an input device such as described above and is demultiplexed into video and audio components. It is to be understood that multiple video streams can be demultiplexed in parallel. At 70 and 80, two processes proceed in parallel. At 70, the video component stream is received by GPU 20, such that a part of GPU internal memory and processing power is dedicated to applying a discrete set of instructions to a video component (three of such parts are indicated at 20a, 20b, and 20c). It is to be understood that when multiple dedicated parts are configured to process the same input source, the graphical frame data is made available to each and all of them at the same time. At 80, the audio component stream or streams are passed through audio filters which combine multiple audio component streams into one. It is to be understood that when only a single audio component stream is passed, it is passed without combination. At 90, the video stream component is decoded from its codec to raw format and is passed as a new texture (i.e., color data layers derived from each video frame) for application of various pixel modifications, including addition of graphical components, at 100. Although the preferred embodiment of this invention utilizes GPU 20 to perform decoding, it is understood by those skilled in the art that the decoding can also be done by a CPU 22. At 110, the final frame is rendered at one or more resolutions and each frame, by resolution, is encoded at 120. Although the preferred embodiment of this invention utilizes GPU 20 to perform encoding, it is understood by those skilled in the art that the encoding can also be done by a CPU 22. Finally, at 130, video streams and audio streams are multiplexed into one or more file containers (three of which are indicated at 140a, 140b, and 140c), such that each file container contains an audio stream and a video stream which was modified according to the initial configuration of the system in relation to this output. It is to be understood that when multiple outputs, which differ in file formats, resolution, and/or pixel modifications, are based on the same source inputs, the outputs are synchronized. According to the present invention, these technical capabilities are used in a unique way to achieve the enhanced functionality of synchronously creating multiple modified HD video files in real time, i.e., without noticeable delay. 
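The synchronization property just described, namely that every output view shows the same base frame at the same point in the timeline, can be pictured with a brief Python sketch. The view names, the modification functions, and the timestamp handling below are illustrative assumptions, not the actual GPU pipeline of FIG. 3.

def source_frames():
    # Stand-in for the demultiplexed video component: (timestamp, frame) pairs.
    for pts in range(3):
        yield pts, [[pts * 10, pts * 10], [pts * 10, pts * 10]]

# Hypothetical per-view pixel modifications applied to the same base frame.
views = {
    "view_a": lambda frame: [[p + 1 for p in row] for row in frame],
    "view_b": lambda frame: [[p * 2 for p in row] for row in frame],
}

outputs = {name: [] for name in views}
for pts, frame in source_frames():
    # Every view receives the same base frame and keeps the same timestamp,
    # so the outputs differ in pixels but never in timeline.
    for name, modify in views.items():
        outputs[name].append((pts, modify(frame)))

for name, stream in outputs.items():
    print(name, [pts for pts, _ in stream])

Both views print the same timestamp sequence, which is the behavior the multiplexing at 130 preserves across file formats, resolutions, and pixel modifications.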
General Interpretational Comments: Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. It is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the above description and/or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. The term “consisting of” means “including and limited to”. The term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure. As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. The described methods and the required programming can be executed on any computer, which incorporates a suitable central processing unit (CPU) to execute the program, and a GPU in those embodiments for which it is utilized (for high-resolution video streams) to execute the complex frame processing, means for acquiring high-resolution input video-streams, as well as active memory and mass storage, for holding a collection of predefined features. Implementation of the method and/or system of embodiments of the invention can involve performing or completing some tasks such as selection of intended recipients, manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system. For example, hardware for performing selected tasks according to embodiments of the invention could be implemented on a general-purpose computer suitably programmed as described herein, or as one or more special-purpose chips or circuits, i.e., one or more ASICs. Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives and/or modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and broad scope of the appended claims.
11943490
DETAILED DESCRIPTION OF EMBODIMENTS The following detailed description of the present invention refers to the accompanying drawings, which show by way of illustration specific embodiments in which the present invention may be carried out, in order to clarify the objects, technical solutions, and advantages of the present invention. These embodiments are described in detail to enable a person of ordinary skill in the art to carry out the present invention. Throughout the detailed description and claims of the present invention, the word ‘comprise’ and its variations are not intended to exclude other technical features, additions, components, or steps. In addition, ‘one’ or ‘an’ is used in more than one meaning, and ‘another’ is limited to at least a second or more. In addition, terms such as ‘first’ and ‘second’ of the present invention are for distinguishing one component from other components, and the scope of rights should not be limited by these terms unless it is understood that the terms indicate an order. For example, a first component may be referred to as a second component, and similarly, the second component may also be referred to as the first component. When a certain component is referred to as being “connected” to another component, the component may be directly connected to the other component, but it should be understood that another component may be interposed therebetween. On the other hand, when a certain component is referred to as being “directly connected” to another element, it should be understood that another element does not exist in the middle. Meanwhile, other expressions describing the relationship between components, that is, “between” and “immediately between” or “neighboring to” and “directly adjacent to”, etc., should be interpreted similarly. In respective steps, identification symbols (e.g., a, b, c, etc.) are used for convenience of description, and the identification symbols do not describe the order of the respective steps unless it is necessarily logically concluded, and the respective steps may occur differently from the specified order. That is, the respective steps may occur in the same order as specified, may be performed substantially simultaneously, or may be performed in a reverse order. Other objects, advantages, and characteristics of the present invention will become apparent to a person of ordinary skill in the art in part from this description and in part from carrying-out of the present invention. The following illustrative descriptions and drawings are provided by way of examples and are not intended to limit the present invention. Therefore, the details disclosed herein with respect to a specific structure or function are not to be construed in a limiting sense, but should be construed as representative basic materials that provide guidance for a person of ordinary skill in the art to variously carry out the present invention with virtually any suitable detailed structures. Furthermore, the present invention encompasses all possible combinations of the embodiments indicated herein. It should be understood that various embodiments of the present invention are different but need not be mutually exclusive. For example, the specific shapes, structures, and characteristics described herein in relation to one embodiment may be implemented in other embodiments without departing from the spirit and scope of the present invention. 
In addition, it should be understood that the position or arrangement of individual components in each disclosed embodiment may be changed without departing from the spirit and scope of the present invention. Accordingly, the following detailed description is not intended to be taken in a limiting sense, and the scope of the present invention, if properly described, is limited only by the appended claims, along with all scope equivalents to those claimed by the claims. Similar reference numerals in the drawings refer to the same or similar functions throughout the various aspects. Unless otherwise indicated or clearly contradicted in the context herein, items referred to as singular encompass the plural, unless otherwise required in the context. In addition, in describing the present invention, when it is determined that a detailed description of a related known configuration or function may obscure the gist of the present invention, the detailed description thereof will be omitted. Hereinafter, in order to enable a person of ordinary skill in the art to easily carry out the present invention, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, specific embodiments will be described in detail with reference to the accompanying drawings. The present invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present invention to those skilled in the art. In the figures, the dimensions of layers and regions are exaggerated for clarity of illustration. Like reference numerals refer to like elements throughout. It will also be understood that when a layer, a film, a region or a plate is referred to as being ‘on’ another one, it can be directly on the other one, or one or more intervening layers, films, regions or plates may also be present. Further, it will be understood that when a layer, a film, a region or a plate is referred to as being ‘under’ another one, it can be directly under the other one, and one or more intervening layers, films, regions or plates may also be present. In addition, it will also be understood that when a layer, a film, a region or a plate is referred to as being ‘between’ two layers, films, regions or plates, it can be the only layer, film, region or plate between the two layers, films, regions or plates, or one or more intervening layers, films, regions or plates may also be present. FIG.1is a conceptual diagram illustrating a camera network according to an exemplary embodiment. Referring toFIG.1, the camera network can comprise a plurality of photographing devices100a,100b, and100c. The photographing devices100a,100b, and100cmay be devices respectively installed at different locations to photograph a predetermined area. The photographing devices100a,100b, and100ccan comprise an internet protocol camera (hereinafter, referred to as an IP camera). The IP camera is a type of digital video camera and can transmit and receive data through a network or the Internet. The camera network can comprise a plurality of security devices200a,200b, and200c. The security devices200a,200b, and200ccan be connected to different photographing devices, respectively. 
For example, the first security device 200a can be connected to the first photographing device 100a, and the second security device 200b can be connected to the second photographing device 100b. The security devices 200a, 200b, and 200c and the photographing devices 100a, 100b, and 100c can be respectively connected to each other through a local area network (LAN). For example, a first network interface card (LAN NIC) of the first security device 200a can be connected to the first photographing device 100a. The security devices 200a, 200b, and 200c can be connected to a decryption server 400 through a switch 300. The switch 300 can deliver packets received from the security devices 200a, 200b, and 200c or the decryption server 400 to a designated destination. In some cases, the switch 300 can be omitted. Second LAN NICs of the security devices 200a, 200b, and 200c can be connected to the decryption server 400. When the switch 300 is comprised in the network, the second LAN NICs can be connected to the decryption server 400 through the port of the switch. The decryption server 400 can exchange data with the security devices 200a, 200b, and 200c. The first LAN NIC of the decryption server 400 can be connected to the security devices 200a, 200b, and 200c through the switch 300. The second LAN NIC of the decryption server 400 can be connected to a video control device 500. The decryption server 400 can form channels with the security devices 200a, 200b, and 200c. Different channels can be respectively formed for the photographing devices 100a, 100b, and 100c. For example, a first channel CH1 can be formed between the first photographing device 100a, the first security device 200a, and the decryption server 400, and a second channel CH2 can be formed between the second photographing device 100b, the second security device 200b, and the decryption server 400. A symmetric key for encrypting video data may be set differently for each of the channels CH1, CH2, and CH3. In addition, the secure sockets layer (SSL) connection used in the process of exchanging the symmetric key for encrypting video data for each of the channels can be set differently. Therefore, even when the security of one channel is broken by an attacker, the other channels can be protected. The decryption server 400 can receive a request for video data or a control command for the photographing devices 100a, 100b, and 100c from the video control device 500, and transmit the received request or control command to the photographing devices 100a, 100b, and 100c through the switch 300 and the security devices 200a, 200b, and 200c. The decryption server 400 can receive video data encrypted by the security device 200. The decryption server 400 can decrypt the encrypted video data and transmit the decrypted video data to the video control device 500. Here, the video data may comprise an RTSP packet, a packet according to an open network video interface forum (ONVIF) standard, etc. FIG. 2 is a block diagram illustrating a configuration of the security device 200 according to an exemplary embodiment. Referring to FIG. 2, the security device 200 can comprise a communication interface unit 210, a processor 220, a memory 230, and/or a storage device 240. The processor 220 may mean a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor by which the methods according to embodiments of the present invention are performed. Each of the memory 230 and the storage device 240 can be configured with at least one of a volatile storage medium and a non-volatile storage medium. 
For example, the memory 230 may be configured with at least one of a read only memory (ROM) and a random access memory (RAM). FIG. 3 is a flowchart illustrating a process in which the symmetric key for encrypting and decrypting video data is shared between the security device 200 and the decryption server 400 according to an exemplary embodiment. In FIG. 3, the switch 300 of FIG. 1 is omitted from the flowchart for convenience. If the switch 300 is comprised in the camera network, the switch 300 can be provided between the decryption server 400 and the security device 200 to relay communication between the decryption server 400 and the security device 200. Referring to FIG. 3, an initialization procedure between the photographing device 100, the security device 200, the decryption server 400, and the video control device 500 can be performed, in step S100. In the initialization procedure, setting of a physical connection and a logical connection between the respective devices can be established. In this process, the first LAN NIC of the security device 200 can be connected to the photographing device 100, and the second LAN NIC thereof can be connected to the decryption server 400. The first LAN NIC of the decryption server 400 can be connected to the security device 200, and the second LAN NIC of the decryption server 400 can be connected to the video control device 500. In FIG. 3, one photographing device 100 and one security device 200 are illustrated for convenience, but as illustrated in FIG. 1, there may be a plurality of photographing devices and security devices, and a channel may be formed for each photographing device. In step S102, the decryption server 400 can register information on the security device 200 that has been subjected to the initialization procedure. The decryption server 400 can register information on at least one of an IP address and a MAC address of the security device 200. As illustrated in FIG. 1, a plurality of photographing devices and a plurality of security devices corresponding thereto may be comprised in the network. The decryption server 400 can register the IP address and MAC address of each of the plurality of security devices 200a, 200b, and 200c. The IP addresses and MAC addresses of the security devices 200a, 200b, and 200c can be made to correspond to different channels CH1, CH2, and CH3, respectively. In step S110, the decryption server 400 can receive a master key from a master key management unit 450. The master key can be used to encrypt and decrypt a private key, which will be described later. The master key management unit 450 may be a hardware security module (HSM) or a key management system, but the embodiment is not limited thereto. The master key management unit 450 can be physically or logically separated from the hardware of the decryption server 400. In step S112, when a new master key is received from the master key management unit 450, the decryption server 400 may destroy all of the public key and private key pairs managed under the master key used before the change. If the decryption server 400 receives the master key for the first time, step S112 may be omitted. In step S114, the decryption server 400 can temporarily store the master key in a memory. For example, the decryption server 400 can store the master key only for a certain period of time without permanently storing the master key. As another example, the decryption server 400 can temporarily store the master key in a volatile memory and delete information on the master key when a predetermined condition is satisfied or a certain time elapses. 
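As a rough illustration of the temporary master-key handling at S114 described above, the following Python sketch holds the key only in memory and discards it after a time limit or on demand. The class name, the lifetime value, and the use of a plain byte string are assumptions made for illustration only.

import time

class TransientMasterKey:
    # Holds a master key in volatile memory and discards it when asked,
    # or once a configured lifetime has elapsed.

    def __init__(self, key_bytes, lifetime_seconds=60.0):
        self._key = key_bytes
        self._expires_at = time.monotonic() + lifetime_seconds

    def get(self):
        if self._key is None or time.monotonic() > self._expires_at:
            self.wipe()
            raise RuntimeError("master key no longer available")
        return self._key

    def wipe(self):
        # Drop the reference so the key does not persist in this holder.
        self._key = None

holder = TransientMasterKey(b"example-master-key", lifetime_seconds=5.0)
print(holder.get())   # usable while still valid
holder.wipe()         # e.g., once the private keys have been processed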
In step S116, the decryption server 400 can generate a plurality of public key and private key pairs. In each pair, information encrypted with the public key can be decrypted with the private key. The decryption server 400 can generate several public key and private key pairs. The decryption server 400 can assign the generated public key and private key pairs to security devices of different channels. For example, the decryption server 400 can assign a first certificate comprising a first public key to the first security device 200a of the first channel CH1 of FIG. 1 and assign a second certificate comprising a second public key to the second security device 200b. Also in step S116, the decryption server 400 can encrypt the private keys generated in step S116. FIG. 3 illustratively describes that the encryption operation of the private keys is performed in the decryption server 400. However, the embodiment is not limited thereto. For example, the encryption operation of the private keys may be performed in the master key management unit 450. In this case, the master key management unit 450 can obtain private key information from the decryption server 400 and provide the encrypted private keys to the decryption server 400. When the encryption operation of the private keys is completed in step S116, the decryption server 400 can store the encrypted private keys in the memory and delete all original information of the private keys. Accordingly, even if the decryption server 400 is attacked by hacking, the private key information may not be exposed to the outside. In step S118, the decryption server 400 can periodically or non-periodically examine the validity of the master key. The decryption server 400 can transmit information capable of checking the validity of the master key to the master key management unit 450, and can receive authentication of the validity of the master key from the master key management unit 450. In step S120, the security device 200 can transmit an access request to the decryption server 400. The decryption server 400 can check at least one of the IP address and the MAC address of the device that has transmitted the access request while receiving the access request. In step S122, the decryption server 400 can compare the IP address and MAC address of the device that has transmitted the access request with the information registered in advance. When the IP address and MAC address of the device that has transmitted the access request correspond to the information registered in advance, the decryption server 400 can permit the access of the device. In addition, the decryption server 400 can check from the IP address and MAC address to which channel the security device 200 that has made the access request belongs. In step S124, the decryption server 400 can randomly select any one of the plurality of public key and private key pairs stored in advance. In this case, all of the private keys may already be encrypted by the master key described above. The decryption server 400 can manage the selected public key and private key pair in correspondence with the security device 200 and the channel to which the security device 200 belongs. In step S130, the security device 200 can set up the SSL connection with the decryption server 400. The security device 200 can receive public key information from the decryption server 400. 
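A minimal sketch of the idea behind S116 described above, namely keeping the generated private keys only in encrypted form under the master key, is shown below in Python using the third-party cryptography package. The use of RSA key pairs, Fernet for the master-key encryption, and the variable names are assumptions for illustration, not the specific algorithms required by the embodiment.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

master_key = Fernet.generate_key()      # stand-in for the key from the HSM
master_cipher = Fernet(master_key)

encrypted_private_keys = {}             # channel -> encrypted private key
public_keys = {}                        # channel -> public key for the certificate

for channel in ("CH1", "CH2", "CH3"):
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )
    # Store only the master-key-encrypted form; the plaintext PEM is discarded.
    encrypted_private_keys[channel] = master_cipher.encrypt(pem)
    public_keys[channel] = private_key.public_key()
    del private_key, pem

# Later, a private key is recovered only transiently, e.g., to decrypt a symmetric key.
recovered = serialization.load_pem_private_key(
    master_cipher.decrypt(encrypted_private_keys["CH1"]), password=None
)

Because only the encrypted forms are retained, a compromise of the server's storage does not by itself expose the per-channel private keys.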
Referring back to FIG. 1 for a moment, since a different key pair is selected for each of the channels CH1, CH2, and CH3 illustrated in FIG. 1, each of the security devices 200a, 200b, and 200c can be assigned a different public key from the decryption server 400. Referring to FIG. 3 again, in step S140, the security device 200 can generate a symmetric key (or session key). The symmetric key can be used for encryption and decryption of video data, which will be described later. In step S150, the security device 200 can encrypt the symmetric key using the public key comprised in the certificate received in step S130. The security device 200 can transmit the encrypted symmetric key to the decryption server 400. In step S155, the decryption server 400 can receive the encrypted symmetric key. In step S156, the decryption server 400 can request master key information from the master key management unit 450, and the master key management unit 450 can provide the master key. In step S157, the decryption server 400 can decrypt the private key corresponding to the public key used for encryption of the symmetric key received in step S155. As described above, the private key can be encrypted and stored in the decryption server 400. The decryption server 400 can identify the security device 200 that has transmitted the encrypted symmetric key and decrypt the private key corresponding to the channel to which the security device 200 belongs. The decryption server 400 can decrypt the private key using the master key obtained in step S156. When the decryption of the private key is completed, the decryption server 400 can delete the master key information from its own memory. FIG. 3 illustratively describes that the decryption operation of the private key is performed by the decryption server 400. However, the embodiment is not limited thereto. For example, the decryption operation of the private key can be performed by the master key management unit 450. The decryption server 400 can provide the private key to be decrypted to the master key management unit 450, and the master key management unit 450 can decrypt the private key and provide the private key to the decryption server 400. In step S158, the decryption server 400 can decrypt the encrypted symmetric key. Accordingly, the symmetric key can be securely shared between the security device 200 and the decryption server 400. In addition, since the symmetric key is set differently for each channel to which the security device 200 belongs and the procedure for sharing the symmetric key is performed individually, even if the symmetric key of one channel is exposed to the outside, the security of the other channels can be maintained. After decrypting the symmetric key, the decryption server 400 can delete the private key information used to decrypt the symmetric key. Only encrypted private key information may be stored in the decryption server 400. Accordingly, even if the decryption server 400 is attacked later, the private key information may not be exposed to the outside. Steps S120 to S158 can be performed for each of the security devices 200a, 200b, and 200c illustrated in FIG. 1. Accordingly, even if the symmetric key of any one of the channels to which the security devices 200a, 200b, and 200c belong is exposed to the outside, the security of the other channels can still be maintained. FIG. 4 is a flowchart illustratively describing a procedure for transmitting and receiving a control signal and a data packet between the photographing device 100 and the video control device 500. 
Referring toFIG.4, a proxy channel can be formed between the security device200and the decryption server400, in step S160. In step S170, the video control device500can transmit a control signal for the photographing device100to the decryption server400based on the user's input or its own calculation result. The control signal may comprise a signal for controlling the operation of the photographing device100, a signal requesting the photographing device100to transmit video data, etc. The decryption server400can identify a destination address of the control signal and transmit the control signal to the security device200corresponding to the identified destination address. In the decryption server400, the processor can generate a first thread. The processor of the decryption server400can transmit a control signal from the video control device500to the security device200using the first thread. In step S180, the photographing device100can transmit video data to the security device200. In step S182, the security device200can encrypt the received video data packet with the symmetric key generated in step S140ofFIG.3. In step S185, the security device200can transmit the encrypted data packet to the decryption server400. Since the video data is encrypted and transmitted, even if the encrypted data packet is stolen, the video data may not be exposed to the outside. In step S190, the decryption server400can decrypt the data packet to restore the video data. In step S195, the decryption server400can deliver the decrypted data to the video control device500. Through this, the video control device500can safely obtain desired video data. The processor of the decryption server400can generate a second thread for receiving video data and decrypting the video data. The processor of the decryption server400can perform an operation of decrypting and transmitting the data packet in the direction from the security device200to the video control device500using the second thread. The first thread may not perform an encryption function. That is, the packet in the direction from the video control device500to the security device200may not be encrypted. The processor220of the decryption server400may separate the first thread and the second thread, and may not assign an encryption function to the first thread. Through this, the time and cost of generating the thread can be saved, and the end time for the first thread can be advanced. In addition, since the first thread and the second thread share a memory and a file, the threads may communicate with each other as needed without intervention of the kernel. The examples described above are merely illustrative and the embodiment is not limited thereto. For example, some of the control signals in the direction from the video control device500to the security device200may be encrypted like the video data. The apparatus and method for maintaining security of video data according to exemplary embodiments have been described above with reference toFIGS.1to4. In at least one embodiment, security performance can be improved in the process of transmitting and receiving video data. According to at least one embodiment, encryption setting information of video data can be safely protected by the SSL protocol. According to at least one embodiment, since the private key for decrypting the symmetric key used for encryption and decryption of video data is encrypted and stored in the decryption server, it is possible to prevent private key information from being exposed to the outside.
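The two proxy threads described above can be pictured with the following sketch. It is illustrative only: Python queues stand in for the network sockets, AES-GCM with a prepended 12-byte nonce stands in for the packet encryption performed by the security device200, and it reuses the assumed session key sk_srv from the previous sketch.

```python
# Sketch of the two proxy threads in the decryption server400 (assumed design).
import queue, threading
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def control_thread(from_control_device, to_security_device):
    # First thread: forwards control signals without any crypto work.
    while True:
        signal = from_control_device.get()
        to_security_device.put(signal)

def video_thread(session_key, from_security_device, to_control_device):
    # Second thread: decrypts encrypted video packets before forwarding them.
    aead = AESGCM(session_key)
    while True:
        packet = from_security_device.get()
        nonce, ciphertext = packet[:12], packet[12:]
        to_control_device.put(aead.decrypt(nonce, ciphertext, None))

q_ctrl_in, q_ctrl_out = queue.Queue(), queue.Queue()
q_vid_in, q_vid_out = queue.Queue(), queue.Queue()
threading.Thread(target=control_thread, args=(q_ctrl_in, q_ctrl_out), daemon=True).start()
threading.Thread(target=video_thread, args=(sk_srv, q_vid_in, q_vid_out), daemon=True).start()
```

Keeping the encryption work entirely inside the second thread reflects the separation described above, in which the control-signal path carries no cipher state.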
According to at least one embodiment, it is possible to suppress the occurrence of a delay time between the video control device and the photographing device while improving the security performance of the camera network. According to at least one embodiment, even if the security of any one of the channels formed between the security device and the photographing devices is breached, security stability of the video data network can be strengthened by maintaining the security of other channels. The embodiments described above can be implemented by a hardware component, a software component, and/or a combination of the hardware component and the software component. For example, the apparatus, method, and components described in the embodiments can be implemented using one or more general purpose or special purpose computers, such as, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate (FPGA) array, a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. A processing device can execute an operating system (OS) and one or more software applications running on the operating system. In addition, the processing device can also access, store, manipulate, process, and generate data in response to execution of software. For convenience of understanding, although one processing device may be described as being used, a person of ordinary skill in the art will recognize that the processing device may comprise a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device can comprise a plurality of processors or one processor and one controller. In addition, the processing device can also have other processing configurations, such as a parallel processor. Software can comprise a computer program, codes, instructions, or a combination of one or more of these, and can configure the processing device to operate as desired or can, independently or collectively, instruct the processing device to operate as desired. Software and/or data can be permanently or temporarily embodied on any kind of machine, component, physical device, virtual equipment, computer storage medium or device, or signal waves being propagated to be interpreted by the processing device or to provide instructions or data to the processing device. Software can be distributed over networked computer systems and stored or executed in a distributed manner. Software and data can be stored in one or more computer-readable recording media. The method according to the embodiment can be recorded in a computer-readable medium by being implemented in the form of program instructions that can be executed through various computer means. The computer-readable medium can comprise program instructions, data files, data structures, etc. alone or in combination. The program instructions recorded on the computer-readable medium can be specially designed and configured for the embodiment, or may be known to and available to a person of ordinary skill in computer software. Examples of the computer-readable recording medium comprise a magnetic medium such as a hard disk, floppy disk, and magnetic tape, an optical medium such as a CD-ROM and DVD, and a magneto-optical medium such as a floppy disk, and a hardware device specially configured to store and execute program instructions, such as a ROM, RAM, flash memory, etc. 
Examples of the program instructions comprise not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like. The hardware device described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa. Although the embodiments have been described with reference to the limited drawings as described above, a person of ordinary skill in the art may apply various technical modifications and variations thereto based on the matters described above. Even if the described techniques are performed in an order different from the described method, and/or the components of the described system, structure, apparatus, circuit, etc. are coupled or combined in a form other than the described method or replaced or substituted by other components or equivalents, appropriate results can be achieved. In at least one embodiment, security performance can be improved in the process of transmitting and receiving video data. According to at least one embodiment, encryption setting information of video data can be safely protected by the SSL protocol. According to at least one embodiment, since the private key for decrypting the symmetric key used for encryption and decryption of video data is encrypted and stored in the decryption server, it is possible to prevent private key information from being exposed to the outside. According to at least one embodiment, it is possible to suppress the occurrence of a delay time between the video control device and the photographing device while improving the security performance of the camera network. According to at least one embodiment, even if the security of any one of channels formed between the security devices and the photographing devices is breached, security stability of the video data network can be strengthened by maintaining the security of other channels. Although the apparatus and method for maintaining security of video data have been described with reference to the specific embodiments, they are not limited thereto. Therefore, it will be readily understood by those skilled in the art that various modifications and changes can be made thereto without departing from the spirit and scope of the present invention defined by the appended claims.
11943491
DETAILED DESCRIPTION OF EMBODIMENTS With reference toFIG.1A, a conditional access server102has a processing environment and communications interface and communicates over a communications network104, for example the Internet, with a number of content consumption client devices106, which implement conditional access functionality. In some embodiments, the communication network104and conditional access server102(as well as any separate content server) are not under the control of the same entity. This is commonly referred to as over the top OTT provision of content. In other embodiments, the communication infrastructure for the communication network104and content provision is under the control of the same entity, for example an Internet service provider. In yet other embodiments, the communications network is a dedicated broadcast network, for example a satellite or cable network. In those circumstances, the conditional access server102is typically provided at the cable or satellite headend. Transmission of the content may be by broadcast, multicast, pointcast or on request. The conditional access server102provides a content key CK and access condition AC for a given content or stream to the client devices106. The CK and AC may be provided in a variety of different formats but are, typically, provided together in an entitlement control message ECM. The CK is provided encrypted with a transport key TK as [CK]TK(generally, the notation [x]yis used herein to indicate clear text x encrypted with key y) and the AC is provided in a form so that it can be authenticated by the client device106. The conditional access server102, in some embodiments, also provides encrypted media content, for example in the form of an encrypted media stream, to the client devices106. In other embodiments, encrypted media content is provided by a separate server, not illustrated inFIG.1A. Specifically, media content provided to the client devices106encrypted with a content key CK is referred to as [media content]CK. Various digital video transmission standards may be used for transmitting the media content, for example MPEG-DASH. In some embodiments, the conditional access server102has access to a subscriber database containing subscriber information, such as the content a subscriber is authorised to access, the identity and/or number of client devices a subscriber may use to access the content, and so forth. This information is used by the conditional access server to determine content consumption rights for a given user and to generate a CK and AC accordingly, for example in the form of an ECM. In some embodiments, the conditional access server102is configured, by way of a suitably programmed general purpose or dedicated processor, to periodically verify conditions for continued access by a client device106to a given content and transmit a token to the client device106indicating that continued access is authorised. The conditions may include a check whether licensing conditions are complied with, such as the number of client devices106associated with a given user consuming the content, a period of time since a transaction authorising access to the content was completed, a period of time since the content was first accessed, and so forth. 
The conditions may also include a check on other feedback signals, such as a tamper alarm providing an alert regarding the likelihood of an attempt at circumventing access control at the client device106or a signal indicating a change in an output control status at the client device106. An example of a change in output control status is in the context of a set top box being connected to a TV screen using a HDMI (High-Definition Multimedia Interface) cable, with content being protected in transit over the HDMI cable using the HDCP (High-bandwidth Digital Content Protection) protocol. A relevant output status change would, for example, be a change in the status of the HDCP protection, for example HDCP being disabled. The periodic verification may take place at predetermined intervals, for example determined by an elapsed time period, by an amount of data transmitted or consumed or by a change in cryptographic information (such as an initialisation vector IV for use in conjunction with the CK to decrypt the content). Specifically, in some conditional access implementations, the IV is a random number known to the client device106, which changes from one data portion (often referred to as a chunk) to the next, for example by adding the size of the previous chunk to the previous IV value. This change in the IV prevents a compromised device from simply treating the whole content as one portion/chunk and can further be used to trigger periodic verification and/or the periodic rendering unuseable of a stored CK, as described for some embodiments below in detail. With reference toFIG.1B, an embodiment of a client device106is now described. The client device can, for example, be a set-top-box without a display, an integrated receiver decoder, an integrated television, a personal computer, or a mobile device such as a smart phone or tablet. The client device106comprises a network communication interface108, a rich execution environment REE110, which comprises the device normal operating system, a network communications adapter, user interface functionality, graphics and video functionality and, according to some embodiments, a portion of a content processing and descrambling module for processing and outputting decrypted content to the user or to a video/display processor for further processing, in conjunction with content access functionality implemented in a trusted execution environment TEE112. The TEE112provides an execution environment that runs alongside and is isolated from the REE110. The TEE112is configured to protect its assets from general software attacks and defines rigid safeguards as to data and functions that a program can access from outside the TEE112. A TEE is a secure area that ensures that sensitive data is stored, processed and protected in a trusted environment. A TEE's ability to offer safe execution of authorized security software, known as ‘trusted applications’, enables it to provide end-to-end security by enforcing protection, confidentiality, integrity and data access rights. The TEE112further implements functionality which provides the CK to a descrambler, evaluates periodically whether access to the content is to be maintained, and prevents the descrambler from using the CK to decrypt the encrypted media content if the evaluation is negative. 
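The chunk-to-chunk IV progression mentioned above (the next IV being the previous IV plus the size of the previous chunk) can be sketched in a few lines; the 128-bit wrap-around and the sample values are assumptions for illustration only.

```python
# Minimal sketch of the per-chunk IV progression described above.
def next_iv(prev_iv: int, prev_chunk: bytes) -> int:
    return (prev_iv + len(prev_chunk)) % (1 << 128)

iv = 0x0123456789ABCDEF                     # initial IV known to the client (illustrative)
for chunk in (b"a" * 65536, b"b" * 32768):  # two content chunks
    # ...decrypt `chunk` with CK and `iv`...
    iv = next_iv(iv, chunk)                 # the change can also trigger re-verification
```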
In some embodiments, the TEE112further comprises a secure element SE114, which implements those functions requiring the highest degree of security, for example the evaluation and authentication of the AC and the decryption of the CK. A SE provides enhanced security using software and tamper resistant hardware. It allows high levels of security and can work together with a TEE. The SE may include a platform onto which applications can be installed, personalized and managed. It comprises hardware, software, interfaces, and protocols that enable the secure storage of certificates and execution of applications, such as for access rights evaluation. The SE may be implemented in different forms such as a Universal Integrated Circuit Card (UICC), or a device linked by NFC (Near Field Communication). The SE may be provided as a separate chip or secure device, which can be inserted into a slot of the client device106. The SE can also be provided embedded in the client device106. The SE may include one or more security domains, each of which includes a collection of data that trust a common entity (i.e., are authenticated or managed using a common or global cryptographic key or token). Specifically, according to some embodiments, the TEE112provides access to the CK for the descrambler by:decrypting the received encrypted CK and storing it in clear text in a CK table accessible to the descrambler to load the CK into a corresponding register;decrypting the received encrypted CK, re-encrypting it with a session key SK and storing it in the table in that form, in conjunction with providing a mechanism to load in the CK into the register that includes decrypting the CK in the table with the SK. In either embodiment, the TEE112implements one or more of the following to prevent access of the descrambler to the CK:delete the CK from the CK table in response to loading the CK into the register, or at another point in time between evaluations;change the SK available for loading the CK or globally without re-encrypting the CK stored in the table, thereby rendering the CK unusable with the new SK, in response to loading the CK into the register, or at another point in time between evaluations;clear the CK from the register periodically, thereby forcing re-evaluation of the AC to re-decrypt the CK in time for when the CK is cleared (or shortly thereafter so as not to affect decryption), so that the CK can be loaded again into the register. In some or all of these embodiments, some or all of these functions are implemented in dedicated hardware to further reduce the risk of a successful attack on the conditional access system. Further, in some embodiments, a portion of the described functionality, in particular regarding the maintenance and clearing of the register, may be implemented in the TEE112, in some embodiments in dedicated hardware. The periodic evaluation of whether access is to be maintained or not is done, depending on the embodiment, based on device internal criteria inside the TEE112, or may be done server side, in which case the periodic evaluation includes testing for the receipt of an authenticated token from the server102over the communications network104by the communication modules in the REE110and evaluation of the token in the TEE112or SE114. Of course, in some embodiments, both approaches are combined. With reference toFIG.2, a server-side process in accordance with a specific embodiment is now described. 
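Before turning to the FIG.2 process, the second of the CK-handling options listed above (storing [CK]SK and invalidating it by changing SK without re-encryption) can be modelled with the following minimal sketch. Fernet is used as a stand-in for encryption under SK, and the class and method names are invented for the example.

```python
# Minimal in-memory model of a CK table holding [CK]SK, where rotating SK
# without re-encrypting renders every stored entry unusable (assumed names).
from cryptography.fernet import Fernet, InvalidToken

class CkTable:
    def __init__(self):
        self.session_key = Fernet.generate_key()
        self.entries = {}                       # content id -> [CK]SK

    def store(self, content_id, clear_ck):
        self.entries[content_id] = Fernet(self.session_key).encrypt(clear_ck)

    def load_into_register(self, content_id):
        # Raises InvalidToken if SK has been rotated since the CK was stored.
        return Fernet(self.session_key).decrypt(self.entries[content_id])

    def rotate_sk(self):
        # No re-encryption: existing entries can no longer be decrypted,
        # forcing a fresh AC evaluation before the CK can be used again.
        self.session_key = Fernet.generate_key()
```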
At step202, the server102sends an encrypted content key [CK]TKencrypted with a transport key TK over the communications network104to the client106, together with access condition(s) AC that determine whether the client device106may decrypt CK with TK (the latter being already present in the client device106). At step204, the server102evaluates a server-side continued access condition, for example including one or more of the conditions discussed above, and, at step206, sends an authenticated token over the network104to the client device106, to enable the client device to benefit from continued access to the content corresponding to CK. If the evaluation is negative, the token is not sent and the client device106will discontinue access, as discussed below. Subsequent to sending (or not sending) the token, a delay208is implemented such that steps204and206are sufficiently synchronised with the expectation of a token at the client device106to ensure continued access while evaluation at step204is positive. The delay may be implemented in terms of a time, an amount of data sent by the server102to the client device106, an estimate of the amount of data decrypted by the client device106to present corresponding content, a change in an IV sent to the client device106from server102or other server for use in decrypting the content in conjunction with CK, or an estimate of when a locally generated IV at the client device106may change. Further, in particular in the context of OTT implementations, the client device106, in some embodiments, regularly requests new data (rather than the data being pushed to the client device as for example in a broadcast context), and this request for data triggers and/or synchronises the sending of the token by the server102. With reference toFIG.3, a corresponding client-side process is now described. At step302, the received AC is authenticated and evaluated and, in the event of a positive evaluation, access to the CK is enabled for the descrambler at step302, with the encrypted media content being decrypted by the descrambler using the CK, as described above, at step304. After a predetermined interval, for example a predetermined period of time determined by a clock signal, a predetermined amount of data having been decrypted at step304, a change in an IV received or generated, and so forth, that is, at a time t1, the AC is re-evaluated at step306and, in case of a positive determination because a token has been received in time for the interval ending at t1, content continues to be decrypted at step308. This re-evaluation is carried out at each interval. If at a re-evaluation at time tn at step310the re-evaluation is negative because no token has been received in time for tn, the decryption of the media content fails at step312. It will be understood that the initial evaluation of the AC may differ from subsequent re-evaluations, for example the re-evaluations may only check for the receipt of an authenticated token since the last re-evaluation. Equally, in some embodiments, the re-evaluation may be exclusively, or in part, based on internal factors inside the client device106, with or without reliance on receipt of a token. In order to allow further access, the received token must match an expected token.
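The client-side loop of FIG.3 can be summarised with the sketch below. The HMAC-over-counter token schedule, the queue standing in for network delivery and all names are assumptions chosen only to make the example self-contained; they are not the claimed scheme.

```python
# Sketch of the FIG.3 client-side loop: one interval's worth of content is
# decrypted, then a fresh authenticated token is required before continuing.
import hashlib, hmac, queue

def expected_token(shared_secret: bytes, interval: int) -> bytes:
    # Both server102 and client device106 can derive this value independently.
    return hmac.new(shared_secret, interval.to_bytes(8, "big"), hashlib.sha256).digest()

def consume(chunks, token_queue, shared_secret, descramble):
    """Yield decrypted chunks until a re-evaluation fails (step312)."""
    for interval, chunk in enumerate(chunks):
        if interval > 0:                              # re-evaluations at t1 ... tn (steps 306/310)
            try:
                token = token_queue.get(timeout=5.0)  # token delivered over the network
            except queue.Empty:
                raise PermissionError("no token received in time: access discontinued")
            if not hmac.compare_digest(token, expected_token(shared_secret, interval)):
                raise PermissionError("token mismatch: access discontinued")
        yield descramble(chunk)                       # steps 304/308
```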
While in some embodiments the token is of a fixed value known to both the server102and the client device106, the security of token evaluation can be improved by changing the token from time to time, for example after each evaluation, in a way that can be predicted by the client device106. For example, a counter increment or timestamp could be used to change the token between evaluations or the token can be generated or derived by any suitable cryptographic method that is synchronised between the server102and the client device106. With reference toFIG.4, a specific embodiment of a client device106is now described. It will be appreciated that this embodiment can be used as a specific implementation of the embodiments described above, to implement the specific methods described above, and in particular to periodically reevaluate access rights by means of receipt of a token or otherwise. However, the embodiment is more widely applicable in any context in which it is desirable to enforce periodic re-evaluation of access conditions in a conditional access system, in particular in an OTT context. In this embodiment, the client device106comprises a REE402and a TEE404. The REE is configured to receive encoded content406for descrambling by a descrambler408and corresponding CK and AC410,412for storage in a conditional access kernel414. CK410is encrypted with a transport key TK and AC412comprises a flag referred to as a secure stop bit SSB herein, which indicates whether the availability of the CK is subject to periodic enforcement of a re-evaluation of the AC (or part of the AC, for example relating to the periodic receipt of a token) or not. This will be discussed in further detail below. The conditional access kernel414is configured to request evaluation of the AC and decoding of the CK in coordination with the encoded content406being provided to the descrambler408by the REE402, on request from the descrambler406, periodically triggered by an amount of data having been sent to or been decoded by the descrambler406, for example in good time for the end of a chunk, or in response to a change in a received or generated IV. In some embodiments the REE controls the pushing of a next content chunk to the DSC408for descrambling as well as sending the CK and AC to the TEE and synchronises these operations, for example sending the CK and AC in good time for when the next chunk is required and then, at the appropriate time and once the CK is ready in the CKTable422, pushing the next chunk to the DSC408for descrambling. The TEE404implements an AC evaluation module416configured to evaluate an AC passed to it by the conditional access kernel414for evaluation and a key decryption module418in possession of the TK and configured to decrypt [CK]TKpassed to it from the conditional access kernel414if cleared to do so by the AC evaluation module416. A first key protection module420is configured to receive the decrypted content key CK from the key decryption module418and to encrypt it with a session key SK, discussed further below. The first key protection module is also configured to encrypt AC with SK in some embodiments. A key table422is configured to receive the re-encrypted content key [CK]SKand encrypted access condition [AC]SKfrom the first key protection module420and to store [CK]SKand [AC]SK. 
By storing both CK and AC (which includes the SSB), the key table422can store CKs that are subject to periodic re-evaluation of access rights and those that are not, since the SSB will indicate to the relevant modules how the CK should be handled. The TEE404further implements a second key protection module424configured to read [CK]SKand [AC]SKfrom the key table422. As will be described below, to force periodic re-evaluation of the AC, the SK is changed periodically, so that each new SK needs to be negotiated and synchronised between the key protection modules422and424. To this end, each key protection module, or only one of them as appropriate, can request a new SK from an SK generator426. The SK generator426comprises a random or pseudo-random number generator to generate the SK. The SK generator426communicates the new SK to the key protection modules once it is generated. In some embodiments the SK generator426is incorporated in one of the key protection module and the key protection modules communicate directly to negotiate each new SK. In some embodiments each key protection module has a copy of the same predictable number generator and the key protection modules negotiate when to generate a new SK and then each key protection module generates the next value independently. The second key protection module426is further configured to decrypt the CK and AC, to extract the SSB from the AC and to store the CK and SSB in a register428of the descrambler408. A chunk counter430is configured to monitor an amount of data decoded by a decoding unit432(in some embodiments implemented in the REE402). The decoding unit432is configured to decode the encrypted content406to output “clear text” digital video content434for downstream processing by a video processor and display to a user of the client device406) and to clear the register428when a predetermined amount of data has been decoded, eg after decoding each chunk. In alternative embodiments, a change in IV used in conjunction with the CK is monitored and used to trigger clearing of the register428. In yet other embodiments, a clock signal is used for this purpose. It will be understood that the conditional access kernel414is adapted accordingly, so that the AC is re-evaluated to store a new [CK]SKin the key table in time for the key protection module to re-stock the register428with the CK. In some embodiments, the second key protection module424is configured to read [CK]SKand [AC]SKfrom the key table422in response to a request from the descrambler when the register428needs an AC written or re-written to the register428. In other embodiments, the key protection module424reads [CK]SKand [AC]SKfrom the key table422in response to them being stored there, for example triggered by a signal from the key protection module420. The CK and SSB are then inserted in the register428on detection that the register has been cleared, in response to a signal from the chunk counter430indication that the register has been cleared, or in response to a signal from the descrambler408. In some embodiments, the key table422stores a content identifier for each CK, identifying the corresponding content with which the CK is associated. A number of implementations, as illustrated inFIG.4B, are then possible to store CKs with differing protection levels:All CKs are stored encrypted with the same SK and require decryption by the second key protection module422. 
There is a global SK, meaning that the REE must request fresh decryption and re-encryption of CK with the current SK each time the relevant CK is to be provided to the descrambler408to decrypt a given content, irrespective of whether the SSB is set or not. For those CK where the SSB is set, the CK will be cleared from the descrambler408at the end of each chunk and the REE must request decryption and re-encryption each time it pushes a new chunk to the descrambler408.CKs for which the SSB are set are stored encrypted with the same global SK that changes periodically, for example in response to a chunk end as described above. For these CKs, the REE requests decryption and re-encryption of the CK each time the descrambler408requires a fresh CK (for each new chunk, as the SSB is set). Since the decryption and re-encryption of the CK is triggered just before the CK needs to be inserted in the descrambler408, the SK can change globally due to the short validity period that is required for [CK]SK. In implementations where this requirement is relaxed and longer validity periods are required for [CK]SK, an index of the correct SK has to be maintained for each content, for example in the key table422. For example, in some implementations, the change in the SK is achieved by incrementing the SK with a counter upon renegotiation or periodically and an index to the correct value of the counter is maintained in the key table422for each CK/corresponding content. CKs for which the SSB is not set are stored in clear text.As in the preceding implementation, CKs for which the SSB are set are stored encrypted with the same global SK that changes periodically. The change may be by renegotiation or by adding a counter to the SK, as above. CKs for which the SSB is not set are stored re-encrypted with a different SK that does not change or changes less frequently. In implementations where the SK change is achieved by a counter as described above, the different SK could be the SK without the counter added. The different, static SK could alternatively be a separate SK negotiated by the key protection modules420and424for non-SSB CKs. In either case, the SK (or all SKs as the case may be) may be re-negotiated periodically (but less frequently), or may be re-negotiated at device start-up, to force re-evaluation of all keys.In all of these options, the AC may be stored in clear text in the key table422so that the SSB is accessible, or the AC may be stored encrypted with, for example the SK or an incremented version of it, along with the SSB being stored separately to be accessible to the second key protection module424without knowledge of the SK (or the correct version of it). The AC may be signed by a cryptographical method to protect against its modification. For example, the AC and CK may be concatenated and the result encrypted with an algorithm using a chaining mode (e.g. AES in CBC mode (Advanced Encryption Standard in Cipher Block Chaining mode) using, for example, SK as the key. In another example, the AC is signed with a MAC (Message Authentication Code), AES CGM (Galois/Counter Mode) or AES ECC (Elliptic Curve Cryptography) algorithm. In some embodiments, AC evaluation module416and key decryption module418are implemented in a secure element436. In some embodiments, the AC, for example in the form of an ECM, is provided to the secure element436by way of a smart card and corresponding reader and the evaluation of the ECM and decryption of the CK is done in the secure element436. 
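For the counter-indexed variant described above, the following minimal sketch shows a key table that records which SK version each stored CK was encrypted with. Deriving the effective key as HMAC(base SK, counter) is an assumption standing in for "incrementing the SK with a counter", and the entry layout and names are illustrative only.

```python
# Sketch of keeping an SK index per stored CK for a counter-advanced global SK.
import hashlib, hmac

def effective_sk(base_sk: bytes, counter: int) -> bytes:
    return hmac.new(base_sk, counter.to_bytes(4, "big"), hashlib.sha256).digest()

key_table = {}   # content id -> {"ck_sk": ..., "sk_index": ..., "ssb": ...}

def store_ck(content_id, ck_encrypted_with_current_sk, current_counter, ssb):
    key_table[content_id] = {"ck_sk": ck_encrypted_with_current_sk,
                             "sk_index": current_counter,   # which SK version to use
                             "ssb": ssb}

def sk_for(content_id, base_sk):
    return effective_sk(base_sk, key_table[content_id]["sk_index"])
```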
In some embodiments, the first key protection module420is also implemented in the secure element436. With reference toFIG.5, the operation of embodiments described above with reference toFIG.4Ais now described, in case the SSB is set so that periodic enforcement of re-evaluation of the AC is enabled. At step502, an SK is generated. In most of the described implementations (see above) a global SK is generated (and may be used together with a counter, as described above). An SK is first generated on boot up of the client device106and may then be renegotiated periodically, as described above. At step504an encrypted media content is received and at step506a CK and AC are received and stored by the conditional access kernel414. At step508, the conditional access kernel414sends the AC to the access evaluation module416and the CK to the decryption module418to trigger an evaluation of the AC and, if the evaluation is positive, the decryption of the CK. At step510, the decrypted CK is passed to the first key protection module420to encrypt the CK with the SK and store the result in the key table422. At step512, the CK is read and decrypted by the second key protection module424using the session key SK, and the result is stored in the register428. In addition, the second key protection module424also stores the SSB in the register to indicate that re-evaluation of the AC is being enforced. In response to the reading and/or storing, a new SK is negotiated and replaced in the key protection modules420and424at step514. For example, in some embodiments, the second key protection block424triggers this re-negotiation in response to reading and/or storing the CK. At this stage, if the CK needs to be placed in the register428again, the process of steps508to512needs to be repeated in order to store CK encrypted with the new SK in the key table422to enable it being decrypted and stored in the register428by the second key protection module424. At step516, a chunk or other predefined portion (predefined amount of data, a portion between two changes of IV) of the encrypted data stream is decrypted by the decryption module432using CK in the register428and output for processing and display downstream. At step518the chunk counter430detects that the predetermined portion of content has been decoded and, in response to that, clears the CK from the register428if the SSB is present in the register428, thereby preventing further decoding of the content406unless the AC is evaluated again and the subsequent steps508to512are repeated to make [CK]SKencrypted with the new SK available to the second key protection block424again. Therefore, in order to continue decoding the content406, the process loops back to step508. To this end, the re-evaluation of the AC (and subsequent decryption of the CK, if successful) is triggered by the conditional access kernel414sending [CK]TKand AC to the decryption module418and the access evaluation module416, respectively, in response to a request from the descrambler408or, for example, in response to the conditional access kernel414independently requesting re-evaluation, for example based on monitoring IV changes or an amount of data input to or output from the descrambler408. The above description is made in terms of a process when enforced re-evaluation of the AC is enabled, that is the SSB is set to a value indicating that this should be enabled, eg 1.
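The clearing of the register at step518 can be modelled with the toy sketch below; the class names, the byte-count trigger and the fixed chunk size are all assumptions for illustration rather than features of the described hardware.

```python
# Toy model of the chunk counter430 clearing the descrambler register428 at a
# chunk boundary when the secure stop bit is set (invented names).
class Register:
    def __init__(self):
        self.ck, self.ssb = None, False

class ChunkCounter:
    def __init__(self, register, chunk_size):
        self.register, self.chunk_size, self.decoded = register, chunk_size, 0

    def on_decoded(self, nbytes):
        self.decoded += nbytes
        if self.decoded >= self.chunk_size and self.register.ssb:
            self.register.ck = None          # forces re-evaluation before the next chunk
            self.decoded = 0

reg = Register(); reg.ck, reg.ssb = b"\x11" * 16, True
ChunkCounter(reg, chunk_size=1 << 20).on_decoded(1 << 20)
assert reg.ck is None
```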
In the event that the SSB is set to disable enforced re-evaluation, eg set to 0, the process described above with reference toFIG.5is modified in that, at step514one or more of the following alternative actions are taken:the trigger for renegotiation of SK in the second key protection module424is suppressed, so that SK is not re-negotiated in response to the second key protection module reading [CK]SKfrom the key table422(although renegotiation may occur at other times);on renegotiation of the SK, [CK]SKis reencrypted so that it remains accessible with the current SK;a global SK (which is not changing) is used for all content for which the SSB is not set/set to 0; a new SK is produced by incrementing the SK by a counter each time a new SK is required for content for which the SSB is set/set to 1;the key protection modules420,424act to merely pass through a clear text version of CK (involving also a modification of step510), which is stored and remains accessible in the key table422. Further, additionally or alternatively, in dependence on the embodiment, the register428is only cleared at step518if an active SSB, eg with value 1, is stored in it, as described above, and otherwise retains the CK until the content changes or the client device106is rebooted. With reference toFIG.6, an alternative specific embodiment is now described, which, in overview, removes the key protection modules420and424and the SK generator426described above with reference toFIG.4, and the corresponding periodic renegotiation of SK, and instead enforces re-evaluation of the AC by periodic deletion of CK from the key table422. However, in yet a further embodiment, key protection using SK together with SK renegotiation and deletion of CK from the key table422are combined and both occur to enforce periodic re-evaluation of the AC. Specifically, turning toFIG.6, in which the various modules have retained their reference numeral used above in describing embodiments with reference toFIG.4, the functionality of the key table422has changed in that it receives and stores the CK and SSB in clear text. The functionality of the descrambler408has changed in that it reads the CK and SSB directly into the register428and triggers the deletion of CK in the key table422, responsive to that. In some embodiments the deletion and/or its trigger are implemented in dedicated hardware for added security and resilience against attacks to circumvent the periodic re-evaluation of AC. Modules420,424and426are not present. With reference toFIG.7, a process implemented using the specific embodiment just described with reference toFIG.6is now described. The process is similar to the process described above with reference toFIG.5and like reference numeral are used for like process steps. Step506is omitted and steps510to514have been modified and will be referred to as steps710to714, respectively. Specifically, these steps are changed in light of the replacement of renegotiation of a SK with deletion of a clear text CK from the key table422. Describing, then, the differences inFIG.7, at step710CK is stored in the key table422in clear text and read into the register426at step712. At step714, the CK is deleted from the key table422in response to the CK having been read into the register426. 
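The FIG.6/FIG.7 variant just described, in which the key table holds CK in clear text and deletes it as soon as it is read into the register, can be sketched as follows; the class and method names are invented for the example.

```python
# Minimal model of the delete-on-read key table of FIG.6/FIG.7 (step 714).
class ClearTextKeyTable:
    def __init__(self):
        self.entries = {}                 # content id -> (ck, ssb)

    def store(self, content_id, ck, ssb):
        self.entries[content_id] = (ck, ssb)

    def read_and_delete(self, content_id):
        ck, ssb = self.entries.pop(content_id)   # deletion triggered by the read
        return ck, ssb

table = ClearTextKeyTable()
table.store("movie-1", b"\x22" * 16, ssb=True)
ck, ssb = table.read_and_delete("movie-1")
assert "movie-1" not in table.entries     # any further chunk forces a fresh AC evaluation
```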
It will be understood that in embodiments that combine the two approaches, the process described above with reference toFIG.5is modified in that in addition to the steps of that process, enforced deletion of the CK from the key table422occurs at some point in the process. It will be understood that the specific embodiments described above with reference toFIGS.4to7are suitable in any context in which periodic re-evaluation of an AC should be enforced, in particular if the corresponding CK has a prolonged period of validity and is stored persistently in the client device, as is the case in OTT applications. The re-evaluation of the AC may be based on conditions local to the client device106and/or may rely on the receipt of a token from a conditional access server102, as discussed above. In some embodiments, the AC is such that all but the receipt of token condition are evaluated only once at an initial step, with subsequent evaluations only requiring receipt and authentication of the token. In some embodiments, the conditional access is fully under the control of the server102and the AC at the client only requires the continued presence of an authenticated and up to date token at the client device to provide continued access to the CK to the descrambler408. Various modifications, combinations and juxtapositions of the features described above that are within the scope of the appended claims will occur to a person skilled in the art. To take a few examples, the SSB may be received and/or stored independently and/or separately from the AC, in which case the AC is not stored in the key table in some embodiments. The descrambler may be fully implemented in either the TEE or the REE, rather than straddling the two. In fact, all modules described may be implemented in either the TEE or REE depending on level of security required. Where the AC remains associated with the SSB, in the relevant embodiments it may be stored encrypted or in clear text in the second key protection module, or may be discarded with only the SSB stored in the second key protection module. Indeed, the AC and/or SSB may be stored elsewhere accessible to the second key protection module. More generally, where the location of storage of a particular quantity is referred to above, this is to be understood as a logical connection of accessibility, rather than a physical location, in some embodiments. Similarly, while a specific way of coordinating the negotiation of the SK between the key protection modules and the SK generator have been described above, it will be understood, that there are many possibilities for implementing this, including direct communication between these modules in addition to or instead of communication with the SK generator. Further, in embodiments described above, the register of the descrambler holding the CK is periodically cleared. This provides an arbitrary granularity in the enforcement of the re-evaluation of the AC and/or the receipt of a token by setting the periodicity accordingly. In alternative embodiments providing less control over the granularity of enforcing re-evaluation of the AC and/or the receipt of a token, the register content is only overwritten/changed when the content to be descrambled changes or the device is powered down, in which case re-evaluate of the AC can only be guaranteed on device start-up or when starting to descramble new content. 
In general, it will be understood that while the described embodiments are disclosed in different groupings and modules, some embodiments mirror the described groupings in terms of physical implementation, possibly with the implementation in dedicated hardware of some or all of the groupings and modules, while other embodiments regroup the described functionalities in different physical arrangements and the described modules and groupings are to be understood as logical groupings for the purpose of clarity of explanation of the associated functions, rather than for the purpose of limitation. Thus the described functions can be grouped differently in logical or physical groupings. Whether pertaining to the REE or TEE, the described functions can be implemented in one or more of software, firmware, middleware or hardware according to various embodiments. It will be understood that the above description has been made for the purpose of explanation of various embodiments and the disclosed techniques and not for the purpose of limitation of the scope of the appended claims.
11943492
DESCRIPTION OF EMBODIMENTS The following clearly describes technical solutions in embodiments of this application in detail with reference to accompanying drawings. In descriptions of embodiments of this application, unless otherwise stated. “/” indicates “or”. For example, A/B may indicate A or B. The term “or” in this specification merely describes an association relationship for describing associated objects, and indicates that three relationships may exist. For example, “A or B” may indicate the following three cases: Only A exists, both A and B exist, and only B exists. In addition, in the descriptions of embodiments of this application, “a plurality of” means two or more. The terms “first” and “second” mentioned below are merely intended for description, and shall not be understood as an indication or implication of relative importance or implicit indication of a quantity of indicated technical features. Therefore, a feature limited by “first” or “second” may explicitly or implicitly include one or more features. In the descriptions of embodiments of this application, unless otherwise specified, “a plurality of” means two or more. Next, the following describes a hardware architecture of a terminal330in embodiments of this application. The terminal330may be a device such as a smartphone, a tablet computer, a Bluetooth watch, or a Bluetooth headset. Embodiments of this application are described in detail herein by using the smartphone as an example. FIG.1is a schematic diagram of the hardware architecture of the terminal330. It should be understood that the terminal330shown inFIG.1is merely an example, and the terminal330may have more or fewer components than those shown inFIG.1, may have two or more components combined, or may have different component configurations. Various components shown in the figure may be implemented in hardware including one or more signal processing circuits or application-specific integrated circuits, software, or a combination of hardware and software. The terminal330may include a processor110, an external memory interface120, an internal memory121, a universal serial bus (universal serial bus, USB) interface130, a charging management module140, a power management module141, a battery142, an antenna1, an antenna2, a mobile communications module150, a wireless communications module160, an audio module170, a speaker170A, a receiver170B, a microphone170C, a headset jack170D, a sensor module180, a button190, a motor191, an indicator192, a camera193, a display194, a subscriber identity module (subscriber identification module, SIM) card interface195, and the like. The sensor module180may include a pressure sensor180A, a gyroscope sensor180B, a barometric pressure sensor180C, a magnetic sensor180D, an acceleration sensor180E, a range sensor180F, an optical proximity sensor180G, a fingerprint sensor180H, a temperature sensor180J, a touch sensor180K, an ambient light sensor180L, a bone conduction sensor180M, and the like. It can be understood that the structure illustrated in embodiments of the present invention does not constitute a specific limitation on the terminal330. In some other embodiments of this application, the terminal330may include more or fewer components than those shown in the figure, have some components combined, have some components split, or have different component arrangements. The components shown in the figure may be implemented by using hardware, software, or a combination of software and hardware. 
The processor110may include one or more processing units. For example, the processor110may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, or a neural-network processing unit (neural-network processing unit, NPU). Different processing units may be independent devices, or may be integrated into one or more processors. The controller may be a nerve center and a command center of the terminal330. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction fetching and instruction execution. A memory may be further disposed in the processor110, and is configured to store instructions and data. In some embodiments, the memory in the processor110is a cache. The memory may store instructions or data just used or cyclically used by the processor110. If the processor110needs to use the instructions or the data again, the processor may directly invoke the instructions or the data from the memory. This avoids repeated access and reduces waiting time of the processor110, thereby improving system efficiency. In some embodiments, the processor110may include one or more interfaces. The interface may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, a universal serial bus (universal serial bus, USB) interface, or the like. The charging management module140is configured to receive a charging input from the charger. The power management module141is configured to connect the battery142and the charging management module140to the processor110. A wireless communication function of the terminal330may be implemented by using the antenna1, the antenna2, the mobile communications module150, the wireless communications module160, the modem processor, the baseband processor, and the like. The antenna1and the antenna2are configured to transmit and receive electromagnetic wave signals. Each antenna in the terminal330may be configured to cover one or more communications frequency bands. Different antennas may be further multiplexed, to improve antenna utilization. The mobile communications module150can provide a solution, applied to the terminal330, to wireless communication including 2G/3G/4G/5G and the like. The mobile communications module150may include at least one filter, a switch, a power amplifier, a low noise amplifier (low noise amplifier, LNA), and the like. The mobile communications module150may receive an electromagnetic wave through the antenna1, perform processing such as filtering and amplification on the received electromagnetic wave, and transmit a processed electromagnetic wave to a modem processor for demodulation. 
The mobile communications module150may further amplify a signal modulated by the modem processor, and convert the signal into an electromagnetic wave for radiation through the antenna1. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium-high frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The wireless communications module160may provide a solution, applied to the terminal330, to wireless communication including a wireless local area network (wireless local area networks. WLAN) (for example, a wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (Bluetooth, BT), a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), a near field communication (near field communication, NFC) technology, an infrared (infrared, IR) technology, or the like. The wireless communications module160may be one or more components integrating at least one communications processing module. The wireless communications module160receives an electromagnetic wave through the antenna2, performs frequency modulation and filtering processing on the electromagnetic wave signal, and sends a processed signal to the processor110. The wireless communications module160may further receive a to-be-sent signal from the processor110, perform frequency modulation and amplification on the signal, and convert a processed signal into an electromagnetic wave for radiation through the antenna2. The terminal330implements a display function through the GPU, the display194, the application processor, and the like. The display194is configured to display an image, a video, and the like. The display194includes a display panel. The terminal330can implement a photographing function by using the ISP, the camera193, the video codec, the GPU, the display194, the application processor, and the like. The ISP is configured to process data fed back by the camera193. For example, during shooting, a shutter is pressed, and light is transmitted to a photosensitive element of the camera through a lens. The photosensitive element of the camera converts an optical signal into an electrical signal, and transmits the electrical signal to the ISP for processing, to convert the electrical signal into a visible image. The camera193is configured to capture a static image or a video. An optical image of an object is generated through the lens, and is projected to the photosensitive element. The digital signal processor is configured to process a digital signal, and may process another digital signal in addition to the digital image signal. The NPU is a neural-network (neural-network, NN) computing processor. The NPU quickly processes input information by referring to a structure of a biological neural network, for example, a transfer mode between human brain neurons, and may further continuously perform self-learning. The external memory interface120may be configured to be connected to an external storage card such as a micro SD card, to extend a storage capability of the terminal330. The internal memory121may be configured to store computer-executable program code. The executable program code includes instructions. The processor110runs the instructions stored in the internal memory121, to implement various function applications and data processing of the terminal330. 
The terminal330may implement audio functions such as music playing and recording through the audio module170, the speaker170A, the receiver170B, the microphone170C, the headset jack170D, the application processor, and the like. The audio module170is configured to convert digital audio information into analog audio signal output, and is also configured to convert analog audio input into a digital audio signal. The audio module170may further be configured to code and decode audio signals. The speaker170A, also referred to as a “loudspeaker”, is configured to convert an audio electrical signal into a sound signal. The terminal330may be used to listen to music or listen to a hands-free call through the speaker170A. The receiver170B, also referred to as an “earpiece”, is configured to convert an audio electrical signal into a sound signal. When a call is answered or a voice message is listened to by using the terminal330, the receiver170B may be put close to a human ear to listen to a voice. The microphone170C, also referred to as a “mike” or a “mic”, is configured to convert a sound signal into an electrical signal. When making a call or sending a voice message, a user may make a sound near the microphone170C through the mouth, to enter a sound signal to the microphone170C. At least one microphone170C may be disposed in the terminal330. In some other embodiments, two microphones170C may be disposed in the terminal330, to collect a sound signal and further implement a noise reduction function. In some other embodiments, three, four, or more microphones170C may alternatively be disposed in the terminal330, to collect a sound signal, reduce noise, identify a sound source, implement a directional recording function, and the like. In this embodiment, the terminal330collects a sound signal through the microphone170C and transmits the sound signal to an application program in the terminal330. The headset jack170D is configured to connect to a wired headset. The pressure sensor180A is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. The gyroscope sensor180B may be configured to determine a moving posture of the terminal330. The barometric pressure sensor180C is configured to measure barometric pressure. The magnetic sensor180D includes a Hall effect sensor. The terminal330may detect opening and closing of a flip cover by using the magnetic sensor180D. The acceleration sensor180E may detect values of accelerations of the terminal330in various directions (usually on three axes). The range sensor180F is configured to measure a distance. The optical proximity sensor180G may include, for example, a light-emitting diode (LED) and an optical detector such as a photodiode. The ambient light sensor180L is configured to sense ambient light brightness. The fingerprint sensor180H is configured to collect a fingerprint. The temperature sensor180J is configured to detect a temperature. The touch sensor180K is also referred to as a “touch panel”. The touch sensor180K may be disposed on the display194, and the touch sensor180K and the display194form a touchscreen, which is also referred to as a “touch screen”. The bone conduction sensor180M may obtain a vibration signal. In some embodiments, the bone conduction sensor180M may obtain a vibration signal of a vibration bone of a human vocal-cord part. The gyroscope sensor180B may be configured to determine a motion posture of the electronic device100. 
The barometric pressure sensor180C is configured to measure barometric pressure. The magnetic sensor180D includes a Hall effect sensor. The acceleration sensor180E may detect magnitudes of accelerations of the electronic device100in various directions (usually on three axes), and may detect magnitude and a direction of gravity when the electronic device100is still. The range sensor180F is configured to measure a distance. The optical proximity sensor180G may include, for example, a light-emitting diode (LED) and an optical detector such as a photodiode. The ambient light sensor180L is configured to sense ambient light brightness. The fingerprint sensor180H is configured to collect a fingerprint. The electronic device100may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. The temperature sensor180J is configured to detect a temperature. The touch sensor180K is also referred to as a “touch panel”. The touch sensor180K may be disposed on the display194, and the touch sensor180K and the display194form a touchscreen, which is also referred to as a “touch screen”. The touch sensor180K is configured to detect a touch operation performed on or near the touch sensor. The touch sensor may transfer the detected touch operation to the application processor, to determine a type of a touch event. A visual output related to the touch operation may be provided on the display194. In some other embodiments, the touch sensor180K may alternatively be disposed on a surface of the electronic device100at a location different from that of the display194. The bone conduction sensor180M may obtain a vibration signal. The button190includes a power button, a volume button, and the like. The motor191may generate a vibration prompt. The motor191may be configured to produce an incoming call vibration prompt and a touch vibration feedback. The indicator192may be an indicator lamp, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like. The SIM card interface195is configured to connect to a SIM card. For ease of understanding this application, the following describes terms in this application. 1. Multilingual Subtitles A video includes multilingual subtitles. For example, a video includes Chinese, English, and Japanese subtitles. When playing the video in a video application, a user may select subtitles in a specific language. 2. Multilingual Audio A video includes multilingual audio. For example, a video includes Chinese, English, and Japanese audio. When playing the video in a video application, a user may select audio in a specific language. 3. Video Transcoding A content source file of a video is transcoded into media asset media (including an audio medium file, a video stream medium file, and a subtitle medium file) with various encapsulation protocols and encoding formats to adapt to different types of terminals. The encapsulation protocols may include dynamic adaptive streaming over HTTP (dynamic adaptive streaming over HTTP, DASH), HTTP live streaming (HTTP live streaming, HLS), and the like. HTTP is an abbreviation for hypertext transfer protocol (hypertext transfer protocol). The encoding formats may include H.265, H.264, and the like. For example, the content source file of the video is transcoded into two media asset media: DASH+H.265 and HLS+H.264. 
Each media asset medium includes a plurality of resolutions, a plurality of bitrates, multilingual audio information, and multilingual subtitle information. 4. Media Asset Medium A media asset medium is a set of medium files of one or more streams with specific attributes of a same video. For example, the specific attributes may be an encapsulation protocol and an encoding format. If the same video has a plurality of streams (for example, a stream 1 and a stream 2), the streams have different attributes. The stream 1 is transcoded to obtain a medium file of the stream 1. The stream 2 is transcoded to obtain a medium file of the stream 2. Media asset media of the video include the medium file of the stream 1 and the medium file of the stream 2. 5. Index File One media asset medium corresponds to one index file. An index file of a media asset medium includes description information of streams (for example, multilingual audio, multilingual subtitles, and a plurality of video streams) corresponding to a medium file contained in the media asset medium. The description information may include at least one of a file identifier, resolution, bitrate, encoding format, and download uniform resource locator (uniform resource locator, URL). The file identifier may be “Chinese subtitles”, “Chinese audio”, “English audio”, or the like. The resolution may be 480p, 1080p, 4K, or the like. The bitrate may be 2.5 megabits per second (million bits per second, Mbps), 3.5 Mbps, or the like. The index file may be an index plain text file encoded in 8-bit unicode transformation format (8-bit unicode transformation format, UTF-8), which is certainly not limited thereto. For example, an index file of a media asset medium to which a medium file corresponding to a stream whose encapsulation protocol is DASH belongs is in a media presentation description (media presentation description, MPD) format. An index file of a media asset medium to which a medium file corresponding to a stream whose encapsulation protocol is HLS belongs is in an M3U8 format. A CDN server downloads video playing content (including a video stream, audio, and subtitles) through an index file. 6. Metadata Information of a Video Metadata information of a video may include a director name of the video, an identifier of the video (for example, a name of the video), a show time of the video, a media asset medium of the video, and the like. The media asset medium of the video may include at least one of a download URL of an index file of the media asset medium, an identifier of the media asset medium, and the like. The metadata information of the video may further include other information, and the media asset medium of the video may further include other information. This is not limited in this application. 7. Media Asset Server A media asset server introduces and processes a content source file of a video, stores the content source file of the video and media asset media obtained after the content source file of the video is transcoded, synchronizes the media asset media (including a video stream medium, an audio stream medium, and a subtitle file medium) obtained after transcoding to a CDN delivery system, and synchronizes metadata information of video content to a content management system. 8. 
CDN Server A CDN server is a content delivery system. Relying on edge servers deployed in various places, the content delivery system enables a user to obtain required content nearby through functional modules, such as load balancing, content delivery, and scheduling, of a central platform. Network congestion is reduced, and a response speed and hit rate of user access are increased. The CDN server herein is configured to store a medium file and an index file of video content, and send the medium file and the index file of the target video to a terminal in response to a playing URL of the target video sent by the terminal. 9. Content Management Server The content management server is a system that manages content of one or more videos, including content presentation, content operation, and the like. The content management server obtains metadata information of a video from a media asset server. Currently, a method for adding new-language subtitles and new-language audio to a video that has been released in a video application may be as follows: For example, a video that has been released and operated in a video application has only English subtitles, Chinese subtitles, and English audio upon its release. Half a year after the video is released, an administrator of the video application purchases French audio and French subtitles of the video from a producer of the video. The administrator of the video application needs to add the French audio and French subtitles to the video that has been released, so that a user can switch to the French audio and French subtitles to watch the video during viewing. Solution 1: As shown inFIG.2, adding the French audio and French subtitles of the video includes the following steps: S201: A media asset server receives to-be-added content (including the French subtitles and French audio) uploaded by the user. S202: The media asset server performs format conversion on video content (including a video stream, the English subtitles, the Chinese subtitles, the English audio, the French subtitles, and the French audio) to obtain a new media asset medium corresponding to the video content. After the media asset server receives the to-be-added content (including the French subtitles and French audio) uploaded by the user, the media asset server first discontinues existing content (including the video stream, English subtitles, Chinese subtitles, and English audio) of the video, then performs format conversion on the existing content (including the video stream, English subtitles, Chinese subtitles, and English audio) and the to-be-added content (including the French subtitles and French audio) of the video, and packages the content into the new media asset medium. S203: The media asset server replaces an existing media asset medium with the new media asset medium and synchronizes the media asset medium to a CDN server. S204: The media asset server sends metadata information of the video to a content management server. S205: When the user plays the video content, the video application may obtain new-language audio or new-language subtitles from the CDN server for playing. Based on the solution 1, when video content is operated in different countries and regions, subtitles in other languages and audio in other languages often need to be added to content of a plurality of videos that have been released. A current method for fully retranscoding and replacing video content has the following major disadvantages: 1. Resource Consumption and Transcoding Costs are High. 
To add new-language subtitles or audio to content of a video that has been released, a source file of the video is re-encoded into an H.265 or H.264 medium file, which consumes a large amount of computing resources. For example, a 42C192G server requires 1 hour of computing resources to transcode video content to H.264 video content of 6 streams with different resolutions, different bitrates, and duration of 1 hour, or requires 2 hours of computing resources for encoding it to H.265 video content. If a batch of tens of thousands of hours of video content is retranscoded, a large amount of computing resources are consumed and transcoding costs are high. 2. Normal Content Operation and Release are Affected. A batch of sudden transcoding requirements due to addition of new-language subtitles or audio exerts great pressure on normal release and transcoding and affects a pace of normal operation and release of the video content. 3. Retranscoding and Release Efficiency is Low for Adding New-Language Subtitles or Audio. To add only new-language subtitles or audio to content of a video, existing content and the new-language subtitles or audio of the video are retranscoded and then released. The transcoding requires a long time. Consequently, release efficiency is low due to adding of the new-language subtitles or audio, and requirements for urgent addition of the new-language subtitles or audio cannot be supported. Solution 2: A subtitle server that is independently deployed is used, and new-language subtitles of a video are placed on the subtitle server. Specifically, to add the new-language subtitles to content of the video that has been released, a new-language subtitle file is placed on the independent subtitle server. When playing the content of the video, a video application obtains subtitle language information from the independent subtitle server and displays the information in a playing interface. The independent subtitle server is deployed and the new-language subtitles are placed on the subtitle server instead of being packaged into a media asset medium obtained after the video is transcoded. The new-language subtitles are uploaded to the independent subtitle server without retranscoding the content. This solution has the following major disadvantages: 1. The Independent Subtitle Server is Added. The External Subtitle Solution has Additional Deployment and Operations and Maintenance Costs. Although the content does not need to be retranscoded and transcoding costs are reduced in the independent external subtitle solution, the independent external subtitle server has the additional deployment and operations and maintenance costs, and increases a failure rate of a system. 2. Private Customization and Non-Standard Support of an Application or a Player are Required. The external subtitle solution requires non-standard support of the video application. When playing content, the system obtains multilingual subtitle information from a private subtitle delivery server. DASH and HLS solutions that use transcoding are global standards. Players that comply with the standards can normally switch between languages for playing. 3. Independent External Subtitles May Affect Playing Experience, and this Solution is not Applicable to Adding New-Language Audio. A delay for synchronizing the subtitles with a mouth shape of a character in the video content generally needs to be within 200 ms. 
Because the video application needs to access the independent subtitle server when obtaining the subtitles of the video content, the subtitle obtaining process may not be synchronized with a process of downloading a video stream and audio of the video content. If the subtitle file returned by the subtitle server is late, the playing experience is affected. A delay for synchronizing the mouth shape with the audio is smaller and needs to be within 90 ms. The audio and video streams need to be obtained when playing starts, to avoid a problem of synchronization of the audio with the video. If the solution that uses the independent external server causes a long playing start delay, the solution is not applicable to adding the new-language audio to the video content. To resolve the foregoing problems, the present invention provides a method and a system for adding new-language subtitles or new-language audio. The method includes: A media asset server receives an identifier and a new-language file of a target video and converts the new-language file into a new-language medium file. The media asset server finds a first index file based on the identifier of the target video, and adds a storage address of the new-language medium file on the media asset server to the first index file to obtain a second index file. The media asset server sends the new-language medium file and the second index file to a content delivery server. The content delivery server replaces the storage address of the new-language medium file on the media asset server in the second index file with a storage address of the new-language medium file on the content delivery server to obtain a third index file. The content delivery server generates a first URL of the target video. In the method, to add new-language subtitles or new-language audio of the target video, it is only necessary to transcode and release the new-language subtitles or new-language audio of the target video. In this way, costs are greatly reduced and operation efficiency is improved. FIG.3is a framework diagram of a system of a method for adding subtitles and/or audio according to an embodiment of this application. As shown inFIG.3, a system30includes a media asset server300, a CDN server310, a content management server320, and a terminal330. In some embodiments, the system30may further include a transcoding server. This is not limited in this embodiment of this application. The media asset server300includes a media asset library3001. The media asset library3001may be configured to store a content source file of a video, metadata information of the video, and an index file and a medium file of the video. The media asset server300is further configured to send the index file and the medium file of the video to the CDN server310, send the metadata information of the target video to the content management server320, and the like. The CDN server310is configured to receive and store the index file and the medium file of the video that are sent by the media asset server300, and is further configured to generate a playing URL of the target video and send the playing URL of the target video to the media asset server300. The CDN server310is further configured to send the medium file and the index file of the target video to the terminal in response to the playing URL of the target video sent by the terminal. As shown inFIG.3, the CDN server310includes a CDN origin server3101, a CDN edge node3102, and a CDN scheduling center3103. 
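Before the detailed flow is described, the cache-or-origin behavior of a CDN edge node can be summarized with a minimal, hypothetical sketch in Python. The class and method names (OriginServer, EdgeNode, fetch_index, get_index) are illustrative assumptions and are not part of this application; the sketch only shows an edge node serving an index file from its local cache and falling back to the origin server when the file is not cached.

# Hypothetical sketch: CDN edge node cache lookup with origin fallback.
class OriginServer:
    def __init__(self, index_files):
        # index_files: mapping from video identifier to index file content
        self.index_files = index_files

    def fetch_index(self, video_id):
        return self.index_files[video_id]

class EdgeNode:
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}  # video_id -> cached index file content

    def get_index(self, video_id):
        # If the index file exists on this node, return it directly;
        # otherwise download it from the origin server, cache it, and return it.
        if video_id not in self.cache:
            self.cache[video_id] = self.origin.fetch_index(video_id)
        return self.cache[video_id]

# Usage: the scheduling center would pick the edge node closest to the terminal;
# a single node is used here for brevity.
origin = OriginServer({"target_video": "<third index file content>"})
edge = EdgeNode(origin)
print(edge.get_index("target_video"))
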
After obtaining the playing URL of the target video from the content management server in response to a playing request of a user, the terminal330sends a request (including the playing URL of the target video) to the CDN scheduling center. The CDN scheduling center schedules the request to a CDN edge node closest to the terminal330(there may be a plurality of scheduling methods, and a proximity scheduling principle is generally used; for example, if the user of the terminal330is located in Nanjing, the CDN server310schedules the video playing request to a CDN edge node located in Nanjing). The CDN edge node determines whether the index file of the target video to be downloaded exists on the node. If yes, the CDN edge node sends the index file of the target video to the terminal330. If no, the CDN edge node initiates a download request to the CDN origin server, where the request is used to indicate the CDN origin server to send the index file of the target video to the CDN edge node, and the CDN edge node then sends the index file of the target video to the terminal330. After the terminal330obtains the index file, the terminal330plays the target video based on the index file. The terminal330displays a video stream, first-language subtitles, and first-language audio of the target video in the following three manners: Manner 1: The terminal330obtains a download URL of a first-language audio medium file and a download URL of a first-language subtitle medium file from the index file based on the first-language audio and the first-language subtitles preset in a first video application. The terminal330obtains the first-language audio medium file and the first-language subtitle medium file of the target video based on the download URL of the first-language audio medium file and the download URL of the first-language subtitle medium file. The CDN server310integrates a video stream medium file, the first-language audio medium file, and the first-language subtitle medium file of the target video into a playing file. The CDN server310sends the playing file to the terminal330. The terminal330downloads the playing file in real time. The first video application displays the video stream, the first-language audio, and the first-language subtitles of the target video in real time. Manner 2: The terminal330obtains the download URL of the first-language audio medium file and the download URL of the first-language subtitle medium file from a third index file based on the first-language audio and the first-language subtitles preset in the first video application. The terminal330obtains the first-language audio medium file and the first-language subtitle medium file of the target video based on the download URL of the first-language audio medium file and the download URL of the first-language subtitle medium file. The CDN server310sends the video stream medium file, the first-language audio medium file, and the first-language subtitle medium file of the target video to the terminal330in real time. The terminal330receives the video stream medium file, the first-language audio medium file, and the first-language subtitle medium file of the target video in real time, and integrates them into a playing file. The terminal330plays the target video based on the playing file. The first video application displays the video stream, the first-language audio, and the first-language subtitles of the target video in real time. 
Manner 3: The CDN server310integrates the video stream medium file, audio medium files of various languages, and subtitle medium files of various languages of the target video into N playing files with subtitles in specified languages and audio in the specified languages. For example, the CDN server310integrates the target video into four playing files with subtitles and audio in specified languages: a playing file with the video stream, Chinese subtitles, and Chinese audio, a playing file with the video stream, English subtitles, and English audio, a playing file with the video stream, the Chinese subtitles, and the English audio, and a playing file with the video stream, the English subtitles, and the Chinese audio. The terminal330obtains the download URL of the first-language audio medium file and the download URL of the first-language subtitle medium file from the third index file based on the first-language audio and the first-language subtitles preset in the first video application. The terminal330obtains a playing file with the first-language subtitles and the first-language audio of the target video based on the download URL of the first-language audio medium file and the download URL of the first-language subtitle medium file. The terminal330plays the target video based on the playing file. The first video application displays the video stream, the first-language audio, and the first-language subtitles of the target video in real time. It should be noted that the foregoing embodiments are merely used to explain this application and shall not constitute a limitation. The content management server320may be configured to perform operation management on one or more videos, for example, to manage metadata information of the one or more videos. The content management server320is further configured to receive the metadata information of the one or more videos sent by the media asset server300, receive the request to play the target video from the terminal330, and send the playing URL of the target video to the terminal330. The terminal330may be configured to install the first video application. The terminal330may be configured to send the request to play the target video to the content management server, receive the playing URL of the target video sent by the content management server320, send the playing URL of the target video to the CDN server310, and receive the index file and the medium file of the target video sent by the CDN server310. Optionally, the transcoding server may be configured to transcode the content source file of the video into a type of file that can be played by the video application installed on the terminal330. In some embodiments, the media asset server300, the CDN server310, and the content management server320may all be independently located on one physical device, or any two or more of the servers may be integrated on a same physical device. It should be noted that the system30is merely used to explain this application and shall not constitute a limitation. The following clearly describes technical solutions in embodiments of this application with reference to the accompanying drawings in embodiments of this application. As shown inFIG.4, S401to S410are a flowchart of a method for adding subtitles and/or audio to a target video according to an embodiment of this application. FIG.4is a flowchart of a method for adding new-language subtitles or new-language audio according to an embodiment of this application. 
The method may be applied to the system30shown inFIG.3. The system30may include the media asset server300, the CDN server310, the content management server320, and the terminal330. For a specific description of the system30, references may be made to the embodiment shown inFIG.3, and details are not described herein again. The method may include: S401: The media asset server300receives indication information. The indication information may be input by an administrator of a video application. The indication information includes a subtitle addition instruction, an identifier of a target video (for example, a name of the target video), and new-language subtitles of the target video; and/or an audio addition instruction, the identifier of the target video, and new-language audio of the target video. The indication information does not actually exist and is optional. The indication information is introduced for ease of description. The foregoing description may be replaced by the following description: The media asset server300receives the subtitle addition instruction, the identifier of the target video (for example, the name of the target video), and the new-language subtitles of the target video; and/or the audio addition instruction, the identifier of the target video, and the new-language audio of the target video. The subtitle addition instruction is optional. This is not limited herein. The new-language subtitles of the target video may also be referred to as to-be-added subtitles of the target video. The new-language audio of the target video may also be referred to as to-be-added audio of the target video. The new-language subtitles of the target video and/or the new-language audio of the target video may also be referred to as a new-language file. The new-language subtitles are subtitles corresponding to a new language added on the basis of existing-language subtitles of the target video. The new-language audio is audio corresponding to the new language added on the basis of the existing-language audio of the target video. For example, a video is released and operated in the video application, and the video in the video application has only English subtitles, Chinese subtitles, and English audio upon its release. The administrator of the video application wants to release French subtitles and French audio of the video. The administrator of the video application sends indication information to the media asset server300, where the indication information includes an instruction for adding the French subtitles and French audio of the video, a name of the video, and the French subtitles and French audio of the video. S402: The media asset server300obtains a first index file of the target video based on the identifier of the target video. Optionally, the media asset server300stores a plurality of index files of a plurality of videos. The media asset server300may find, based on the identifier of the target video, an index file corresponding to the identifier of the target video. For example, the videos included in the media asset server300include “Sisters Who Make Waves” and “Back to Field”. In this case, the media asset server300includes an index file A of the video “Sisters Who Make Waves” and an index file B of the video “Back to Field”. In this embodiment, for ease of description, the identifier of the video and a download URL of each medium file are referred to as description information. 
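The lookup in S402 can be sketched as a simple mapping from a video identifier to its index file. The following is a minimal, hypothetical Python sketch (the function name and the mapping structure are illustrative assumptions only, not the claimed implementation):

# Hypothetical sketch of S402: the media asset server keeps a mapping from a
# video identifier to its index file and looks up the first index file by the
# identifier carried in the indication information.
index_files = {
    "Sisters Who Make Waves": "index file A",
    "Back to Field": "index file B",
}

def find_first_index_file(video_identifier):
    return index_files.get(video_identifier)

first_index = find_first_index_file("Sisters Who Make Waves")  # returns index file A
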
The index file A includes description information, where the description information includes a video identifier “Sisters Who Make Waves”, an identifier “Chinese subtitles” of a Chinese subtitle medium file, an identifier “Chinese audio” of a Chinese audio medium file, a download URL of the Chinese subtitle medium file, a download URL of the Chinese audio medium file, and the like. The description information does not actually exist and is optional. The description information is introduced for ease of description. The foregoing description may be replaced by the following description: The index file A includes the video identifier “Sisters Who Make Waves”, the identifier “Chinese subtitles” of the Chinese subtitle medium file, the identifier “Chinese audio” of the Chinese audio medium file, the download URL of the Chinese subtitle medium file, the download URL of the Chinese audio medium file, and the like. The index file B includes description information, where the description information includes a video identifier “Back to Field”, an identifier “Chinese subtitles” of a Chinese subtitle medium file, an identifier “Chinese audio” of a Chinese audio medium file, a download URL of the Chinese subtitle medium file, a download URL of the Chinese audio medium file, and the like. For example, if the identifier of the target video is “Sisters Who Make Waves”, the media asset server300may find the index file A based on the identifier “Sisters Who Make Waves” of the target video. The media asset server300receives the indication information, and the media asset server300obtains the first index file based on the identifier of the target video. The media asset server300stores a source file of the target video (a video stream, existing-language subtitles, and existing-language audio of the target video), and a medium file and the first index file corresponding to the source file of the target video. The first index file includes description information of the medium file corresponding to the source file of the target video. A video stream medium file, an existing-language subtitle medium file, and an existing-language audio medium file of the target video may be referred to as an existing-language medium file. The existing-language medium file refers to the video stream medium file, the existing-language subtitle medium file, and the existing-language audio medium file in the media asset server300. The existing-language medium file may include the video stream medium file, and an existing-language subtitle medium file and/or an existing-language audio medium file of the target video. The description information of the medium file corresponding to the source file of the target video may include identifiers, download URLs, and the like of the video stream medium file, the existing-language subtitle medium file, and the existing-language audio medium file of the target video. The download URL of the existing-language subtitle medium file and/or the existing-language audio medium file may also be referred to as a URL of the existing-language medium file. The description information of the medium file corresponding to the source file of the target video may further include other information. This is not limited in this application. 
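As a concrete illustration, the description information carried in a first index file can be modeled as a simple record. The field names, values, and URLs below are hypothetical assumptions introduced only for illustration; an actual index file would be, for example, an MPD document for DASH or an M3U8 playlist for HLS, as noted above.

# Hypothetical, simplified representation of the description information in a
# first index file; field names, values, and URLs are illustrative assumptions.
first_index_file = {
    "video_id": "Sisters Who Make Waves",
    "streams": [
        {"id": "video stream", "resolution": "1080p", "bitrate_mbps": 3.5,
         "download_url": "https://media-asset.example/video.mp4"},
        {"id": "Chinese subtitles",
         "download_url": "https://media-asset.example/subs_zh.mp4"},
        {"id": "Chinese audio",
         "download_url": "https://media-asset.example/audio_zh.mp4"},
    ],
}
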
For example, in this embodiment, before French subtitles and French audio are added to a video, the media asset server300receives and stores a source file of the video (including a video stream, English subtitles, Chinese subtitles, and English audio of the video) uploaded by the administrator of the video application. In addition, the media asset server300transcodes the source file of the video to obtain a corresponding medium file, where the medium file is a type of file that can be played by the video application. Furthermore, the media asset server300generates a first index file, where the first index file includes description information of the medium file corresponding to the video stream, English subtitles, Chinese subtitles, and English audio of the video. For example, the description information in the first index file may include a name of the video, subtitle identifiers “Chinese subtitles” and “English subtitles”, an audio identifier “English audio”, and a download URL of each medium file. The description information in the first index file may further include an encoding format of a Chinese subtitle medium file, an encoding format of an English subtitle medium file, an encoding format of an English audio medium file, and the like. This is not limited herein in this application. S403: The media asset server300performs format conversion on the new-language subtitles of the target video to obtain a new-language subtitle medium file and/or performs format conversion on the new-language audio to obtain a new-language audio medium file. The media asset server300performs format conversion on the new-language subtitles of the target video to obtain the new-language subtitle medium file and/or performs format conversion on the new-language audio to obtain the new-language audio medium file. Herein, the format conversion is to convert the new-language subtitles and/or the new-language audio of the target video into a file format that can be recognized and played by the video application, for example, the new-language subtitle medium file and/or the new-language audio medium file encapsulated in an MP4 file format. S404: The media asset server300updates the first index file of the target video (including adding description information of the new-language subtitle medium file and/or description information of the new-language audio medium file to the first index file) to obtain a second index file. The description information of the new-language subtitle medium file may include an identifier (for example, French subtitles) of the new-language subtitle medium file, an encoding format of the new-language subtitle medium file, a first URL of the new-language subtitle medium file, and the like. The description information of the new-language audio medium file may include an identifier (for example, French audio) of the new-language audio medium file, an encoding format of the new-language audio medium file, a first URL of the new-language audio medium file, and the like. The first URL of the new-language subtitle medium file includes a storage address of the new-language subtitle medium file on the media asset server300. The first URL of the new-language audio medium file includes a storage address of the new-language audio medium file on the media asset server300. Herein, the storage address of the new-language medium file on the media asset server may also be referred to as the first URL of the new-language medium file. 
The media asset server300adds the description information of the new-language subtitles and/or the description information of the new-language audio to the first index file to update the first index file. For ease of description, an updated first index file is referred to as the second index file. S405: The media asset server300sends the new-language subtitle medium file and/or the new-language audio medium file and the second index file to the CDN server310. S406: The CDN server310receives and stores the new-language subtitle medium file and/or the new-language audio medium file and the second index file, and updates the second index file (changes the first URL of the new-language subtitle medium file to a second URL of the new-language subtitle medium file and/or changes the first URL of the new-language audio medium file to a second URL of the new-language audio medium file) to obtain a third index file. The second URL of the new-language subtitle medium file includes a storage address of the new-language subtitle medium file on the CDN server310. The second URL of the new-language audio medium file includes a storage address of the new-language audio medium file on the CDN server310. The media asset server300sends the new-language subtitle medium file and/or the new-language audio medium file and the second index file of the target video to the CDN server310based on the identifier of the target video (for example, the name of the target video). The CDN server310receives the new-language subtitle medium file and/or the new-language audio medium file and the second index file of the target video. Specifically, the media asset server300sends an address for obtaining the new-language subtitle medium file of the target video and/or an address for obtaining the new-language audio medium file and an address for obtaining the second index file to the CDN server310based on the identifier of the target video (for example, the name of the target video). The CDN server downloads the new-language subtitle medium file of the target video and/or the new-language audio medium file and the second index file from the media asset server300based on the address for obtaining the new-language subtitle medium file of the target video and/or the address for obtaining the new-language audio medium file and the address for obtaining the second index file. The CDN server310stores the new-language subtitle medium file and/or the new-language audio medium file of the target video together with the video stream medium file, the existing-language subtitle medium file, and the existing-language audio medium file based on the identifier of the target video (for example, the name of the target video). The CDN server310replaces the first index file with the second index file. The CDN server310also updates the second index file by changing the first URL of the new-language subtitle medium file in the second index file to the second URL of the new-language subtitle medium file and/or changing the first URL of the new-language audio medium file to the second URL of the new-language audio medium file. For ease of description, an updated second index file is referred to as the third index file. S407: The CDN server310generates a first URL of the target video. The first URL of the target video includes a download address of the third index file and security verification information for preventing playing through hotlinking. S408: The CDN server310sends the first URL of the target video to the media asset server300. 
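The handling of the index file in S404 and S406 can be sketched as follows. The helper functions, field names, and URL prefixes in this Python sketch are hypothetical assumptions used only to illustrate how description information with the first URL (a media asset server address) is appended and then rewritten to the second URL (a content delivery server address); it is not the claimed implementation.

# Hypothetical sketch of S404 (media asset server adds description information
# with the first URL) and S406 (CDN server rewrites it to the second URL).
import copy

def build_second_index(first_index, new_entry_id, first_url):
    # S404: append description information for the new-language medium file.
    second_index = copy.deepcopy(first_index)
    second_index["streams"].append({"id": new_entry_id, "download_url": first_url})
    return second_index

def build_third_index(second_index, media_asset_prefix, cdn_prefix):
    # S406: replace media asset server addresses with CDN server addresses.
    third_index = copy.deepcopy(second_index)
    for stream in third_index["streams"]:
        if stream["download_url"].startswith(media_asset_prefix):
            stream["download_url"] = cdn_prefix + stream["download_url"][len(media_asset_prefix):]
    return third_index

first_index = {"video_id": "target_video",
               "streams": [{"id": "English audio",
                            "download_url": "https://media-asset.example/audio_en.mp4"}]}
second_index = build_second_index(first_index, "French subtitles",
                                  "https://media-asset.example/subs_fr.mp4")
third_index = build_third_index(second_index, "https://media-asset.example/",
                                "https://cdn.example/")
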
S409: The media asset server300sends the identifier of the target video and a metadata information update amount of the target video to the content management server320. S410: The content management server320updates first metadata information of the target video based on the metadata information update amount of the target video to obtain second metadata information. The metadata information of the target video may include the identifier of the target video, a director name of the target video, a show time of the target video, media asset medium information of the target video, and the like. The metadata information of the target video may further include other content. This is not limited in this application. The metadata information update amount of the target video may include the first URL of the target video, the description information of the new-language subtitle medium file and/or the new-language audio medium file, and the like. The content management server320receives and stores the identifier of the target video and the metadata information update amount of the target video. The content management server320obtains the first metadata information of the target video based on the identifier of the target video. The first metadata information includes the identifier of the target video, the director name of the target video, the show time of the target video, a second URL of the target video, and the like. The second URL of the target video includes a download address of the first index file of the target video on the CDN server310and security verification information for preventing playing through hotlinking. The content management server320updates the first metadata information of the target video based on the metadata information update amount of the target video to obtain the second metadata information. Specifically, the content management server320replaces the second URL of the target video in the first metadata information with the first URL of the target video, and adds the description information of the new-language subtitle medium file and/or the description information of the new-language audio medium file to the first metadata information. In some embodiments, “S409and S410” may be replaced by “S409: The media asset server300updates the first metadata information of the target video based on the metadata information update amount of the target video to obtain the second metadata information. S410: The media asset server300sends the second metadata information to the content management server320”. It can be understood that the foregoing embodiments are merely used to explain this application and shall not constitute a limitation. FIG.5is a flowchart of a method in which the terminal330plays a target video and receives a request of a user to switch to second-language subtitles or second-language audio of the target video. S501: The terminal330starts a first video application and receives a playing request of a user for the target video. The first video application is installed on the terminal330. The first video application provides an entry for triggering the playing request for the target video. The first video application may be a video player, such as Huawei Video. This is not limited in this application. S502: The terminal330sends a video obtaining request (including an identifier of the target video) to the content management server320in response to the playing request. 
The terminal330responds to the playing request for the target video, and the terminal330sends the video obtaining request to the content management server320. The video obtaining request carries the identifier of the target video. The identifier of the target video may also be a name of the target video. S503: The content management server320queries a first URL of the target video based on the video obtaining request, where the first URL of the target video is in second metadata information. The content management server320receives the video obtaining request and parses out the identifier of the target video carried in the video obtaining request. The content management server320queries the first URL of the target video based on the identifier of the target video. S504: The content management server320sends the first URL of the target video to the terminal330. S505: The terminal330sends the first URL of the target video to the CDN server310. S506: The CDN server310obtains a third index file based on the first URL of the target video. S507: The CDN server310sends the third index file of the target video to the terminal330. Before sending the third index file to the terminal330, the CDN server310receives a first obtaining request sent by the terminal330for the first URL of the target video. The first obtaining request is used by the CDN server310to obtain the third index file based on the first URL of the target video. A CDN scheduling center sends a request with the first URL of the target video to a CDN edge node. The CDN edge node determines whether an index file of the target video to be downloaded exists on the node. If yes, the CDN edge node sends the index file of the target video to the terminal. If no, the CDN edge node initiates a download request to a CDN origin server. The CDN origin server sends the index file of the target video to the CDN edge node. The CDN edge node sends the index file of the target video to the terminal. S508: The terminal330plays the target video based on the third index file (including a video stream, first-language subtitles, and first-language audio of the target video), and displays all language subtitle identifiers and all language audio identifiers of the target video. The terminal330receives the third index file of the target video sent by the CDN server310. The third index file records description information of a video stream medium file of the target video, subtitle medium files of all languages, and audio medium files of all languages. The description information may be identifiers of the medium files. The identifiers of the medium files may include “video stream”, “Chinese subtitles”, “Chinese audio”, “English subtitles”, “English audio”, “French subtitles”, “French audio”, download URLs of the medium files, and the like. Certainly, the description information of the medium files of the target video may further include other content. Details are not described herein. The terminal330plays the first-language audio and the first-language subtitles based on preset settings of the first video application. The terminal330obtains a download URL of the first-language audio medium file and a download URL of the first-language subtitle medium file of the target video from the third index file based on the first video application. 
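How the terminal might locate the download URLs of the preset first-language audio and subtitles in the third index file can be sketched as follows. The structure and identifiers in this Python sketch are illustrative assumptions consistent with the examples in this embodiment, not the claimed implementation.

# Hypothetical sketch: pick the download URLs for the preset first language
# from the third index file based on the medium file identifiers.
def select_urls(third_index, language):
    audio_url, subtitle_url = None, None
    for stream in third_index["streams"]:
        if stream["id"] == f"{language} audio":
            audio_url = stream["download_url"]
        elif stream["id"] == f"{language} subtitles":
            subtitle_url = stream["download_url"]
    return audio_url, subtitle_url

third_index = {"streams": [
    {"id": "Chinese audio", "download_url": "https://cdn.example/audio_zh.mp4"},
    {"id": "Chinese subtitles", "download_url": "https://cdn.example/subs_zh.mp4"},
    {"id": "English audio", "download_url": "https://cdn.example/audio_en.mp4"},
]}
audio_url, subtitle_url = select_urls(third_index, "Chinese")
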
The terminal330sends a download request to the CDN scheduling center, where the download request is used to download the first-language audio medium file and the first-language subtitle medium file, and the download request carries the download URL of the first-language audio medium file and the download URL of the first-language subtitle medium file. The CDN scheduling center sends the download request to the CDN edge node. The CDN edge node determines whether the first-language audio medium file and the first-language subtitle medium file of the target video to be downloaded exist on the node. If yes, the CDN edge node sends the first-language audio medium file and the first-language subtitle medium file of the target video to the terminal330. If no, the CDN edge node initiates a download request to the CDN origin server. The CDN origin server sends the first-language audio medium file and the first-language subtitle medium file of the target video to the CDN edge node. The CDN edge node sends the first-language audio medium file and the first-language subtitle medium file of the target video to the terminal330. It may be understood that the first-language audio and the first-language subtitles are preset-language audio and preset-language subtitles of the first video application. For example, if the first-language audio of the first video application is Chinese audio, and the first-language subtitles of the first video application are Chinese subtitles, the terminal330displays video stream pictures, the Chinese subtitles, and the Chinese audio of the target video. The terminal330parses out identifiers of the subtitle medium files of all languages and the identifiers of the audio medium files of all languages of the target video in the third index file. A user interface of the first video application may display the identifiers of the subtitle medium files of all languages and the identifiers of the audio medium files of all languages of the target video. The identifiers may be used by the user to choose to switch to the second-language subtitles or second-language audio. It can be understood that the first-language subtitles and the second-language subtitles are subtitles in different languages; and the first-language audio and the second-language audio are audio in different languages. For example, the first language may be Chinese, and the second language may be English. This is not limited in this application. The following describes a process in which the first video application displays the video stream, the subtitles, and the audio of the target video with reference to three implementations. The terminal330receives and responds to the playing request of the user for the target video. The terminal330sends the video obtaining request (including the identifier of the target video) to the content management server320. The content management server320queries a first playing URL of the target video based on the video obtaining request. The content management server320sends the first playing URL of the target video to the terminal330. The terminal330sends the first playing URL of the target video to the CDN server310. The CDN server310obtains the third index file based on the first playing URL of the target video. S509: The terminal330receives the request of the user to switch to the second-language audio or the second-language subtitles in the first video application. 
The terminal330parses out a download URL of a second-language subtitle medium file or a download URL of a second-language audio medium file based on the third index file. Specifically, the terminal330parses out the download URL of the second-language subtitle medium file in the third index file in response to the request to switch to the second-language subtitles or parses out the download URL of the second-language audio medium file of the target video in the third index file in response to the request to switch to the second-language audio. S510: The terminal330sends a request to download the second-language subtitle medium file or the second-language audio medium file to the CDN server310(the request carries the download URL of the second-language subtitle medium file or the download URL of the second-language audio medium file). In some embodiments, after the terminal330sends the request to download the second-language subtitle medium file or the second-language audio medium file to the CDN server310, the terminal330sends the download URL of the second-language subtitle medium file or the download URL of the second-language audio medium file to the CDN server310. It should be noted that the foregoing embodiments are merely used to explain this application and shall not constitute a limitation. The CDN server310receives the download request. S511: The CDN server310obtains the second-language subtitle medium file based on the download URL of the second-language subtitle medium file or obtains the second-language audio medium file based on the download URL of the second-language audio medium file. S512: The CDN server310sends the second-language subtitle medium file or the second-language audio medium file to the terminal330in real time. After sending the third index file to the terminal, the CDN server310receives a second obtaining request sent by the terminal330based on the download URL of the second-language subtitle medium file or the download URL of the second-language audio medium file in the third index file. The second obtaining request is used by the CDN server310to obtain the second-language subtitle medium file based on the download URL of the second-language subtitle medium file or obtain the second-language audio medium file based on the download URL of the second-language audio medium file. That is, the second obtaining request is used to switch to subtitles or audio in another language. S513: The terminal330displays the second-language subtitles or the second-language audio of the target video. The terminal330switches the first-language subtitles displayed by the first video application to the second-language subtitles based on the second-language subtitle medium file in response to the request to switch to the second-language subtitles. Alternatively, the terminal330switches the first-language audio played by the first video application to the second-language audio based on the second-language audio medium file in response to the request to switch to the second-language audio. It should be noted that the foregoing embodiments are merely used to explain this application and shall not constitute a limitation. For example, the first video application installed on the terminal330is playing a video “Brave step”, and the first video application displays the Chinese subtitles and Chinese audio of the target video. 
In this case, the user wants to switch the Chinese subtitles of the target video to English subtitles.FIG.6AtoFIG.6Dare diagrams of UIs in which the user switches the Chinese subtitles of the target video to the English subtitles. FIG.6Ais a diagram of a user interface600in which the first video application plays the video “Brave step”. The user interface600includes a video name601, a start/pause control602, a playing progress bar603, a backward control604, a forward control605, a next video playing control606, a subtitle prompt bar607, an audio/subtitle selection control608, and a video image609at a specific moment. The video name601includes “Brave step”. The start/pause control602displays a playing start state. The start/pause control602may receive a tap by the user, and then the video is paused. The playing progress bar603displays duration of the video that has been played. The subtitle prompt bar607displays the Chinese subtitles of the target video. The subtitle prompt bar607includes a subtitle “Yong Gan Zhi Xu Xiang Qian Yi Bu”. It can be understood that when the start/pause control602displays the playing start state, the playing progress bar603continuously changes with the playing of the target video, subtitle information of the target video displayed by the subtitle prompt bar607also continuously changes with the playing of the target video, and the video image609at the specific moment also continuously changes with the playing of the target video. The audio/subtitle selection control608may receive the tap by the user, and the first video application displays the user interface600shown inFIG.6Bin response to the tap by the user. The user interface600further includes a prompt box610. The prompt box610includes a subtitle language option and an audio language option. The subtitle language option includes a Chinese subtitle selection control6101and an English subtitle selection control6102. The audio language option includes a Chinese audio selection control6103and an English audio selection control6104. It can be understood that the first video application displays the Chinese subtitles and the Chinese audio of the target video. Therefore, both the Chinese subtitle selection control6101in the subtitle language option in the prompt box610and the Chinese audio selection control6103in the audio language option are displayed in bold. Alternatively, the Chinese subtitle selection control6101in the subtitle language option in the prompt box610and the Chinese audio selection control6103in the audio language option are displayed in another color (for example, blue). This is not limited in this application. The Chinese subtitle selection control6101and the English subtitle selection control6102may receive a tap operation by the user. The first video application switches, in response to the tap operation by the user, the subtitles in the language displayed in the subtitle prompt bar607to the subtitles in the language selected by the user. The Chinese audio selection control6103and the English audio selection control6104may receive a tap operation by the user. The first video application switches, in response to the tap operation by the user, the played audio of the target video to the audio in the language selected by the user. For example, the Chinese subtitles and Chinese audio are displayed for the video “Brave step” played by the first video application. 
The English subtitle selection control6102may receive a tap operation by the user, and the displayed Chinese subtitles are switched to the English subtitles for the video “Brave step” played by the first video application. The Chinese subtitles and Chinese audio are displayed for the video “Brave step” played by the first video application. The English audio selection control6104may receive a tap operation by the user, and the played Chinese audio is switched to the English audio for the video “Brave step” played by the first video application in response to the tap operation by the user. FIG.6CandFIG.6Dare diagrams of UIs in which a language of the subtitles displayed in the subtitle prompt bar607in the first video application is switched from Chinese to English. FIG.6Cis a diagram of a UI in which the first video application is switching the Chinese subtitles of the video content to the English subtitles.FIG.6Cshows the user interface600. The subtitle prompt bar607in the user interface600displays the Chinese subtitles. Subtitle content in the subtitle prompt bar607is “Yi Zhong Shi Chao Tuo Zi Zai Zi You Hao Mai De Ren”. FIG.6Dis a diagram of a UI in which the first video application has switched the Chinese subtitles of the video content to the English subtitles.FIG.6Dshows the user interface600. The subtitle prompt bar607in the user interface600displays the English subtitles. Subtitle content in the subtitle prompt bar607is “One is a person who is free and heroic”. FIG.7is a framework diagram of a system of a method for adding subtitles and/or audio according to another embodiment of this application. As shown inFIG.7, a system70includes a media asset server300, a CDN server310, a content management server320, a terminal330, and a transcoding server340. For descriptions of the media asset server300, the CDN server310, the content management server320, and the terminal330that are included in the system shown inFIG.7, references may be made to the embodiment inFIG.3, and details are not described herein again. The transcoding server340may be configured to transcode a content source file of a video into a type of file that can be played by a video application installed on the terminal330; and further configured to receive new-language subtitles or new-language audio and an index file sent by the media asset server300, perform format conversion on the new-language subtitles or the new-language audio, update the index file, and finally send a new-language subtitle medium file or a new-language audio medium file and an updated index file to the media asset server300. In some embodiments, the media asset server300, the CDN server310, the content management server320, and the transcoding server340may all be independently located on one physical device, or any two or more of the servers may be integrated on a same physical device. It should be noted that the system70is merely used to explain this application and shall not constitute a limitation. FIG.8AtoFIG.8Care a flowchart of a method for adding subtitles and/or audio according to another embodiment of this application. The method may be applied to the system70shown inFIG.7. The system70may include the media asset server300, the CDN server310, the content management server320, the terminal330, and the transcoding server340. For a specific description of the system70, references may be made to the embodiment shown inFIG.7, and details are not described herein again. 
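Compared with the system30, the system70offloads format conversion to the transcoding server340. A minimal, hypothetical Python sketch of this division of labor is given below before the detailed steps S801 to S811 are described; the class and method names (TranscodingServer, MediaAssetServer, convert, add_new_language) are illustrative assumptions and not part of this application.

# Hypothetical sketch: the media asset server sends the new-language file to
# the transcoding server, which returns the new-language medium file; the
# media asset server then stores it with the target video.
class TranscodingServer:
    def convert(self, new_language_file):
        # Stand-in for real format conversion (for example, encapsulating the
        # subtitles or audio so that the video application can play them).
        return {"medium_file": f"converted({new_language_file})"}

class MediaAssetServer:
    def __init__(self, transcoder):
        self.transcoder = transcoder
        self.media = {}  # video_id -> list of medium files

    def add_new_language(self, video_id, new_language_file):
        result = self.transcoder.convert(new_language_file)
        self.media.setdefault(video_id, []).append(result["medium_file"])
        return result["medium_file"]

server = MediaAssetServer(TranscodingServer())
server.add_new_language("target_video", "French subtitles")
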
The method may include: S801: The media asset server300receives indication information (including a subtitle adding instruction, an identifier of a target video, and new-language subtitles of the target video; and/or an audio adding instruction, the identifier of the target video, and new-language audio of the target video). S802: The media asset server300queries a first index file of the target video based on the identifier of the target video. For specific descriptions of S801and S802, references may be made to S401and S402in the embodiment shown inFIG.4, and details are not described herein again. S803: The media asset server300sends a request to perform format conversion on the new-language subtitles or new-language audio of the target video to the transcoding server340. S804: The transcoding server340receives and responds to the format conversion request. S805: The transcoding server340sends a file obtaining request to the media asset server300. S806: The media asset server300sends the new-language subtitles and/or new-language audio and the first index file of the target video to the transcoding server340. In some embodiments, S803, S804, S805, and S806may alternatively be replaced by “S803: The media asset server300sends a format conversion request message for the new-language subtitles or new-language audio of the target video to the transcoding server340, where the request message includes the new-language subtitles and/or new-language audio of the target video, and the first index file of the target video”. An example in which the new-language subtitles and/or new-language audio of the target video and the first index file of the target video are all carried in the same message (specifically, the request message) for sending is used for description. It may be extended that any two of these files may be carried in the same message for sending, or may be carried in different messages for sending. If the files are carried in different messages for sending, the different messages may or may not be sent simultaneously. In some embodiments, S806may alternatively be replaced by “S806: The media asset server300sends a download address of the new-language subtitles or new-language audio of the target video and a download address of the first index file of the target video to the transcoding server340”. The transcoding server340receives the download address of the new-language subtitles and/or new-language audio of the target video and the download address of the first index file of the target video sent by the media asset server300. The transcoding server340obtains the new-language subtitles and/or new-language audio of the target video and the first index file of the target video based on the download address of the new-language subtitles and/or new-language audio of the target video and the download address of the first index file of the target video. S807: The transcoding server340receives the first index file, and the new-language subtitles and/or new-language audio of the target video; and performs format conversion on the new-language subtitles of the target video to obtain the new-language subtitle medium file and/or performs format conversion on the new-language audio to obtain the new-language audio medium file. The media asset server300performs format conversion on the new-language subtitles of the target video to obtain the new-language subtitle medium file and/or performs format conversion on the new-language audio to obtain the new-language audio medium file.
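The exchange of S803 to S806 can be illustrated with a short sketch, assuming a JSON-over-HTTP control interface between the media asset server and the transcoding server; the endpoint path, field names, and use of the requests library are assumptions made for illustration and are not specified by this application.

```python
# Illustrative sketch of the format conversion request of S803-S806, assuming a
# JSON-over-HTTP control interface between the servers; the endpoint, the field
# names and the requests library are assumptions, not part of the method.
import requests

def request_format_conversion(transcoder_url, video_id, subtitle_url=None,
                              audio_url=None, index_url=None):
    payload = {
        "video_id": video_id,
        # Variant of S806: download addresses instead of the files themselves,
        # so that the transcoding server fetches what it needs.
        "new_language_subtitles_url": subtitle_url,
        "new_language_audio_url": audio_url,
        "first_index_file_url": index_url,
    }
    resp = requests.post(f"{transcoder_url}/format-conversion", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()   # e.g. a job identifier the media asset server can poll
```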
Herein, the format conversion is to convert the new-language subtitles and/or the new-language audio of the target video into a file format that can be recognized and played by the video application, for example, the new-language subtitle medium file and/or the new-language audio medium file encapsulated in an MP4 file format. S808: The transcoding server340updates the first index file of the target video (including adding description information of the new-language subtitle medium file and/or description information of the new-language audio medium file to the first index file) to obtain a second index file. The description information of the new-language subtitle medium file and/or the description information of the new-language audio medium file may include an identifier of the new-language subtitle medium file (for example, French subtitles) and/or an identifier of the new-language audio medium file (for example, French audio), an encoding format of the new-language subtitle medium file and/or an encoding format of the new-language audio medium file, a first URL of the new-language subtitle medium file and/or a first URL of the new-language audio medium file, and the like. The media asset server300adds the description information of the new-language subtitles and/or the description information of the new-language audio to the first index file to update the first index file. For ease of description, an updated first index file is referred to as the second index file. S809: The transcoding server340sends the new-language subtitle medium file and/or new-language audio medium file and the second index file to the media asset server300. S810: The media asset server300receives and stores the new-language subtitle medium file and/or new-language audio medium file of the target video, and replaces the first index file of the target video with the second index file. In some embodiments, S806, S807, S808, S809, and S810may be replaced by “S806: The media asset server300sends the new-language subtitles and/or new-language audio of the target video to the transcoding server340”. S807: The transcoding server340receives the new-language subtitles and/or new-language audio of the target video; and performs format conversion on the new-language subtitles of the target video to obtain the new-language subtitle medium file and/or performs format conversion on the new-language audio to obtain the new-language audio medium file. S808: The transcoding server340sends the new-language subtitle medium file and/or new-language audio medium file to the media asset server300. S809: The media asset server300updates the first index file (including adding the description information of the new-language subtitle medium file and/or description information of the new-language audio medium file to the first index file) to obtain the second index file”. In this way, the media asset server300does not need to send the first index file of the target video to the transcoding server340, and the media asset server300updates the first index file of the target video to obtain the second index file. This can reduce file transmission between the media asset server300and the transcoding server340and reduce consumption of network transmission resources. S811: The media asset server300sends the new-language subtitle medium file and/or the new-language audio medium file and the second index file to the CDN server310.
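As an illustration of the index file update of S808, the sketch below assumes the index file is an HLS master playlist and adds an entry describing a French subtitle medium file; this application does not mandate a particular index format, so the EXT-X-MEDIA entry and the URLs shown are purely illustrative.

```python
# Sketch of S808 under the assumption that the index file is an HLS master
# playlist; the EXT-X-MEDIA entry and URLs below are illustrative only.

def add_subtitle_entry(first_index_text, language_tag, name, first_url):
    """Return a second index file listing the new-language subtitle medium file."""
    entry = (f'#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",'
             f'NAME="{name}",LANGUAGE="{language_tag}",URI="{first_url}"')
    lines = first_index_text.rstrip().splitlines()
    # Keep the existing playlist lines and append the new description information.
    return "\n".join(lines + [entry]) + "\n"

first_index = "#EXTM3U\n#EXT-X-STREAM-INF:BANDWIDTH=2000000\nvideo_2m.m3u8\n"
second_index = add_subtitle_entry(first_index, "fr", "Francais",
                                  "https://media.example/videos/123/subs_fr.m3u8")
```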
S812: The CDN server310receives and stores the new-language subtitle medium file and/or the new-language audio medium file and the second index file, and updates the second index file (changes the first URL of the new-language subtitle medium file to a second URL of the new-language subtitle medium file and/or changes the first URL of the new-language audio medium file to a second URL of the new-language audio medium file) to obtain a third index file. S813: The CDN server310generates a first URL of the target video (including a download address of the third index file and security verification information for preventing playing through hotlinking). S814: The CDN server310sends the first URL of the target video to the media asset server300. S815: The media asset server300sends the identifier of the target video and a metadata information update amount of the target video to the content management server320. S816: The content management server320updates metadata information of the video based on the identifier of the target video and the metadata information update amount of the target video. For specific descriptions of S811, S812, S813, S814, S815, and S816, references may be made to S405, S406, S407, S408, S409, and S410in the embodiment shown inFIG.4, and details are not described herein again. In this embodiment, the transcoding server340performs format conversion on the new-language subtitles and/or new-language audio of the target video to obtain the new-language subtitle medium file and/or new-language audio medium file of the target video. In this way, conversion efficiency can be improved. The foregoing embodiments are merely intended for describing the technical solutions of this application instead of limiting this application. Although this application is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof, without departing from the scope of the technical solutions of embodiments of this application.
11943493
DETAILED DESCRIPTION For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the discussion of the described embodiments. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of the described embodiments. Certain figures may be shown in an idealized fashion in order to aid understanding, such as when structures are shown having straight lines, sharp angles, and/or parallel planes or the like that under real-world conditions would likely be significantly less symmetric and orderly. The same reference numerals in different figures denote the same elements, while similar reference numerals may, but do not necessarily, denote similar elements. In addition, it should be apparent that the teaching herein can be embodied in a wide variety of forms and that any specific structure and/or function disclosed herein is merely representative. In particular, one skilled in the art will appreciate that an aspect disclosed herein can be implemented independently of any other aspects and that several aspects can be combined in various ways. The present disclosure is described below with reference to functions, engines, block diagrams and flowchart illustrations of the methods, systems, and computer program according to one or more exemplary embodiments. Each described function, engine, block of the block diagrams and flowchart illustrations can be implemented in hardware, software, firmware, middleware, microcode, or any suitable combination thereof. If implemented in software, the functions, engines, blocks of the block diagrams and/or flowchart illustrations can be implemented by computer program instructions or software code, which may be stored or transmitted over a computer-readable medium, or loaded onto a general purpose computer, special purpose computer or other programmable data processing apparatus to produce a machine, such that the computer program instructions or software code which execute on the computer or other programmable data processing apparatus, create the means for implementing the functions described herein. Embodiments of computer-readable media includes, but are not limited to, both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. As used herein, a “computer storage media” may be any physical media that can be accessed by a computer or a processor. In addition, the terms “memory” and “computer storage media” include any type of data storage device, such as, without limitation, a hard drive, a flash drive or other flash memory devices (e.g. memory keys, memory sticks, key drive), CD-ROMs or other optical data storage devices, DVDs, magnetic disk data storage devices or other magnetic data storage devices, data memory components, RAM, ROM and EEPROM memories, memory cards (smart cards), solid state drive (SSD) memories, and any other form of medium able to be used to transport or store or memorize data or data structures able to be read by a computer processor, or a combination thereof. 
Furthermore, various forms of computer-readable media may transmit or carry instructions to a computer, such as a router, a gateway, a server, or any data transmission equipment, whether this involves wired transmission (via coaxial cable, optical fibre, telephone wires, DSL cable or Ethernet cable), wireless transmission (via infrared, radio, cellular, microwaves) or virtualized transmission equipment (virtual router, virtual gateway, virtual tunnel end, virtual firewall). According to the embodiments, the instructions may comprise code in any computer programming language or computer program element, such as, without limitation, the languages of assembler, C, C++, Visual Basic, HyperText Markup Language (HTML), Extensible Markup Language (XML), HyperText Transfer Protocol (HTTP), Hypertext Preprocessor (PHP), SQL, MySQL, Java, JavaScript, JavaScript Object Notation (JSON), Python, and bash scripting. Unless specifically stated otherwise, it will be appreciated that throughout the following description discussions utilizing terms such as processing, computing, calculating, determining, or the like, refer to the action or processes of a computer or computing system, or similar electronic computing device, that manipulate or transform data represented as physical, such as electronic, quantities within the registers or memories of the computing system into other data similarly represented as physical quantities within the memories, registers or other such information storage, transmission or display devices of the computing system. The terms “comprise,” “include,” “have,” and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Additionally, the word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms “in particular”, “for example”, “example”, “typically” are used in the present description to denote examples or illustrations of non-limiting embodiments that do not necessarily correspond to preferred or advantageous embodiments with respect to other possible aspects or embodiments. The terms “operationally coupled”, “coupled”, “mounted”, “connected” and their various variants and forms used in the present description refer to couplings, connections and mountings that may be direct or indirect, and comprise in particular connections between electronic equipment or between portions of such equipment that allow operations and modes of operation as described in the present description. In addition, the terms “connected” and “coupled” are not limited to physical or mechanical connections or couplings. For example, an operational coupling may include one or more wired connection(s) and/or one or more wireless connection(s) between two or more items of equipment that allow simplex and/or duplex communication links between the equipment or portions of the equipment. According to another example, an operational coupling or a connection may include a wired-link and/or wireless coupling for allowing data communications between a server of the proposed system and another item of equipment of the system. 
“Server” or “platform” in the present subject disclosure means any (virtualized or non-virtualized) point of service or computer device or system performing data processing operations, one or more databases, and/or data communication functions. For example, and without limitation, the term “server” or the term “platform” may refer to a physical processor operationally coupled to associated communication, database and data storage functions, or refer to a network, a group, a set or a complex of processors and associated data storage and networking equipment, and to an operating system and one or more database system(s) and application software supporting the services and functions provided by the server. A server or platform may be configured to operate in or as part of a cloud computing environment. A computer device or system may be configured so as to send and receive signals, via wireless and/or wired transmission networks(s), or be configured so as to process and/or store data or signals, and may therefore operate as a server. Equipment configured so as to operate as a server may thus include, by way of non-limiting example, dedicated servers mounted on a rack, cloud-based servers, desktop computers, laptop computers, service gateways (sometimes called “box” or “home gateway”), multimedia decoders (sometimes called “set-top boxes”), integrated equipment combining various functionalities, such as two or more of the abovementioned functionalities. The servers may vary greatly in terms of their configuration or their capabilities, but a server will generally include one or more central processing unit(s) and a memory. A server may also include one or more item(s) of mass memory equipment, one or more electric power supply/supplies, one or more wireless and/or wired network interface(s), one or more input/output interface(s), one or more operating system(s), such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, or an equivalent. The terms “application” or “application program” (AP) and their variants (“app”, “web app”, etc.) as used in the present description correspond to any tool that operates and is operated by way of a computer in order to provide or execute one or more function(s) or task(s) for a user or another application program. In order to interact with an application program and control it, a user interface may be provided on the equipment on which the application program is implemented. For example, a graphical user interface (or GUI) may be generated and displayed on a screen of the user equipment, or an audio user interface may be played back to the user using a speaker, a headset or an audio output. The term “multimedia content” as used in the present description corresponds to any audio and/or video or audiovisual content, with or without closed captions, open captions, subtitles, timed text or visual descriptors. 
In the present description, the terms “real-time” distribution, distribution “in linear mode”, distribution “in linear TV mode”, distribution “in dynamic mode” and “live” distribution or distribution “in live mode” are used interchangeably to denote the distribution in live mode or dynamic mode of multimedia content in a content distribution system to terminals, comprising in particular the distribution of the content as it is generated, as opposed to distributing content generated previously, upon an access request from a user (distribution upon an access request or “static” distribution or distribution in static mode), such as for example content recorded on a server and made available to users by a video on demand (VOD) service. In the present description, the terms “real-time” are also used in the context of video encoding or compressing video content, to denote the encoding or compression of video content at least at the same speed, for example expressed in frames per second, as it is generated by one or more video content sources. For instance, if content is generated at 50 frames per second (fps) it will be deemed encoded in real-time as long as it is also encoded at at least 50 fps. In the present description, the term “live content” refers to content, for example multimedia content, that is distributed, for example using an OTT distribution mode, in dynamic mode (as opposed to the static distribution mode). Live content will typically be generated by a television station, or by any type of television medium, and may also be distributed on a multimedia content broadcast network, in addition to being made available on content servers in an OTT distribution system. In the present description, the terms “terminal”, “user equipment”, “reader”, “reading device”, “reading terminal” and “video reader” are used interchangeably to denote any type of device, implemented by one or more items of software, one or more items of hardware, or a combination or one or more items of software and one or more items of hardware, configured so as to use multimedia content distributed in accordance with a distribution protocol, for example a multi-screen distribution protocol, in particular by loading and by reading the content. The terms “client” and “video-reading client” are also used interchangeably to denote any type of device, software and/or hardware, or any function or set of functions, implemented by software and/or hardware within a device and configured so as to use multimedia content distributed in accordance with a distribution protocol, for example a multi-screen distribution protocol, in particular by loading the content from a server and by reading the content. The terms “network” and “communication network” as used in the present description refer to one or more data links that may couple or connect possibly virtualized equipment so as to allow electronic data to be transported between computer systems and/or modules and/or other devices or electronic equipment, such as between a server and a client device or other types of devices, including between wireless devices that are coupled or connected via a wireless network, for example. A network may also include a mass memory for storing data, such as a NAS (network attached storage), a SAN (storage area network) or any other form of computer-readable or machine-readable medium, for example. 
A network may comprise, in full or in part, the Internet, one or more local area networks (LAN), one or more wide area networks (WAN), wired connections, wireless connections, cellular connections or any combination of these various networks. Similarly, subnetworks may use various architectures or conform with or be compatible with various protocols, and interoperate with larger networks. Various types of equipment may be used to make various architectures or various protocols interoperable. For example, a router may be used to provide a communication link or a data link between two LANs that would otherwise be separate and independent. The methods proposed in the present subject disclosure may be implemented by any video source encoder, video source decoder, or video codec configured for encoding and/or decoding images (or frames) of input video data, such as, for example a video encoder and/or decoder compliant with any of the H.261, MPEG-1 Part 2, H.262, MPEG-2 Part 2, Alliance for Open Media (AOM) AV1, H.264/AVC, H.265/HEVC, MPEG-4 Part 2, and SHVC (Scalable HEVC) specifications or standards, whether in their existing versions and/or their evolutions, as the case may be adapted for implementing one or more embodiments of the proposed methods. FIG.1shows an exemplary remote MCR monitoring system according to one or more embodiments of the subject disclosure. Shown onFIG.1is a system100acomprising a master control room101adapted for implementing one or more embodiments of the proposed method connected to a data communication network102to which a remote monitoring computing system103is connected. The remote monitoring computing system103is also adapted for implementing the one or more embodiments of the proposed method. The master control room101may typically comprise a headend sub-system101ato which video sources101bare input, and which outputs video streams101cto end customers. The video sources101amay include various audio and/or video content, for example generated by a news studio or a movie studio, and metadata corresponding to content, for example representing closed captions, subtitles, etc. The exemplary master control room (MCR)101illustrated onFIG.1comprises two monitoring points101d1and101d2, that is, points along the headend workflow at which a video stream is probed for monitoring purposes. A first monitoring point101d1is configured to monitor video content of the video sources101b, while a second monitoring point101d2is configured to monitor video content of the video outputs101c. On premises, monitoring will typically be configured at multiple points along the headend workflow to be able to localize or identify the point at which any issues, such as disruptions and/or quality issues, occur. Even though the following focuses on a non-limiting example wherein two monitoring points101d1and101d2are configured, a person of ordinary skill in the art would understand that the proposed processes, apparatuses and computer programs of the present subject disclosure may be implemented for a different number of monitoring points configured at various points along the headend workflow, and that such proposed processes, apparatuses and computer programs of the present subject disclosure are not limited to the use of any specific number or configuration of monitoring points, and in particular to the system ofFIG.1with the two monitoring points101d1and101d2, which are provided as an example only. 
Content may be uncompressed or very lightly compressed along some of the links in the headend workflow. For example, the monitoring point101d1is configured to monitor uncompressed video content corresponding to video sources101b, which may be transported using a data communication link suitable for carrying raw video content, such as an SDI link, a TSoIP link or a HDMI link. As illustrated inFIG.1, uncompressed content can be monitored at the MCR premises, that is, viewed by a monitoring user, using SDI links. A person of ordinary skill in the art would understand that any type of link suitable for transporting uncompressed or very lightly compressed video content may be used in place of the SDI or TSoIP links ofFIG.1, which are provided as an example only. Video content captured through the monitoring point101d2, which is located at the end of the headend workflow, may be transported using a data communication link suitable for carrying compressed video content, such as for example a TSoIP link as illustrated onFIG.1. As illustrated onFIG.1, in one or more embodiments the MCR101comprises one or more monitoring servers104(“Live Servers” onFIG.1) typically provided at the premises where the MCR101is physically located, which may comprise one or more encoding and/or transcoding engines, also referred to in the following as source encoders. These one or more servers104are preferably configured for encoding content (such as video content and/or audio content) at each monitoring point along the head-end workflow in real-time. In other embodiments the servers104may be configured for encoding in real-time different types of content (for example different types of video content) at corresponding monitoring points along the head-end workflow. In one or more embodiments, the data communication network102may be a Content Distribution Network or a Content Delivery Network (CDN). Advantageously, the monitoring servers104may be configured to continuously upload encoded content to a server102ainterfacing the MCR101with the CDN102. The MCR101may therefore be provided in some embodiments with an interface with a server102a(for example an origin server) of a CDN102for outputting to the server102acompressed content, for example in push mode, that is, as it is generated by the one or more monitoring servers104of the MCR101. In one or more embodiments, the data communication between the monitoring servers104and the CDN102through the origin server102amay use a low latency data communication mechanism, such as for example provided by the CMAF (Common Media Application Format) data communication mechanism, as illustrated onFIG.1. The CMAF mechanism provides a container format and a specific means for formatting and packaging audio/video content which allows transmission of audio/video content with low latency between the time content to be monitored is received at the monitoring point and the time such content is displayed at a remote monitoring computer device103ato a user/technician. Content formatted according to the CMAF may advantageously be transferred over IP-based communication networks and IP-based communication links. Low latency is achieved because CMAF formatted content can be transferred in smaller chunks (atomic units) as compared to other, traditional formats. As an example, the streaming latency for encoding, distributing, decoding and displaying video content is typically in the order of 30 seconds or more.
When using the proposed low latency data communication mechanisms (such as CMAF), a reduced latency for encoding, distributing, decoding and displaying video content of approximately 3-5 seconds may be achieved. A person of ordinary skill in the art would understand that other suitable formats providing low-latency data communication of video content, such as Low Latency HTTP Live Streaming (LL HLS), in its current or future versions, may be used in place or as a complement of CMAF, in its current or future versions, which is given only as an example. In order to further reduce the latency between the time content to be monitored is received at the monitoring point (that is, prior to its encoding) and the time at which the content is displayed to the technician in a remote monitoring site, the data communication link between the MCR101and the CDN102may be chosen to be high bitrate. The server102aof the CDN102may be configured for receiving data outputted by the monitoring servers104and formatted using the CMAF low latency data communication mechanism. As mentioned above, a “push”-type data communication mechanism may also be used between the monitoring servers104and the CDN102, so as to further reduce the latency. Indeed, each of the monitoring servers104knows when content (e.g. depending on the embodiment, segments, or chunks as discussed below) have been encoded. Therefore encoded content may be pushed towards the interface between the relevant encoding server and the CDN102as soon as it is encoded, that is, without waiting for a request for data generated by the server102a. In this case, data communication is guided by the live server(s)104instead of the origin server102aas illustrated onFIG.1. Alternatively, a “pull”-type data communication mechanism may be used, according to which the server102amay request data from the encoders104as it is ready to transfer such data through the CDN102. In this case, data communication would be guided by the origin server102ainstead of the live server(s)104. The server102amay be an origin server which interfaces with the CDN network102, and is configured for data communication between the monitoring servers104and the CDN network102. Depending on the embodiment, the origin server102amay or may not reside in the MCR premises. As illustrated onFIG.1, the server102amay be configured as a streaming server, for low latency delivery of streaming video content to a computer system103alocated in a remote monitoring site103and configured with a remote monitoring application, such as for example a remote monitoring client software. Alternatively, the server102amay be configured for data communication with one or more streaming servers of the CDN network102. At the remote monitoring site103, a user (e.g. a technician) may monitor on the computer system103avideo content that may correspond to one or more of the monitoring points101d1and101d2configured at the MCR101. A person of ordinary skill in the art would understand that any type of network suitable for providing fast delivery of Internet content, such as video content, may be used in place of the CDN102ofFIG.1, which is provided as an example only, and that any format suitable for transmission of content, such as video content, with very low latency, may be used in place of the CMAF format ofFIG.1, which is provided as an example only. 
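The push-mode, chunked upload from the monitoring servers104to the origin server102acan be sketched as follows; the endpoint and the use of the requests library (whose generator bodies are sent with HTTP chunked transfer encoding) are assumptions, and a production encoder would normally rely on its own packager and uploader.

```python
# Minimal sketch of the push-mode upload of CMAF chunks described above, using
# HTTP chunked transfer encoding; endpoint and library choice are assumptions.
import requests

def cmaf_chunks(encoder):
    """Yield each encoded CMAF chunk as soon as the encoder produces it."""
    for chunk in encoder:      # encoder: any iterable of encoded byte strings
        yield chunk            # pushed without waiting for a request from the origin

def push_to_origin(origin_url, encoder):
    # A generator body makes requests send the data with Transfer-Encoding:
    # chunked, so each chunk leaves as soon as it is yielded by the encoder.
    resp = requests.post(origin_url, data=cmaf_chunks(encoder),
                         headers={"Content-Type": "video/mp4"})
    resp.raise_for_status()
```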
Using a CDN advantageously allows leveraging the existing CDN infrastructure, whether public or private, for data communication between the monitoring servers104of the MCR101and the computer system103aof the user at the remote monitoring site103. Advantageously, according to the present subject disclosure, the monitoring servers104and the video streams they deliver are completely separate from the actual live distribution of the channels, so that they do not impact their operation. In one or more embodiments, uncompressed and/or lightly compressed content may be encoded using advanced video compression standards (such as, but not limited to, HEVC, AV1 or VVC, or typically any video compression scheme which performs a bitrate compression by a factor of 100 or more as compared to a SDI bitrate) to minimize bandwidth requirements while maintaining a high quality for the video. Similarly, audio content, such as corresponding to audio accompanying the video content, may be compressed using advanced audio codecs such as, but not limited to, HE-AAC or AC-4. To protect the content from unauthorized usage, content protection schemes (such as encryption) may in some embodiments be applied prior to uploading to the server102aof the CDN network102to which the content is outputted by the monitoring servers104. At the remote monitoring site103—such as a satellite office or a home office—a technician may use in one or more embodiments a low latency streaming enabled player (such as dash.js) to receive and monitor the data streams received from CDN network102. In one or more embodiments, a remote monitoring Management System (MS)103ais proposed to implement aspects of the proposed methods related to the remote monitoring site. The remote monitoring MS103amay be configured with various user interfaces, including a Graphical User Interface (GUI) and a dashboard to enable the technician to pick and choose which stream or streams to monitor (e.g. monitor point 2 of channel 5, or a mosaic of all monitoring points for channel 7). FIGS.2aand2billustrate exemplary methods for monitoring video streams according to one or more embodiments of the present subject disclosure.FIG.2aillustrates operations200athat may preferably be carried out at a master control room site or at any multimedia content distribution site where monitoring points may need to be setup, such as for example the MCR101illustrated onFIG.1, whereasFIG.2billustrates operations200bthat may preferably be carried out at a remote monitoring site, that is, a site distinct from the site in which monitoring points are setup, such as for example the remote monitoring site103illustrated onFIG.1. As shown onFIG.2a, in some embodiments, encoded multimedia content signals may be generated201aby one or more encoders, based on the encoding of monitored video streams. Preferably, the monitored video streams may respectively correspond to one or more monitoring points along a workflow chain of a video broadcasting headend which are configured for one or more video channels broadcasted by the headend. The video channels may typically correspond to TV channels broadcasted using one or more of various TV broadcast technologies (such as over the air, satellite, cable and over the top broadcasting technologies). Depending on the embodiment, the video encoding may be configured, for example dimensioned, based on the number of TV channels to be monitored and/or the number of monitoring points to be configured for each monitored TV channels. 
The encoded multimedia content may then be transmitted202ato a remote monitoring device, for example through a content distribution network, such as the CDN network102illustrated onFIG.1. Preferably, the remote monitoring device configured for remote (for example, home) monitoring of video streams. As shown onFIG.2b, for example at a remote monitoring site comprising the remote monitoring device, the encoded multimedia content may be received201bby a player comprised in the remote monitoring device. One or more of the received encoded multimedia content signals may then be displayed202bfor monitoring, by a technician, the corresponding monitored video streams. In one or more embodiments, the encoded multimedia content signals may advantageously be transmitted to the remote monitoring device using low-latency chunk encoding and transfer techniques. Indeed in video encoding, more particularly for video streaming usage, a video sequence to be transmitted may be encoded by segmenting the video sequence, that is, dividing the video sequence into segments, prior to encoding. The use of segment-based transfer in current systems takes into account constraints that are intended to ensure that the change in profile from one level of quality to another does not have a visual impact that could be inconvenient to a user. A video sequence comprising a set of images may thus be divided into consecutive segments, each segment corresponding to a subset of the set of images, in particular when the video sequence is distributed in encoded form in dynamic (live) mode or in static mode (for example in on-demand distribution mode). The segments of the sequence may be encoded sequentially, for example as they are distributed. Each segment that is transmitted, for example during video streaming, thus represents an encoded part of the video sequence or of the initial video stream for a given quality, and can be decoded independently of other segments of the video sequence. The phase of encoding the segments prior to transmission aims to compress the data of the video sequence in order to limit the resources required to transmit it and/or to store it. This encoding phase also makes it possible to obtain a plurality of lower and higher quality/resolution profiles in one and the same video sequence. These profiles of the video sequence are then transmitted via the segments, which makes it possible to adapt the quality of the video sequence to the constraints of the client (bitrate, hardware resources of the viewing apparatus, etc.). Therefore segments may be used as switching boundaries between bandwidths. Therefore, using segments allows a player device to start reading a video sequence even though the video sequence has not been completely downloaded, but also to switch to a different profile having a lower or higher quality, depending on the fluctuations in bandwidth experienced by the player device during reception of the video stream. In the field of video streaming, segments may be further divided into a sequence of subsets, usually called chunks, for example according to a low latency transport format such as CMAF. Various video stream transport protocols, such as HLS (HTTP Live Streaming) or DASH (Dynamic Adaptive Streaming over HTTP), use stream segmentation. The use of segments divided into low latency chunks has been introduced more recently for these video stream transport protocols, for example through the introduction of CMAF for DASH, and the specification of Low Latency HLS (LL-HLS). 
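The profile switching idea above can be illustrated by a small selection routine that picks the highest-bitrate profile fitting the currently measured bandwidth; the profile ladder and the safety margin below are illustrative values, not part of the proposed method.

```python
# Sketch of profile selection for segment-based switching; values are illustrative.
PROFILES = [               # (name, bitrate in bits per second), highest quality first
    ("1080p", 6_000_000),
    ("720p", 3_000_000),
    ("480p", 1_500_000),
    ("240p", 500_000),
]

def pick_profile(measured_bandwidth_bps, safety_margin=0.8):
    budget = measured_bandwidth_bps * safety_margin
    for name, bitrate in PROFILES:
        if bitrate <= budget:
            return name            # highest profile that fits the current bandwidth
    return PROFILES[-1][0]         # otherwise fall back to the lowest profile

assert pick_profile(4_000_000) == "720p"   # 3.2 Mbps budget fits the 3 Mbps profile
```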
Each segment may include a plurality of chunks, where each chunk can be encoded and transmitted separately. In contrast to the segments, however with the exception of the first chunk in each segment which is independently decodable, chunks belonging to the same segment are dependent on each other for decoding. For example, 2-second long CMAF segments may each be divided into four 500-ms long chunks. The first chunk of each segment may be independently decodable, while the remaining chunks within the same segment may not. Because the chunks are much smaller than the segments, the chunks may be transmitted much faster than the segments. Therefore the proposed methods may in some embodiments advantageously use available segmented stream encoding and transfer schemes, and preferably schemes using low latency chunks on segmented streams, even though applied to video streams that are not broadcasted to end user, to facilitate and improve the monitoring capacities of a technician in a remote monitoring site (typically a technician who is not working in the MCR, but in a different facility—for example a corporate monitoring site other than the MCR or working from home). In embodiments where the encoded multimedia content signals are transmitted to the remote monitoring device using low-latency chunk encoding and transfer techniques, the player may be advantageously configured to receive and decode content that is transmitted using low latency chunk encoding and transfer techniques. Any suitable segmentation-based encoding and/or decoding technique, such as for example specified for coding standards, such as H.264/AVC, H.265/HEVC and MPEG-2, may be used according to embodiments of the present subject disclosure. In a typical MCR, a single technician is responsible for monitoring several channels. Given that each channel is monitored at multiple monitoring points, the technician would need to have access to a potentially large number of streams. For instance, if the technician needs to monitor 8 channels, and each channel has 4 monitoring points, that adds up to 32 streams. Even using efficient encoders in the above-mentioned monitoring servers104, these many streams may still overwhelm the technician's home office connection. FIG.3illustrates other aspects of an exemplary remote MCR monitoring system according to one or more embodiments of the subject disclosure for enabling simultaneous monitoring of multiple streams. Shown onFIG.3is a system100bsimilar to the system100aofFIG.1in that it also comprises a master control room101adapted for implementing one or more embodiments of the proposed method connected to a data communication network106to which a remote monitoring computing system103aat a remote monitoring site103is connected. The remote monitoring computing system103ais also adapted for implementing the one or more embodiments of the proposed method. The master control room101may typically comprise a headend sub-system101ato which video sources101bare input, and which outputs video streams101cto end customers. The video sources101amay include various audio and/or video content, for example generated by a news studio or a movie studio, and metadata corresponding to content, for example representing close captions, subtitles, etc. The exemplary master control room (MCR)101illustrated onFIG.3also comprises two exemplary monitoring points101d1and101d2, that is, points along the headend workflow at which a video stream is probed for monitoring purposes. 
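Some simple arithmetic makes the bandwidth pressure behind the 32-stream example above concrete, and motivates the downscaling described next; the per-stream bitrates are illustrative assumptions only.

```python
# Back-of-the-envelope arithmetic for the 32-stream example above (8 channels x
# 4 monitoring points); the per-stream bitrates are illustrative assumptions.

channels = 8
monitoring_points_per_channel = 4
streams = channels * monitoring_points_per_channel           # 32 streams

full_res_bitrate_bps = 3_000_000       # e.g. a 1080p HEVC monitoring stream
thumbnail_bitrate_bps = 300_000        # e.g. a downscaled thumbnail stream

print(streams * full_res_bitrate_bps / 1e6, "Mbps at full resolution")   # 96.0 Mbps
print(streams * thumbnail_bitrate_bps / 1e6, "Mbps as thumbnails")       # 9.6 Mbps
```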
A first monitoring point101d1is configured to monitor video content of the video sources101b, while a second monitoring point101d2is configured to monitor video content of the video outputs101c. On premises, monitoring will typically be configured at multiple points along the headend workflow to be able to localize or identify the point at which any issues, such as disruptions and/or quality issues, occur. Even though the following focuses on a non-limiting example wherein two monitoring points101d1and101d2are configured, a person of ordinary skill in the art would understand that the proposed processes, apparatuses and computer programs of the present subject disclosure may be implemented for a different number of monitoring points configured at various points along the headend workflow, and that such proposed processes, apparatuses and computer programs of the present subject disclosure are not limited to the use of any specific number or configuration of monitoring points, and in particular to the system ofFIG.1with the two monitoring points101d1and101d2, which are provided as an example only. In the one or more embodiments of the proposed methods illustrated onFIG.3, one or more of the monitored streams are downscaled by a downscaling engine105, also referred to as a thumbnail generation engine105in the following (which, depending on the embodiment, may be hosted in head-end transcoders or may be implemented in the monitoring servers104ofFIG.1) to much lower resolution thumbnails, instead of being transmitted at full resolution video. These thumbnails may then be encoded at much lower bitrates prior to transmission to the remote monitoring computer system103aat the remote site103. Depending on the embodiment, the thumbnails may be generated by the thumbnail generation engine105which may be implemented separately from the encoding server(s)104illustrated onFIG.1, or as an engine in one or more of such encoding server(s)104. Therefore, in one or more embodiments one or more of the monitored video streams are downscaled by the one or more encoders prior to being encoded to generate the encoded multimedia content signals transmitted to the remote monitoring computer system at the remote monitoring site. The transmission of encoded thumbnails to a remote monitoring system103alocated in a remote monitoring site103may use the same data communication mechanisms, protocols and equipment as described above with respect to transferring encoded content with low latency mechanisms to a remote monitoring computer system and illustrated onFIG.1. The generation of thumbnails based on downscaling the monitored video streams advantageously significantly reduces the bitrate required for transmission of streams to be monitored to a remote monitoring site, such as the home of a technician. Indeed, while the downscaling process causes loss of detail and overall quality, the thumbnails are still good enough to detect major problems such as loss of signal, pixelation, loss of sound or poor lip synch, etc., that is, to monitor at a remote site the monitored video streams. 
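A thumbnail generation step such as the one performed by the thumbnail generation engine105can be sketched as a downscale-and-re-encode pass; the use of ffmpeg, the codec and the numeric values are assumptions made for illustration, as the present subject disclosure does not prescribe a particular tool or settings.

```python
# Minimal sketch of thumbnail generation: downscale a monitored stream to a
# low-resolution, low-bitrate rendition. Tool, codec and values are assumptions.
import subprocess

def generate_thumbnail_stream(input_url, output_url, width=320, bitrate="300k"):
    cmd = [
        "ffmpeg",
        "-i", input_url,                     # monitored stream from a monitoring point
        "-vf", f"scale={width}:-2",          # downscale, keep aspect ratio, even height
        "-c:v", "libx264", "-b:v", bitrate,  # encode the thumbnail at a much lower bitrate
        "-an",                               # drop audio here; audio may be carried separately
        output_url,
    ]
    subprocess.run(cmd, check=True)
```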
As illustrated onFIG.3, in one or more embodiments, a VPN established between the MCR101and the remote monitoring computer system103amay advantageously be used to transmit configuration commands from the remote monitoring computer system103ato the one or more engines implementing the proposed method of the present subject disclosure for configuration thereof, in particular to transmit configuration commands to the thumbnail generation engine to configure the generation of the service thumbnails. In one or more embodiments, control data may be exchanged between one or more servers located in the MCR101and configured for implementing the proposed method, such as the servers104illustrated onFIG.1and the servers104-105, and/or107illustrated onFIGS.3and4, and remote monitoring computer system103alocated in a remote monitoring site103, through a data communication network106. In one or more embodiments, the exchange of control data may be performed using one or more control and management servers106aconfigured for implementing control and management (including service configuration and alarm management) functions associated with the monitoring of video content, and in particular the remote monitoring according to a proposed methods. For example, a control and management server106amay be configured to provide access control, en masse configuration of encoders, consolidation of alarms and logs, automatic detection of failures and recovery, specifying redundancy, etc. In one or more embodiments, a Virtual Private Network (VPN) may be used for the exchange of control data, between the servers104-105and/or107of the MCR101and the control and management server106a, and/or between the control and management server106aand the remote monitoring computer system103a. For example, in some embodiments of the proposed method, a user (e.g. a technician) of a remote monitoring computer system or device may input a command for configuring the encoding of one or more of the video streams to be remotely monitored. The remote monitoring device may receive a first user input for configuring one or more source encoders used at the MCR for encoding video streams to be remotely monitored. The remote monitoring device may be configured for, upon receiving the first user input, generating source encoding configuration data based on the first user input, and transmitting to one or more source encoders at the MCR the source encoding configuration data. The one or more source encoders may in turn be configured for receiving the source encoding configuration data and for updating their configuration with respect to encoding based on received source encoding configuration data. As another example, in some embodiments, the technician at the remote site103may have the ability to remotely configure the generation of the thumbnails (including various parameters, such as for example resolutions and bitrates) via a two-way control plane using a VPN as shown inFIG.3. For example, in some embodiments of the proposed method, a user (e.g. a technician) of a remote monitoring computer system or device may input a command for configuring the downscaling of one or more of the video streams to be remotely monitored. The remote monitoring device may receive a second user input for configuring one or more source encoders used at the MCR for downscaling video streams to be remotely monitored. 
The remote monitoring device may be configured for, upon receiving the second user input, generating downscaling configuration data based on the second user input, and transmitting to one or more source encoders at the MCR the downscaling configuration data. The one or more source encoders may in turn be configured for receiving the source encoding configuration data and for updating their configuration with respect to downscaling based on received downscaling configuration data. In one or more embodiments, the control and management functions described in the present subject disclosure, in particular in connection with the examples illustrated onFIGS.3and4, such as the alarm and/or configuration functions described in the present subject disclosure, may be implemented and used with a remote monitoring system and method such as described in connection with the example ofFIG.1. Through these one or more remote configuration capabilities, the monitoring of video streams may therefore advantageously be further expanded to enable a richer control monitoring interface for remote monitoring streams in a transparent manner from a remote site, that is as if the technician was on-site (for example at the MCR). In one or more embodiments, a VPN established between the MCR101and the remote monitoring computer system103may advantageously be used to receive at the remote monitoring computer system103ahigh-priority and low bitrate alerts and alarms through this connection. In one or more embodiments, a plurality of downscaled monitored video streams may be multiplexed to form a single mosaic stream prior to being encoded into the encoded multimedia content signals, thereby advantageously providing a more efficient transmission to the remote monitoring site103. The mosaic stream may be generated according to the following acts: a) a plurality of video streams are downscaled, b) the downscaled streams are stitched together to form a single mosaic video, c) the mosaic is then encoded as if it were one single large resolution video. Therefore, depending on the embodiment, thumbnails streams may be distributed independently or multiplexed. The multiplexing of thumbnail streams into one or more mosaic streams may be performed as illustrated onFIG.3by a multiplexing engine107, possibly embedded in one or more servers (“Mux Servers” onFIGS.3and4). The multiplexing engine107may be configured to receive a plurality of thumbnails, generated by downscaling corresponding uncompressed video streams, for example at the Live Servers104-105, through a data communication link between the Live Servers104-105and the multiplexing engine107. The multiplexing engine107may further be configured to generate one or more mosaic streams based on the received data. A plurality of thumbnails may then be multiplexed into one or more mosaic streams, which are then provided to the monitoring servers104for encoding prior to transmission to the remote monitoring device103a. Likewise the source encoding and downscaling operations discussed above, the multiplexing of thumbnail streams may be remotely configured in some embodiments by a technician based on a user interface provided on the remote monitoring device103a. For example, in some embodiments of the proposed method, a user (e.g. a technician) of a remote monitoring computer system or device may input a command for configuring the multiplexing of one or more thumbnails generated based on video streams to be monitored. 
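The mosaic generation acts a), b) and c) described above can be sketched with a single downscale-and-stitch pass; the use of ffmpeg's xstack filter, the 2x2 layout and the bitrate are illustrative assumptions only.

```python
# Sketch of mosaic generation (downscale, stitch, encode as one large video)
# for four thumbnail streams in a 2x2 grid; tool and values are assumptions.
import subprocess

def build_mosaic(inputs, output_url, tile_w=480, tile_h=270):
    assert len(inputs) == 4, "this sketch hard-codes a 2x2 layout"
    scaled = ";".join(f"[{i}:v]scale={tile_w}:{tile_h}[v{i}]" for i in range(4))
    layout = "0_0|w0_0|0_h0|w0_h0"   # top-left, top-right, bottom-left, bottom-right
    stack = "[v0][v1][v2][v3]xstack=inputs=4:layout=" + layout + "[mosaic]"
    cmd = ["ffmpeg"]
    for url in inputs:
        cmd += ["-i", url]
    cmd += ["-filter_complex", f"{scaled};{stack}",
            "-map", "[mosaic]", "-c:v", "libx264", "-b:v", "1200k",
            output_url]
    subprocess.run(cmd, check=True)
```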
The remote monitoring device may receive a third user input for configuring one or more multiplexing engines used at the MCR for multiplexing one or more thumbnails. The remote monitoring device may be configured for, upon receiving the third user input, generating multiplexing configuration data based on the third user input, and transmitting to one or more multiplexing engines at the MCR the multiplexing configuration data. The one or more multiplexing engines may in turn be configured for receiving the multiplexing configuration data and for updating their configuration with respect to multiplexing thumbnails based on received multiplexing configuration data. Depending on the embodiment, the system100aofFIG.1and the system100bofFIG.3may be used independently from each other, in that the encoding may be performed on thumbnail streams instead of uncompressed streams with no downscaling. That is, the proposed remote monitoring method according to embodiments illustrated with the system100aofFIG.1may be used independently from the proposed remote monitoring method according to embodiments illustrated with the system100bofFIG.3. For example, the system depicted inFIG.3can be used independently of the components shown inFIG.1, especially when the remote technician has an Internet connection with a relatively modest bandwidth. In other words, in some embodiments the system depicted inFIG.3may be used as a standalone remote monitoring solution. In such case, all video streams to be monitored may be downscaled to generate respective thumbnail streams, which may be encoded and distributed through an origin server and a CDN network to a remote monitoring computer system located in a remote monitoring site. FIG.4illustrates other aspects of an exemplary remote MCR monitoring system according to one or more embodiments of the subject disclosure. FIG.4shows a remote monitoring system100cwhich combines the remote monitoring components of the system100aofFIG.1with that of the system100bofFIG.3. The system100cofFIG.4therefore advantageously combines the full resolution low latency streaming remote monitoring ofFIG.1with the thumbnail streams and alarms remote monitoring ofFIG.3. This comprehensive system100cmay advantageously be used by remote technicians with a high bandwidth connection and enables a more comprehensive form of remote monitoring. MCRs are typically equipped with numerous systems and tools for automatic detection of common audio or video quality issues—such as loss of video (black frame detection), loss of audio, video pixelation due to lossy channels, etc. Upon detection of such conditions, alarms are automatically sent to technicians who can then verify the severity of the issue and take appropriate action. In a remote monitoring system, such as the ones described in the present subject disclosure, these automated tools can advantageously be leveraged to prioritize the streams that are pushed to the remotely located technician. In one or more embodiments, an MCR monitoring engine, for example implemented in a monitoring server of the MCR, may be configured for, upon detection of an issue for a particular channel, sending an alarm to a user interface (for example, a dashboard) of a remote monitoring application implemented in a remote monitoring computer system or device used by a technician at a remote monitoring location. 
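The alarm sent by the MCR monitoring engine to the dashboard of the remote monitoring application can be sketched as a short, structured text message; the field names, severity levels and JSON encoding below are assumptions, the present subject disclosure only requiring that an alarm be sent upon detection of an issue for a channel.

```python
# Sketch of an alarm message from the MCR monitoring engine to the remote
# dashboard; fields, severity levels and encoding are illustrative assumptions.
import json, time

def build_alarm(channel_id, monitoring_point, issue_type, severity="critical"):
    return json.dumps({
        "channel": channel_id,                  # e.g. "channel-5"
        "monitoring_point": monitoring_point,   # e.g. "101d2" (headend output)
        "issue": issue_type,                    # e.g. "black_frame", "audio_loss", "pixelation"
        "severity": severity,
        "timestamp": int(time.time()),
    })

# Alarms are short text messages, so they can be prioritized over video feeds
# and delivered immediately over the control plane (e.g. the VPN of FIG. 3).
alarm = build_alarm("channel-5", "101d2", "black_frame")
```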
In some embodiments, the alarm may be transmitted to the remote monitoring application along with the feeds from the two monitoring points immediately before and after the point in the headend workflow at which the issue occurs. In some embodiments, these feeds may be highlighted on the remote monitoring application to further draw the attention of the technician, for instance by expanding their window to occupy a larger portion of a viewing screen (as illustrated inFIG.5), or by flashing the window on the screen or other such measures. To further prioritize the channel(s) where issues have been automatically detected, in one or more embodiments, streaming of the other channels (where no issues are detected by the automated tools) to the remote location can either be halted (until manually overridden by the technician) or done at a lower bit-rate so as to provide maximum bandwidth to the channel(s) experiencing issues. In particular, the alarms themselves, which may be sent in the form of short text messages, may be prioritized over the video feeds and sent immediately. The remote monitoring application may therefore be configured to update the configuration of the source encoding, the thumbnail generation and/or the thumbnail multiplexing accordingly in order to prioritize one or more channels for which issues have been detected. Such configuration update may be performed upon a user input from the technician operating the remote monitoring, or automatically upon receipt of alert data related to an issue occurring for a channel. For purposes of encoding a video stream, the images (also referred to as “frames”) of a video stream to be encoded may be considered sequentially and divided into blocks of pixels that are processed sequentially, for example starting at the top left and finishing at the bottom right. These blocks of pixels are predicted based on a correlation that exists between the pixels, which correlation may be either spatial (in the same frame) or temporal (in previously encoded frames). This prediction makes it possible to calculate the lowest possible pixel residual that is then transmitted to the decoder after transformation and quantization, in order to limit the size of the binary stream at the output of the encoder. Video encoding/compression standards typically define and use different prediction modes depending on whether they use spatial and/or temporal correlation-based prediction: In a first mode called the “intra” mode, a block of pixels of a current frame is predicted based on previously coded adjacent pixels present in the current frame. Depending on the encoder, different prediction directions may be used to predict an intra block. The encoder may generate, as part of the output encoding data which is provided to the decoder, data relating to the pixel residual and to the corresponding intra direction. In another mode called the “inter” mode, a block of pixels of a current frame is predicted using pixels from one or more previously coded frames (other than the current frame), usually called reference frames. The block may be predicted based on one or on a plurality of reference frames. A motion estimation may be carried out, and a motion vector determined based on each reference frame. It is therefore possible to use a single motion vector or two motion vectors that point to potentially different reference frames to predict a block in inter mode.
The encoder may generate, as part of the output encoding data which is provided to the decoder, data relating to the pixel residual and to the motion vector(s). In yet another prediction mode used by some encoders, called the “skip” mode, as in the case of the inter mode, a block of pixels is predicted using pixels from previously coded frames. However for the skip mode only data related to the motion vectors may be transmitted to the decoder, the pixel residual being zero. This means that the block chosen in another frame will be copied exactly in order to obtain the current block. The second difference is that the vectors used for this mode are determined by the decoder in a list of possible vectors, called predictor vectors, thereby making it possible to transmit only the position of the predictor in a dictionary rather than transmitting its value. Usually, a combination of all three modes is employed to provide an optimal balance between coding efficiency, random access capability, and robustness to errors and packet drops. Video compression schemes also usually use the group of pictures (GOP) concept to define an order in which the images of a video chunk are arranged. A GOP is typically repeated periodically until the end of the encoding, and includes the following types of frame: Type I (Intra) frames: this type of frame uses only the intra prediction mode. It therefore characterizes a frame irrespective of the other frames of the stream, in that it is sufficient in itself to be decoded, Type P (Predictive) frames: This type of frame uses the three intra, inter and skip prediction modes. The temporal correlation-based prediction is performed using only past frames in the stream among frames that have already been encoded. Type B (Bidirectional) frames: This type of prediction also uses the three intra, inter and skip prediction modes, but the temporal correlation-based prediction may use past frames and/or future frames, as long as these are previously encoded frames. There are two types of GOP: closed GOPs, that can be decoded independently, and open GOPs that are more efficient in terms of compression, but some frames of which may not be decoded in the following GOP. A fixed GOP is a GOP whose order is constant every N images, N being a predetermined number. Therefore the pattern of encoding modes is repeated in a GOP, each pattern being referred to as a GOP structure. In one or more embodiments, in cases where one or more issues impacting one or more channels are detected, to further reduce the latency of the impacted channels, the GOP structure used to encode the impacted channels may automatically be modified to only include the first two modes (i.e. Intra coded frames (I-frames) and frames predicted from previously encoded frames (P-frames)). This may advantageously minimize encoder latency, possibly at the expense of a loss in compression efficiency—which however can be compensated for by reducing the bitrate of other channels. As discussed above, a chunked encoding and transfer mechanism may be used to achieve low latency streaming. In this mechanism, the encoder would encode several frames (amounting to typically a time period of 100 to 200 ms) and then packetize them and send them out to a server of a CDN network such as the server102aof the CDN102ofFIGS.1and4. 
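Referring back to the GOP structure modification described above for channels with active alarms, the following Python sketch illustrates switching between a default structure containing B frames and a low-latency structure containing only I and P frames; the particular patterns and the GOP length are assumed example values, not structures mandated by the present subject disclosure.

# Illustrative sketch only: selecting a GOP structure pattern depending on
# whether an alarm is active on the channel; the patterns shown are examples.
def gop_structure(alarm_active: bool, gop_length: int = 32):
    if alarm_active:
        # Low-latency structure: an intra frame followed by forward-predicted
        # frames only (no B frames), minimizing encoder reordering delay.
        return ["I"] + ["P"] * (gop_length - 1)
    # Default structure: B frames between P frames for better compression.
    pattern = ["I"]
    while len(pattern) < gop_length:
        pattern += ["B", "B", "P"]
    return pattern[:gop_length]

print("".join(gop_structure(alarm_active=True, gop_length=8)))   # IPPPPPPP
print("".join(gop_structure(alarm_active=False, gop_length=8)))  # IBBPBBPB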
To further reduce latency in case an issue is detected, the chunking mechanism of the monitoring streams of the impacted channels may in one or more embodiments be automatically modified to include a single frame per chunk, enabling each frame to be immediately transmitted upon encoding and encryption. While this may reduce compression efficiency, the encoder latency may advantageously be reduced from around the 100 ms-200 ms range down to approximately 30 ms. In one or more embodiments, once the technician has fixed the issue and cleared the alarm, both the GOP structure and chunking mechanism can revert to their default settings meant to optimize the combination of compression efficiency and media quality. The proposed method may in some embodiments use an alarm probing mechanism according to which a technician probes one or more proposed servers/engines for status and may receive in response one or more alarms. Alternatively, the proposed servers/engines may be configured to, upon generating an alarm, send an alarm signal to the technician at the remote monitoring location. Depending on the embodiment, each of the servers/engines used for implementing the proposed method, including without limitation the monitoring servers104(such as illustrated onFIGS.1,3and4), the multiplexing engine107(such as illustrated onFIGS.3and4), and the thumbnail generation engine105(such as illustrated onFIGS.3and4), may be configured to be remotely configured by a technician based in the remote monitoring location through a user interface configured on the remote monitoring computer system used by the technician, possibly through a management and control server106asuch as illustrated onFIGS.3and4. FIG.6illustrates an exemplary apparatus or unit1configured to serve as a video monitoring headend device in accordance with embodiments of the present subject disclosure. The apparatus1, which may comprise one or more computers, includes a control engine2, a remote monitoring engine3, a source encoding engine4, a thumbnail generation engine5, a multiplexing engine6, a data communication engine7, and a memory8. In the architecture illustrated onFIG.6, all of the remote monitoring engine3, source encoding engine4, thumbnail generation engine5, multiplexing engine6, data communication engine7, and memory8are operatively coupled with one another through the control engine2. In some embodiments, the remote monitoring engine3is configured to perform various aspects of embodiments of one or more of the proposed methods for remote monitoring as described herein, such as without limitation related to issue detection, alarm/alert analysis and management, remote monitoring service configuration and monitoring service configuration. In some embodiments, the source encoding engine4is configured to perform various aspects of embodiments of one or more of the proposed methods for remote monitoring as described herein, such as without limitation related to source encoding of video streams to be monitored. In some embodiments, the thumbnail generation engine5is configured to perform various aspects of embodiments of one or more of the proposed methods for remote monitoring as described herein, such as without limitation related to video stream downscaling, thumbnail generation based on video streams to be monitored, and transmission thereof. 
In some embodiments, the multiplexing engine6is configured to perform various aspects of embodiments of one or more of the proposed methods for remote monitoring as described herein, such as without limitation related to multiplexing a plurality of thumbnails generated based on video streams to be monitored and generating one or more mosaic streams based on multiplexed thumbnails. In some embodiments, the data communication engine7is configured to perform aspects of embodiments of one or more of the proposed methods for remote monitoring as described herein related to data communication, such as without limitation transmission of encoded video data to a remote monitoring device located at a remote monitoring site, for example using one or more data plane communication protocols over a CDN network through a CDN server, and control data communication with the remote monitoring device for transmission of service thumbnails, alarm data, configuration data, for example using one or more control plane communication protocols, possibly over a VPN and through a control and management server. The control engine2includes a processor, which may be any suitable microprocessor, microcontroller, Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Digital Signal Processing chip, and/or state machine, or a combination thereof. According to various embodiments, one or more of the computers used for implementing the proposed methods can be configured as a multi-processor computer having multiple processors for providing parallel computing. The control engine2may also comprise, or may be in communication with, computer storage media, such as, without limitation, the memory8, capable of storing computer program instructions or software code that, when executed by the processor, causes the processor to perform various acts of the methods described herein. In addition, the memory5may be any type of data storage computer storage medium, capable of storing a data structure representing a computer network to which the apparatus1belongs, coupled to the control engine2and operable with the remote monitoring engine3, the source encoding engine4, the thumbnail generation engine5, the multiplexing engine6, and the data communication engine7to facilitate management and processing of data stored in association therewith. In embodiments of the present subject disclosure, the apparatus1is configured for performing one or more of the remote monitoring methods described herein. The apparatus1may in some embodiments be part of an MCR as a remote monitoring apparatus of the MCR, and configured to operate with the video stream monitoring apparatus of the MCR or be implemented as a sub-system of such video stream monitoring apparatus. It will be appreciated that the apparatus1shown and described with reference toFIG.6is provided by way of example only. Numerous other architectures, operating environments, and configurations are possible. Other embodiments of the apparatus may include fewer or greater number of components, and may incorporate some or all of the functionality described with respect to the apparatus components shown inFIG.6. Accordingly, although the control engine2, remote monitoring engine3, source encoding engine4, thumbnail generation engine5, multiplexing engine6, data communication engine7, and memory8are illustrated as part of the apparatus1, no restrictions are placed on the location and control of components2-8. 
In particular, in other embodiments, components2-8may be part of different entities, servers, or computing systems. For example, a video monitoring headend device according to embodiments of the present subject disclosure may comprise a source encoder or a video encoding engine (comprised in a video broadcasting headend), which comprises a source encoder processor and a source encoder memory operatively coupled to the source encoder processor, wherein the source encoder processor may be configured for implementing a video monitoring method as proposed in the present subject disclosure, and in particular a method comprising: generating encoded multimedia content signals based on the encoding of monitored video streams respectively corresponding to one or more monitoring points along a workflow chain of the video broadcasting headend configured for one or more video channels broadcasted by the headend, and transmitting the encoded multimedia content signals to a remote monitoring device through a content distribution network. Likewise, a remote video monitoring device according to the present subject disclosure may comprise a remote monitoring device processor and a remote monitoring device memory operatively coupled to the remote monitoring device processor, wherein the remote monitoring device processor may be configured for: receiving the encoded multimedia content signals; and displaying one or more of the received encoded multimedia content signals for monitoring the corresponding monitored video streams. In one or more embodiments, a remote video monitoring device, such as the one103aillustrated onFIGS.1,3, and4, may be implemented in software, in hardware, or as a combination thereof. When implemented as a software, the remote video monitoring device may be implemented as an application configured to be executed on a computer platform, such as a general-purpose computer system (for example a personal computer). In some embodiments, the remote video monitoring application may be web-based, and may be executed through a web browser. In such cases the application may advantageously be usable on computer equipment that is suitable for remote working (e.g. home working), such as a laptop computer. In other embodiments, the remote video monitoring application may be cloud-based, in which cases it may comprise multiple parts, among which a software executed on a computer system suitable for use in a remote working context. The proposed method may be used for the monitoring of one or more video streams based on one or more predefined monitoring points, for example over a workflow applied to video streams. While various embodiments have been described, those skilled in the art will readily appreciate that various changes and/or modifications can be made without departing from the spirit or scope defined by the appended claims. It should be understood that certain advantages, features and aspects of the systems, devices, and methods may be realized in a variety of other embodiments. Additionally, it is contemplated that various aspects and features described herein can be practiced separately, combined together, or substituted for one another, and that a variety of combination and sub-combinations of the features and aspects can be made and still fall within the scope of the disclosure. Furthermore, the systems and devices described above need not include all of the modules and functions described in the preferred embodiments. 
Information and signals described herein can be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips can be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Depending on the embodiment, certain acts, events, or functions of any of the methods described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the method). Moreover, in certain embodiments, acts or events may be performed concurrently rather than sequentially.
65,421
11943494
DETAILED DESCRIPTION As noted previously, video EQAM (VEQ) devices are used to receive a large number of channels of video, and output an RF-modulated (i.e. QAM or quadrature amplitude modulated) signal combining the multiple different channels that the VEQ receives.FIG.1, for example, shows a traditional architecture10by which an HFC network12includes a head end14that delivers content to subscriber equipment24as subscriber premises, shown in the figure as a cable modem but those of ordinary skill in the art will understand that subscriber equipment could include set-top boxes, gateways, wireless phones, computers, etc. The HFC network12includes a head end14, a plurality of hubs20, and associated with each hub, a plurality of nodes22and a plurality of subscriber equipment24such as cable modems. The head end14typically includes a cable modem termination system (CMTS)13and a plurality of video EQAM units16. Each of the nodes22has one or more corresponding access points, and each subscriber may have one or more corresponding network elements24, shown inFIG.1as a cable modem. As also noted previously, in these traditional HFC architectures10, video is modulated onto the RF network by VEQs16, which receives Internet-Protocol (IP) encapsulated Single & Multiple Program Transport Streams (SPTSs & MPTSs) from various sources (content providers, etc.) through content delivery network26. The content delivery network is typically a switching network by which packetized IP data is routed from one address to another and may exhibit unpredictable and variable delays in the packets received. Therefore, the VEQ16preferably removes this jitter from the network ingress stream before mapping and modulating the video data onto a plurality of QAM channels. As also noted earlier, to deliver an MPTS stream onto a QAM channel in accordance with ISO 13818-1 requires that the VEQ recover the ingress Program Clock Reference (PCR) values encoded within each SPTS and re-stamp it with the VEQ's internal 27 MHz clock so that all streams are delivered with the same time base. FIG.2shows an alternate distributed access architecture (DAA) in which the functionality of the VEQ is moved to a node. Specifically,FIG.2shows what is known as n Remote-Physical Architecture (R-PHY)50in which a video/CCAP core54sends data to a Remote Physical Device (RPD)56, which is in turn connected to one or more “consumer premises equipment (CPE) devices18such as a set-top box, cable modem, etc. Though an R-PHY architecture is illustrated inFIG.2, it should be understood that the description herein is equally applicable to other DAA architectures, such as R-MACPHY architectures, for example. In some embodiments, a timing grandmaster device52may be available to provide timing information to both the video/CCAP core54and the RPD56. Specifically, the timing grandmaster52has a first master port60aconnected to a slave clock62in the CCAP core54and a second master port60bconnected to a slave clock64in the RPD56, though alternatively the respective slave clocks of the CCAP core54and the RPD56may both be connected to a single master port in the timing grandmaster device52. The CCAP core54may be connected to the timing grandmaster52through one or more switches66while the RPD56may be connected to the timing grandmaster52through one or more switches68. 
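Referring back to the PCR re-stamping mentioned above, the following Python sketch illustrates the arithmetic underlying such an operation, namely the conversion between the ISO/IEC 13818-1 PCR fields (a 33-bit base counted at 90 kHz and a 9-bit extension counted at 27 MHz) and 27 MHz ticks; the offset applied in the example is arbitrary and chosen only for illustration.

# Illustrative sketch only: converting between the ISO/IEC 13818-1 PCR fields
# and 27 MHz ticks, as would be needed when re-stamping PCRs against a local
# 27 MHz clock.
def pcr_to_ticks(base: int, ext: int) -> int:
    return base * 300 + ext            # 27 MHz ticks

def ticks_to_pcr(ticks: int):
    ticks %= (1 << 33) * 300           # the 33-bit base wraps after ~26.5 hours
    return ticks // 300, ticks % 300   # (base, extension)

ingress_ticks = pcr_to_ticks(base=123456789, ext=27)
restamp_offset = 5400                  # 200 microseconds at 27 MHz, for example
print(ticks_to_pcr(ingress_ticks + restamp_offset))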
Although FIG. 2 shows only one RPD 56 connected to the timing grandmaster 52, many such RPDs may be simultaneously connected to the grandmaster 52, with each RPD having a slave clock 64 receiving timing information from a port 60b in the grandmaster clock 52. Even though the architecture of FIG. 2 shows a common grandmaster device 52 capable of synchronizing the video/CCAP core 54 to the RPD 56, the architecture of FIG. 2 may also be configured to operate asynchronously, where the grandmaster device 52 does not send common timing information to the core 54/RPD 56. For example, the RPD 56 may be configured to operate asynchronously if the video/CCAP core 54 does not support IEEE 1588 timing protocols, or if the RPD 56 is desired to be more resilient to holdover periods in the case the RPD and/or the core loses connection to the timing grandmaster. Moreover, in an R-MACPHY system, an RMD will typically be set to async mode by default to eliminate the need for IEEE 1588 timing, since DOCSIS services do not need it, although the RMD may instead be switched to sync mode if other services such as wireless backhaul require IEEE 1588 services, or if the oscillator of the video core 54 is of poor quality and needs an external timing source. Therefore, the system shown in FIG. 2 may be configured to operate either in sync mode or in async mode to process video content, and the video/CCAP core 54 and RPD (RMD) 56 each therefore preferably include hardware capable of operating in either mode, with software that enables configuration by a video core of itself and connected downstream devices into either one of these modes when setting up video channels. In sync (synchronous) mode, the RPD (or RMD) and its video core are synchronized in time to the same reference clock. In this sync mode, the RPD is required merely to detect lost video packets using the Layer 2 Tunneling Protocol v. 3 (L2TPv3) sequence number monitoring and insert MPEG null packets for each missing packet. FIG. 3A, for example, shows a system in a first configuration 100 where a video core 102 communicates with an RPD 104 in synchronous mode using a common grandmaster timing server 106. The timing server 106 maintains an identical timing lock (i.e., frequency and phase) with both the clock 108 in the video core 102 and the clock 110 in the RPD 104. The video core 102 has a video streamer 112 that forwards video data packets to the RPD 104 via a Downstream External PHY Interface (DEPI) using L2TPv3. The video packets sent from the video core 102 to the RPD 104 will typically include all information necessary to decode the packetized elementary video transport stream, such as Program Identifiers (PIDs), Program Clock Reference (PCR) data, etc. The RPD 104, in turn, receives the video packets sent from the video core 102 in a dejitter buffer 116 of a processing device 114. The dejitter buffer 116 receives and outputs packet data at a rate that removes network jitter resulting from differing paths of received packet data, or other sources of varying network delay between the video core and the RPD. Because some packets sent by the video streamer 112 may be lost or misplaced during transport to the RPD 104, the packets output from the dejitter buffer 116 may preferably be forwarded to a module 118 that, in the case of sync mode, inserts null packets in the data stream to account for those lost packets, so as to maintain the proper timing rate of the transmitted video.
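A minimal Python sketch of the sync-mode gap filling just described is given below, assuming for illustration that each received packet carries an L2TPv3 sequence number and one transport stream packet; the simplified data layout is an assumption, while the policy of substituting one MPEG null packet per missing sequence number follows the description above.

# Illustrative sketch only: detecting lost DEPI packets from a sequence-number
# gap and substituting MPEG null transport stream packets (PID 0x1FFF) so that
# the egress bitrate and timing are preserved in sync mode.
NULL_TS_PACKET = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes([0xFF] * 184)  # 188 bytes

def fill_missing(packets):
    """packets: iterable of (sequence_number, payload). Yields payloads, with
    one null packet substituted for every missing sequence number."""
    expected = None
    for seq, payload in packets:
        if expected is not None:
            for _ in range(seq - expected):      # gap => packets were lost
                yield NULL_TS_PACKET
        yield payload
        expected = seq + 1

stream = [(10, b"tsp10"), (11, b"tsp11"), (14, b"tsp14")]  # 12 and 13 lost
print(sum(1 for p in fill_missing(stream) if p == NULL_TS_PACKET))  # 2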
The transport stream, with any necessary insertion of null packets is then forwarded to a PHY device120, which may decode the packetized elementary stream into a sequence of decoded video frames for downstream delivery to end-users by outputting QAM-modulated data in a format expected by customer-premises equipment, like set-top boxes. Alternatively, the PHY device may simply forward the packetized data, without decoding, to e.g., a cable modem for decoding by a user device such as a computer, tablet, cell phone, etc. In sync mode, because the RPD104and its Video Core102must be synchronized to the same reference clock, the frequency of the PCR clock contained within the ingress MPTS matches that of the local clock on the remote device. Therefore, there is no frequency offset on the RPD between the ingress and egress streams, and as noted earlier, to maintain proper timing information in the video data being transmitted, the RPD104need only remove network jitter, detect lost video packets using the L2TPv3 Sequence number monitoring, and insert MPEG NULL packets for each missing packet. Alternatively, however, the RPD and video core may be configured to operate in an asynchronous (async) mode. In async mode, the RPD104and its video core102are not synchronized in time to the same reference clock. Instead, the RPD104is required to detect the difference between its own clock110and the clock108of the video core102and be able to either insert or remove MPEG packets as necessary to maintain expected MPEG bitrate, and also adjust the MPEG PCR values due to the removal/insertion of the MPEG packets. FIG.3B, for example, shows the hardware ofFIG.2configured to instead operate in async mode. In this configuration101, the clock108of the video core102and the clock110of the RPD104are not synchronized and may therefore drift relative to each other. The video streamer112of the video core102forwards packets of the packetized video data elementary stream to the RPD104, which again receives the data in dejitter buffer116to remove network jitter, as described previously. However, unlike the configuration ofFIG.2, the packets output from the dejitter buffer116are forwarded to the module118which both adds null packets when needed, and drops packets when needed, in order to maintain the proper constant bit rate of the data received from the dejitter buffer116. Further, because the RPD and its video core are not synchronized in time to the same reference clock, the frequency of the PCR in the ingress MPTS will be offset from that of local RPD clock. Thus, as well as performing the above functions common to those performed in sync mode, the RPD must also detect the magnitude of the frequency offset from the video core and correct for it. To this end, after packets are added/dropped as needed, a PCR module119re-stamps the data packets with updated PCRs due to the removal/insertion of MPEG packets before forwarding the re-stamped packets to the PHY device120. Another consideration in async mode is the limited size of the dejitter buffer. Since an offset between the ingress frequency and the egress frequency exists, left unchecked the jitter buffer may tend to overflow/empty depending on the sign of the frequency difference. Therefore, systems and methods must be employed to prevent the buffer from either overflowing or emptying. 
The subsequent disclosure presents novel methods of detecting and correcting for this frequency offset in the async mode of operation, taking into consideration the limited memory (buffer) size, while simultaneously maintaining an accurate synchronization of the video data being processed. As already noted, network jitter is removed by using a ‘dejitter’ buffer 116 shown in FIG. 3B. This dejitter buffer 116 is preferably filled initially to its mid-point as the MPTS stream delivery starts. Dejitter is usually accomplished using a low-pass filter that averages delays over a sufficiently long interval, hence the dejitter buffer 116 is preferably sized large enough to absorb the fluctuations in the buffer depth caused by jitter on the ingress stream without underflowing or overflowing. Frequency differences between the ingress PCR and the local RPD clock (i.e. the egress rate) will manifest as a drift on the de-jitter buffer depth after low-pass filtering. This will produce the drift rate of the queue depth caused by the frequency offset. This drift rate is directly proportional to the frequency offset between the ingress PCR and the local clock. Specifically, the ingress frequency Fi is directly proportional to the ingress bitrate Bi:
Fi ∝ Bi
and the output frequency F0 is directly proportional to the egress bitrate B0:
F0 ∝ B0
where the differential between the ingress and egress frequencies is expressed in terms of a dimensionless parts-per-million (PPM) frequency offset:
(Fi − F0)/F0 × 10^6 = Δppm.
Therefore,
Fi/F0 = Bi/B0 (Eqn. 1)
Fi/F0 − 1 = Bi/B0 − 1
(Fi − F0)/F0 = (Bi − B0)/B0
Δppm/10^6 = (Bi − B0)/B0, where (Fi − F0)/F0 × 10^6 = Δppm
Δppm/10^6 = (dQ/dt)/B0, where dQ/dt is the rate of change of the queue depth
dQ/dt = (Δppm/10^6) × B0.
To halt the growth/depletion in the dejitter buffer occupancy, the RPD must slew its egress frequency to match the ingress frequency. ISO/IEC 13818-1 mandates a maximum value for this frequency slew rate. Therefore, the value of the system clock frequency, measured in Hz, should and shall meet the following constraints:
27 000 000 − 810 <= system clock frequency <= 27 000 000 + 810
rate of change of system clock frequency with time <= 75 × 10^−3 Hz/s.
A typical frequency offset for a hardware-based video engine is +/−5 ppm. However, for a software-based video engine where the timing is given by a standard crystal-based oscillator, this accuracy is likely to be substantially worse. The ISO 13818-1 specification allows for a +/−810 Hz accuracy on the 27 MHz clock, which equates to a 30 ppm offset. If the video core 102 were to deliver an MPTS asynchronously with a 30 ppm frequency offset, and the RPD clock offset were 5 ppm in the opposite direction, the relative frequency offset would be 35 ppm. If no correction were applied to this frequency offset, the time taken to hit a buffer overrun/underrun condition would depend on the size of the dejitter buffer in the RPD device. The available working depth of the dejitter buffer is given by Qlen/2 − Jmax, where Jmax is the maximum jitter. Therefore, if no frequency correction is applied, the time to overflow/underflow the dejitter buffer is given by:
t = (Qlen/2 − Jmax)/(dQ/dt)
and, by substituting from Eqn. 1,
t = (Qlen/2 − Jmax)/((Δppm/10^6) × B0). (Eq. 2)
Systems and methods described herein preferably slew the egress frequency to match that of the ingress frequency, at a rate high enough to prevent the dejitter buffer from overflowing/underflowing, and do so at a rate that is as close as possible to the 75 mHz/s limit, although if the buffer size is limited, the actual frequency slew rate may have to exceed this limit.
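As a numerical illustration of Eq. 2, the following Python sketch computes the drift rate of the queue depth and the resulting time to a buffer overrun or underrun; the buffer length, jitter allowance and egress bitrate are example values only, with Qlen and Jmax expressed as buffer occupancies in bits.

# Illustrative numerical check of Eq. 2 above; Qlen and Jmax are expressed here
# as buffer occupancies in bits, and B0 is the egress bitrate in bits per second.
def time_to_over_underflow(qlen_bits, jmax_bits, delta_ppm, egress_bps):
    dq_dt = (delta_ppm / 1e6) * egress_bps        # drift rate of the queue depth
    return (qlen_bits / 2 - jmax_bits) / dq_dt    # seconds until overrun/underrun

# Example: a 35 ppm relative offset on a 38.8 Mb/s channel, with a 2 Mbit
# dejitter buffer and 0.4 Mbit of worst-case jitter.
print(round(time_to_over_underflow(2e6, 0.4e6, 35, 38.8e6), 1), "seconds")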
As mentioned previously, VEQs generally recover the PCR clock of the ingress streams, apply the required slew to correct for any frequency offset between that clock and the local VEQ 27 MHz clock, and re-stamp the PCRs output from the VEQ with this corrected clock. An alternative to re-stamping PCRs may be to apply an accumulating offset to each PCR that compensates for the frequency offset. When this accumulating PCR offset exceeds the transmission time of a single Transport Stream Packet (TSP), a TSP can be added/removed from the egress MPTS stream and the PCR offset value can be adjusted back towards zero by this transmission time:
PCR ticks per TSP = (188 × 8 × 27 × 10^6)/(QAM channel bitrate). (Eqn. 3)
The frequency offset applied may preferably vary over time until the ingress and egress MPTS bitrates are equal, i.e., synchronized. This initial rate of change of the PCR offset is proportional to the observed frequency slew seen on the egress stream. Avoiding the need for an RPD/RMD to recover and re-stamp the MPTS PCR clocks beneficially removes a large computational and memory overhead. The frequency slew rate applied is dependent on an estimation of the ppm frequency offset. As shown previously, the frequency offset is directly proportional to the rate of change of the dejitter buffer occupancy, i.e., Eqn. 1. Therefore, after a short settling period during which high-frequency network jitter can be averaged out, the rate of change of the dejitter buffer occupancy can be calculated, thereby giving an approximation of the current ppm frequency offset. According to preferred systems and methods disclosed in the present specification, this frequency offset may be reduced/eliminated over time in a manner that does not result in a buffer overrun/underrun. More specifically, preferred embodiments as herein described employ an adaptive frequency slew rate adjustment, which means varying the frequency slew over time based upon a measured state of the dejitter buffer. In some embodiments, the measured state of the dejitter buffer may indicate a current frequency offset, and that may be the basis of varying the slew over time. Alternatively, or additionally, the measured state of the dejitter buffer may be based on the remaining available buffer occupancy. Referring to FIG. 4, a first embodiment may comprise a method 150 that, at step 152, determines an initial, or current, frequency offset between input data entering the dejitter buffer 116 and output data leaving the dejitter buffer 116. The frequency offset may be determined, for example, by measuring a fullness state of the dejitter buffer 116 over an interval and applying a low-pass filter over that interval to determine a drift on the depth of the dejitter buffer. In a preferred embodiment, the drift may be used to determine a current frequency offset value measured in ppm. In step 154, the determined initial, or current, frequency offset is used to select from a plurality of predetermined scalar slew rate values. As one example, a predetermined slew rate may be associated with each of a plurality of frequency offset ranges, e.g., one slew rate may be applied if the measured frequency offset is less than or equal to 10 ppm, another slew rate may be applied if the measured frequency offset is above 10 ppm but less than or equal to 35 ppm, and a third slew rate may be selected if the measured frequency offset is above 35 ppm.
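A minimal Python sketch of this range-based selection is given below; the 10 ppm and 35 ppm thresholds follow the example above, whereas the returned slew-rate values are placeholders and not values specified by the disclosure.

# Illustrative sketch only of the range-based slew-rate selection of FIG. 4;
# the returned values (in mHz per second) are placeholders for illustration.
def select_slew_rate_mhz_per_s(measured_offset_ppm: float) -> float:
    offset = abs(measured_offset_ppm)
    if offset <= 10:
        return 25.0    # gentle slew, well inside the 75 mHz/s ISO 13818-1 limit
    if offset <= 35:
        return 50.0
    return 75.0        # large offsets: slew at (or, if the buffer is small,
                       # possibly beyond) the nominal 75 mHz/s limit

for ppm in (5, 20, 40):
    print(ppm, "ppm ->", select_slew_rate_mhz_per_s(ppm), "mHz/s")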
Those of ordinary skill in the art will appreciate that other slew rate values for each of these ranges may be used, and a larger number of ranges may be used in various embodiments. Preferably, the slew rates preselected for each of the ranges are pre-calculated to guarantee that the frequency slew rate is sufficiently high so that the frequency offset is corrected before a dejitter buffer overrun/underrun event occurs. At step 156, the selected frequency slew rate is applied, and after a period of time has elapsed, the procedure returns to step 152 so that another measurement may be taken of the frequency offset, which will have been reduced relative to the previous iteration, and the method may thereby continue until the frequency offset has been eliminated. Notably, the rate of change of the dejitter buffer depth will decrease as the frequency offset decreases, so the initial frequency slew rate will have a more dramatic effect on the buffer occupancy. As the frequency offset approaches zero, the chosen slew rate will have less of an effect. Thus, the periodic updating of the frequency slew can be performed at a relatively low rate because the frequency offset correction is a relatively slow process (i.e., possibly >60 minutes for large ppm frequency offsets). Instead of merely adjusting the slew rate based upon a frequency offset, as measured by changes to the depth of the dejitter buffer 116, an alternate implementation may adjust the slew rate based upon both the measured frequency offset as well as a measured remaining working depth of the dejitter buffer. In some specific embodiments, a calculation may be used to determine a stepwise change in slew rate as a function of a measured frequency offset and a measured state of the working depth of a buffer. For example, the slew rate (dF/dt) may be based on a fractional measured frequency drift as follows:
dF/dt = +/−(D × F), where D is the frequency drift rate as a fraction
(dF/dt)/F = +/−D
∫ dF/F = ∫ +/−D dt
ln(F) = +/−D × t + const
Ft = e^(+/−D × t + const)
Ft = F0 × e^(+/−D × t), where F0 = e^const
ln(Ft/F0) = +/−D × t
+/−D = ln(Ft/F0)/((Qlen/2 − Jmax)/((Δppm/10^6) × B0)) (substituting from Eq. 2)
+/−D = (ln(Ft/F0) × (Δppm/10^6) × B0)/(Qlen/2 − Jmax).
Thus, a selected frequency slew rate can be represented by the equation:
dF/dt = ((ln(Ft/F0) × (Δppm/10^6) × B0 × F0)/(Qlen/2 − Jmax)) × S (Eq. 4)
where S is a linear approximation for the low-pass filter's Phase Locked Loop (PLL) (e.g. S = 0.5). The value (Qlen/2 − Jmax) represents the available working depth of the buffer, where Qlen/2 represents the time-averaged (jitter removed) distance the buffer is from being completely full or completely empty, and Jmax represents the maximum experienced jitter. Thus, application of this equation can produce a desired initial/updated slew rate based on a measured frequency offset and a measured available working depth of the buffer. Referring to FIG. 5, for example, another embodiment for applying an adaptive frequency slew rate to a dejitter buffer may use a method 160 in which, at step 162, a buffer state is measured over an interval of time sufficient to average out network jitter so as to determine drift in the buffer due to a frequency offset. At step 164, from the measurements taken in step 162, values are calculated for a measured frequency offset between data entering and exiting the buffer, as well as for a working buffer depth, which in some embodiments will reflect a maximum amount of jitter. At step 166, an initial/updated slew rate is determined. In some embodiments, the slew rate may be determined based on Eq. 4, above. At step 168, the determined slew rate is applied.
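As a numerical illustration of Eq. 4, the following Python sketch evaluates the selected frequency slew rate from example inputs; all numerical values, including the egress bitrate, buffer figures and the factor S, are assumptions chosen only to exercise the formula.

# Illustrative evaluation of Eq. 4 above with assumed example values; Qlen and
# Jmax are expressed as buffer occupancies in bits and B0 in bits per second.
from math import log

def slew_rate_eq4(delta_ppm, b0_bps, f0_hz, qlen_bits, jmax_bits, s=0.5):
    ft = f0_hz * (1.0 + delta_ppm / 1e6)          # target (ingress) frequency
    working_depth = qlen_bits / 2 - jmax_bits     # available working depth
    return (log(ft / f0_hz) * (delta_ppm / 1e6) * b0_bps * f0_hz
            / working_depth) * s                  # Hz per second

print(slew_rate_eq4(delta_ppm=35, b0_bps=38.8e6, f0_hz=27e6,
                    qlen_bits=2e6, jmax_bits=0.4e6))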
After a period of time, the procedure then reverts to step 162 and continues until the frequency offset has been eliminated. FIGS. 6A and 6B show the results of the systems and procedures described in this specification. These figures show that the disclosed systems and methods quickly adjust to prevent buffer underruns/overruns, while also eliminating the frequency offset across a jitter buffer over time. Once the adaptive frequency slew process described above is completed, the egress frequency will match that of the ingress frequency. This implies that the ingress and egress bitrates will also match, and therefore the drift on the depth of the dejitter buffer 116 is eliminated. However, the dejitter buffer 116 will be offset from its center point, while for optimal performance of the dejittering function, the dejitter buffer should be maintained at a 50% fullness state. To recenter the dejitter buffer 116, the RPD/RMD 104 can utilize the allowable tolerance on the PCR accuracy to accumulate DOCSIS ticks, which will facilitate the addition/removal of TSPs to/from the egress stream. ISO/IEC 13818-1 defines this PCR tolerance as “the maximum inaccuracy allowed in received PCRs. This inaccuracy may be due to imprecision in the PCR values or to PCR modification during remultiplexing. It does not include errors in packet arrival time due to network jitter or other causes. The PCR tolerance is +/−500 ns.” Applying a deliberate +/−500 ns error to successive PCRs, on a per-PID basis, equates to adjusting the PCR value by +/−13.5 ticks, i.e. (500 × 10^−9 × 27 × 10^6). Once this accumulated value exceeds the PCR ticks per TSP value (see Eqn. 3), a packet can be added/removed from the egress stream and the PCR adjust value incremented/decremented by the PCR ticks per TSP value. Repeating this process will allow the dejitter buffers to be gradually re-centered, without contravention of the ISO 13818-1 specification. The foregoing specification described systems and methods by which one embodiment of an RPD/RMD 104 operating in async mode within a DAA architecture could apply a PCR offset to incoming video, rather than re-stamp the video data with time values from its own clock, as a less computationally intensive means of maintaining synchronized presentation of the video data. Those of ordinary skill in the art, however, will appreciate that all of the foregoing techniques can also be applied by a VEQ unit in a head end as shown in FIG. 1, for example. It will be appreciated that the invention is not restricted to the particular embodiment that has been described, and that variations may be made therein without departing from the scope of the invention as defined in the appended claims, as interpreted in accordance with principles of prevailing law, including the doctrine of equivalents or any other principle that enlarges the enforceable scope of a claim beyond its literal scope. Unless the context indicates otherwise, a reference in a claim to the number of instances of an element, be it a reference to one instance or more than one instance, requires at least the stated number of instances of the element but is not intended to exclude from the scope of the claim a structure or method having more instances of that element than stated. The word “comprise” or a derivative thereof, when used in a claim, is used in a nonexclusive sense that is not intended to exclude the presence of other elements or steps in a claimed structure or method.
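As a worked check of the recentering arithmetic described above: a deliberate 500 ns PCR adjustment equals 13.5 ticks of the 27 MHz clock, so on the order of 78 successively adjusted PCRs are needed before one TSP can be added to or removed from the egress stream of a 38.8 Mb/s channel (the bitrate being an assumed example value).

# Worked arithmetic for the recentering mechanism described above; the QAM
# channel bitrate of 38.8 Mb/s is an example value only.
ticks_per_adjustment = 500e-9 * 27e6        # 500 ns at 27 MHz = 13.5 ticks
ticks_per_tsp = 188 * 8 * 27e6 / 38.8e6     # Eqn. 3, ~1046.6 ticks per TSP
print(ticks_per_adjustment, round(ticks_per_tsp / ticks_per_adjustment))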
24,091
11943495
DETAILED DESCRIPTION OF THE INVENTION While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts, which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention. Referring initially toFIG.1, therein is depicted one embodiment of a system10utilizing set-top boxes12with guest-specific interactive portals being employed within a hospitality lodging establishment H. The hospitality lodging establishment or more generally, hospitality property, may be a furnished multi-family residence, dormitory, lodging establishment, hotel, hospital, or other multi-unit environment. As shown, by way of example and not by way of limitation, the hospitality environment is depicted as the hotel H having various rooms, including room R and back of the house operations O. The set-top boxes12are communicatively disposed with various amenities associated with the hospitality environment, including televisions14, which as mentioned is depicted as the hotel H. The set-top boxes12may be deployed throughout the rooms R of the hotel H and are in communication with a property server15, which is co-located at the hotel14. As shown, in one embodiment, within a room R, the system10includes a set-top box12and a display illustrated as television14having a screen16. A connection, which is depicted as an HDMI connection18, connects the set-top box12to the television14. Other connections include a power cable20coupling the set-top box12to a power source, a coaxial cable22coupling the set-top box12to external cable source, and a category five (Cat 5) cable24coupling the set-top box12to external pay-per-view source at a hotel or other lodging establishment, for example. As shown, the set-top box12may include a dongle26providing particular technology and functionality extensions thereto. That is, the set-top box12may be set-top box-dongle combination in one embodiment. More generally, it should be appreciated that the cabling connected to the set-top box12will depend on the environment and application and the cabling connections presented inFIG.1are depicted for illustrative purposes. Further, it should be appreciated that the positioning of the set-top box12will vary depending on environment and application and, with certain functionality, the set-top box12may be placed more discretely behind the television14. A television remote control30includes an array of buttons for adjusting various settings such as television channel and volume. In one embodiment, the television remote control30may be a consumer infrared (IR) or other protocol, such as Bluetooth, device configured as a small wireless handheld object that issues commands from a distance to the set-top box12in order to control the television14via the set-top box12, for example. A proximate wireless-enabled interactive programmable device32may be a wireless-enabled interactive handheld device that may be supplied or carried by the guest and may be selected from a range of existing devices, such as, for example iPads®, iPhones®, iPod Touch®, Android® devices, Blackberry® devices, personal computers, laptops, tablet computers, smart phones, and smart watches, for example. 
As will be discussed in further detail below, in one implementation, an application installed from a server enables the set-top box12and the proximate wireless-enabled interactive programmable device32to be wirelessly paired. In another embodiment, a challenge-response is utilized to wirelessly pair the set-top box12and the proximate wireless-enabled interactive programmable device32. Similar to the proximate wireless-enabled interactive programmable device32, a personal computer34and game console36are also depicted in the room R. As shown, a default interactive portal40is displayed on the screen16, unless a guest configuration profile38is loaded within the set-top box12. The guest configuration profile38may be loaded from the operations, e.g., the front desk or hotel headend, by use of the remote control30, or by a proximate device, such as the proximate wireless-enabled interactive programmable device32, personal computer34, or game console36. In one implementation, as illustrated, the set-top box12extends a physical authorization interface, shown as area A, from the set-top box to an area easily accessible to transitory guests' convenience such as in front or side of the television12. This physical authorization interface A may include a short range wireless data connection that is enabled only when very close physically to the proximate wireless-enabled interactive programmable device32, for example. Further, once the pairing is established, the set-top box12provides a secure wireless interface to communicate transitory guest user device authorization information to the set-top box12to accomplish verification. Once authorization information is communicated to the set-top box, the set-top box enables the guest configuration profile. Enabled, the guest configuration profile38provides a customized set-top box experience. More particularly, the guest configuration profile includes guest identification, a guest channel preference presentation, and a guest service preference presentation with guest account information. The guest configuration profile38is a guest-specific, guest-customized set-top box generated environment referencing an explicit digital representation of a guest's identity. The set-top box generates a guest interactive portal42including a guest indication acknowledgement44, the guest channel preference presentation46, and the guest service preference presentation48, which includes premium programming, game, and music content, for example. Further, personal area network and local area network connectivity is provided to the proximate wireless-enabled interactive programmable device32, personal computer34, and game console36as shown by the WiFi indicator W. Referring toFIG.2A,FIG.2B,FIG.2C, andFIG.3, as used herein, set-top boxes, back boxes and set-top/back boxes may be discussed set-top boxes. By way of example, the set-top box12may be a set-top unit that is an information appliance device that generally contains set-top box functionality including having a television-tuner input and displays output through a connection to a display or television set and an external source of signal, turning by way of tuning the source signal into content in a form that can then be displayed on the television screen or other display device. Such set-top boxes are used in cable television, satellite television, and over-the-air television systems, for example. The set-top box12includes a housing50having a rear wall52, front wall54, top wall56, bottom base58, and two sidewalls60,62. 
It should be appreciated that front wall, rear wall, and side wall are relative terms used for descriptive purposes and the orientation and the nomenclature of the walls may vary depending on application. The front wall includes various ports, ports64,66,68,70,72,74,76,78, and80that provide interfaces for various interfaces, including inputs and outputs. In one implementation, as illustrated, the ports64through80include inputs82and outputs84and, more particularly, an Rf input86, a RJ45 input88, universal serial bus (USB) input/outputs90, an Ethernet category 5 (Cat 5) coupling92, an internal reset94, an RS232 control96, an audio out98, an audio in100, and a debug/maintenance port102. The front wall54also includes various inputs82and outputs84. More particularly, ports110,112,114, and116include a 5V dc power connection120, USB inputs/outputs122, an RJ-45 coupling124, and an HDMI port126. It should be appreciated that the configuration of ports may vary with the set-top box depending on application and context. As previously alluded to, the housing50may include a housing-dongle combination including, with respect to the dongle26, a unit130having a cable134with a set-top box connector132for selectively coupling with the set-top box12. Within the housing50, a processor140, memory142, storage144, the inputs82, and the outputs84are interconnected by a bus architecture146within a mounting architecture. The processor140may process instructions for execution within the computing device, including instructions stored in the memory142or in storage144. The memory142stores information within the computing device. In one implementation, the memory142is a volatile memory unit or units. In another implementation, the memory142is a non-volatile memory unit or units. Storage144provides capacity that is capable of providing mass storage for the set-top box12. Various inputs82and outputs84provide connections to and from the computing device, wherein the inputs82are the signals or data received by the set-top box12, and the outputs84are the signals or data sent from the set-top box12. A television content signal input138and a television output150are also secured in the housing50in order to receive content from a source in the hospitality property and forward the content, including external content such as cable and satellite and pay-per-view (PPV) programing, to the television located within the hotel room. A transceiver152is associated with the set-top box12and communicatively disposed with the bus136. As shown the transceiver152may be internal, external, or a combination thereof to the housing. Further, the transceiver152may be a transmitter/receiver, receiver, or an antenna for example. Communication between various amenities in the hotel room and the set-top box12may be enabled by a variety of wireless methodologies employed by the transceiver152, including 802.11, 3G, 4G, Edge, WiFi, ZigBee, near field communications (NFC), Bluetooth low energy and Bluetooth, for example. Also, infrared (IR) may be utilized. The memory142and storage144are accessible to the processor140and include processor-executable instructions that, when executed, cause the processor140to execute a series of operations. The processor-executable instructions receive a guest configuration profile, which as previously discussed may include guest identification, a guest channel preference presentation, and a guest service preference presentation with guest account information. 
Also, as previously discussed, the guest configuration profile is a guest-specific, guest-customized set-top box generated environment referencing an explicit digital representation of a guest's identity. In response to receiving the guest configuration profile at the set-top box 12, the processor-executable instructions cause the processor to temporarily override the default profile and generate a guest interactive portal including a guest indication acknowledgement, the guest channel preference presentation, and the guest service preference presentation. Further, in response to receiving the guest configuration profile, the processor-executable instructions cause the processor to activate a local area wireless connection for a guest device, such as the proximate wireless-enabled interactive programmable device 32, to a network associated with the hospitality establishment. Alternatively, in response to a default profile, the processor-executable instructions cause the processor to generate a default interactive portal prior to forwarding one of the guest interactive portal and the default interactive portal, as appropriate, to the television via the television output. In implementations with multiple set-top boxes disposed in respective multiple rooms, at least one of the set-top boxes will generate a guest interactive portal and at least one of the plurality of set-top boxes will potentially generate a default interactive portal. Referring now to FIG. 4, a method for using a set-top box with an interactive portal is shown. At block 160, multiple set-top boxes are disposed in a respective number of rooms within a lodging establishment. At decision block 162, if a guest configuration profile is not available, then the methodology advances to block 164, where a standard gateway and interactive experience is provided before the method returns to start. On the other hand, if a guest configuration profile is available, then the process advances to decision block 166, where the guest configuration profile is installed from the appropriate source. At block 168, installation is provided from hotel operations, such as a front desk or hotel headend. At block 170, the remote control in the room may provide the guest configuration profile. As a third alternative, at block 172, a proximate wireless-enabled interactive programmable device may be further verified at block 174 and provide the guest configuration profile. Following blocks 168, 170, and 172-174, the methodology continues to block 176, wherein a customized interactive portal is built based on the guest configuration profile. The customized interactive portal may include the guest's name or similar information. Continuing with blocks 178, 180, 182, 184, and 186, the methodology customizes the channel preferences, channel services, and local area network connectivity, e.g., WiFi, for devices, customizes room amenities, and customizes the hotel experience in accordance with the guest configuration profile. That is, in one implementation, following this methodology, various guest devices, such as the aforementioned proximate wireless-enabled interactive programmable devices and personal computers, may be registered and associated with the set-top box for the purpose of joining personal area networks or local area networks to enable various services on that network requiring authorization.
Referring now to FIG. 5, at a guest's home, a wireless access point 200 provides the networking hardware device, such as a router, that allows Wi-Fi compliant devices to connect to a wired network by way of a private wireless network, which is illustrated as home wireless network 202. The home wireless network has a network configuration 204, which provides the network management protocol and mechanisms to install, manipulate, and delete the configuration of various network devices. Such a network configuration may also include a network identification, which is shown as a Service Set Identifier (SSID) 206. In one implementation, the SSID may be a series of 0 to 32 octets that is used as an identifier for the wireless Local Area Network (LAN) and is intended to be unique for the particular home wireless network 202. Various login credentials 208 are also associated with the home wireless network 202. The login credentials 208 may include user names and passwords that permit various devices 210, applications 212, and services 214 to operate over the home wireless network 202. The devices 210 may include the proximate wireless-enabled interactive programmable device 32, the personal computer 34, or the game console 36. The applications 212 may include a computer program designed to perform a group of coordinated functions, tasks, or activities operating on the devices 210 for the benefit of the user, which in this instance is the guest. The services 214 may include various subscription or non-subscription services that provide access to streaming or archived content, such as literature, music, television, and movies, for example. The services 214 may be enabled by the devices 210. It should be appreciated that overlap between the devices 210, the applications 212, and the services 214 may exist. The home wireless network 202 permits the user's or guest's devices 210, applications 212, and services 214 to work seamlessly at the home L without the need for continuous new configuration. The aforementioned guest configuration profile 38 associated with the set-top box 12 within the room R of the hotel H configures a guest private wireless network 220 that acts as an access point having the same network configuration 204, SSID 206, and login credentials 208 for the devices 210, the applications 212, and the services 214. This creates a home-away-from-home environment for the guest, where all of the devices, applications, and services carried by the guest, including streaming movie services, work within the room R just as they work on the home wireless network 202 at the home L. In one embodiment, the guest configuration profile 38 provides the guest private wireless network 220 with behavior substantially identical to that of the home wireless network 202. Therefore, no new configuration is required in the room R. In one implementation, the guest configuration profile 38 may access the information and data necessary to provision the guest private wireless network 220 from the server 15 or a server 222, which may be located offsite or within a cloud C, for example. In operation, in response to receiving the guest configuration profile, a local area wireless network may be activated for a guest device to a network associated with the hospitality establishment.
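By way of illustration only, the following Python sketch shows one hypothetical representation of the guest configuration profile 38 and of the access-point settings that may be derived from it; the field names and the provisioning function are assumptions for the example and do not reflect an actual data format of the described system.

# Illustrative sketch only: a hypothetical guest configuration profile and the
# in-room access-point settings derived from it.
from dataclasses import dataclass
from typing import List

@dataclass
class GuestConfigurationProfile:
    guest_name: str
    home_ssid: str
    home_passphrase: str
    channel_lineup: List[int]
    registered_devices: List[str]

def provision_guest_network(profile: GuestConfigurationProfile) -> dict:
    # The in-room access point is configured with the same SSID and login
    # credentials as the guest's home wireless network, so registered devices
    # can join without reconfiguration.
    return {"ssid": profile.home_ssid,
            "passphrase": profile.home_passphrase,
            "allowed_devices": list(profile.registered_devices)}

profile = GuestConfigurationProfile("A. Guest", "HomeNet", "secret-passphrase",
                                    [2, 7, 11], ["phone", "laptop"])
print(provision_guest_network(profile))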
The local area wireless network may have substantially identical behavior to a home wireless network belonging to the guest such that the local area wireless network provides substantially identical network configuration and device, application, and service login credentials as the home wireless network. In particular, the guest configuration profile may enable the creation of a local area wireless network with substantially the same behavior as the guest's home wireless network. As previously alluded, the guest configuration profile38not only establishes the guest private wireless network220, but may also provision room-specific guest preferred features such as room temperature, television lineup, and other amenity preferences. The order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and that the methods may include more or less elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution. While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is, therefore, intended that the appended claims encompass any such modifications or embodiments.
18,846
11943496
DESCRIPTION OF EMBODIMENT Hereinafter, an embodiment of the present invention will be described in detail on the basis of the drawings.FIG.1is a functional block diagram of a communication system100according to the embodiment. The communication system100according to the embodiment is a game system including, for example, a head-mounted display and a game machine for wirelessly transmitting a video signal to the head-mounted display. As illustrated inFIG.1, the communication system100includes a transmission unit200and a reception unit300. The transmission unit200transmits generation time information and image information to the reception unit300for each frame image. The reception unit300sequentially receives the image information transmitted by the transmission unit200at a given first interval by wireless communication. In addition, the transmission unit200includes a first control unit202, a first clock generation unit204, an image generation unit206, a first processing unit208, and a first communication unit210. In addition, the reception unit300includes a second control unit302, a second clock generation unit304, a second communication unit306, a second processing unit308, a storage unit310, and a display unit400. The first control unit202is, for example, a program control device such as a Central Processing Unit (CPU) that operates in accordance with a program installed in a memory (not illustrated). The first control unit202controls the operation of each unit included in the transmission unit200. The first clock generation unit204generates a first clock used in the first communication unit210. Specifically, for example, the first clock generation unit204is an oscillation circuit including a crystal oscillator and generates the first clock that is a given frequency. Note that the first clock generation unit204may generate clocks to be used in other units included in the transmission unit200. The image generation unit206generates a frame image. Specifically, for example, the image generation unit206generates, per second, pieces of image information the number of which corresponds to a frequency (hereinafter, referred to as a frame frequency) of updating the frame image. For example, when the frame frequency is 120 Hz, the image generation unit206generates 120 pieces of image information per second. Here, it is desirable that the image information includes generation time information indicating the time when the image information is generated for each piece of image information. For example, when the image information is information that conforms to the Moving Pictures Experts Group-4 (MPEG-4) standard, the image information includes the generation time information called PTS (presentation time stamp). The first processing unit208performs processing for each piece of image information generated by the image generation unit206. The processing is, for example, modulation or compression. Specifically, for example, the first processing unit208is an encoder that compresses the image information. When the first processing unit208modulates or compresses the image information, the transmission unit200can transmit, per second, the pieces of image information the number of which corresponds to the frame frequency to the reception unit300. Note that when the image information is transmitted without being processed, the first processing unit208may be omitted. The first communication unit210transmits the image information at the given first interval by wireless communication using the first clock. 
Specifically, the first communication unit210transmits the image information compressed or modulated by the first processing unit208to the reception unit300in accordance with a given wireless communication standard. The given wireless communication standard may be any communication standard as long as a communication speed at which the pieces of image information the number of which corresponds to the frame frequency can be transmitted per second is secured. The second control unit302is a program control device such as a CPU that operates in accordance with a program installed in the storage unit310. The second control unit302controls the operation of each unit included in the reception unit300. In addition, the second control unit302calculates a second interval indicating the interval at which the frame images are displayed on the basis of plural pieces of display time information, and controls the update cycle of the frame image in accordance with a display clock such that the second interval approximates to the first interval. Here, the second control unit302may further control, on the basis of processing time information and the display time information, such that a period of time from the time when the processing is performed to a time when the frame image is displayed approximates to a predetermined value. The specific control will be described later. The second clock generation unit304generates a second clock used in each unit included in the reception unit300. Specifically, for example, the second clock generation unit304is an oscillation circuit including a crystal oscillator, and generates the second clock that is a given frequency. Here, the first clock generation unit204and the second clock generation unit304are configured such that the frequencies of the first clock and the second clock become the same frequency in design. However, even if the specifications of each part included in the first clock generation unit204and the second clock generation unit304are the same, an error exists in the characteristics of each part. In addition, the transmission unit200and the reception unit300are structurally separated from each other, and are put in different environments (for example, temperatures and the like). In this case, since the characteristics of the same parts included in the first clock generation unit204and the second clock generation unit304are different from each other, the frequencies of the first clock and the second clock are usually different from each other. The second communication unit306receives the image information at a given second interval by wireless communication using the second clock. Specifically, the second communication unit306receives the image information from the first communication unit210in accordance with the above-described wireless communication standard. Here, it is desirable that the first interval that is the interval at which the first communication unit210transmits the image information and the second interval that is the interval at which the second communication unit306receives the image information are the same. However, since the first communication unit210and the second communication unit306perform communications by wireless communication, there is a risk that the communications are affected by a disturbance. In addition, as described above, there is a risk that the first clock used in the first communication unit210and the second clock used in the second communication unit306are different from each other. 
Thus, the first interval and the second interval are usually different from each other. The difference causes problems such as missing frames and an increase in display delay. The second processing unit308performs processing for each piece of received image information and generates processing time information in accordance with the time when the processing is to be performed. The processing is, for example, demodulation or decompression. Specifically, for example, the second processing unit308is a decoder that decodes the encoded image information, and generates the processing time information indicating the time when the decoding of the image information is completed. When the second processing unit308processes the image information, the display unit400can display the frame image on the basis of the image information. Note that when the image information is transmitted without being processed, the second processing unit308may be omitted. In this case, the processing time information may be information indicating the time when the second communication unit receives the image information. The storage unit310is a storage element such as a Read-Only Memory (ROM) or a Random Access Memory (RAM), or a hard disk drive. The storage unit310stores programs and the like to be executed by the second control unit302. In addition, the storage unit310stores at least a part of each of the image information of the n-th frame and the image information of the n+1-th frame. Then, when the second communication unit306receives the image information of the n+1-th frame, the storage unit310updates the image information of the n−1-th frame to the image information of the n+1-th frame. Specifically, for example, when the second communication unit306first receives the image information of the first frame, the storage unit310stores the image information of the first frame. Next, when the second communication unit306receives the image information of the second frame, the storage unit310stores the image information of the second frame. Further, when the second communication unit306receives the image information of the third frame, the storage unit310overwrites with and stores the image information of the third frame in the region where the image information of the first frame is stored. Thereafter, when the second communication unit306receives the image information of the n-th frame, the storage unit310overwrites with and stores the image information of the n-th frame in the region where the image information of the n−2-th frame is stored. Accordingly, the storage unit310stores the image information for two frames. The storage unit310functions as a buffer when the image information of the n+1-th frame is read out to the display unit400at the same time when the image information of the n-th frame is rewritten. The display unit400sequentially displays the frame images on the basis of the image information in accordance with the display clock and generates the display time information indicating the time corresponding to the update time of the frame image. Specifically, for example, the display unit400is a display device such as a liquid crystal display device or an organic Electroluminescence (EL) display device. The display unit400has a display clock generation unit402, a panel unit404, and a third control unit406. When the second communication unit306receives the image information of the n+1-th frame, the display unit400displays the frame image of the n-th frame stored by the storage unit310. 
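The alternating overwrite just described amounts to a two-slot frame buffer. The following Python sketch, an illustration only, shows one way the storage unit310could keep exactly two frames so that the display unit400can read the n-th frame while the n+1-th frame is being written.

    # Minimal sketch of the two-frame buffering behavior of storage unit 310: frame n
    # is written into the region that previously held frame n-2, so the frame being
    # displayed is never the frame currently being rewritten.
    class TwoFrameStore:
        def __init__(self):
            self.slots = [None, None]  # storage regions for two frames
            self.latest = 0            # index n of the most recently stored frame

        def store(self, n: int, image_info: bytes) -> None:
            self.slots[n % 2] = image_info  # overwrites the region of frame n-2
            self.latest = n

        def frame_for_display(self) -> bytes:
            # While frame n+1 is being received into the other slot, the display
            # unit reads frame n from this slot.
            return self.slots[self.latest % 2]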
The display clock generation unit402generates a display clock. Specifically, for example, the display clock generation unit402generates the display clock used by the panel unit404on the basis of the second clock and a control amount C generated by the second control unit302. The panel unit404is a glass substrate or a resin substrate on which an electronic circuit necessary for display is formed. The panel unit404displays the frame image of the n-th frame stored in the storage unit310in accordance with an instruction of the third control unit406. The third control unit406is a device for controlling the operation of each unit of the display unit400. Specifically, for example, the third control unit406controls the update cycle of the frame image on the basis of the control amount C generated by the second control unit302. The update cycle is controlled by changing a frequency or a blanking period of the display clock. Specifically, when the update cycle is shortened, the third control unit406controls to increase the frequency of the display clock or to shorten the blanking period. On the other hand, when the update cycle is made longer, the third control unit406controls to lower the frequency of the display clock or to make the blanking period longer. Next, generation of the control amount C by the second control unit302will be described.FIG.2is a diagram for describing calculation of the control amount C. In addition,FIG.3is a diagram for illustrating a relationship among a display time, a processing completion time, and an image information generation time. The letters enclosed in the square frames included inFIG.3indicate the order of the frames. The display time indicates the time when each frame starts. The processing completion time indicates the time when the second processing unit308completes the processing. The video generation time indicates the time when the image generation unit206generates the image information. Note that the display time indicates the time when each frame starts inFIG.3, but may indicate the time when the display is completed or the time when a predetermined time elapses after the start of the display. In addition, the processing completion time may be not the time when the processing is completed but the time when the processing starts or the time when the second communication unit306completes the reception of the image information. Note that a specific example in which the second communication unit306is in the process of receiving the image information of the n+1-th frame in a state where the storage unit310has stored at least a part of each of the frame image information of the n−1-th frame and the frame image information of the n-th frame will be described below. First, the second control unit302calculates the first interval. Specifically, for example, the second control unit302calculates the first interval on the basis of the generation time information included in each of plural pieces of image information. The second control unit302subtracts the time indicated by the generation time information (PTS (n−1)) included in the image information of the n−1-th frame from the time indicated by the generation time information (PTS (n)) included in the image information of the n-th frame. The second control unit302acquires the subtracted value as the first interval (D_SrcTS). 
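The subtraction of consecutive generation time stamps can be illustrated with a short worked example. The figures below assume the MPEG-style 90 kHz time-stamp resolution and the 120 Hz frame frequency mentioned in this description; they are illustrative values, not requirements.

    # Worked example of the first interval (D_SrcTS) computed from consecutive PTS values.
    PTS_CLOCK_HZ = 90_000     # resolution of the generation time information (PTS)
    FRAME_FREQUENCY_HZ = 120  # ideal frame update frequency used in the description

    def first_interval(pts_n: int, pts_n_minus_1: int) -> int:
        """D_SrcTS = PTS(n) - PTS(n-1), in PTS clock ticks."""
        return pts_n - pts_n_minus_1

    # At 120 Hz the PTS advances by 90_000 / 120 = 750 ticks per frame,
    # i.e., a first interval of 750 ticks, or about 8.33 ms.
    print(first_interval(123_750, 123_000))   # 750
    print(750 / PTS_CLOCK_HZ * 1000)          # ~8.33 (milliseconds)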
As illustrated inFIG.3, the first interval (D_SrcTS) is an interval between the time when the image information of one frame is generated and the time when the image information of the next frame is generated. Next, the second control unit302calculates the second interval. Specifically, for example, the second control unit302calculates the second interval on the basis of plural pieces of display time information. The second control unit302subtracts the time indicated by the display time information (DispTS (n−1)) of the n−1-th frame from the time indicated by the display time information (DispTS (n)) of the n-th frame. The second control unit302acquires the subtracted value as the second interval (D_DispTS). The display time information is, for example, a vertical synchronization signal acquired from the display unit400by the second control unit302. The vertical synchronization signal is a signal having a period for displaying one frame image as a cycle, and is, for example, a signal including one pulse in one cycle. The second control unit302acquires the second interval (D_DispTS) by subtracting the time when the vertical synchronization signal of the n−1-th frame is acquired from the time when the vertical synchronization signal of the n-th frame is acquired. As illustrated inFIG.3, the second interval (D_DispTS) is an interval between the time when the display of the n-th frame starts and the time when the display of the n+1-th frame starts. Then, the second control unit302subtracts the second interval from the first interval to calculate a coefficient F (D_SrcTS−D_DispTS). The coefficient F is information indicating a deviation between a time interval at which the image information is generated and a time interval of display actually performed on the basis of the image information. That is, the coefficient F is information indicating a frequency deviation. Note that the times when the first interval and the second interval are calculated need not be synchronized with the time when the display unit400updates the frame image. In addition, it is desirable that the second control unit302calculates the first interval and the second interval plural times in one frame period. Specifically, in a case where the ideal frame update frequency of the display unit400is 120 Hz, the cycle in which the display unit400updates the frame image is approximately 8.33 ms. In contrast, the generation time information is generated at, for example, 90 kHz. In this case, the second control unit302may calculate the first interval and the second interval at 90 kHz. Here, as illustrated inFIG.3, an interval (D_SrcTS+jitter) between the processing completion times of the n-th frame and the n+1-th frame usually includes an inconstant component (jitter) due to the influence of a disturbance or the like. Since the display unit400sequentially displays the frame images on the basis of the image information, the second interval is not constant in a case where the processing completion time differs for each frame. Accordingly, the second control unit302may perform processing such that the second interval (D_DispTS) falls within a predetermined range. 
Specifically, when the second interval (D_DispTS) is a value smaller than a set lower limit value, the second control unit302may set the second interval (D_DispTS) as the lower limit value, and when the second interval (D_DispTS) is a value larger than a set upper limit value, the second interval (D_DispTS) may be set as the upper limit value. For example, in a case where the ideal frame update frequency of the display unit400is 120 Hz, the lower limit value is set to 8.00 ms, and the upper limit value is set to 8.66 ms. In addition, the second control unit302may perform processing for smoothing the second interval (D_DispTS). For example, the second control unit302may calculate the moving average of the second interval (D_DispTS) calculated in a period of 8.33 ms. Next, the second control unit302calculates a third interval (Raw_Ph). Specifically, for example, the second control unit302calculates the third interval in the n-th frame on the basis of the time when the processing by the second processing unit308is completed and the display time information. The second control unit302subtracts the time indicated by the time (DecTS (n)) when the processing of the n-th frame is completed from the time indicated by the display time information (DispTS (n)) of the n-th frame. The second control unit302acquires the subtracted value as the third interval (Raw_Ph). Next, the second control unit302performs processing for smoothing the third interval. Specifically, for example, the second control unit302performs moving average processing and low-pass filter processing (Phase=LPF (Raw_Ph)) by an operation. Techniques known from the past may be applied to the processing. Hereinafter, the smoothed third interval (Raw_Ph) is written as a third interval (Phase). As illustrated inFIG.3, the third interval (Phase) is an interval between the time when the processing of the n-th frame is completed and the time when the display of the frame image starts on the basis of the image information of the n-th frame. Then, the second control unit302calculates a coefficient P (Phase-Target) by subtracting a target value (Target) of a period of time required from the time when the processing is performed to the time when the display of the frame image starts from the third interval (Phase). The target value (Target) is a value set as a period of time required from the time when the processing is performed to the time when the display of the frame image starts under an ideal environment. That is, the coefficient P is information indicating a difference between the actually-required period of time and the ideal period of time from the time when the processing is performed to the time when the display of the frame image starts. That is, the coefficient P is information indicating a phase deviation. Here, the first interval and the second interval match each other and do not change with time in an ideal environment where a disturbance or the like does not exist. In addition, the third interval is also constant. Therefore, a period of time from the time when the image information of the n-th frame is stored in the storage unit310after the second processing unit308processes the image information of the n-th frame to the time when the display by the display unit400starts is constant. In this case, the coefficient P is 0. However, the period of time required from the time when the processing is performed to the time when the display of the frame image starts actually varies due to the disturbance or the like as described above. 
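Gathering the steps above, the following sketch computes the frequency-deviation coefficient F and the phase-deviation coefficient P from the measured intervals. The clamp limits follow the 120 Hz example given above; the one-pole low-pass filter merely stands in for LPF(Raw_Ph), whose exact form is not fixed by this description, so its coefficient is an assumption.

    # Illustrative computation of coefficient F (frequency deviation) and
    # coefficient P (phase deviation) from the quantities defined above.
    def coefficient_f(d_src_ts: float, d_disp_ts: float) -> float:
        """F = D_SrcTS - D_DispTS."""
        return d_src_ts - d_disp_ts

    def clamp_second_interval(d_disp_ts_ms: float, lower: float = 8.00, upper: float = 8.66) -> float:
        """Keep D_DispTS within the range given for the 120 Hz example."""
        return min(max(d_disp_ts_ms, lower), upper)

    class LowPass:
        """One-pole low-pass filter standing in for LPF(Raw_Ph); alpha is illustrative."""
        def __init__(self, alpha: float = 0.1):
            self.alpha = alpha
            self.state = None

        def update(self, raw_ph: float) -> float:
            if self.state is None:
                self.state = raw_ph
            else:
                self.state = (1.0 - self.alpha) * self.state + self.alpha * raw_ph
            return self.state

    def coefficient_p(disp_ts_n: float, dec_ts_n: float, target: float, lpf: LowPass) -> float:
        raw_ph = disp_ts_n - dec_ts_n  # third interval: processing completion -> display start
        phase = lpf.update(raw_ph)     # smoothed third interval (Phase)
        return phase - target          # P = Phase - Target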
Therefore, the coefficient P calculated by subtracting the target value (Target) from the third interval is usually not 0. Next, in a case where a period of time in which an absolute value of the difference between the first interval and the second interval becomes larger than a first value continues for a first period of time or more, the second control unit302controls the update cycle using the first interval and the second interval. In addition, in a case where a period of time in which the absolute value of the difference between the first interval and the second interval becomes smaller than a second value continues for a second period of time or more, the second control unit302controls the update cycle using the first interval, the second interval, the processing time information, and the display time information. That is, the second control unit302divides the case into two states according to the magnitudes of the coefficient F and the coefficient P, and calculates the control amount C for controlling the update cycle of the frame image by a different method according to the states. The two states will be described using a state transition diagram illustrated inFIG.4. The initial state is a first state in which control is performed only by the coefficient F. In a case where a period of time in which the absolute value of the coefficient F becomes smaller than the second value (stable_threshold) continues for the second period of time (stable_time) or more, the state transits from the first state to the second state. The second state is a state in which control is performed by the coefficient F and the coefficient P. When a period of time in which the absolute value of the coefficient F becomes larger than the first value (instable_freq_threshold) continues for the first period of time (instable_time) or more, the state transits from the second state to the first state. The first state is a state in which a deviation between the time interval at which the image information is generated and the time interval of display actually performed on the basis of the image information is large. Thus, it is necessary to largely change the update cycle of the frame image. Specifically, for example, in the case of the first state, the second control unit302calculates the control amount C in accordance with Equation 1. The function f0of Equation 1 is a function of outputting a value proportional to the absolute value of the coefficient F. The control amount C does not include a component P related to the phase. C=F×f0(abs(F))  [Math. 1] On the other hand, the second state is a state in which a deviation between the time interval at which the image information is generated and the time interval of display actually performed on the basis of the image information is small. Thus, it is not necessary to largely change the update cycle of the frame image. Specifically, for example, in the case of the second state, the second control unit302calculates the control amount C in accordance with Equation 2. The function f1of Equation 2 is a function of outputting a value proportional to the absolute value of the coefficient P. C=F×f0(abs(F))+P×f1(abs(P))  [Math. 2] The second control unit302controls the update cycle of the frame image using the control amount C. Specifically, the second control unit302changes the update cycle of the current frame image only by the control amount C. 
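The two-state control can be sketched as a small state machine. In the sketch below the thresholds, dwell times, and gain functions f0 and f1 are placeholders; the description only requires that f0 and f1 output values proportional to abs(F) and abs(P), respectively.

    # Illustrative two-state controller computing the control amount C
    # (Equation 1 in the first state, Equation 2 in the second state).
    class UpdateCycleController:
        def __init__(self, stable_threshold: float, stable_time: float,
                     instable_freq_threshold: float, instable_time: float):
            self.state = "FIRST"  # initial state: control by coefficient F only
            self.stable_threshold = stable_threshold
            self.stable_time = stable_time
            self.instable_freq_threshold = instable_freq_threshold
            self.instable_time = instable_time
            self.timer = 0.0

        @staticmethod
        def f0(x: float) -> float:
            return 0.5 * x   # placeholder gain, proportional to abs(F)

        @staticmethod
        def f1(x: float) -> float:
            return 0.25 * x  # placeholder gain, proportional to abs(P)

        def control_amount(self, F: float, P: float, dt: float) -> float:
            # Track how long abs(F) has stayed below (or above) the thresholds.
            if self.state == "FIRST":
                self.timer = self.timer + dt if abs(F) < self.stable_threshold else 0.0
                if self.timer >= self.stable_time:
                    self.state, self.timer = "SECOND", 0.0
            else:
                self.timer = self.timer + dt if abs(F) > self.instable_freq_threshold else 0.0
                if self.timer >= self.instable_time:
                    self.state, self.timer = "FIRST", 0.0

            if self.state == "FIRST":
                return F * self.f0(abs(F))                    # Equation 1
            return F * self.f0(abs(F)) + P * self.f1(abs(P))  # Equation 2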
That is, when the control amount C is a positive value, the second control unit302makes the update cycle of the frame image longer (that is, decreases the update frequency of the frame image). On the other hand, when the control amount C is a negative value, the second control unit302shortens the update cycle of the frame image (that is, increases the update frequency of the frame image). The first state is a state in which the above-described deviation is large and it is necessary to control the frequency before matching the phases. Therefore, the coefficient F larger than that in the second state is calculated on the basis of Equation 1 in the first state. Thus, in the first state, the update cycle of the frame image is changed more largely than in the second state. On the other hand, the second state is a state in which the above-described deviation is small and it is necessary to control the phase. Thus, the update cycle of the frame image is controlled such that the coefficient P approximates to the target value (Target) on the basis of Equation 2 indicating that the component of the coefficient P is included in the control amount C. Note that the component of the coefficient F is also included in Equation 2. Accordingly, the second control unit302can not only control the update cycle of the frame image so as to align the phases on the basis of the coefficient P, but also prevent the frequency from being largely deviated on the basis of the coefficient F. The coefficient F and the coefficient P are controlled to become smaller by repeating the above-described control. Note that the second control unit302converts the control amount C into a format that can be recognized by the display unit400, and then transmits the converted signal to the display unit400. The display unit400changes the update cycle of the frame image by changing the frequency of the display clock or the length of the blanking period by using the signal. As described above, the second control unit302controls the update cycle of the frame image such that the second interval approximates to the first interval. Note that the present invention is not limited to the above-described embodiment. In addition, the above-described specific letter strings and numerical values and the specific letter strings and numerical values in the drawings are illustrative, and the present invention is not limited to these letter strings and numerical values. For example, the second control unit302may control the frequency of the display clock using the coefficient F and control the length of the blanking period using the coefficient P. The coefficient F is information on which a deviation between the frequency at which the image information is generated and the update frequency of the frame image is reflected. On the other hand, the coefficient P is information on which a deviation between the time when the image information is generated and the time when the display of the frame image starts is reflected. That is, the coefficient F is information indicating a frequency deviation, and the coefficient P is information indicating a phase deviation. Therefore, the second control unit302may control the frequency of the display clock using the coefficient F indicating the frequency deviation and control the length of the blanking period using the coefficient P indicating the phase deviation. It is possible to further reduce the possibility of missing frames and an increase in display delay under the control.
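The sign convention and the variation described above, correcting the frequency deviation through the display clock and the phase deviation through the blanking period, can be summarized as follows. The display handle and its methods are hypothetical; the actual conversion of the control amount into a format recognizable by the display unit400depends on the display hardware.

    # Illustrative application of the control amount C, and of the split-control
    # variation (coefficient F -> display clock frequency, coefficient P -> blanking).
    def apply_control_amount(display, C: float) -> None:
        if C > 0:
            display.lengthen_update_cycle(C)    # positive C: lower the update frequency
        elif C < 0:
            display.shorten_update_cycle(-C)    # negative C: raise the update frequency

    def apply_split_control(display, f_term: float, p_term: float) -> None:
        display.adjust_clock_frequency(f_term)  # frequency deviation corrected via the display clock
        display.adjust_blanking_period(p_term)  # phase deviation corrected via the blanking period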
27,721
11943497
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. DETAILED DESCRIPTION Provided herein are system, apparatus, article of manufacture, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for distributing audio when playing content, such as a movie or TV show. In some embodiments, the audio is transmitted over a wireless network to wireless speakers. The wireless network may be a home WIFI network. In addition to transmitting the audio, the home WIFI network may also be used to stream the content, and/or enable what could be a multitude of electronic devices to access the Internet. Due to this heavy use of the home WIFI network, it may take some time to transmit the audio to the wireless speakers when using the home WIFI network, and this may make it difficult to maintain synchronization of the audio and video playback. Accordingly, in some embodiments, a separate wireless connection/network may be established with the wireless speakers. This separate wireless connection/network (or “clean channel”) may then be used to transmit the audio to the wireless speakers. Since this separate wireless connection/network is not subject to the burdens of the home WIFI network, the audio may be transmitted significantly faster, thereby better enabling audio/video sync during playback of the content. According to some embodiments, this functionality is achieved by a media device that has only a single wireless radio. As will be appreciated, most modern WIFI radios can simultaneously connect to a network (for example, act as a client connecting to an Access Point or AP; this may be referred to as the primary network), and create a wireless network of its own (for example, act as an AP and have clients connect to it; this may be referred to as the secondary network). However, the secondary network must share the channel with the primary network, so the secondary network still suffers from congestion caused by traffic on the primary network. This disclosure includes embodiments that operate to move the radio off the main AP channel to a clean channel that is not affected by traffic on the main AP channel. FIG.1illustrates a block diagram of a whole home entertainment system102, according to some embodiments. In a non-limiting example, whole home entertainment system102is directed to playing content such as movies, TV shows, games, audio books, and music, to name just some examples. System102may be located in a user's home or any other location, such as a business, a park, a stadium, a restaurant, a bar, or a government office, to name just some examples. Whole home entertainment system102may include one or more display devices104. Display devices104may be any combination of monitors, televisions (TVs), computers, smart phones, tablets, wearables (such as a watch), appliances, and/or projectors, to name just some examples. Display devices104may include internal speakers105. Each display device104may be connected to a media device106. Each media device106may be separate from its respective display device104, or may be part of or integrated with the display device104. Each media device106may be a streaming media device (that is, a streaming source) that may stream content from content sources124, and may provide such content to its respective display device104for playback to users. 
For example, a given media device106may communicate via a home (or local) wireless network126to stream content from content sources124via the Internet122. Wireless network126may be any wireless network, wireless medium or communication protocol such as WIFI, Bluetooth, infrared, cellular, etc., or any combination thereof. In some embodiments, the media device106may be connected to its respective display device104via a HDMI (high-definition multimedia interface) ARC (audio return channel) connection155. As will be appreciated by persons skilled in the relevant art(s), the media device106may transmit audio and video to the display device104over the HDMI ARC connection155. Also, the media device106may receive from the display device104over the HDMI ARC connection155the audio that is being played by the display device104. The media device106receives the audio in real time from the display device104, that is, as the audio is played on the display device104. Each content source124may store content140and metadata142. Content140may include any combination of music, videos, movies, TV programs, multimedia, images, still pictures, text, graphics, gaming applications, advertisements, programming content, public service content, government content, local community content, software, and/or any other content or data objects in electronic form. Metadata142may include data about content140. For example, metadata142may include associated or ancillary information indicating or related to writer, director, producer, composer, artist, actor, summary, chapters, production, history, year, trailers, alternate versions, related content, applications, and/or any other information pertaining or relating to the content140. Metadata142may also or alternatively include links to any such information pertaining or relating to the content140. Metadata142may also or alternatively include one or more indexes of content140, such as but not limited to a trick mode index. Each display device104may also receive content for playback from any number of non-streaming sources119in addition to media device106, such as cable or satellite120, an over-the-air antenna122, a Blu-Ray/DVD player124, etc., to name just some examples. Each display device104may also or alternatively wirelessly receive content from any number of other electronic devices113over the home wireless network126, such as from computers114, smart phones116, appliances and internet of things (IoT) devices118, etc., to name just some examples. Any or all of these electronic devices113may also use the home wireless network126to access the Internet122in a well-known manner. The whole home entertainment system102may include wireless speakers108. In some embodiments, when playing content (such as music, movies, TV shows, etc.), the audio portion of the content is provided to the wireless speakers108for playback (the audio may also or alternatively be provided to the internal speakers105in display devices104for playback). For example, during streaming, the media device106may receive content from content sources124via the Internet122and the home wireless network126. Then, the media device106may transmit the content to the display device104for video playback via the HDMI/ARC connection155, and may transmit the audio of the content to the wireless speakers108via the home wireless network126for audio playback. 
But, the home wireless network126may be burdened by having to stream the content to the media device106from content sources124(as just described), as well as having to provide connectivity to the Internet122to a multitude of electronic devices113. Due to this burden on the home wireless network126, it may take some time to transmit the audio to the wireless speakers108over the home wireless network126. For example, it may take 100s of milliseconds to transmit the audio to the wireless speakers108over the home wireless network126, and this latency may greatly vary depending on the load on the home wireless network126at any given moment. This latency may make it difficult to maintain audio and video synchronization when content is played. As persons skilled in the relevant art(s) will appreciate, loss of audio/video sync may greatly detract from users' experience when consuming content. Accordingly, in some embodiments, a new wireless connection or network112may be established that is separate from the home wireless network126. This wireless network/connection112may be used to transmit audio from the media device106to the wireless speakers108. Since the wireless network/connection112is not burdened like the home wireless network126(that is, since the wireless network/connection112is a “clean channel”), it make take significantly less time to transmit the audio. For example, in some embodiments, the audio may be transmitted from the media device106to the wireless speakers108via this wireless connection/network112in approximately 20-30 milliseconds. Accordingly, use of the wireless connection/network112in this manner makes it much easier to maintain audio/video sync. Also, embodiments of this disclosure achieve these technological advantages while using only a single wireless radio in the media device106. This shall now be described further. FIG.2illustrates an example media device106, according to some embodiments. The media device106may include one or more video processing modules202, and one or more audio processing modules204. Each video processing module202may be configured to decode, encode and/or translate video of one or more video formats, such as but not limited to MP4 (mp4, m4a, m4v, f4v, f4a, m4b, m4r, f4b, mov), 3GP (3gp, 3gp2, 3g2, 3gpp, 3gpp2), OGG (ogg, oga, ogv, ogx), WMV (wmv, wma, asf), WEBM, FLV, AVI, QuickTime, HDV, MXF (OP1a, OP-Atom), MPEG-TS, MPEG-2 PS, MPEG-2 TS, WAV, Broadcast WAV, LXF, GXF, and/or VOB, to name just some examples. Each video processing module202may include one or more video codecs, such as but not limited to H.263, H.264, HEV, MPEG1, MPEG2, MPEG-TS, MPEG-4, Theora, 3GP, DV, DVCPRO, DVCPRO, DVCProHD, IMX, XDCAM HD, XDCAM HD422, and/or XDCAM EX, to name just some examples. Similarly, each audio processing module204may be configured to decode, encode and/or translate audio of one or more audio formats, such as but not limited to AAC, HE-AAC, AC3 (Dolby Digital), EAC3 (Dolby Digital Plus), WMA, WAV, PCM, MP3, OGG GSM, FLAC, AU, AIFF, and/or VOX, to name just some examples. Media device106may also include buffer206for buffering video, as well as other types of data. The media device106may include a microphone208for receiving audio input, such as spoken audio commands from users, as well as audio output by the internal speakers105and/or the wireless speakers108while content is being played. The media device106may include an audio detection module212for detecting the presence or absence of audio being received by the microphone208. 
The audio detection module212may also be configured to compare and process two or more audio streams; this is further described below. In some embodiments, the media device106includes a single wireless radio214, for communicating wirelessly in a well-known manner. Media device106may include a networking module210for connecting and disconnecting the radio214to the home wireless network126in a well-known manner. The networking module210may be also configured to create, establish and maintain a wireless connection or network112using the radio214, where the wireless connection/network112is separate from the home wireless network126. The networking module210may perform this function in any well-known manner, such as via the well-known SoftAP (software enabled access point) technology. However, the networking module210is not limited to this example embodiment for creating, establishing and maintaining the wireless connection/network112. FIGS.3and4collectively illustrate a method300for wirelessly distributing audio while playing content that includes such audio, according to some embodiments. Method300can be performed by processing logic that can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions executing on a processing device), or a combination thereof. It is to be appreciated that not all steps may be needed to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown inFIG.3, as will be understood by a person of ordinary skill in the art. Method300shall be described with reference toFIGS.1and2. However, method300is not limited to those example embodiments. Method300may operate while the display device104is playing content, such as a movie, TV show, game, music, etc., to name just some examples. This is indicated by301. In302, the media device106may determine if it is the source of the content currently being played by the display device104. Operation of302is described below with reference toFIG.5. If the media device106is the source, then304is performed. In304, the media device106may stream the content from a content source124. In particular, the media device106may receive the content from the content source124via the Internet122and the home wireless network126. The content may include both video and audio. In306, the video processing module202of the media device106may decode the video, and optionally buffer the decoded video in the buffer206. Such buffering206may be performed to compensate for delays in transmitting the audio to the wireless speakers108over the home wireless network126, to thereby better achieve sync between the audio and video during playback of the content. In308, the audio processing module204may decode the audio of the content, and then the media device106may transmit the decoded audio to the wireless speakers108via the home wireless network126. It is noted that, since the media device106includes just a single radio214, the home wireless network126must be used to deliver the audio to the wireless speakers108, because the home wireless network126is also needed by the media device106to receive the content from the content source124(that is, the media device106cannot disconnect from the home wireless network126and establish a separate wireless connection/network112in the manner that is described below). 
In310and312, the video may be provided from the buffer206to the display device104over the HDMI/ARC connection155for playback, and the audio is provided to the wireless speakers108over the home wireless network126for playback. Since the video was buffered in the buffer206to compensate for delays in the home wireless network126, such playback of video and audio is in sync. Referring back to302, if it is determined that the media device106is not the source of the content being played on the display device104, then402is performed (seeFIG.4). It is noted that, in this case, the display device104may be receiving the content (both audio and video) from one of the non-streaming sources119, or any other source other than the media device106. In402, the media device106may inform the wireless speakers108that it should begin communicating via a new wireless connection/network112. Such operation may be achieved in a well-known manner using signaling or messaging (or any other means) via the home wireless network126or any other communication medium, means, methodology, approach and/or technology. In404, the networking module210of the media device106may disconnect the radio214from the home wireless network126, in a well-known manner. Such disconnection is possible in this case, since the media device106is not the source of the content being played on the display device104and, thus, the home wireless network126is not needed to stream content from the content source124. In406, the networking module210may create and/or otherwise establish the new wireless connection and/or network112in a well-known manner (using SoftAP technology, for example). This wireless connection/network112may be different and independent of the home wireless network126, such that any burdens on the home wireless network126(as discussed above) do not impact the wireless connection/network112. In some embodiments, the wireless connection/network112may be dedicated for the transmission of audio data from the media device106to the wireless speakers108. In408, the media device106may receive the audio of the content from the display device104, while the content is being played on the display device104. In the example ofFIG.1, the audio is received via the HDMI/ARC connection155. In other embodiments, the media device106may receive the audio from the display device104via other means during playback of the content, such as via SPDIF (SONY/PHILLIPS Digital Interface), analog audio, etc. In410, the audio processing module204may decode the audio of the content (to the extent necessary), and then the media device106may transmit the decoded audio to the wireless speakers108via the wireless connection/network112. In414, the audio plays on the wireless speakers108. Since the audio was provided to the wireless speakers108much faster over the wireless connection/network112compared to the home wireless network126(that is, for example, 20-30 milliseconds versus 100s of milliseconds), the audio and video playback are synchronized. This is the case, even though the video was playing on the display device104when the audio was provided to the media device106over the HDMI/ARC connection155(or via other means, as described above). In practice, there may be some latency between such video playback and audio playback, but due to the greater speed of the wireless connection/network112, such latency is not so great to be discernable by users experiencing the content. 
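The two branches of method300described above can be summarized in a short sketch. The methods on the media_device object are hypothetical stand-ins for the modules ofFIG.2; the sketch only mirrors the ordering of the steps, not any particular implementation.

    # High-level sketch of method 300: stream over the home network when the media
    # device is the source, otherwise move to a separate "clean channel" network.
    def distribute_audio(media_device) -> None:
        if media_device.is_content_source():                   # step 302 (see FIG. 5)
            video, audio = media_device.stream_and_decode()    # steps 304-308, network 126
            media_device.buffer_video(video)                   # buffer 206 offsets audio latency
            media_device.send_audio_over_home_network(audio)   # speakers 108 via network 126
            media_device.play_video_over_hdmi_arc()            # steps 310-312, playback in sync
        else:
            media_device.notify_speakers_of_new_network()      # step 402
            media_device.disconnect_from_home_network()        # step 404
            clean_channel = media_device.create_soft_ap()      # step 406, connection 112
            for audio_chunk in media_device.audio_from_display():  # step 408 (HDMI ARC, SPDIF, ...)
                media_device.send(clean_channel, media_device.decode(audio_chunk))  # steps 410-414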
In some embodiments, the media device106may cause the wireless speakers108to switch back to the home wireless network126to receive audio data. This may occur, for example, if the media device106determines that, based on current conditions, traffic on the home wireless network126would not prevent audio/video playback sync if the home wireless network126was used to transmit the audio to the wireless speakers108. As discussed above, in302, the media device106may determine if it is the source of the content currently being played by the display device104. Alternatively, the source of the content may be a non-streaming source119(such as cable or satellite120, antenna122, Blu-Ray/DVD124, etc.), or any other source. Operation of302shall now be described below with reference toFIG.5. In502, the display device104may signal or otherwise indicate to the media device106that it is or is not the source of content currently being played on the display device104. The display device104may provide such signaling via a hot plug detect (HPD) pin of the HDMI/ARC connection155, or through any other well-known means, approach, functionality, mechanism or technology. However, some display devices104may leave the HPD pin active (or inactive) irrespective of the current source. Accordingly, in some embodiments, even if the display device104indicates through the HPD pin that the media device106is (or is not) the source,504and/or506may still be performed. In504, the audio detection module212of the media device106compares the audio being provided by the media device106to the display device104over the HDMI/ARC connection155, to the audio being received by the media device106from the display device104over the HDMI/ARC connection155, to determine whether the display device104is playing the audio that is being provided by the media device106. If it is determined that the display device104is playing the audio that is being provided by the media device106, then the audio detection module212concludes that the media device106is the source. In some embodiments, the audio detection module212performs such comparison using any one or combination of well-known correlation methods, techniques, procedures, approaches and/or technologies. It is noted that the comparison of504may be possible even when the HDMI connection155is not ARC (or the audio is being delivered via other means such as SPDIF, analog audio, etc.). For example, the audio detection module212may instead compare the audio being provided by the media device106to the display device104, to the audio output by the display device104using internal speakers105(or wireless speakers108) and received by the media device106via microphone208. It is noted that it may not be possible for the audio detection module212to conclude with substantial certainty that the display device104is playing the audio that is being provided by the media device106. This may be the case for a number of reasons. For example, the display device104may process the audio before playing it back (to enhance quality or add sound effects, for example). Therefore, even when the audio detection module212in504concludes that it is likely that the media device106is the source,506may be performed. In506, the audio detection module212performs one or more heuristics to determine with more certainty whether the media device106is the source. For example, the audio detection module212may cause silence to be sent to the display device104, and determine whether the display device104is outputting silence. 
If the display device104is outputting silence, then it is more likely that the media device106is the source; otherwise, it is more likely the media device106is not the source. More generally, the audio detection module212may cause known data to be sent to the display device104, and determine whether the display device104is outputting such data. If the display device104is outputting the known data, then it is more likely that the media device106is the source; otherwise, it is more likely the media device106is not the source. As another example, the audio detection module212may detect if users have interacted with the media device106within a given time period. For example, the audio detection module212may determine if users have used a remote control (not shown in FIG.1) to interact with a user interface of the media device106, or have used pause/rewind/fast forward/stop (etc.) buttons of the remote control to control playback by the media device106. If users have interacted with the media device106within the given time period, then it is more likely the media device106is the source; otherwise, it is less likely the media device106is the source. In some embodiments, if any of502,504and/or506indicates that the media device106is the source, then the audio detection module212concludes that the media device106is the source. Otherwise, the audio detection module212concludes that the media device106is not the source. In other embodiments, at least two of502,504and506would have to indicate that the media device106is the source, in order for the audio detection module212to conclude that the media device106is the source. Otherwise, the audio detection module212concludes that the media device106is not the source. In other embodiments,504and at least one of the heuristics of506would have to indicate that the media device106is the source, in order for the audio detection module212to conclude that the media device106is the source. Otherwise, the audio detection module212concludes that the media device106is not the source. Example Computer System Various embodiments and/or components therein can be implemented, for example, using one or more computer systems, such as computer system600shown inFIG.6. Computer system600can be any computer or computing device capable of performing the functions described herein. For example, one or more computer systems600or portions thereof can be used to implement any embodiments ofFIGS.1-5, and/or any combination or sub-combination thereof. Computer system600includes one or more processors (also called central processing units, or CPUs), such as a processor604. Processor604is connected to a communication infrastructure or bus606. One or more processors604can each be a graphics processing unit (GPU). In some embodiments, a GPU is a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU can have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system600also includes user input/output device(s)603, such as monitors, keyboards, pointing devices, etc., that communicate with communication infrastructure606through user input/output interface(s)602. Computer system600also includes a main or primary memory608, such as random access memory (RAM). Main memory608can include one or more levels of cache. 
Main memory608has stored therein control logic (i.e., computer software) and/or data. Computer system600can also include one or more secondary storage devices or memory610. Secondary memory610can include, for example, a hard disk drive612and/or a removable storage device or drive614. Removable storage drive614can be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup device, and/or any other storage device/drive. Removable storage drive614can interact with a removable storage unit618. Removable storage unit618includes a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit618can be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/any other computer data storage device. Removable storage drive614reads from and/or writes to removable storage unit618in a well-known manner. According to an exemplary embodiment, secondary memory610can include other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system600. Such means, instrumentalities or other approaches can include, for example, a removable storage unit622and an interface620. Examples of the removable storage unit622and the interface620can include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system600can further include a communication or network interface624. Communication interface624enables computer system600to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number628). For example, communication interface624can allow computer system600to communicate with remote devices628over communications path626, which can be wired and/or wireless, and which can include any combination of LANs, WANs, the Internet, etc. Control logic and/or data can be transmitted to and from computer system600via communication path626. In some embodiments, a non-transitory, tangible apparatus or article of manufacture comprising a tangible computer useable or readable medium having control logic (software) stored thereon is also referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system600, main memory608, secondary memory610, and removable storage units618and622, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system600), causes such data processing devices to operate as described herein. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown inFIG.6. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described herein. 
CONCLUSION It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections can set forth one or more but not all exemplary embodiments as contemplated by the inventors, and thus, are not intended to limit this disclosure or the appended claims in any way. While this disclosure describes exemplary embodiments for exemplary fields and applications, it should be understood that the disclosure is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of this disclosure. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein. Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein. References herein to “one embodiment,” “an embodiment,” “an example embodiment,” or similar phrases, indicate that the embodiment described can include a particular feature, structure, or characteristic, but every embodiment can not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. Additionally, some embodiments can be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments can be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, can also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
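As a concrete illustration of the source-detection logic ofFIG.5discussed earlier (the HPD indication of502, the audio comparison of504, and the heuristics of506), the sketch below correlates the outgoing and returned audio and then combines the indicators. The normalization, the correlation threshold, and the "at least two indicators" policy shown here correspond to only one of the alternatives described; all names are illustrative.

    # Sketch of the audio comparison (504) and indicator voting (502/504/506).
    import numpy as np

    def likely_same_audio(sent: np.ndarray, returned: np.ndarray, threshold: float = 0.8) -> bool:
        """Normalized cross-correlation of the audio sent to, and received back from, the display."""
        sent = (sent - sent.mean()) / (sent.std() + 1e-9)
        returned = (returned - returned.mean()) / (returned.std() + 1e-9)
        corr = np.correlate(returned, sent, mode="full") / len(sent)
        return float(np.max(np.abs(corr))) >= threshold  # high peak: display plays our audio

    def media_device_is_source(hpd_indicates_source: bool,
                               audio_matches: bool,
                               heuristic_results: list,
                               required_votes: int = 2) -> bool:
        """Combine the indicators; here at least two of 502, 504, and 506 must agree."""
        votes = [hpd_indicates_source, audio_matches, any(heuristic_results)]
        return sum(votes) >= required_votes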
30,881
11943498
DETAILED DESCRIPTION OF THE EMBODIMENTS It should be understood that the specific embodiments described herein are only intended to explain the present disclosure, not to limit it. As shown in FIG. 1, FIG. 1 is a schematic structural diagram of the hardware of a display terminal according to an embodiment of the present disclosure. The display terminal includes a communication module 10, a memory 20, a processor 30 and other components in its hardware structure. In the display terminal, the processor 30 is connected to the memory 20 and to the communication module 10. A computer program is stored in the memory 20 and is executed by the processor 30; when the computer program is executed, the operations of the following method embodiments are implemented. The communication module 10 can communicate with an external communication device via a network. The communication module 10 can receive requests sent by the external communication device, and can also send requests, instructions, and information to the external communication device. The external communication device can be a user terminal, another system server, or the like. The memory 20 can be used to store software programs and various data. The memory 20 can mainly include a program storage area and a data storage area. The program storage area can store an operating system and an application program required by at least one function (for example, obtaining an initial current value corresponding to a current light intensity value of a TV and a target current value corresponding to a target light intensity value in a light intensity adjustment request), or the like. The data storage area can include a database and can store data or information according to the use of the display terminal. In addition, the memory 20 can include a high-speed random access memory, and can also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. The processor 30 is the control center of the display terminal and uses various interfaces and lines to connect the parts of the entire display terminal. By running or executing the software programs and/or modules stored in the memory 20 and calling the data stored in the memory 20, the processor 30 executes the various functions of the display terminal and processes its data, thereby monitoring the display terminal as a whole. The processor 30 can include one or more processing units. In an embodiment, the processor 30 can integrate an application processor and a modem processor. The application processor mainly handles the operating system, user interface and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may not be integrated into the processor 30. Although not shown in FIG. 1, the display terminal can also include a circuit control module for connecting to a power source to ensure the normal operation of the other components. Those skilled in the art should understand that the structure of the display terminal shown in FIG. 1 does not constitute a limitation on the display terminal, which can include more or fewer components, a combination of some components, or differently arranged components than shown in the figure.
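For orientation only, the following is a minimal, illustrative sketch of how the three components described above (communication module 10, memory 20, processor 30) might be wired together in software. All class and method names are hypothetical; this is not the claimed implementation.

```python
# Illustrative only: hypothetical class names, not the patented implementation.
class CommunicationModule:
    """Module 10: exchanges requests/instructions with external devices over a network."""
    def send(self, device_id, message):
        print(f"-> {device_id}: {message}")

    def receive(self):
        return None  # in a real terminal this would poll the network


class Memory:
    """Memory 20: a program storage area plus a data storage area."""
    def __init__(self):
        self.program_area = {}   # installed applications, e.g. a weather app
        self.data_area = {}      # per-device status records, settings, etc.


class Processor:
    """Processor 30: runs programs from memory and drives the communication module."""
    def __init__(self, memory, comm):
        self.memory = memory
        self.comm = comm

    def run(self, program_name, *args):
        return self.memory.program_area[program_name](*args)


class DisplayTerminal:
    """Processor 30 connected to memory 20 and communication module 10 (FIG. 1)."""
    def __init__(self):
        self.comm = CommunicationModule()
        self.memory = Memory()
        self.processor = Processor(self.memory, self.comm)


terminal = DisplayTerminal()
terminal.memory.program_area["weather"] = lambda: "Sunny, 18-25 degrees C"
print(terminal.processor.run("weather"))
```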
Based on the above hardware structure, various embodiments of the method of the present disclosure are proposed. As shown in FIG. 2, FIG. 2 is a schematic flowchart of a display method according to a first embodiment of the present disclosure. In this embodiment, the method includes: Operation S10, after receiving a wake-up signal, obtaining description information fed back by a corresponding smart terminal according to the wake-up signal, and generating AIoT display information of the display terminal including the description information. In this embodiment, AIoT refers to the combination of artificial intelligence technology and Internet of Things technology. With the development of artificial intelligence technology, traditional Internet of Things devices will tend to become intelligent, forming the artificial intelligence Internet of Things (AIoT) and making the "Internet of Everything" evolve into the "Intelligent Internet of Everything". In the AIoT era, "voice + screen" has become the entrance to and center of the intelligent Internet of Things. For smart homes in the AIoT era, because the home is shared by the whole family, a "public screen" is needed to serve as the entrance to and center of the whole house's intelligent Internet of Things. In the era of AIoT smart homes, all "public screens" in the house (the screens of home TVs, range hoods, or the like) are empowered to realize multi-screen interconnection. Each "public screen" in the home is an entrance to the AIoT ecosystem, and any "public screen" can carry out voice interaction with the smart devices in the whole house. The smart large screen in the living room, at the core of the house, is the ideal "public screen" for the family. In this embodiment, the display terminal can be a TV or an associated smart terminal with a display function. Both the user and an associated smart terminal can send a wake-up signal to the display terminal. When the wake-up signal is detected, the wake-up signal sent from outside is received, and the description information fed back by the corresponding smart terminal is obtained according to the wake-up signal. The corresponding smart terminal can be the display terminal itself that receives the wake-up signal, or the associated smart terminal that sends the wake-up signal. The description information fed back by the corresponding smart terminal is obtained according to the wake-up signal, and the AIoT display information containing the description information is then generated and presented to the user. Operation S20, dividing a display area of the display terminal into at least two non-overlapping areas, displaying a current playback screen of the display terminal in one area of the at least two non-overlapping areas, and displaying the AIoT display information in another area of the at least two non-overlapping areas. In this embodiment, when displaying the AIoT information, the display area of the display terminal is divided into at least two areas that do not overlap and do not affect each other. It can be understood that the display area of the TV is divided into at least two non-overlapping areas, the TV program currently being played is displayed in one of the areas, and the AIoT display information is displayed in another of the areas. In practice, the current TV program is scaled down without affecting its clarity; the scaled-down program can still be viewed normally without loss of picture quality and is displayed in one of the divided display areas.
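The flow of operations S10 and S20 can be pictured with the following self-contained sketch. The function and field names are hypothetical, and the actual screen division would be performed by the display terminal's graphics stack; this is an illustration, not the claimed implementation.

```python
# A minimal sketch of operations S10 and S20 under the assumptions stated above.
from dataclasses import dataclass


@dataclass
class Area:
    name: str
    width_fraction: float
    content: str = ""


def operation_s10(wake_up_signal: dict, feedback_lookup) -> dict:
    """S10: obtain the description information fed back for the wake-up signal and
    generate AIoT display information containing it."""
    description = feedback_lookup(wake_up_signal)
    return {"description": description, "source": wake_up_signal.get("sender")}


def operation_s20(current_program: str, aiot_info: dict) -> list:
    """S20: divide the display into non-overlapping areas; keep the (scaled-down)
    playback screen in one area and the AIoT display information in another."""
    program_area = Area("program", 0.7, content=f"[scaled] {current_program}")
    aiot_area = Area("aiot", 0.3, content=aiot_info["description"])
    return [program_area, aiot_area]


# Example: a washing machine wakes the TV up.
signal = {"sender": "washing_machine"}
info = operation_s10(signal, lambda s: "Clothes are washed; please dry them soon.")
for area in operation_s20("Evening news", info):
    print(area)
```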
In this embodiment, after the wake-up signal is received, the description information fed back by the corresponding smart terminal according to the wake-up signal is obtained, and the AIoT display information of the display terminal including the description information is generated; the display area of the display terminal is divided into at least two non-overlapping areas, the current playback screen of the display terminal is displayed in one of the areas, and the AIoT display information is displayed in another of the areas. By dividing the display area, the TV program being played and the AIoT display information are shown together in two non-overlapping display areas. When the AIoT display information is presented to the user, the TV program being played is not affected: the AIoT display information does not cover the program when it appears and does not interfere with normal viewing. In an embodiment, a display method of a second embodiment of the present disclosure is provided based on the display method of the first embodiment. In this embodiment, as shown in FIG. 3, the operation of obtaining description information fed back by a corresponding smart terminal according to the wake-up signal in operation S10 includes: Operation S11, determining an object sending the wake-up signal; and Operation S12, when the object sending the wake-up signal is a user, obtaining a user instruction from the wake-up signal, and determining a smart terminal that needs to be awakened according to the user instruction, to obtain the description information fed back by that smart terminal after it receives the user instruction. In this embodiment, either the user or an associated smart terminal can send the wake-up signal. After the wake-up signal is received, it is necessary to determine whether the object sending it is the user or an associated smart terminal. The user can send the wake-up signal directly by voice. For example, when the user wants to check the weather forecast, the user can input a voice request to check the weather. When the object sending the wake-up signal is the user, the user instruction is obtained from the wake-up signal; the user instruction in this example is to check the weather, and the smart terminal that needs to be awakened is determined according to that instruction. When the user needs to check the weather, the smart terminal that needs to be awakened is the TV itself, and the TV feeds back description information after receiving the user instruction: the TV receives the instruction to check the weather and feeds back the weather forecast to the user. The text of the weather forecast can be presented to the user, and a voice broadcast can also be carried out. In this embodiment, the instruction input by the user is either an action instruction or a non-action instruction. For example, when the user instruction is to show the weather forecast, the TV only needs to call local content to present the weather to the user, so the user instruction is a non-action instruction.
In that embodiment, the smart terminal that needs to be awakened is the TV itself, and the description information fed back by the TV after it receives the user instruction is obtained. By contrast, when the user instruction is to open the curtain, the TV needs to send the instruction to the associated smart terminal that controls the curtain, so the user instruction is an action instruction. In that case, the smart terminal that needs to be awakened is the associated smart terminal that controls the curtain, and the description information fed back by that associated smart terminal after it receives the user instruction is obtained. It can be understood that a user instruction that can be completed by the TV itself is a non-action instruction, while a user instruction that the TV must forward to an associated smart terminal, which then completes it, is an action instruction. In an embodiment, as shown in FIG. 4, the operation S11 further includes: Operation S13, when the object sending the wake-up signal is an associated smart terminal, obtaining prompt information corresponding to the associated smart terminal, and using the prompt information as the obtained description information fed back by the corresponding smart terminal. In this embodiment, either the user or an associated smart terminal can send the wake-up signal, and after the wake-up signal is received it is necessary to determine which of the two sent it. When the object sending the wake-up signal is an associated smart terminal, the associated smart terminal has completed a user-specified event and the user needs to be reminded that the event has been completed. The prompt information of the associated smart terminal is obtained and used as the description information fed back by the smart terminal that sent the wake-up signal. For example, when the washing machine has finished washing, the washing machine sends the wake-up signal to the TV, and the TV obtains the prompt information of the washing machine. The prompt information fed back by the washing machine can be a reminder that the clothes have been washed and should be dried as soon as possible, and can also include instructions for using or maintaining the washing machine and instructions for drying and caring for clothes. This prompt information fed back by the washing machine is used as the description information. In this embodiment, after the wake-up signal is received, the object sending it is determined. When the object sending the wake-up signal is the user, the user instruction, which is either an action instruction or a non-action instruction, is obtained from the wake-up signal. Whether the smart terminal to be awakened is the TV itself or an associated smart terminal is determined according to the user instruction, the description information fed back by the TV or the associated smart terminal after receiving the user instruction is obtained, and the AIoT display information is generated based on the description information. When the object sending the wake-up signal is an associated smart terminal, the prompt information of that associated smart terminal is obtained and used as the description information it feeds back. The AIoT display information can then be generated according to the description information, so that the AIoT display information including the description information is presented to the user, making it more convenient and faster for the user to obtain the information.
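One way to picture the routing just described is the following self-contained sketch, which distinguishes the sender of the wake-up signal (operation S11) and, for user instructions, the action versus non-action cases (operations S12 and S13). All names and the instruction sets are hypothetical; this is not the claimed implementation.

```python
# Hypothetical sketch of operations S11-S13: determine who sent the wake-up
# signal and obtain the description information from the right place.
ACTION_INSTRUCTIONS = {"open curtain", "close curtain"}   # handled by associated terminals
NON_ACTION_INSTRUCTIONS = {"check weather"}               # handled by the TV itself


def local_app_feedback(instruction: str) -> str:
    return "Sunny, 18-25 degrees C, low chance of rain"   # stand-in for a weather app


def forward_to_associated_terminal(instruction: str) -> str:
    return f"'{instruction}' sent to the associated terminal; awaiting status"


def obtain_description(wake_up_signal: dict) -> str:
    sender = wake_up_signal["sender"]                      # S11: determine the sender
    if sender == "user":                                   # S12: user instruction path
        instruction = wake_up_signal["instruction"]
        if instruction in NON_ACTION_INSTRUCTIONS:
            return local_app_feedback(instruction)
        if instruction in ACTION_INSTRUCTIONS:
            return forward_to_associated_terminal(instruction)
        return "Unrecognized instruction"
    # S13: an associated smart terminal woke the TV up; use its prompt information.
    return wake_up_signal.get("prompt", f"{sender} has finished its task")


print(obtain_description({"sender": "user", "instruction": "check weather"}))
print(obtain_description({"sender": "washing_machine",
                          "prompt": "Clothes are washed; please dry them soon."}))
```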
In an embodiment, a display method of a third embodiment of the present disclosure is provided based on the display method of the second embodiment. In this embodiment, as shown in FIG. 5, the operation of obtaining the description information fed back by the smart terminal after receiving the user instruction in operation S12 includes: Operation S100, when the smart terminal that needs to be awakened is an associated smart terminal, obtaining execution status information of the associated smart terminal after it receives the user instruction and recommendation information associated with the user instruction, and using the execution status information and the recommendation information as the description information fed back by the associated smart terminal; and Operation S200, when the smart terminal that needs to be awakened is the display terminal itself, obtaining a local application corresponding to the user instruction to receive the description information fed back by the local application. In this embodiment, the smart terminal to be awakened is determined according to the user instruction. When the user instruction is an action instruction, the smart terminal that needs to be awakened is an associated smart terminal. For example, when the action instruction is to open the curtain, the smart terminal that needs to be awakened is the associated smart terminal that controls the curtain, and the display terminal obtains the execution status information of that associated smart terminal after it receives the user instruction, together with the recommendation information related to the user instruction. The execution status information can indicate that the curtain is about to be opened, is being opened, or has been opened, changing with the state of the curtain; alternatively, the execution status information can simply indicate that the curtain has been opened once it is open. The recommendation information related to the user instruction recommends information highly relevant to the instruction. For example, when the user instruction is to open the curtains, the recommendation information can be the outdoor ultraviolet intensity, the weather, or the like; depending on the current ultraviolet intensity or weather, the recommendation information can further suggest that the user bring an umbrella. The execution status information and the recommendation information associated with the user instruction are then integrated into the description information fed back by the associated smart terminal. When the user instruction is a non-action instruction, the smart terminal that needs to be awakened is the TV itself. For example, if the user instruction is to check the weather, the TV opens the local application—the weather app—according to the user instruction and receives the description information fed back by the weather app. The description information in this case can include the day's highest temperature, lowest temperature, current temperature, humidity, rainfall probability, visibility, and ultraviolet index.
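A compact sketch of how operations S100 and S200 could assemble the description information follows. The function names, thresholds and stand-in data are hypothetical assumptions, not the claimed implementation.

```python
# Hypothetical sketch of operations S100/S200: assemble description information
# either from an associated terminal's status plus recommendations, or from a local app.
def description_for_action(instruction: str, terminal_status: str) -> dict:
    """S100: execution status + recommendation info from an associated terminal."""
    recommendations = []
    if instruction == "open curtain":
        uv_index, raining = 7, False              # stand-ins for live weather data
        recommendations.append(f"Outdoor UV index: {uv_index}")
        if uv_index >= 6 or raining:
            recommendations.append("Consider bringing an umbrella")
    return {"execution_status": terminal_status, "recommendations": recommendations}


def description_for_non_action(instruction: str) -> dict:
    """S200: description information fed back by a local application (e.g. a weather app)."""
    if instruction == "check weather":
        return {"high": 25, "low": 18, "humidity": "60%", "rain_probability": "10%"}
    return {}


print(description_for_action("open curtain", "curtain has been opened"))
print(description_for_non_action("check weather"))
```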
In this embodiment, whether the smart terminal to be awakened is the TV itself or an associated smart terminal is determined according to the user instruction. When the awakened smart terminal is the TV itself, the local application is opened and the description information fed back by the local application is received. When the smart terminal that needs to be awakened is an associated smart terminal, the execution status information of that smart terminal after it receives the user instruction and the recommendation information associated with the user instruction are obtained. The execution status information and the recommendation information are integrated into the description information, which is added to the AIoT display information and presented to the user, so that the content of the AIoT display information better matches the user's needs and the information obtained by the user is more comprehensive and specific. In an embodiment, a display method of a fourth embodiment of the present disclosure is provided based on the display method of the first embodiment. In this embodiment, the at least two non-overlapping areas further include a menu display area. As shown in FIG. 6, before the operation of dividing a display area of the display terminal into at least two non-overlapping areas in operation S20, the display method further includes: Operation S30, obtaining status information of the other associated smart terminals except the smart terminal that feeds back the description information, and sorting icons of all associated smart terminals in a preset order, where the icon of the smart terminal that feeds back the description information is in a first position and, after the icon in the first position is determined, the remaining icons are sorted according to a remaining time in the status information in ascending order or according to a degree of relevance to the description information; and, while performing the operation of displaying the AIoT display information in another area of the at least two non-overlapping areas in operation S20, the following operation is also performed: Operation S40, displaying the sorted icons in the menu display area. In this embodiment, the display area is divided into at least two non-overlapping areas, and the divided display area also includes the menu display area. For example, as shown in FIG. 7, the TV screen is divided into three areas: the program area, the AIoT display area, and the menu display area. Before the display area of the display terminal is divided, the status information of the associated smart terminals other than the smart terminal that feeds back the description information can also be obtained. The status information can be working status information, from which it can be determined whether an associated smart terminal is in a working state. Multiple associated smart terminals can be working at the same time, for example, the rice cooker is stewing rice and the dryer is drying clothes. The icons of the associated smart terminals in the working state can be sorted according to the preset order.
When the icons are sorted, the icon of the smart terminal that feeds back the description information is ranked in the first position (the icon can also be the device name of the associated smart terminal). The other associated smart terminals that are in a working state can be sorted according to their remaining working time: among them, the icon with the shortest remaining working time is placed first and the icon with the longest remaining working time is placed last. For example, the washing machine has finished washing, the remaining time for the rice cooker to cook is 3 minutes, and the remaining time for the dryer is 5 minutes. As shown in FIG. 7, the washing machine is sorted into the first position according to the urgency of time, and the rice cooker is placed ahead of the dryer. Finally, the sorted icons are displayed in the menu display area. Since the washing machine is in the first position, the information in the AIoT display area is the description information corresponding to the washing machine, including a reminder that the clothes have been washed and should be dried as soon as possible, and it can also include the washing machine's usage or maintenance instructions and instructions for drying and caring for clothes. After seeing the sorted icons, the user knows which associated smart terminal will be the next to finish its work, so the information available to the user is more comprehensive and specific. In this embodiment, in addition to sorting by time urgency, the icons can also be sorted by relevance. Specifically, after the first icon is determined, information associated with it is searched for according to the description information of the associated smart terminal corresponding to that icon. For example, as shown in FIG. 8, the icon in the first position is the curtain, and the description information in the AIoT display information corresponding to the curtain includes the execution status information and the recommendation information. The execution status information includes content such as the curtain-opening action and its completion status, and the recommendation information includes the outdoor ultraviolet intensity, an umbrella suggestion, and other information. Because the AIoT display information involves the outdoor ultraviolet intensity, the curtain is related to the weather app and the air conditioner. The icons of the air conditioner and the weather app are therefore placed after the curtain icon and sorted by degree of relevance; for example, the weather app can be arranged after the curtain icon and the air conditioner after the weather app, and the sorted icons are finally displayed in the menu display area. The user can choose according to his or her needs after seeing the sorted icons, and the information presented to the user is more user-friendly.
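The two sorting strategies just described (by remaining working time, or by relevance to the first icon's description information) can be pictured with the following self-contained sketch. The device list, relevance scores and names are hypothetical illustrations.

```python
# Hypothetical sketch of operation S30: order menu icons with the feedback device
# first, then by remaining working time (ascending) or by relevance.
devices = [
    {"name": "washing machine", "remaining_min": 0, "feeds_back": True},
    {"name": "dryer",           "remaining_min": 5, "feeds_back": False},
    {"name": "rice cooker",     "remaining_min": 3, "feeds_back": False},
]


def sort_by_remaining_time(devs):
    first = [d for d in devs if d["feeds_back"]]
    rest = sorted((d for d in devs if not d["feeds_back"]),
                  key=lambda d: d["remaining_min"])
    return first + rest


def sort_by_relevance(devs, relevance):
    """relevance: device name -> score, higher meaning more related to the first icon."""
    first = [d for d in devs if d["feeds_back"]]
    rest = sorted((d for d in devs if not d["feeds_back"]),
                  key=lambda d: relevance.get(d["name"], 0), reverse=True)
    return first + rest


print([d["name"] for d in sort_by_remaining_time(devices)])
# -> ['washing machine', 'rice cooker', 'dryer'], matching the FIG. 7 example
```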
In an embodiment, as shown in FIG. 9, after the operation of obtaining status information of the other associated smart terminals except the smart terminal that feeds back the description information in operation S30, the display method further includes: Operation S31, associating the status information of each other associated smart terminal with a corresponding icon; and, after operation S40, the display method further includes: Operation S32, in response to detecting a movement instruction corresponding to the menu display area, determining the icon pointed to after the movement instruction has been carried out, obtaining status information associated with the pointed icon, and replacing the AIoT display information in another area of the at least two non-overlapping areas with new AIoT display information including the status information associated with the pointed icon. In this embodiment, the smart terminal feeding back the description information is the TV itself or an associated smart terminal. After the status information of the other associated smart terminals except the smart terminal that feeds back the description information is obtained, the status information of each other associated smart terminal is associated with its corresponding icon, so that each icon in the menu display area is associated with its status information. After the sorted icons are displayed in the menu display area, the menu display area is monitored for a movement instruction. When a movement instruction corresponding to the menu display area is detected, the pointed icon is determined after the movement instruction has been carried out. A cursor can be used to point to the icon, or the pointed icon can be made to blink or be given a different color to distinguish it from the other icons. When the pointed icon is determined after the movement instruction has been carried out, the status information associated with the pointed icon is obtained, and the AIoT display information in the area where it is displayed is replaced with new AIoT display information that includes that status information. For example, as shown in FIG. 10, the pointed icon is the weather icon, so the AIoT display information shows weather-related status information. When the icon corresponds to a local application of the TV, the status information is that application's description information; when the icon corresponds to an associated smart terminal, the status information is its working status information. In this embodiment, the AIoT display information is updated in real time according to the icon pointed to by the movement instruction, so that the information obtained by the user is more comprehensive and specific.
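A minimal sketch of operations S31 and S32 is given below: icon-to-status associations are kept in a table, and a movement instruction moves the highlight and refreshes the AIoT area with the pointed icon's status. All names and the status strings are hypothetical.

```python
# Hypothetical sketch of operations S31/S32: keep icon -> status associations and
# refresh the AIoT area when a movement instruction points at a different icon.
icon_status = {                        # S31: status information associated with each icon
    "washing machine": "Clothes are washed; please dry them soon.",
    "rice cooker": "Cooking, 3 minutes remaining",
    "weather": "Sunny, 18-25 degrees C",
}
menu_order = ["washing machine", "rice cooker", "weather"]
pointer = 0                            # index of the currently highlighted icon


def on_movement_instruction(step: int) -> str:
    """S32: move the highlight, then replace the AIoT area content with the
    status information associated with the newly pointed icon."""
    global pointer
    pointer = (pointer + step) % len(menu_order)
    pointed = menu_order[pointer]
    return icon_status[pointed]        # new AIoT display information


print(on_movement_instruction(+1))     # rice cooker status
print(on_movement_instruction(+1))     # weather status, as in the FIG. 10 example
```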
In an embodiment, a display method of a fifth embodiment of the present disclosure is provided based on the display method of the first embodiment. In this embodiment, as shown in FIG. 11, after the operation of displaying the AIoT display information in another area of the at least two non-overlapping areas in operation S20, the display method further includes: Operation S50, detecting whether feedback information sent by an input device or a user is received; and Operation S51, in response to receiving the feedback information sent by the input device or the user, restoring the divided display area of the display terminal to an initial state. In this embodiment, after the AIoT display information is presented to the user, it is detected whether feedback information sent by the user through the remote control or through voice is received. When such feedback information is received, the divided display area of the display terminal is restored to the initial state, that is, the state before the display area was divided. In practice, the TV program is enlarged back to its original proportions, the division of the display area is canceled, and the program is displayed full screen. When the AIoT display information contains prompt information, receiving the feedback information confirms that the user has seen the prompt information, so the division of the display area is canceled and the TV display interface is restored to the state before the division. Restoring the initial state only after confirming that the user has viewed the AIoT display information ensures that the user does not miss the prompt information. In an embodiment, as shown in FIG. 12, after operation S50, the display method further includes: Operation S52, in response to a determination that no feedback information is received from the input device or the user, determining whether a display time corresponding to the AIoT display information has reached a preset display time; and Operation S53, in response to a determination that the display time corresponding to the AIoT display information has reached the preset display time, restoring the divided display area of the display terminal to the initial state. In this embodiment, after the AIoT display information is presented to the user, it is detected whether feedback information sent by the user through the remote control or through voice is received. When no such feedback information is received, the display time corresponding to the AIoT display information is obtained. The display time is set by the system or by the user. Whether the display time corresponding to the AIoT display information has reached the preset display time is determined, and when it has, the divided display area of the display terminal is restored to the initial state, that is, the state before the display area was divided: the TV program is enlarged back to its original proportions, the division is canceled, and the program is displayed full screen. For example, when the user checks the weather, the AIoT display information does not contain prompt information; after the weather information has been displayed and broadcast for the preset display time, it automatically exits and the full screen is restored. When the AIoT display information does not contain prompt information, the user does not need to confirm it, and restoring the display area automatically once the preset display time is reached reduces user operations.
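The restore logic of operations S50 through S53 can be sketched as a simple polling loop, shown below. The callbacks, the poll interval and the preset display time are assumed values for illustration only; in the embodiment the display time is set by the system or by the user.

```python
# Hypothetical sketch of operations S50-S53: restore the full-screen state when the
# user acknowledges the AIoT information, or when a preset display time elapses.
import time

PRESET_DISPLAY_TIME_S = 3   # assumed value for illustration


def wait_and_restore(poll_feedback, restore_full_screen, poll_interval_s=0.5):
    """poll_feedback(): returns True once remote-control or voice feedback arrives.
    restore_full_screen(): cancels the division and shows the program full screen."""
    shown_at = time.monotonic()
    while True:
        if poll_feedback():                                          # S50/S51
            restore_full_screen()
            return "restored on user feedback"
        if time.monotonic() - shown_at >= PRESET_DISPLAY_TIME_S:     # S52/S53
            restore_full_screen()
            return "restored on timeout"
        time.sleep(poll_interval_s)


# Example with stand-in callbacks (no feedback ever arrives, so the timeout fires):
print(wait_and_restore(lambda: False, lambda: None))
```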
The present disclosure further provides a non-transitory computer readable storage medium. A computer program is stored in the non-transitory computer readable storage medium. The non-transitory computer readable storage medium can be the memory in the display terminal of FIG. 1, or can be at least one of a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk. The non-transitory computer readable storage medium includes a number of instructions that cause a terminal device with a processor (a mobile phone, a computer, a server, a terminal, a network device, or the like) to execute the method described in each embodiment of the present disclosure. In the present disclosure, the terms "first", "second", "third", "fourth" and "fifth" are used only for descriptive purposes and cannot be understood as indicating or implying relative importance. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present disclosure according to the specific circumstances. In the description of this specification, the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", or the like mean that a specific feature, structure, material, or characteristic described in conjunction with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials or characteristics can be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art can combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification, provided they do not contradict each other. Although the embodiments of the present disclosure have been shown and described above, the scope of the present disclosure is not limited thereto. It can be understood that the above-mentioned embodiments are exemplary and cannot be understood as limiting the present disclosure. Those of ordinary skill in the art can make changes, modifications and substitutions to the above-mentioned embodiments within the scope of the present disclosure, and these changes, modifications and substitutions should all be covered by the scope of the present disclosure. Therefore, the scope of the present disclosure shall be subject to the scope of the claims.
33,187
11943499
DETAILED DESCRIPTION The amount of media available to users in any given media delivery system can be substantial. Consequently, many users desire a form of media guidance through an interface that allows users to efficiently navigate media selections and easily identify media that they may desire. An application which provides such guidance is referred to herein as an interactive media guidance application or, sometimes, a media guidance application or a guidance application. Interactive media guidance applications may take various forms depending on the media for which they provide guidance. One typical type of media guidance application is an interactive television program guide. Interactive television program guides (sometimes referred to as electronic program guides) are well-known guidance applications that, among other things, allow users to navigate among and locate many types of media content including conventional television programming (provided via traditional broadcast, cable, satellite, Internet, or other means), as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming media, downloadable media, Webcasts, etc.), and other types of media or video content. Guidance applications also allow users to navigate among and locate content related to the video content including, for example, video clips, articles, advertisements, chat sessions, games, etc. With the advent of the Internet, mobile computing, and high-speed wireless networks, users are accessing media on personal computers (PCs) and other devices on which they traditionally did not, such as hand-held computers, personal digital assistants (PDAs), personal media players (e.g., MP3 players), mobile telephones, in-car television devices, or other mobile devices. On these devices users are able to navigate among and locate the same media available through a television. Consequently, media guidance is necessary on these devices, as well. The guidance provided may be for media content available only through a television, for media content available only through one or more of these devices, or for media content available both through a television and one or more of these devices. The media guidance applications may be provided as on-line applications (i.e., provided on a web-site), or as stand-alone applications or clients on hand-held computers, PDAs, mobile telephones, or other mobile devices. The various devices and platforms that may implement media guidance applications are described in more detail below. A user, as referred to herein, may be an individual user or a group of users such as the members of a family or a group of friends, for example. A user may use multiple user equipment devices, such as a television, a cell-phone and a personal media player, to access media content. The user equipment devices used by the user form the user's media network. The user's media network may be a home network including, for example, the user's television and personal computer connected through the user's WIFI home network. Alternatively, the user's media network may include user equipment devices connected through the Internet or third-party networks including television service provider networks and cell-phone networks, for example (e.g., a work computer for accessing an on-line interactive program guide, a television and recording device in the user's car, and a plurality of televisions and recording devices in the user's home). 
The user's media network may include equipment devices that are only used by the user, such as the user's cell-phone. The user's media network may also include shared equipment, such as a television used by multiple members of a household. In some embodiments, shared equipment may be associated with a primary user or group of users (e.g., the television in the parents' bedroom is associated with the parents, while the television in the game room is associated with a child). User profile information for the user equipment devices of a user's media network may be shared between the devices to coordinate the media guidance provided to the user on each device. The coordination may include sharing user configuration information to provide a common or similar media guidance interface on all of the user's devices. The coordination may also include sharing preference information in order to provide coordinated media content recommendation on the devices. The coordination may include sharing media content information, to allow a user to access recorded content or other stored content from multiple devices. The coordination may provide additional functionality, such as allowing the user to remotely communicate with or control devices on the user's media network using another device on the network. The media guidance application may provide users with the opportunity to define rules for assigning particular devices of a user's media network as destination devices. Such rules may automatically control where media content is transmitted, stored, or both in the media network. A destination device is a device that can be used to store (e.g., download, cache or record) or display (e.g., stream) media content. The rules may define conditions for identifying media content that is assigned a destination device. In some embodiments, the conditions may be based on the attributes of the media content (e.g., rating, actors, high definition, or theme). In some embodiments, the conditions may be based on the manner in which the media content is received (e.g., recorded, streamed or cached). For example, a rule may assign recordings to particular recording devices. As another example, a rule may assign downloaded content to a particular device of the media network (e.g., media content downloaded from an online store, such as iTunes or Google Video, may be downloaded to a user's personal media device). As still another example, a rule may assign media content streamed from a server (e.g., a VOD server) or provided by a webcast to a particular user device. One of the functions of the media guidance application is to provide media listings and media information to users.FIGS.1-6show illustrative display screens that may be used to provide media guidance, and in particular media listings. The display screens shown inFIGS.1-6may be implemented on any suitable device or platform. While the displays ofFIGS.1-6are illustrated as full screen displays, they may also be fully or partially overlaid over media content being displayed. A user may indicate a desire to access media information by selecting a selectable option provided in a display screen (e.g., a menu option, a listings option, an icon, a hyperlink, etc.) or pressing a dedicated button (e.g., a GUIDE button) on a remote control or other user input interface or device. 
In response to the user's indication, the media guidance application may provide a display screen with media information organized in one of several ways, such as by time and channel in a grid, by time, by channel, by media type, by category (e.g., movies, sports, news, children, or other categories of programming), or other predefined, user-defined, or other organization criteria. FIG.1shows illustrative grid program listings display100arranged by time and channel that also enables access to different types of media content in a single display. Display100may include grid102with: (1) a column of channel/media type identifiers104, where each channel/media type identifier (which is a cell in the column) identifies a different channel or media type available; and (2) a row of time identifiers106, where each time identifier (which is a cell in the row) identifies a time block of programming. Grid102also includes cells of program listings, such as program listing108, where each listing provides the title of the program provided on the listing's associated channel and time. With a user input device, a user can select program listings by moving highlight region110. Information relating to the program listing selected by highlight region110may be provided in program information region112. Region112may include, for example, the program title, the program description, the time the program is provided (if applicable), the channel the program is on (if applicable), the program's rating, and other desired information. In addition to providing access to linear programming provided according to a schedule, the media guidance application also provides access to non-linear programming which is not provided according to a schedule. Non-linear programming may include content from different media sources including on-demand media content (e.g., VOD), Internet content (e.g., streaming media, downloadable media, etc.), locally stored media content (e.g., video content stored on a digital video recorder (DVR), digital video disc (DVD), video cassette, compact disc (CD), etc.), or other time-insensitive media content. On-demand content may include both movies and original media content provided by a particular media provider (e.g., HBO On Demand providing “The Sopranos” and “Curb Your Enthusiasm”). HBO ON DEMAND, THE SOPRANOS, and CURB YOUR ENTHUSIASM are trademarks owned by the Home Box Office, Inc. Internet content may include web events, such as a chat session or Webcast, or content available on-demand as streaming media or downloadable media through an Internet web site or other Internet access (e.g. FTP). Grid102may provide listings for non-linear programming including on-demand listing114, recorded media listing116, and Internet content listing118. A display combining listings for content from different types of media sources is sometimes referred to as a “mixed-media” display. The various permutations of the types of listings that may be displayed that are different than display100may be based on user selection or guidance application definition (e.g., a display of only recorded and broadcast listings, only on-demand and broadcast listings, etc.). As illustrated, listings114,116, and118are shown as spanning the entire time block displayed in grid102to indicate that selection of these listings may provide access to a display dedicated to on-demand listings, recorded listings, or Internet listings, respectively. In other embodiments, listings for these media types may be included directly in grid102. 
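To make the grid organization concrete, the following is a small, hypothetical sketch of a listings structure keyed by channel and time block, in the spirit of grid 102 with its channel/media-type identifiers and time identifiers. It is illustrative only and not the guide's actual data model; all names and entries are assumptions.

```python
# Illustrative sketch (not the actual guide implementation) of a listings grid
# organized by channel/media-type identifiers and time-block identifiers.
from dataclasses import dataclass


@dataclass
class Listing:
    title: str
    description: str = ""
    rating: str = ""


time_blocks = ["7:00 PM", "7:30 PM", "8:00 PM"]          # row of time identifiers
channels = ["2 NBC", "4 ABC", "On Demand"]               # column of channel/media types

grid = {
    ("2 NBC", "7:00 PM"): Listing("Local News"),
    ("2 NBC", "7:30 PM"): Listing("Game Show"),
    ("On Demand", "7:00 PM"): Listing("VOD catalog"),    # non-linear listing for the block
}


def program_info(channel: str, time_block: str) -> str:
    """What a highlight region could surface in the program information region."""
    listing = grid.get((channel, time_block))
    return f"{listing.title} ({channel} at {time_block})" if listing else "No listing"


for block in time_blocks:
    print(program_info("2 NBC", block))
```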
Additional listings may be displayed in response to the user selecting one of the navigational icons120. (Pressing an arrow key on a user input device may affect the display in a similar manner as selecting navigational icons120.) Display100may also include video region122, advertisement124, options region126, and user media network identification region128. User media network identification region128may identify the user media network with which the media guidance application is currently associated. Video region122may allow the user to view and/or preview programs that are currently available, will be available, or were available to the user. The content of video region122may correspond to, or be independent from, one of the listings displayed in grid102. Grid displays including a video region are sometimes referred to as picture-in-guide (PIG) displays. PIG displays and their functionalities are described in greater detail in Satterfield et al. U.S. Pat. No. 6,564,378, issued May 13, 2003 and Yuen et al. U.S. Pat. No. 6,239,794, issued May 29, 2001, which are hereby incorporated by reference herein in their entireties. PIG displays may be included in other media guidance application display screens of the present invention. Advertisement124may provide an advertisement for media content that, depending on a viewer's access rights (e.g., for subscription programming), is currently available for viewing, will be available for viewing in the future, or may never become available for viewing, and may correspond to or be unrelated to one or more of the media listings in grid102. Advertisement124may also be for products or services related or unrelated to the media content displayed in grid102. Advertisement124may be selectable and provide further information about media content, provide information about a product or a service, enable purchasing of media content, a product, or a service, provide media content relating to the advertisement, etc. Advertisement124may be targeted based on a user's profile/preferences, monitored user activity, the type of display provided, or on other suitable targeted advertisement bases. While advertisement124is shown as rectangular or banner shaped, advertisements may be provided in any suitable size, shape, and location in a guidance application display. For example, advertisement124may be provided as a rectangular shape that is horizontally adjacent to grid102. This is sometimes referred to as a panel advertisement. In addition, advertisements may be overlaid over media content or a guidance application display or embedded within a display. Advertisements may also include text, images, rotating images, video clips, or other types of media content. Advertisements may be stored in the user equipment with the guidance application, in a database connected to the user equipment, in a remote location (including streaming media servers), or on other storage means or a combination of these locations. Providing advertisements in a media guidance application is discussed in greater detail in, for example, Knudson et al., U.S. patent application Ser. No. 10/347,673, filed Jan. 17, 2003, Ward, III et al. U.S. Pat. No. 6,756,997, issued Jun. 29, 2004, and Schein et al. U.S. Pat. No. 6,388,714, issued May 14, 2002, which are hereby incorporated by reference herein in their entireties. It will be appreciated that advertisements may be included in other media guidance application display screens of the present invention. 
Options region126may allow the user to access different types of media content, media guidance application displays, and/or media guidance application features. Options region126may be part of display100(and other display screens of the present invention), or may be invoked by a user by selecting an on-screen option or pressing a dedicated or assignable button on a user input device. The selectable options within options region126may concern features related to program listings in grid102or may include options available from a main menu display. Features related to program listings may include searching for other air times or ways of receiving a program, recording a program, enabling series recording of a program, setting program and/or channel as a favorite, purchasing a program, or other features. Options available from a main menu display may include search options, VOD options, parental control options, access to various types of listing displays, subscribe to a premium service, edit a user's profile, define a rule for assigning a destination for media content, access a browse overlay, or other options. The media guidance application may be personalized based on a user's preferences. A personalized media guidance application allows a user to customize displays and features to create a personalized “experience” with the media guidance application. This personalized experience may be created by allowing a user to input these customizations and/or by the media guidance application monitoring user activity to determine various user preferences. Users may access their personalized guidance application by logging in or otherwise identifying themselves to the guidance application. Customization of the media guidance application may be made in accordance with a user profile. The customizations may include varying presentation schemes (e.g., color scheme of displays, font size of text, etc.), aspects of media content listings displayed (e.g., only HDTV programming, user-specified broadcast channels based on favorite channel selections, re-ordering the display of channels, recommended media content, etc.), desired recording features (e.g., recording or series recordings for particular users, recording quality, etc.), parental control settings, and other desired customizations. The media guidance application may allow a user to provide user profile information or may automatically compile user profile information. The media guidance application may, for example, monitor the media the user accesses and/or other interactions the user may have with the guidance application. Additionally, the media guidance application may obtain all or part of other user profiles that are related to a particular user (e.g., from other web sites on the Internet the user accesses, such as www.tvguide.com, from other media guidance applications the user accesses, from other interactive applications the user accesses, from a handheld device of the user, etc.), and/or obtain information about the user from other sources that the media guidance application may access. As a result, a user can be provided with a unified guidance application experience across the user's different devices. This type of user experience is described in greater detail below in connection withFIG.4. Additional personalized media guidance application features are described in greater detail in Ellis et al., U.S. patent application Ser. No. 11/179,410, filed Jul. 11, 2005, Boyer et al., U.S. patent application Ser. No. 09/437,304, filed Nov. 
9, 1999, and Ellis et al., U.S. patent application Ser. No. 10/105,128, filed Feb. 21, 2002, which are hereby incorporated by reference herein in their entireties. Another display arrangement for providing media guidance is shown inFIG.2. Video mosaic display200includes selectable options202for media content information organized based on media type, genre, and/or other organization criteria. In display200, television listings option204is selected, thus providing listings206,208,210, and212as broadcast program listings. Unlike the listings fromFIG.1, the listings in display200are not limited to simple text (e.g., the program title) and icons to describe media. Rather, in display200the listings may provide graphical images including cover art, still images from the media content, video clip previews, live video from the media content, or other types of media that indicate to a user the media content being described by the listing. Each of the graphical listings may also be accompanied by text to provide further information about the media content associated with the listing. For example, listing208may include more than one portion, including media portion214and text portion216. Media portion214and/or text portion216may be selectable to view video in full-screen or to view program listings related to the video displayed in media portion214(e.g., to view listings for the channel that the video is displayed on). The listings in display200are of different sizes (i.e., listing206is larger than listings208,210, and212), but if desired, all the listings may be the same size. Listings may be of different sizes or graphically accentuated to indicate degrees of interest to the user or to emphasize certain content, as desired by the media provider or based on user preferences. Various systems and methods for graphically accentuating media listings are discussed in, for example, Yates, U.S. patent application Ser. No. 11/324,202, filed Dec. 29, 2005, which is hereby incorporated by reference herein in its entirety. Selectable options202may also include user media network options such as View My Media Network, User Preferences and/or Logoff Media Network options. View My Media Network option may be used to view a list of user equipment devices that are associated with the current user media network. The View My Media Network option may also be used to add or remove devices from a user's media network. The User Preferences option may be used to access a user preference menu allowing the user to modify user preference and user personalization options associated with the user's media network and/or the device the media guidance application is associated with. The user preference menu may also allow the user to define and modify rules for assigning a device of the user media network as a destination for media content. The Logoff Media Network option may be used to log off of the user media network the media guidance application is currently associated with, and/or to log on to a different user media network. Further access features for user media networks are discussed in connection withFIG.3. FIG.3shows an illustrative login overlay302that allows a user to log in to a user media network. Login overlay302may be presented in response to a user selection of the Logoff Media Network menu option202ofFIG.2, for example. Login overlay302may include a user selection field304and a password field306. 
A user may enter a username or other identifier in user selection field304by using the arrow buttons to the left and right of field304to toggle between the names of users that have recently used the user equipment device. A user may alternatively type a username or identifier into field304, or use other appropriate means to identify themselves to the user equipment device. An optional password field306may be used to confirm the identity of the user logging on. The login feature may be required in order to associate a user equipment device with a user's media network. A user may be required to log in to her media network the first time she uses a user equipment device. The act of logging into the user's media network may permanently associate the device with the user's media network by storing an identifier of the device in the user's media network profile information. The act of logging in may also download at least part of the profile information associated with the user's media network onto the user equipment device, allowing the user to access her personalization and preference options and her media content information from the device. The device may remain associated with the media network until the user removes the device from her media network. Alternatively, the user may be required to login to her media network every time she uses the user equipment device in order to confirm her identity to the device and/or to her media network. In the case of shared devices that may be used by multiple users, for example, logging into the user's media network may ensure that the correct user's profile information is loaded onto the equipment device. Other means of logging onto the device may be used. For example, the device may automatically detect the identity of the user using the equipment device based on the user's interactions with the device, and in response to the automatic detection, may prompt the user to confirm her identity or automatically log in to the user's media network. As another example, the device may automatically detect the identity of the user based on the time of day (e.g., a day-parting approach). FIG.4shows an illustrative overlay402of a user's media network allowing a user to view user equipment devices associated with the user's media network (e.g., in response to a user selection of the View My Media Network option202,FIG.2). The devices associated with the user's media network may be displayed according to the device types (e.g., television, PC, recording device, cell-phone), and according to whether the devices are currently available or unavailable. The media guidance application may identify the user equipment devices associated with the user's media network from, for example, a user media network data structure (e.g., data structure1000,FIG.10). Devices may be unavailable if, for example, they are turned off, they are not currently connected to the network, they are being used by other users, or they are performing background functions such as a scheduled recording. A device may also be marked as unavailable if the device, or features of the device, cannot be accessed from the device the media network is being accessed from. A device may not be accessible because of limitations of the device or limitations imposed by service providers. For example, a cell-phone device may be indicated as unavailable on the user's home television system because the set-top box may not be capable of accessing recordings stored on the phone or scheduling recordings on the phone. 
In another embodiment, the cell-phone device may be indicated as unavailable because of limitations imposed by telephone service and/or television service providers. For example, limitations imposed by a cell-phone telephone service provider may bar users from scheduling recordings on home television systems using the users' cell-phones. The service providers may limit cross-device functionality and may make such functionality available to users having both their phone service and television service with the same provider, for example. Alternatively, full cross-device functionality may be available for an additional fee. In another embodiment, cross-device functionality may be limited by agreements that users may have entered into. For example, a user may have purchased a recording for playback on a single device, or on particular types of devices, but the recording and/or the device having the recording may be marked as unavailable if the user tries to access the recording from an unauthorized device. Available devices which will become unavailable at a scheduled time may include indications of the time at which they will become unavailable. Similarly, unavailable devices that will become available at a scheduled time may include indications of their future availability. For example, a recording device that is currently recording a program may be listed as unavailable and may include an indication that the device will become available at a scheduled time (as shown). Conversely, a recording device that is currently available may contain an indication of when it will become unavailable (e.g., the start time of its next scheduled recording). Overlay402may also include a menu option404for adding a new device to the network. Menu option404may be used to associate the user equipment device currently being used by the user with the user's media network. Menu option404may also be used to associate another device with the user's media network by, for example, requesting that the user identify the device to be associated with the network by providing an IP address or other unique identifier of the device. Menu option406may allow the user to remove a device from the user's media network. Menu option406may be used to disassociate the device being used by the user from the user's media network. Menu option406may also be used to disassociate other devices from the user's media network. The selection of a device listed from overlay402may allow a user to access options relating to the device. The user may, for example, access a schedule of the device indicating times at which the device is scheduled to be available or unavailable. The user may also access options for sending a message for display on the device, for remotely controlling the device (e.g., for setting up a recording on a recording device), or for accessing other information relating to the device (e.g., for accessing a list of media content recorded on a recording device). FIG.5shows an illustrative overlay502of a menu for allowing a user to define rules for assigning one or more devices of the user's media network as a destination for media content. Overlay502includes condition field504and user equipment device field508. To define a rule for automatically assigning a destination for media content, the user may first select a condition for identifying media content. In some embodiments, the condition may be an attribute of media content. For example, the user may select a condition type using arrows502. 
In response to a user selection of a condition type, listing506of fields associated with the condition type is displayed. The user may then select one or more fields using a highlight region to define the condition for identifying media content. The user may select any suitable condition type in condition field504. Such condition types may include, for example, program rating, themes, channel, actor, actress, or any other suitable condition type. In some embodiments, the condition type may include the user (e.g., the user requesting or scheduling a recording), or a user profile (e.g., to capture media that fits within the user's profile information). In some embodiments, the condition type may include the manner in which the media content is transmitted. In the example, ofFIG.5, the displayed condition types are Theme, Rating, and User. Any suitable field may be displayed for each selected condition type. For example, when a user or user profile is selected, the fields displayed may include a listing of users or user profiles associated with one or more devices of the currently selected user media network (e.g., the members of a household). In the example ofFIG.5, the fields listed under the condition type Rating include G, PG, PG-13, R and NC-17. In some embodiments, the user may simultaneously select a plurality of condition types, fields, or both to define a condition for identifying media content (e.g., the media guidance application may display a plurality of condition types and logical operators between the condition types). In addition to setting up a condition, the user may select one or more user equipment devices of the user media network as a destination for the media content that satisfies the condition. To select a device, the user may select a device in user equipment device field508using arrows510. The user equipment devices that the user may scroll through in field508may include the user equipment devices of the user's media network. In some embodiments, the user may enter identification information (e.g., an IP address or other unique identifier) for a user equipment device that is not listed in field508. When the user has selected both the one or more conditions and the one or more user equipment devices for the rule, the user may select an option to define the rule. In the example ofFIG.5, the user selects OK option512. In response to receiving the user request to define the rule, the media guidance application may store the rule in memory. The rule may also be added to the user profile of the user setting up the rule, transmitted to the devices of the user media network, transmitted to the destination device, or stored in any other suitable location. In some embodiments, if the user is in a household, the rule may be incorporated in the user profiles of each of the users of the household. The user may view a listing of rules that have been defined by selecting a View Rules option (e.g., option514). In response to the user selection of the option, the media guidance application may display a listing of rules, which the user may select to modify or remove a rule. The listing may be displayed in a new screen, in an overlay, in a pop-up window, or in any other suitable manner. When the user has finished managing the rules, the user may return to other display screens of the media guidance application (e.g., screen100ofFIG.1or screen200ofFIG.2) by selecting an Exit option. In the example ofFIG.5, the user may select Exit option516. 
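By way of a non-limiting illustration, the rule-definition flow of FIG. 5 might be captured in software roughly as follows. This is a minimal Python sketch, not part of the described embodiments; the names Rule, define_rule, condition_type, condition_values, destination_devices, and owner are hypothetical placeholders for the condition field 504, the user equipment device field 508, and the user selections described above.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Rule:
        condition_type: str             # e.g., "Theme", "Rating", or "User" (condition field 504)
        condition_values: List[str]     # fields selected from listing 506, e.g., ["G", "PG"]
        destination_devices: List[str]  # devices selected in field 508 as destinations
        owner: str                      # user or user profile that defined the rule

    def define_rule(profile, condition_type, condition_values, devices, owner):
        # Mirrors the OK option 512: validate the selections, build the rule,
        # and store it with the user's profile information.
        if not condition_values or not devices:
            raise ValueError("a condition and at least one destination device are required")
        rule = Rule(condition_type, list(condition_values), list(devices), owner)
        profile.setdefault("rules", []).append(rule)
        return rule

    # Example: route all R-rated content selected in this media network to a bedroom DVR.
    profile = {}
    define_rule(profile, "Rating", ["R"], ["Bedroom DVR"], owner="Parent")

Storing the rule in the profile dictionary stands in for the options described above of adding the rule to the user profile, transmitting it to the devices of the user media network, or storing it in any other suitable location.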
In some embodiments, the rules may be protected by a parental control feature. This feature may prevent a child, for example, from having R-rated media recorded to a recording device in the child's room or downloaded to the child's media player without the parents' knowledge. To access a parental control menu, the user may select Parent Control Option 520. The parental control menu (not shown) may include a field for entering a parental control password. In response to receiving the correct parental control password, the parental control menu may provide the user with access to some or all options of overlay 502 (e.g., OK option 512), or may define or modify the rules as requested by the user. The user may be required to log in to access overlay 502. For example, the user may be required to log in to his media network (e.g., using overlay 302, FIG. 3). In some embodiments, the parental control menu may serve as a login. This may serve to associate the rules with the user's profile information. The media guidance application may in addition or instead associate the rule with the devices of the user's media network. In such an embodiment, when media content is selected to be stored in a particular user's media network, the rules of all users who are associated with user equipment devices of the user's media network may be applied (e.g., the parents' and older siblings' rules in a household are applied to a younger sibling's recording request). If the user does not log in, the media guidance application may use a default media network and apply the rules that are associated with the user equipment devices of the network, or require the user to select one or more particular user equipment devices. This approach may be used, for example, for a member of a household that does not have a user ID associated with a particular media network (e.g., a young child in a family does not log in, and uses a default family media network), or guests (e.g., a babysitter). A user may remotely assign any device associated with the user's media network as a destination for media content. In some embodiments, the media guidance application may automatically select or recommend a device from the plurality of devices of the user's media network as a destination for media content that satisfies the conditions of a rule. In some embodiments, the user may assign a device as a destination for media content from a web interface or other user interface allowing the user to communicate with and access information about the media network. The web interface or other user interface may run on a device of the media network or on a device that is not part of the media network. Any suitable interface may be used to assign one or more user equipment devices as a destination for media content. For simplicity, the following interface will be described in the context of assigning user equipment devices for recording media content. It will be understood, however, that this or another interface with some or all of the same features may be used to assign one or more user equipment devices as a destination for media content in other contexts. Such contexts may include, for example, streaming, downloading, webcasts, caching, or transferring a recorded program to another user equipment device.

FIG. 6 shows an illustrative display 600 of a record-setup overlay 602. Overlay 602 may be presented in response to, for example, the user selecting a Record option while highlight region 110 of FIG. 1 is located on program listing 108.
As another example, overlay 602 may be displayed in response to a user pressing a Record key or key sequence while the program is displayed. Overlay 602 may allow the user to select to record media content on any recording device associated with the media network using selection arrows 610 and recording device selection option 608. Alternatively, the user may select to record the program on the device the user is currently using by selecting Current Device option 604, or allow the media guidance application to select a recording device using the rules by selecting Rules option 606. In response to a user selection of Rules option 606 to set up a recording, the media guidance application retrieves the rules and applies the condition of each rule to the media content scheduled for recording. After identifying the one or more rules for which the media content satisfies the condition, the media guidance application may schedule the media content for recording with the recording devices specified in the identified rules. In some embodiments, if two or more rules apply to the media content, a single recording may be performed using a single recording device based on an ordering of the rules (e.g., ratings-related rules have priority over user-preference and theme-related rules, or an ordering set by the user). Alternatively, the media guidance application may direct some or all of the recording devices identified in the rules to perform the recording. In some embodiments, a user may order the rules in response to selecting a View Rules option (e.g., option 514, FIG. 5) to set the relative priority of each rule. The ability to order rules may also be limited to users with the proper parental control password. In some embodiments, the user may also select one or more formats in which to record the media content. As shown in overlay 602, a user may select to record media content in HDTV format and in a Cell Phone-Highlights format, corresponding to an edited version suitable for viewing on a cell-phone and containing only highlights of the program. A user may select additional formats in which to record the media content using Other option 616. A user may select to record the media content in the best available format(s) by selecting Best option 618. Best option 618 may allow the user to record the media content in the highest quality format the content is available in, or in the highest quality format available that can be viewed on the recording device or on any of the user equipment devices associated with the user's media network. A user may select to record the media content in all available formats by selecting All option 618. All option 618 may alternatively allow the user to record the media content in the available formats that are suitable for viewing or recording on user equipment devices associated with the user's media network (e.g., do not record a program in HD if no user equipment device is HD capable). In some embodiments, the rules may automatically determine the format used for the selected content. In some embodiments, the media guidance application may automatically select a recording device and a format for the selected media content. For example, the rules may provide a default selection for identifying the one or more recording devices that are selected to perform the recording. As another example, the rules may automatically select a format for recording the selected content.
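Continuing the illustration, the behavior described for Rules option 606 might be sketched as follows, again as a hedged Python example rather than a definitive implementation. The rules are records like the Rule sketch given in connection with FIG. 5, a program is represented by a dictionary of its attributes, and the particular priority ordering shown (ratings-related rules before user- and theme-related rules) is only one of the orderings mentioned above.

    def select_recording_devices(rules, program, single_device=True):
        # Keep only the rules whose condition is satisfied by the program's attributes.
        matching = [r for r in rules
                    if program.get(r.condition_type) in r.condition_values]
        if not matching:
            return []  # the caller may fall back to a default recording device
        if single_device:
            # Illustrative ordering: ratings-related rules take priority over
            # user- and theme-related rules; a user-set ordering could be used instead.
            priority = {"Rating": 0, "User": 1, "Theme": 2}
            matching.sort(key=lambda r: priority.get(r.condition_type, 99))
            return matching[0].destination_devices[:1]
        # Otherwise, direct some or all devices identified in the matching rules to record.
        devices = []
        for r in matching:
            for d in r.destination_devices:
                if d not in devices:
                    devices.append(d)
        return devices

A format selection step (for example, preferring the highest quality format that at least one device of the user's media network can render) could be layered on top of this selection in the same way.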
Users may access media content and the media guidance application (and its display screens described above and below) from one or more of their user equipment devices.FIG.7shows a generalized embodiment of illustrative user equipment device700. More specific implementations of user equipment devices are discussed below in connection withFIG.8. User equipment device700may receive media content and data via input/output (hereinafter “I/O”) path702. I/O path702may provide media content (e.g., broadcast programming, on-demand programming, Internet content, and other video or audio) and data to control circuitry704, which includes processing circuitry706and storage708. Control circuitry704may be used to send and receive commands, requests, and other suitable data using I/O path702. I/O path702may connect control circuitry704(and specifically processing circuitry706) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path inFIG.7to avoid overcomplicating the drawing. Control circuitry704may be based on any suitable processing circuitry706such as processing circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, etc. In some embodiments, control circuitry704executes instructions for a media guidance application stored in memory (i.e., storage708). In client-server based embodiments, control circuitry704may include communications circuitry suitable for communicating with a guidance application server or other networks or servers. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, or a wireless modem for communications with other equipment. Such communications may involve the Internet or any other suitable communications networks or paths (which is described in more detail in connection withFIG.8). In addition, communications circuitry may include circuitry that enables peer-to-peer communication of user equipment devices, or communication of user equipment devices in locations remote from each other (described in more detail below). Memory (e.g., random-access memory, read-only memory, or any other suitable memory), hard drives, optical drives, or any other suitable fixed or removable storage devices (e.g., DVD recorder, CD recorder, video cassette recorder, or other suitable recording device) may be provided as storage708that is part of control circuitry704. Storage708may include one or more of the above types of storage devices. For example, user equipment device700may include a hard drive for a DVR (sometimes called a personal video recorder, or PVR) and a DVD recorder as a secondary storage device. Storage708may be used to store various types of media described herein and guidance application data, including program information, guidance application settings, user preferences or profile information, or other data used in operating the guidance application. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Control circuitry704may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. 
Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry704may also include scaler circuitry for upconverting and downconverting media into the preferred output format of the user equipment700. Circuitry704may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the user equipment to receive and to display, to play, or to record media content. The tuning and encoding circuitry may also be used to receive guidance data. The circuitry described herein, including for example, the tuning, video generating, encoding, decoding, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage708is provided as a separate device from user equipment700, the tuning and encoding circuitry (including multiple tuners) may be associated with storage708. A user may direct control circuitry704using user input interface710. User input interface710may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touch pad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display712may be provided as a stand-alone device or integrated with other elements of user equipment device700. Display712may be one or more of a monitor, a television, a liquid crystal display (LCD) for a mobile device, or any other suitable equipment for displaying visual images. In some embodiments, display712may be HDTV-capable. Speakers714may be provided as integrated with other elements of user equipment device700or may be stand-alone units. The audio component of videos and other media content displayed on display712may be played through speakers714. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers714. User equipment device700ofFIG.7can be implemented in system800ofFIG.8as user television equipment802, user computer equipment804, wireless user communications device806, or any other type of user equipment suitable for accessing media, such as a non-portable gaming machine. For simplicity, these devices may be referred to herein collectively as user equipment or user equipment devices. User equipment devices, on which a media guidance application is implemented, may function as a standalone device or may be part of a network of devices. Various network configurations of devices may be implemented and are discussed in more detail below. User television equipment802may include a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a television set, a digital storage device, a DVD recorder, a video-cassette recorder (VCR), a local media server, or other user television equipment. One or more of these devices may be integrated to be a single device, if desired. User computer equipment804may include a PC, a laptop, a tablet, a WebTV box, an Apple TV, a personal computer television (PC/TV), a PC media server, a PC media center, or other user computer equipment. WEBTV is a trademark owned by Microsoft Corp. 
Wireless user communications device806may include PDAs, a mobile telephone, a portable video player, a portable music player, a portable gaming machine, or other wireless devices. It should be noted that with the advent of television tuner cards for PC's, WebTV, and the integration of video into other user equipment devices, the lines have become blurred when trying to classify a device as one of the above devices. In fact, each of user television equipment802, user computer equipment804, and wireless user communications device806may utilize at least some of the system features described above in connection withFIG.7and, as a result, include flexibility with respect to the type of media content available on the device. For example, user television equipment802may be Internet-enabled allowing for access to Internet content, while user computer equipment804may include a tuner allowing for access to television programming. The media guidance application may also have the same layout on the various different types of user equipment or may be tailored to the display capabilities of the user equipment. For example, on user computer equipment, the guidance application may be provided as a web site accessed by a web browser. In another example, the guidance application may be scaled down for wireless user communications devices. In system800, there is typically more than one of each type of user equipment device but only one of each is shown inFIG.8to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of user equipment device (e.g., a user may have a television set and a computer) and also more than one of each type of user equipment device (e.g., a user may have a PDA and a mobile telephone and/or multiple television sets). The user may also set various settings such as user profile settings to maintain consistent media guidance application settings across in-home devices and remote devices. Settings include those described herein, as well as channel and program favorites, programming preferences that the guidance application utilizes to make programming recommendations, display preferences, and other desirable guidance settings. For example, if a user sets a channel as a favorite on, for example, the web site www.tvguide.com on their personal computer at their office, the same channel would appear as a favorite on the user's in-home devices (e.g., user television equipment and user computer equipment) as well as the user's mobile devices, if desired. Therefore, changes made on one user equipment device can change the guidance experience on another user equipment device, regardless of whether they are the same or a different type of user equipment device. In addition, the changes made may be based on settings input by a user, as well as user activity monitored by the guidance application. User profile information including user settings, user personalization, user preference, rules, and user media content information may be stored on user equipment devices and/or on user profile server824. User profile server824may be in communication with user equipment devices802,804and806through communications path826and communications network814. User profile server824may include storage devices for storing user profile information associated with user media networks. User profile server824may also include storage devices for storing media content information associated with user media networks including recordings of media content and/or lists of selected media content. 
User profile server824may include processors and communications circuits for managing user profile information, remotely controlling and communicating with user equipment devices, and exchanging user profile information with user equipment devices. The user equipment devices may be coupled to communications network814. Namely, user television equipment802, user computer equipment804, and wireless user communications device806are coupled to communications network814via communications paths808,810, and812, respectively. Communications network814may be one or more networks including the Internet, a mobile phone network, mobile device (e.g., Blackberry) network, cable network, public switched telephone network, or other types of communications network or combinations of communications networks. BLACKBERRY is a trademark owned by Research In Motion Limited Corp. Paths808,810, and812may separately or together include one or more communications paths, such as, a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. Path812is drawn with dotted lines to indicate that in the exemplary embodiment shown inFIG.8it is a wireless path and paths808and810are drawn as solid lines to indicate they are wired paths (although these paths may be wireless paths, if desired). Communications with the user equipment devices may be provided by one or more of these communications paths, but are shown as a single path inFIG.8to avoid overcomplicating the drawing. Although communications paths are not drawn between user equipment devices, these devices may communicate directly with each other via communication paths, such as those described above in connection with paths808,810, and812, as well other short-range point-to-point communication paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802-11x, etc.), or other short-range communication via wired or wireless paths. BLUETOOTH is a trademark owned by Bluetooth SIG, INC. The user equipment devices may also communicate with each other directly through an indirect path via communications network814. System800includes media content source816, media guidance data source818, and user profile server824coupled to communications network814via communication paths820,822and826, respectively. Paths820,822and826may include any of the communication paths described above in connection with paths808,810, and812. Communications with the media content source816, the media guidance data source818and the user profile server824may be exchanged over one or more communications paths, but are shown as a single path inFIG.8to avoid overcomplicating the drawing. In addition, there may be more than one of each of media content source816, media guidance data source818and user profile server824, but only one of each is shown inFIG.8to avoid overcomplicating the drawing. (The different types of each of these sources are discussed below.) If desired, media content source816and media guidance data source818may be integrated as one source device. 
Although communications between sources816and818and server824with user equipment devices802,804, and806are shown as through communications network814, in some embodiments, sources816and818and server824may communicate directly with user equipment devices802,804, and806via communication paths (not shown) such as those described above in connection with paths808,810, and812. Media content source816may include one or more types of media distribution equipment including a television distribution facility, cable system headend, satellite distribution facility, programming sources (e.g., television broadcasters, such as NBC, ABC, HBO, etc.), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other media content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the ABC, INC., and HBO is a trademark owned by the Home Box Office, Inc. Media content source816may be the originator of media content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of media content (e.g., an on-demand media content provider, an Internet provider of video content of broadcast programs for downloading, etc.). Media content source816may include cable sources, satellite providers, on-demand providers, Internet providers, or other providers of media content. Media content source816may also include a remote media server used to store different types of media content (including video content selected by a user), in a location remote from any of the user equipment devices. Systems and methods for remote storage of media content, and providing remotely stored media content to user equipment are discussed in greater detail in connection with Ellis et al., U.S. patent application Ser. No. 09/332,244, filed Jun. 11, 1999, which is hereby incorporated by reference herein in its entirety. Media guidance data source818may provide media guidance data, such as media listings, media-related information (e.g., broadcast times, broadcast channels, media titles, media descriptions, ratings information (e.g., parental control ratings, critic's ratings, etc.), genre or category information, actor information, logo data for broadcasters' or providers' logos, etc.), media format (e.g., standard definition, high definition, etc.), advertisement information (e.g., text, images, media clips, etc.), on-demand information, and any other type of guidance data that is helpful for a user to navigate among and locate desired media selections. Media guidance application data may be provided to the user equipment devices using any suitable approach. In some embodiments, the guidance application may be a stand-alone interactive television program guide that receives program guide data via a data feed (e.g., a continuous feed, trickle feed, or data in the vertical blanking interval of a channel). Program schedule data and other guidance data may be provided to the user equipment on a television channel sideband, in the vertical blanking interval of a television channel, using an in-band digital signal, using an out-of-band digital signal, or by any other suitable data transmission technique. Program schedule data and other guidance data may be provided to user equipment on multiple analog or digital television channels. 
Program schedule data and other guidance data may be provided to the user equipment with any suitable frequency (e.g., continuously, daily, a user-specified period of time, a system-specified period of time, in response to a request from user equipment, etc.). In some approaches, guidance data from media guidance data source818may be provided to users' equipment using a client-server approach. For example, a guidance application client residing on the user's equipment may initiate sessions with source818to obtain guidance data when needed. Media guidance data source818may provide user equipment devices802,804, and806the media guidance application itself or software updates for the media guidance application. Media guidance applications may be, for example, stand-alone applications implemented on user equipment devices. In other embodiments, media guidance applications may be client-server applications where only the client resides on the user equipment device. For example, media guidance applications may be implemented partially as a client application on control circuitry704of user equipment device700and partially on a remote server as a server application (e.g., media guidance data source818). The guidance application displays may be generated by the media guidance data source818and transmitted to the user equipment devices. The media guidance data source818may also transmit data for storage on the user equipment, which then generates the guidance application displays based on instructions processed by control circuitry. Media guidance system800is intended to illustrate a number of approaches, or network configurations, by which user equipment devices and sources of media content and guidance data may communicate with each other for the purpose of accessing media and providing media guidance. The present invention may be applied in any one or a subset of these approaches, or in a system employing other approaches for delivering media and providing media guidance. The following three approaches provide specific illustrations of the generalized example ofFIG.8. In one approach, user equipment devices may communicate with each other within a home network. User equipment devices can communicate with each other directly via short-range point-to-point communication schemes describe above, via indirect paths through a hub or other similar device provided on a home network, or via communications network814. Each of the multiple individuals in a single home may operate different user equipment devices on the home network. As a result, it may be desirable for various media guidance information or settings to be communicated between the different user equipment devices. For example, it may be desirable for users to maintain consistent media guidance application settings on different user equipment devices within a home network, as described in greater detail in Ellis et al., U.S. patent application Ser. No. 11/179,410, filed Jul. 11, 2005. Different types of user equipment devices in a home network may also communicate with each other to transmit media content. For example, a user may transmit media content from user computer equipment to a portable video player or portable music player. In a second approach, users may have multiple types of user equipment by which they access media content and obtain media guidance. For example, some users may have home networks that are accessed by in-home and mobile devices. 
Users may control in-home devices via a media guidance application implemented on a remote device. For example, users may access an online media guidance application on a website via a personal computer at their office, or a mobile device such as a PDA or web-enabled mobile telephone. The user may set various settings (e.g., recordings, reminders, or other settings) on the online guidance application to control the user's in-home equipment. The online guide may control the user's equipment directly, or by communicating with a media guidance application on the user's in-home equipment. Various systems and methods for user equipment devices communicating, where the user equipment devices are in locations remote from each other, are discussed in, for example, Ellis et al., U.S. patent application Ser. No. 10/927,814, filed Aug. 26, 2004, which is hereby incorporated by reference herein in its entirety. In a third approach, users of user equipment devices inside and outside a home can use their media guidance application to communicate directly with media content source816to access media content. Specifically, within a home, users of user television equipment804and user computer equipment806may access the media guidance application to navigate among and locate desirable media content. Users may also access the media guidance application outside of the home using wireless user communications devices806to navigate among and locate desirable media content. It will be appreciated that while the discussion of media content has focused on video content, the principles of media guidance can be applied to other types of media content, such as music, images, etc. FIGS.9-13show illustrative embodiments of data structures that may be used in accordance with the principles of the present invention to store user profile information, including rules for performing recordings, in memories of user equipment devices and user profile servers. The data structures ofFIGS.9-13also show illustrative types of data that may be stored and used by systems providing management of user profiles. The data structures may be used to create and maintain a database of user equipment devices associated with users' media networks, and of user profile information for each media network. The data stored in the data structures may be stored in memories located in user equipment802,804or806, in one or more user profile servers824, or on any other servers or devices accessible through communications network814. The data may alternatively be distributed across different servers and devices, with, for example, device-specific profile information located on the devices the information corresponds to, and more general profile information stored on the user profile server. In one embodiment, the user profile server824may be operative to synchronize user profile information stored on the server with that stored on one or more user equipment devices. The profile server may thus be operative to communicate with the equipment devices, and to process the received profile information in order to maintain up-to-date profile information. The information stored in the data structures ofFIGS.9-13may include information based on settings input by a user, information based on user activity monitored by a guidance application or user equipment device or both. It will be appreciated that all fields in data structures ofFIGS.9-13may be organized using any organization scheme. 
For simplicity, the organization scheme used to describe fields in the data structures ofFIGS.9-13will be a list. FIG.9shows an illustrative embodiment of a profile data structure900. Data structure900may include field902that includes a list of user media networks (UMNs) for which user profile information is stored in the data structure. Field902may be organized in the form of a linked list of UMN identifiers, an array of UMN identifiers, a table of UMN identifiers, or any other organization scheme of UMN identifiers. Each UMN listed in field902may be identified by a name and/or other unique identifier that may be used to uniquely identify and locate the UMN. The identifier may include, for example, a username or an equipment address that may be used to locate or identify the UMN on communications network814. Additional information that may be included in UMN field902is described in more detail in connection withFIGS.10-13. Data structure900may also include field904that includes a list of rules for assigning a user equipment device of one or more UMNs as a destination for media content. This approach may be of particular use, for example, when the rules apply to every UMN (e.g., the UMNs are all within a household, and a parent has defined a rule for controlling recording operations within the household). FIG.10shows an illustrative embodiment of a user media network data structure1000. Information for multiple UMNs may be stored on user equipment devices and profile servers, and thus multiple instances of UMN information data structure1000, corresponding to different UMNs, may be required. Data structure1000may include field1002that may include a unique name, address and/or identifier corresponding to a particular UMN. Data structure1000may further include field1004which stores a list of all user equipment devices associated with the UMN. Data structure1000may also include fields1006,1008and1010which respectively store general user profile information, device-type specific profile information, and device-specific profile information. Data structure1000may also include field1012, which stores rules for assigning a user equipment device as a destination for media content selected by the user that apply to the users or devices of the UMN. Each of fields1006,1008,1010and1012may point to separate profile information data structures such as data structure900ofFIG.9. Note that while data structure1000has separate general, device-type specific, device-specific profile information data and rules fields, data structure1000may have different combinations of profile information data fields than that shown inFIG.10. For example, in embodiments in which UMN data structure1000is stored on a user equipment device, data structure1000may include only a device-specific profile information field1010containing profile information for the specific device and a rules field1012that includes rules that identify the user equipment device (e.g., no field1012or an empty field1012for user equipment devices that are not assigned as destinations for media content). In such an embodiment, the device-specific profile information data structure stored on the device may include all of the profile information that is stored in the general and device-type specific data fields1006and1008and that is useable by the user equipment device. 
In such an embodiment, the UMN data structure1000may include data field1004including a list of user equipment devices associated with the user media network if the equipment device is capable of communicating with other equipment devices. However, data field1004may be omitted in data structures stored on user equipment devices that cannot communicate with other equipment devices. Each user equipment device (UED) listed in data field1004may have an associated UED data structure storing information about the user equipment device. The UED data structure may include information such as the device name, device address or other identifier of the location of the device, device type and device capabilities. The UED data structure may additionally include information regarding the device's availability. The device availability information may include information about the device's current availability, such as an indication of whether the device is powered on, available to receive commands, or busy performing other functions. The device availability information may also include device scheduling information. The device scheduling information may include information and commands used for scheduling functions on the device, as well as a schedule of times when the device is scheduled to be available or not available. The UED data structure may also include additional information for specific types of devices. For example, the data structure may store information about the total and remaining storage space on user recording devices, the types of data the recording may record (e.g., digital or analog video or audio data), and the quality of the recording. FIG.11shows an illustrative embodiment of a profile information data structure1100. Profile information data structure1100may include field1102that may include a unique name, address and/or identifier corresponding to the profile information data structure. Data structure1100may include personalization information field1104including user personalization data used in generating display screens, program recommendations, and other personalized menus and functions for users. Content information field1106may include information on media content stored by or otherwise available to the user. Data structure1100may also include a list of user equipment devices associated with the user media network in field1108. The data in field1108may be used to enable communication between equipment devices, for example. The data in field1108may be identical, or substantially identical, to the information stored in field1004ofFIG.10. Thus, each user equipment device listed in field1108may have an associated UED data structure identical to, or substantially identical to, the UED data structures discussed in relation to field1004. Profile information data structure1100may include field1110for storing rules defined by or associated with the user identified in field1102. The rules may include conditions used to identify media content and user equipment device identification information for assigning the identified user equipment device as a destination for the media content that satisfies the conditions. The user equipment devices may be identified from the data stored in field1108. FIG.12shows an illustrative embodiment of a media content information data structure1200. Media content information stored in data structure1200may include information on stored media content and stored passes for media content. 
Media content information data structure 1200 may include field 1202 that may include a unique name, address and/or identifier corresponding to the media content information data structure. Field 1204 may include a list of media content that has been stored by the user (e.g., recorded, downloaded, streamed, or cached). Each item of media content listed in field 1204 may have an associated data structure including the recorded media content and information about the storage of the media content. Information about the storage of the media content may include the title, media type, content type, and the quality of the stored media content. The information may also include the storage location, identifying the user equipment device and location in memory at which the media content is located. The information may also include an indication of the types of devices the media content may be displayed on. Field 1206 may store information on passes that the user may have access to. The passes may allow users to access media content stored at other locations, such as media content stored on other users' media networks or on content provider servers 816 such as video-on-demand sources. Data structure 1200 may include additional fields storing lists of media content organized by device type or by device. Media content by device type field 1208 may store lists of media content that may be accessed from different types of devices. Field 1208 may, for example, include a first list of all media content a UMN has access to and that may be viewed on a television. Field 1208 may also include a second list of all media content that may be viewed on a cell-phone. Media content by device field 1210 may include a list of all media content stored on each device associated with the UMN. Field 1210 may, for example, store a first list of all media content stored on a digital video recorder and a second list of all media content stored on a personal media player. Each instance of media content listed in fields 1204, 1208 and 1210 may identify, in addition to the user equipment devices on which the content is stored, the rule(s), if any, that were used to associate the media content with the particular user equipment device(s).

FIG. 13 shows an illustrative embodiment of a rule data structure 1300. Rule data structure 1300 may include field 1302 that may include a unique name, address and/or identifier corresponding to the rule data structure. Data structure 1300 may include media condition field 1304 that includes the conditions for identifying the media content for which a rule will apply. Media conditions stored in field 1304 may include, for example, program ratings (e.g., G or PG), actors, themes, program rankings (e.g., 4 stars or 3 stars), user preferences, or any other suitable condition. In some embodiments, the conditions may be selected such that no user equipment device is inherently more suited to store media content (e.g., for embodiments in which HD or regular transmission is not a condition stored in field 1304). Data structure 1300 may include user equipment device field 1306, which includes an identifier for the one or more user equipment devices that are a destination for media content that satisfies a condition of field 1304. The data in field 1306 may include data that is stored in one or both of field 1004 of FIG. 10 and field 1108 of FIG. 11. Data structure 1300 may include user field 1308, which identifies the user or user profile associated with a particular rule. The data in field 1308 may include data that is stored in field 1102 of FIG. 11.
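Purely as an illustrative aid, the media content information structure of FIG. 12 and the rule structure of FIG. 13 might be laid out as simple records along the following lines. This hedged Python sketch paraphrases the field descriptions above (fields 1202 through 1210 and 1302 through 1308); the class and attribute names are not taken from the figures, and an action field such as the one described next could be added to RuleRecord in the same way.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class RuleRecord:                      # rough analogue of rule data structure 1300
        rule_id: str                       # field 1302: unique name, address, or identifier
        media_conditions: Dict[str, list]  # field 1304: e.g., {"rating": ["G", "PG"]}
        destination_devices: List[str]     # field 1306: identifiers of destination devices
        user: Optional[str] = None         # field 1308: user or user profile for the rule

    @dataclass
    class StoredContentItem:               # one entry of the stored-content list (field 1204)
        title: str
        media_type: str
        quality: str
        storage_device: str                # device on which the content is stored
        applied_rule_ids: List[str] = field(default_factory=list)  # rule(s) used, if any

    @dataclass
    class MediaContentInfo:                # rough analogue of data structure 1200
        info_id: str                                                           # field 1202
        stored_content: List[StoredContentItem] = field(default_factory=list)  # field 1204
        passes: List[str] = field(default_factory=list)                        # field 1206
        by_device_type: Dict[str, List[str]] = field(default_factory=dict)     # field 1208
        by_device: Dict[str, List[str]] = field(default_factory=dict)          # field 1210

        def record_item(self, item: StoredContentItem, device_type: str) -> None:
            # Keep the per-device and per-device-type lists consistent with field 1204.
            self.stored_content.append(item)
            self.by_device.setdefault(item.storage_device, []).append(item.title)
            self.by_device_type.setdefault(device_type, []).append(item.title)

A helper that matches a content item's attributes against media_conditions, similar to the selection sketch given for FIG. 6, would complete the link between the two structures.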
Data structure 1300 may include action field 1310, which includes information related to the action that is performed by the one or more user equipment devices identified in field 1306. For example, action field 1310 may specify that the rule directs the one or more identified user equipment devices to record, stream, or download media content. As another example, action field 1310 may specify that the rule directs the one or more identified user equipment devices to transfer a recording from a default recording device to the identified user equipment device. Rule data structure 1300 may include data related to a plurality of rules. For example, each rule may include a unique identifier that is applied to the data associated with the particular rule stored in each field of data structure 1300. In some embodiments, the data associated with each rule may be stored in a distinct data structure 1300. The following flow charts describe processes for creating and applying rules in some embodiments of this invention.

FIG. 14 shows an illustrative process for allowing a user to assign a device among a plurality of devices in a user's media network as a destination for media content in accordance with an embodiment of the invention. Process 1400 begins at step 1402. At step 1404, the media guidance application receives user inputs defining a rule for assigning a user equipment device as the destination of media content. For example, processing circuitry 706 (FIG. 7) may receive inputs entered using user input interface 710 (FIG. 7). The user inputs may include identification information for one or more user equipment devices as destinations for media content. For example, the user inputs may include a selection from a listing of user equipment devices, or identification information entered by the user (e.g., an IP address or unique identifier). The user inputs may also include conditions identifying media content for which the rule applies. The conditions may include any suitable attribute of media content, including, for example, theme, actor, genre, rating, definition, or any other suitable attribute. In some embodiments, the media guidance application may select the attribute from user profile information (e.g., the rule assigns media content that is of interest to the user to a particular user equipment device). The rule may be stored in a data structure similar to data structure 1300 (FIG. 13). At step 1406, the media guidance application receives a user input identifying media content for which the rule applies. In some embodiments, processing circuitry 706 may receive inputs entered using user input interface 710. For example, the user may select media content for recording, or select media content to download or stream. As another example, the user may select media content to transfer from a first user equipment device to another user equipment device. The media guidance application may compare the attributes of the identified media content with the attributes selected for the condition of the rule at step 1404. If the attributes of the identified media content match the condition of the rule, process 1400 continues to step 1408. If the attributes of the identified media content do not match the conditions of the rule, the rule is not applied to the media content and process 1400 terminates. At step 1408, the media guidance application assigns a user equipment device as the destination for the media content identified at step 1406 based on the rule defined at step 1404.
In some embodiments, processing circuitry 706 may assign a user equipment device 802, 804 or 806 (FIG. 8) as a destination device. For example, the media guidance application may identify the user equipment devices of the rule and direct the identified user equipment devices to serve as a destination for the media content. The user equipment devices may serve as a destination for the media content by recording the content, streaming the content, downloading the content, caching the content, transferring the content, or any other method by which content is assigned to a user equipment device. Process 1400 then ends at step 1410.

FIG. 15 shows a flow chart of an illustrative process for setting up a rule in accordance with an embodiment of the invention. Process 1500 begins at step 1502. At step 1504, the media guidance application receives a user selection of at least one criterion for the rule. In some embodiments, processing circuitry 706 (FIG. 7) may receive at least one criterion from a user input on user input interface 710 (FIG. 7). The criterion may be an attribute of media content, user profile data, time and channel data, a URL, or any other suitable criterion for identifying media content. At step 1506, the media guidance application receives a user selection of a user equipment device as a destination for media content. In some embodiments, processing circuitry 706 may receive a user selection of a user equipment device from a user input on user input interface 710. The user may select any suitable user equipment device, including, for example, a recording device, a computer, a portable electronic device, a cellular telephone, or any other suitable electronic device. At step 1508, the media guidance application receives authorization information. In some embodiments, processing circuitry 706 may receive authorization information from user inputs on user input interface 710. For example, the user may enter parental control data to authorize the user to define a rule. As another example, the user may log in to the user's media network. At step 1510, the media guidance application defines a rule using the condition identified at step 1504 and the user equipment device identified at step 1506 to assign the identified user equipment device as a destination for media content that satisfies the identified condition. In some embodiments, processing circuitry 706 may create a data structure 1300 (FIG. 13) for the rule. Process 1500 then ends at step 1512.

FIG. 16 shows a flowchart of an illustrative process for assigning a user equipment device as a destination for media content selected by an identified user in accordance with an embodiment of the invention. Process 1600 begins at step 1602. At step 1604, the media guidance application identifies the current user. For example, the media guidance application may identify the user that has logged in to the system. As another example, the media guidance application may identify the user based on the user's interactions with the guidance application. As still another example, the guidance application may identify the user based on the time of day (e.g., using a day-parting approach). At step 1606, the media guidance application identifies the rules that apply to the identified user. For example, the media guidance application may identify the rules defined by the user. As another example, the media guidance application may identify the rules that involve user equipment devices that are part of the user's media network.
In some embodiments, processing circuitry706(FIG.7) may identify the rules associated with field1110(FIG.11) of the identified user's profile information data structure1100(FIG.11). At step1608, the media guidance application receives a user selection of media content. In some embodiments, processing circuitry706may receive user inputs from user input interface710(FIG.7). For example, the user may select media content from content listings, while viewing the content, or from any other suitable context. The media content may be selected for recording, downloading, streaming, caching, or any other suitable process by which user equipment devices of the user's media network are destination devices for media content. At step1610, the media guidance application determines whether the user specified a destination device for the selected media content. In some embodiments, processing circuitry706may determine whether the user provided an input using user input interface710for specifying the destination device. For example, the media guidance application may determine whether the user selected a particular user equipment device as a destination for media content when the user selected the media content (e.g., selecting a recording device when setting up a recording). If the media guidance application determines that a particular user equipment device was selected, process1600moves to step1612. At step1612, the identified user equipment device is assigned as a destination for the selected media content. For example, processing circuitry706assigns the identified user equipment device802,804or806(FIG.8) as a destination for the selected media content. Process1600then ends at step1614. If, at step1610, the media guidance application instead determines that no particular user equipment device was selected as a destination for the selected media content, process1600moves to step1616. In some embodiments, process1600may include an additional step for determining whether rules apply for the selected media. For example, the media guidance application may determine whether the user selected an option to record a program using rules. If the rules do not apply, a default user equipment device may be used as the destination for the selected media content (e.g., step1618). At step1616, the media guidance application determines whether the selected media content satisfies a condition for one of the rules identified at step1606. In some embodiments, processing circuitry706may determine whether the selected media content satisfies media condition field1304(FIG.13) for the data structure1300(FIG.13) of one of the rules. For example, the media guidance application may compare the attributes of the selected media content with the conditions for each of the rules identified at step1606. If the media guidance application determines that none of the identified rules have conditions that are satisfied by the selected media content, process1600moves to step1618. At step1618, the media guidance application uses a default user equipment device as a destination device for the selected media content. For example, the media guidance application may use a default recording device to perform a recording. Process1600then ends at step1614. If, at step1616, the media guidance application instead determines that at least one of the identified rules has a condition that is satisfied by the selected media content, process1600moves to step1620. 
At step1620, the media guidance application assigns the user equipment device of at least one rule whose condition is satisfied by the selected media content as the destination device for the selected media content. In some embodiments, processing circuitry706may assign user equipment devices802,804and806identified in user equipment device field1306(FIG.13) of data structures1300of rules for which the media content satisfies media condition field1304as destination devices for the selected content. Processing circuitry706may then direct the identified user equipment devices802,804and806to record, download, stream, cache, transfer (or perform any other suitable action with) the selected media content. For example, the media guidance application may identify every rule that is satisfied by the media content, and use every device associated with those rules as destination devices for the selected media content. As another example, the media guidance application may use only one or some of the destination devices of the rules. The one or some of the user equipment devices used may be selected, for example, using conflict rules, priority rules, or any other suitable mechanism. Process1600then ends at step1614. FIG.17shows an illustrative process for identifying the applicable rules when a user is not identified in accordance with an embodiment of the invention. Process1700begins at step1702. At step1704, the media guidance application receives a user selection of a media network. For example, the user may access one or a combination of user equipment devices that are associated with a user media network. As another example, the user may log in to a user media network (e.g., log in to a household network without identifying which household member it is). At step1706, the media guidance application identifies the rules that apply to the user equipment devices of the identified user media network. For example, the media guidance application may identify the rules that are stored with the user media network data structure (e.g., field1012of data structure1000,FIG.10). As another example, the media guidance application may identify the rules that are stored with the user equipment devices of the user media network. Process1700then moves to step1708, which may correspond to step1608of process1600(FIG.16). The above-described embodiments of the present invention are presented for purposes of illustration and not of limitation, and the present invention is limited only by the claims which follow.
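Again purely as a hedged illustration, the following short Python sketch (standalone, with hypothetical names) mirrors the decision flow of steps1610through1620: a device chosen directly by the user wins, otherwise every rule whose condition is satisfied by the content contributes its destination device, and a default device is used when no rule matches (step1618).

from typing import Dict, List, Optional

def select_destinations(content_attributes: Dict[str, str],
                        rules: List[Dict],
                        user_selected_device: Optional[str] = None,
                        default_device: str = "default_recorder") -> List[str]:
    # Each rule is a plain dict such as {"criteria": {"genre": "sports"}, "device": "bedroom_recorder"},
    # loosely mirroring media condition field 1304 and user equipment device field 1306.
    # Steps 1610/1612: a device explicitly selected by the user takes precedence.
    if user_selected_device is not None:
        return [user_selected_device]
    # Steps 1616/1620: collect the device of every rule whose condition the content satisfies.
    matches = [rule["device"] for rule in rules
               if all(content_attributes.get(key) == value
                      for key, value in rule["criteria"].items())]
    # Step 1618: fall back to a default device when no rule condition is satisfied.
    return matches or [default_device]

# Example: a sports program is routed to the bedroom recorder by a matching rule.
print(select_destinations({"genre": "sports"},
                          [{"criteria": {"genre": "sports"}, "device": "bedroom_recorder"}]))

A fuller implementation could additionally apply conflict or priority rules to narrow the matched devices, as described above.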
11943500
DETAILED DESCRIPTION The present disclosure provides a media streaming device suspended between two cord segments, where one cord segment is a cable for transferring media content over a particular media transfer interface (e.g., a high-definition multimedia interface (HDMI) output cable or audio cable), and the other cord segment is a power cord coupled to a power supply (e.g., DC or AC power supply). The media streaming device may be small and lightweight such that the media streaming device can be suspended between the two cord segments. In some examples, one or both of the two cord segments may be flexible yet sufficiently rigid to suspend the media streaming device. Further, the length of the cord segments may be designed such that the media streaming device is suspended at a position away from a receiving device in a manner that minimizes interference or port blocking of adjacent media transfer interface connections at the receiving device and/or far enough away from the receiving device to reduce negative effects on the device's radio-frequency (RF) performance. The media streaming device may have a certain size, shape, and weight, and the cord segments may have a certain thickness such that from a point of view of the user, the overall streaming solution appears as a single continuation cord with an electronic module integrated within the cord. In some conventional media streaming devices, the connector directly extends from a housing of the media streaming device, and the connector of the media streaming device is plugged directly into the connector of the receiving device (e.g., a media streaming dongle or media streaming stick). In contrast, in various implementations of the present disclosure, the cord segment is coupled to the media streaming device and the connector is disposed on the end portion of the cord segment such that the receiving device is connected to the media streaming device via the cord segment, and the cord segment has a certain thickness and rigidity in order to suspend the media streaming device at a location away from the receiving device. In some examples, one or both of the cord segments may include a memory-shape material configured to maintain a certain shape. FIG.1illustrates a system100for implementing a streaming solution according an implementation. The system100includes a media streaming device102configured to transfer, over a wireless connection, streamed media content from a media content source106to a receiving device104. The receiving device104may be any type of device capable of receiving and then rendering audio and/or video content. In some examples, the receiving device104may include or otherwise be connected to a display screen capable of displaying the video content. The display screen may be a liquid crystal display (LCD), plasma display, cathode tube, or any type of display screen technology known to one of ordinary skill in the art. The receiving device104may include or be connected to one or more speakers capable of rendering the audio content. In some examples, the receiving device104may be a television set, standalone display device, tablet, gaming console, or a laptop computer, etc. In some examples, the receiving device104may be an audio device capable of rendering the audio content (not the video content). The media streaming device102may include a system on chip (SOC) and one or more wireless interfaces having one or more antenna structures designed to wirelessly receive and transmit data. 
The SOC may be an integrated circuit that integrates two or more components into a chip, and may contain digital, analog, and mixed-signal functions, and may also include radio-frequency functions. In other examples, the radio-frequency functions may be provided on a separate chip. The media streaming device102may be configured to stream the media content from the media content source106to the receiving device104over a network150. The network150may be any type of public or private communication network such as the Internet (e.g., Wi-Fi, mobile network, etc.) or short-range communication network (e.g., Bluetooth, near-field communication (NFC), etc.). The media content may include video and/or audio data. The media content source106may be any type of device capable of providing the media content. The media content source106may be a consumer computing device such as a tablet, smartphone, desktop computer, laptop computer, gaming console, etc. In other examples, the media content source106may be one or more server devices that host one or more applications configured to provide the media content over the network150. The media streaming device102may have a housing103configured to house the components of the media streaming device102. The components of the media streaming device102are further explained with reference toFIGS.2and3. The housing103may be a unitary component or multiple components coupled together. The housing103may have a circular, rectangular, or any type of non-circular and/or non-rectangular shape. In some examples, the housing103may be cylindrical (e.g., puck shape). The media streaming device102may be coupled to the receiving device104via an output cord segment110, and the media streaming device102may be coupled to a power source108via a power cord segment112. The output cord segment110may provide the physical connection between the media streaming device102and the receiving device104, where the media content is routed from the media streaming device102to the receiving device104via the output cord segment110. In some examples, the output cord segment110is an HDMI cord segment. In some examples, the output cord segment110is an audio cord segment (digital or analog). The power cord segment112may provide the physical connection between the media streaming device102and the power source108. The power source108may be an AC power source such as an AC wall socket, for example. In other examples, the power source108is a DC power source such as another computing device. The power cord segment112is configured to transfer power from the power source108to the media streaming device102. In some examples, the power cord segment112is a universal serial bus (USB) power cord. In some examples, the power cord segment112is a USB power and data cord. The power cord segment112may be longer than the output cord segment110. In other examples, the power cord segment112is shorter than the output cord segment110. In other examples, the power cord segment112is the same length as the output cord segment110. In some examples, the power cord segment112has a larger diameter than the output cord segment110. In other examples, the power cord segment112has a smaller diameter than the output cord segment110. In other examples, the power cord segment112has the same diameter as the output cord segment110. The output cord segment110may include one or more materials that are configured to transfer audio and/or video content from the media streaming device102to the receiving device104. 
In some examples, the output cord segment110may include an outer material configured to enclose one or more metal wires. In some examples, the output cord segment110may include a first material that is flexible yet sufficiently rigid to suspend the media streaming device102. In some examples, the first material is a polymer-based material. In some examples, the first material is a memory-shape material. In some examples, the output cord segment110includes one or more memory-shape wires. In some examples, the power cord segment112includes a second material that is flexible yet sufficiently rigid to suspend the media streaming device102. In some examples, the second material is a polymer-based material. In some examples, the second material is a memory-shape material. In some examples, the first material is the same as the second material. In other examples, the first material is different from the second material. The output cord segment110may include a first end portion111configured to be coupled to the housing103of the media streaming device102, and a second end portion113configured to be coupled to the receiving device104. The first end portion111may be fixedly coupled to the media streaming device102. For example, the first end portion111may be integrally coupled to the housing103of the media streaming device102. The first end portion111may define a connector configured to be coupled to a corresponding connector of the media streaming device102. In some examples, the connectors may be contained within the housing103of the media streaming device102such that the output cord segment110is integrally coupled to the media streaming device102. In some examples, the connector of the first end portion111is a low-voltage differential signaling (LVDS) connector. In some examples, the connector of the first end portion111is an audio-type connector. The second end portion113may be removably coupled to the receiving device104. In some examples, the second end portion113may define an HDMI connector to be coupled to an HDMI connector associated with the receiving device104. In some examples, the connector of the second end portion113is an audio-type connector configured to be coupled to a corresponding connector of the receiving device104. In some examples, configurations of the output cord segment110and associated connectors are provided in Application No. 62/215,571, filed on Sep. 8, 2015, titled IMPROVED HIGH-DEFINITION MULTIMEDIA INTERFACE (HDMI) CABLE INTEGRATED WITH A MEDIA DEVICE, the contents of which are herein incorporated by reference in their entirety. The power cord segment112may include a first end portion117configured to be coupled to the media streaming device102, and a second end portion119configured to be coupled to the power source108. The first end portion117of the power cord segment112may be removably coupled to the media streaming device102. In other examples, the first end portion117of the power cord segment112may be fixedly coupled to the media streaming device102. In some examples, the first end portion117of the power cord segment112may define a male USB connector to be coupled to a female USB connector on the media streaming device102. The second end portion119of the power cord segment112may define a power plug adaptor to be inserted into a wall socket. In some examples, the second end portion119may define a USB connector configured to be coupled to a device. 
In some examples, the second end portion119may define a USB connector and a power plug adaptor, where the USB connector is removably coupled to the power plug adaptor. In some examples, the outer housing103of the media streaming device102may have a tubular shape that is the same or similar to the shape of the output cord segment110and/or the power cord segment112. In some examples, the outer housing103may be larger than the output cord segment110and the power cord segment112. The media streaming device102may be relatively small and lightweight such that the cord segments110,112can suspend the media streaming device102along the assembled system100. In some examples, the output cord segment110integrally coupled to the media streaming device102is sufficiently rigid such that the output cord segment110can support the media streaming device's weight. For example, relative to the weight of the media streaming device102, the material of the output cord segment110includes one or more properties that make the output cord segment110flexible yet rigid such that, when assembled, the output cord segment can support the weight of the media streaming device102. In some examples, the output cord segment110may include one or more materials that define an elasticity above a certain threshold, and that threshold is chosen relative to the weight of the media streaming device102. For instance, under the load of the media streaming device102, the output cord segment110can substantially maintain its shape. The output cord segment110can have a certain non-bendability in the sense that it can substantially resist deformation in response to the weight of the media streaming device102. In some examples, when a force greater than the force of the media streaming device102is applied to the output cord segment110, the output cord segment110can bend and hold that bent shape. Once assembled, the user may perceive the streaming solution (the media streaming device102with cord segments110,112) as a cable assembly with a power plug on one end and the output on the other end. For instance, when the connector of the output cord segment110is coupled to the receiving device104and the power cord segment112is coupled to the media streaming device102and the power source108, the media streaming device102is configured to be suspended at a distance away from the receiving device104. The length of the output cord segment110may be designed such that it is short enough to remain relatively close to the receiving device104(e.g., potentially hidden from the user) but long enough to reduce one or more problems associated with plugging the media streaming device102directly into the receiving device's port. In some examples, the length of the output cord segment110may be less than a length of the receiving device104. In some examples, the length of the output cord segment110may be less than a length (or width) of a display screen of the receiving device104. Also, the material(s) of the output cord segment110have properties such that when a force is not applied to the media streaming device102(the media streaming device102being integrally coupled to one end of the output cord segment110, the other end of the output cord segment110being coupled to the receiving device104), the media streaming device102remains a distance from the receiving device104that is more than one half of the length of the output cord segment110. 
At the same time, the output cord segment110can be sufficiently flexible to permit the user to bend the output cord segment110to a desired location (e.g., to hide the media streaming device102or improve the wireless functionality of the media streaming device102). In some examples, when coupled to the cord segments110,112, the media streaming device102is suspended in air. For instance, when coupled to the cord segments110,112, the media streaming device102does not contact (or otherwise rest) on the ground or another object (including the receiving device104). Rather, the media streaming device102remains at a position away from the receiving device104. In some examples, when the streaming solution is assembled, the output cord segment110bends (thereby creating one or more bend portions) to a certain point such that the media streaming device102does not contact any portion of the receiving device104. In some examples, the output cord segment110includes one or more materials that define a certain rigidity that provide a stiffness (in relation to the media streaming device102). In some examples, the corresponding port (e.g., HDMI port) of the receiving device104is located on a lateral side (or the back side) of the receiving device104, and when the output cord segment110is coupled to the receiving device104, the output cord segment110forces the media streaming device102a certain horizontal distance (e.g., more than 50% the length of the output cord segment110) away from a surface of the receiving device104. The output cord segment110can force the media streaming device102away from the surface of the receiving device104by not completely bending (e.g., the output cord segment110may slightly bend, but may maintain a certain shape until the user put additional force on the output cord segment110to move the media streaming device102to another location). In some examples, the output cord segment110includes a bendable material, where the output cord segment110is configured to hold its shape (e.g., a moldable material). As such, a user may be able to deform the output cord segment110into a desired position, e.g., hide the media streaming device102from a view of the user, or change the position of the media streaming device102relative to the receiving device104to increase the RF performance of the media streaming device102and/or receiving device104. As a result, the radio frequency (RF) performance may be improved. For example, interference from the receiving device104on the wireless communication of the media streaming device102may be reduced. Also, by placing the media streaming device102a distance away from the receiving device104, adjacent connector ports on the receiving device104are not blocked by the media streaming device102. For example, the receiving device104may include multiple ports, and, conventionally, when a device is plugged directly into one of the ports, the device can block one or more adjacent ports such that other devices are prevented from using these adjacent ports. FIG.2illustrates a media streaming device202configured to stream video content according to an implementation. In some examples, the media streaming device202may include one or more of the above-described features of the media streaming device102ofFIG.1. 
The media streaming device202may include a computer processing unit (CPU)220such as any type of general purpose computing circuitry or special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit), a graphics processing unit (GPU)222, random-access memory (RAM)224, storage226, and a network interface228configured to wirelessly connect the media streaming device202with the media content source106over the network150. The media streaming device202may include other components such as one or more batteries, connection interfaces, etc. The media streaming device202may be coupled to a video output cord segment210. The video output cord segment210may be an HDMI cord segment fixedly coupled to the media streaming device202. In other examples, the video output cord segment210is removably coupled to the media streaming device202. The video output cord segment210may include a first connector207(e.g., an HDMI connector) configured to be coupled to the receiving device104ofFIG.1, and a second connector214(e.g., an LVDS connector) configured to be coupled to the media streaming device202. The media streaming device202may be removably coupled to a power cord segment212. The power cord segment212may be a USB power cord segment having a first connector209to be removably coupled to the power source108ofFIG.1, and a second connector215configured to be removably coupled to the media streaming device202. The first connector209may be a USB connector, a power plug adaptor, or a USB connector and a power plug adaptor. The second connector215may be a USB connector or a micro-USB connector. FIG.3illustrates a media streaming device302configured to stream audio content according to an implementation. In some examples, the media streaming device302streams the audio content, but not the video content. The media streaming device302may be considered an audio streaming device, where networked audio content is seamlessly streamed to a wide variety of existing home speaker systems. In some examples, the media streaming device302may receive AC or DC power, provide audio output using a common plug format or set of formats, and support wireless network connections for control and streaming media data. The user may be able to control the media playback on the media streaming device302through one or more other computing devices that can use control protocols. Also, the media streaming device302may provide a minimal user interface for resetting the device or initiating a setup mode, but the majority of the control and interaction may be driven by other devices that communicate with the media streaming device302wirelessly. The media streaming device302may include a computer processing unit (CPU)320such as any type of general purpose computing circuitry or special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit), a memory326, a network interface328configured to wirelessly connect the media streaming device302with the media content source106over the network150, and an audio output circuit330configured to output the audio content to the receiving device104. The memory326may include RAM and/or storage. The media streaming device302may include other components such as one or more batteries, connection interfaces, etc. The media streaming device302may be coupled to an audio output cord segment310. The audio output cord segment310may be fixedly coupled to the media streaming device302. 
In other examples, the audio output cord segment310is removably coupled to the media streaming device302. The audio output cord segment310may include a first connector307configured to be coupled to the receiving device104ofFIG.1, and a second connector314configured to be coupled to the media streaming device302. The media streaming device302may be removably coupled to a power cord segment312. The power cord segment312may be a USB power cord segment having a first connector309to be removably coupled to the power source108ofFIG.1, and a second connector315configured to be removably coupled to the media streaming device302. The first connector309may be a USB connector, a power plug adaptor, or a USB connector and a power plug adaptor. The second connector315may be a USB connector or a micro-USB connector. The audio output circuit330may be configured to detect which type of audio output cord segment310is coupled to the media streaming device302. In some examples, the audio output circuit330may be configured to detect whether the connected audio output cord segment310is a digital-type cord or an analog-type cord. For example, the digital-type cord may be an optical audio cord such as TOSLINK, and the analog-type cord may be an RCA adaptor cord. Depending on the type of cord detected, the audio output circuit330is configured to format the audio content to have the appropriate format corresponding to the detected cord type. For example, when the audio output circuit330detects that the audio output cord segment310is the digital-type cord, the audio output circuit330formats the audio content to a digital format. When the audio output circuit330detects that the audio output cord segment310is the analog-type cord, the audio output circuit330formats the audio content to an analog format. In some examples, the audio output circuit330may transfer digital audio via optical interface, supply analog audio via a digital-to-analog converter, and/or supply the audio at various voltage levels to address various classes of audio rendering systems. FIG.4illustrates a computer module402configured to be coupled to a device404via a first cord segment410and a power source408via a second cord segment412such that the computer module402converts the device404into an application-specific computer according to an implementation. The first cord segment410may be any of the output cord segments described with reference to any of the figures. The second cord segment412may be any of the power cord segments described with reference to any of the figures. Also, the computer module402may include one or more of the components described with reference to the media streaming device (video or audio) of any of the figures. However, more generally, the computer module402may include components and logic associated with a network-enabled computer such as one or more processors, a non-transitory computer-readable medium, one or more network interfaces, an operating system, and/or one or more applications. When coupled to the device404, the computer module402converts the device404into an application-specific computer capable of connecting to the network150. For example, the device404may be a lamp, and when the computer module402is coupled to the lamp via the first cord segment410, the lamp is converted into a lamp-controlled computer configured to be manipulated and controlled in a manner that was not possible before. 
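To make the lamp example more concrete, the following minimal Python sketch models, under assumed and simplified conditions, a network-enabled control module driving an attached appliance over a plain TCP text command; the class, port, and command names are hypothetical and no particular control protocol of the computer module402is implied.

import socket

class DeviceControlModule:
    # Hypothetical sketch of a network-enabled module attached to a device such as a lamp.
    def __init__(self, device_name: str, port: int = 9000):
        self.device_name = device_name
        self.port = port
        self.powered = False

    def apply_command(self, command: str) -> str:
        # Translate a simple text command into a state change of the attached device.
        if command == "on":
            self.powered = True
        elif command == "off":
            self.powered = False
        return f"{self.device_name} is {'on' if self.powered else 'off'}"

    def serve_once(self) -> None:
        # Accept a single command over TCP; an actual module would run an event loop
        # and speak an established control protocol rather than raw text.
        with socket.create_server(("", self.port)) as server:
            connection, _ = server.accept()
            with connection:
                command = connection.recv(64).decode().strip()
                connection.sendall(self.apply_command(command).encode())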
In other examples, the device404may be a microwave, and when the computer module402is coupled to the microwave via the first cord segment410, the microwave is converted into a microwave-controlled computer configured to be manipulated and controlled in a manner that was not possible before. Beside the lamp and microwave examples, the device404may be any type of device that can be electrically-controlled. In some examples, the computer module402is removably coupled to the first cord segment410. In other examples, the computer module402is fixedly coupled to the first cord segment410. In some examples, the computer module402is removably coupled to the second cord segment412. In other examples, the computer module402is fixedly coupled to the second cord segment412. In some examples, the second cord segment412is longer than the first cord segment410. In other examples, the second cord segment412has the same length as the first cord segment410. In some examples, the computer module402is smaller than a diameter of the first cord segment410and/or the second cord segment412. In other examples, the computer module402is slightly larger than the first cord segment410and the second cord segment412. In some examples, the first cord segment410and the second cord segment412appear as a continuation cord, and the computer module402appears to be integrated into the continuation cord. FIG.5illustrates a media streaming device502fixedly coupled to a power cord segment512and fixedly coupled to an output cord segment510according to an implementation. The media streaming device502may be any of the media streaming devices discussed with reference to any of the figures. Referring toFIG.5, the power cord segment512may include a power cord adaptor509configured to be plugged into an AC wall socket, and the output cord segment510may include an HDMI connector507configured to be coupled to a receiving device. FIG.6illustrates a media streaming device602removably coupled to a power cord segment612and fixedly coupled to an output cord segment610according to an implementation. The media streaming device602may be any of the media streaming devices discussed with reference to any of the figures. Referring toFIG.6, the power cord segment612may include a connector615(e.g., micro-USB connector) on one end portion of the power cord segment612, a connector616(e.g., USB connector) on the other end portion of the power cord segment612, and a power plug adaptor617configured to be removably coupled to the connector616. The output cord segment610may include an HDMI connector607configured to be coupled to the receiving device104. FIG.7illustrates a media streaming device702coupled to an output cord segment710according to an implementation. Referring toFIGS.1and7, the media streaming device702may be configured to stream media content, over the network150, from the media content source106to the receiving device104. In some examples, the media streaming device702is the media streaming device102ofFIG.2or the media streaming device202ofFIG.2. The media streaming device702may be configured with wireless communication modules to communicate using Wi-Fi, Bluetooth (or other short-range protocols like Near Field Communication (NFC)), and cellular. The media streaming device702may be coupled to an output cord segment710having a HDMI connector707. In some examples, the media streaming device702may be configured with a USB power scheme. 
For example, the media streaming device702may define a connector slot730configured to receive a USB connector of the power cord segment. In some examples, the connector slot730is a micro-USB connector slot configured to receive a micro-USB connector of the power cord segment. Also, the media streaming device702may include a reset button729. When pressed, the reset button729is configured to start the reset of the media streaming device702. The reset button729may be considered part of a minimal user interface for resetting the media streaming device702or initiating a setup mode. However, the majority of the control and interaction may be driven by other computing devices that communicate with it wirelessly. The media streaming device702is relatively small and lightweight such that the media streaming device702can be suspended along the assembled streaming solution. Once assembled, the user may perceive the streaming solution (e.g., the media streaming device702with the output cord segment710and the power cord segment) as an integrated cable assembly (or continuous cord assembly) with a power plug on one end and the output on the other end. For instance, when the HDMI connector707is coupled to the receiving device104and the power cord segment is coupled to the media streaming device702and the power source108, the media streaming device702is configured to be suspended at a distance away from the receiving device104. The length of the output cord segment710may be designed such that it is short enough to remain relatively close to the receiving device104(e.g., potentially hidden from the user) but long enough to reduce one or more problems associated with plugging the media streaming device702directly into the receiving device's HDMI port. In some examples, when coupled to the cord segments, the media streaming device702is suspended in air. In some examples, when coupled to the cord segments, the media streaming device702does not contact (or otherwise rest) on the ground or another object. Rather, the media streaming device702remains at a position away from the receiving device104. In some examples, the media streaming device702is configured to hang from the HDMI port of the receiving device104. In some examples, the media streaming device702is configured to hang from the HDMI port of the receiving device104at an angle. In some examples, when the streaming solution is assembled, the output cord segment710bends (thereby creating one or more bend portions) to a certain point such that the media streaming device702does not contact any portion of the receiving device104. As a result, the radio frequency (RF) performance may be improved. For example, interference from the receiving device104on the wireless communication of the media streaming device702may be reduced. Also, by placing the media streaming device702a distance away from the receiving device104, adjacent HDMI ports on the receiving device104are not blocked by the media streaming device702. Furthermore, the output cord segment710may be flexible yet semi-rigid such that the output cord segment710can maintain a position. In some examples, the output cord segment710includes a bendable material, where the output cord segment710is configured to hold its shape (e.g., “Gumby” type material). As such, a user may be able to deform the output cord segment710into a desired position, e.g., hide the media streaming device702from a view of the user, or increase the RF performance of the media streaming device702and/or receiving device104. 
In some examples, the output cord segment710includes a memory shape material such as a memory shape polymer. In some examples, the output cord segment710includes a memory shape metal wire. As such, the output cord segment710may be configured to deflect into a bent shape when suspended between the cord segments, but return to its original linear shape when disassembled from the receiving device104. In some examples, the media streaming device702may be substantially cylindrical having a diameter and a sidewall731. In some examples, the media streaming device702may be mostly cylindrical with a diameter that can be defined by the distance from the center of the media streaming device702to a point on the outer perimeter. The diameter may be within a range of 45-55 millimeters (mm). In some examples, the diameter may be within a range of 48-53 mm. In some examples, the diameter may be approximately 51.8 mm. In some examples, the sidewall731may have a height within a range of 5-10 mm. In some examples, the height of the sidewall731may be approximately 7 mm. The above ranges and values for the diameter and the sidewall731of the media streaming device702may ensure that the media streaming device702is relatively compact (and lightweight) so that the media streaming device702can be suspended between the cord segments. The output cord segment710may be fixedly coupled to the media streaming device702. In some examples, the output cord segment710is not removable from the media streaming device702(e.g., without taking it apart and disassembling the media streaming device702). In other words, a captive connection may be used between the output cord segment710and the media streaming device702. In some examples, the length of the output cord segment710may be in the range of 90-120 mm (e.g., from the media streaming device702to the HDMI connector end). In some examples, the length of the output cord segment710may be in the range of 95-115 mm. In some examples, the length of the output cord segment710may be approximately 110 mm. The above ranges and values for the length of the output cord segment710may ensure that the media streaming device702remains relatively close to the receiving device (and/or suspended in air), but positioned a distance away from the receiving device104such that wireless interference caused by the receiving device104is reduced. The output cord segment710may have a width that is wider than a width of any power cord segment. In some examples, the width of the output cord segment710is wider than any USB cord segment. In some examples, the output cord segment710is not cylindrical. Rather, the output cord segment710includes a first flat surface and a second flat surface that is opposite to the first flat surface. FIG.8illustrates an exploded view of the media streaming device702according to an implementation. The media streaming device702may include a top enclosure assembly734, a printed circuit board assembly736, and a bottom enclosure assembly738. The printed circuit board assembly736may be disposed between the top enclosure assembly734and the bottom enclosure assembly738. As shown inFIG.8, the output cord segment710may include the HDMI connector707, and an LVDS connector732. The LVDS connector732is configured to be coupled to the printed circuit board assembly736. 
In some examples, the top enclosure assembly734and the bottom enclosure assembly738(when coupled together) are configured to enclose the LVDS connector732, where only the cord portion extends from the outer structure of the media streaming device702. The LVDS connector732may have a size larger than a size of the cord portion of the output cord segment710, but the LVDS connector732may reside inside the overall housing structure defined by the top enclosure assembly734and the bottom enclosure assembly738. In some examples, the top enclosure assembly734is coupled to the bottom enclosure assembly738using an interference fit. In some examples, the top enclosure assembly734is fused with the bottom enclosure assembly738using ultrasonic welding (e.g., two plastic parts are fused together to make a bond). In some examples, the top enclosure assembly734is coupled to the bottom enclosure assembly738using one or more fasteners. In some examples, the output cord segment710is coupled to the bottom enclosure assembly738and the printed circuit board assembly736with fasteners such as screws. The bottom enclosure assembly738may have a cupped-shaped structure configured to receive the printed circuit board assembly736. In some examples, the bottom enclosure assembly738is configured to enclose most of the printed circuit board assembly736(or the printed circuit board assembly736in its entirety). Within the recess of the bottom enclosure assembly738, the bottom enclosure assembly738may also include a thermal adhesive, a heat spreader, a thermal pad or gel, and a shield. The printed circuit board assembly736may include a plurality of integrated chips coupled to a substrate (and/or both sides of the substrate) and one or more shields to protect the integrated chips. The top enclosure assembly734may have a disc-shaped structure configured to be coupled to the bottom enclosure assembly738. In some examples, the top enclosure assembly734may include or otherwise be coupled to a first thermal gel, a heat spreader, and a second thermal pad or gel. FIG.9Aillustrates an external surface740of the top enclosure assembly734of the media streaming device702according to an implementation.FIG.9Billustrates an internal surface742of the top enclosure assembly734of the media streaming device702according to an implementation. The external surface740may be the surface visible to the user, and the internal surface742may be the surface facing the printed circuit board assembly736. In some examples, the top enclosure assembly734may have a cylindrical shape with a sidewall741(e.g., the sidewall741may define the depth of the cylinder). In some examples, the length of the sidewall741may be less than the sidewall of the bottom enclosure assembly738. The top enclosure assembly734may have a diameter that is the same (or substantially the same) as the diameter of the bottom enclosure assembly738. Referring toFIG.9B, the internal surface742of the top enclosure assembly734may define a pair of heat stake components744. FIG.10Aillustrates an external surface746of the bottom enclosure assembly738of the media streaming device702according to an implementation.FIG.10Billustrates an internal surface748of the bottom enclosure assembly738of the media streaming device702according to an implementation. The external surface740may be the surface visible to the user, and the internal surface742may be the surface facing the printed circuit board assembly736. 
In some examples, the bottom enclosure assembly738may have a cylindrical shape with a sidewall745(e.g., the sidewall745may define the depth of the cylinder). In some examples, the length of the sidewall745may be greater than the length of the sidewall741of the top enclosure assembly734. The bottom enclosure assembly738may have a diameter that is the same (or substantially the same) as the diameter of the top enclosure assembly734. Referring toFIG.10A, the bottom enclosure assembly738may define the connector slot730configured to receive a USB connector of the power cord segment. In some examples, the connector slot730is a micro-USB connector slot configured to receive a micro-USB connector of the power cord segment. Also, the bottom enclosure assembly738may define a reset slot733configured to expose the reset button729. Further, the bottom enclosure assembly738may define an LVDS connector slot747. The LVDS connector slot747may be the opening in which the output cord segment710extends from the bottom enclosure assembly738. The LVDS connector slot747may capture the HDMI cable along the cable section. In some examples, the LVDS connector732is inboard of the LVDS connector slot747. In cases where the cable is not fixed, a female HDMI receptacle (or variant) may be disposed in the LVDS connector slot747. Referring toFIG.10B, a pair of alignment pins750may be coupled to the internal surface748of the bottom enclosure assembly738. In some examples, more than two alignment pins750may be used. The printed circuit board assembly736may define corresponding holes (e.g., holes754onFIG.11A) on the substrate. The holes are configured to receive the alignment pins750such that the printed circuit board assembly736is aligned in the correct manner with respect to the bottom enclosure assembly738. Also, a heat spreader752may be coupled to the internal surface748of the bottom enclosure assembly738. FIG.11Aillustrates the printed circuit board assembly736disassembled from the bottom enclosure assembly738according to an implementation.FIG.11Billustrates the printed circuit board assembly736assembled with the bottom enclosure assembly738according to an implementation. Referring toFIGS.11A-11B, the printed circuit board assembly736is coupled to the LVDS connector732of the output cord segment710. The other end of the output cord segment710defines the HDMI connector707. The printed circuit board assembly736may be properly aligned with the bottom enclosure assembly738by aligning the alignment pins750with the holes754on the printed circuit board assembly736. As shown inFIG.11B, the printed circuit board assembly736is configured to fit within the bottom enclosure assembly738such that the LVDS connector732is contained within the bottom enclosure assembly738. FIGS.12A-12Billustrate one side of the printed circuit board assembly736according to an implementation. The printed circuit board assembly736may include a two-layer shield (e.g., internal frame+cover shield) configured to protect the integrated circuits (or IC chips) of the printed circuit board assembly736.FIG.12Aillustrates a top side761of the printed circuit board assembly736depicting one layer (internal frame760) of the two-layer shield according to an implementation.FIG.12Billustrates the top side761of the printed circuit board assembly736depicting the other layer (a cover shield769) of the two-layer shield according to an implementation. The top side761may be considered one surface of the printed circuit board assembly736. 
The top side761may be considered the surface of the printed circuit board assembly736facing the top enclosure assembly734. Referring toFIG.12A, the top side761of the printed circuit board assembly736may include a plurality of integrated circuits coupled to a substrate including a system on chip (SOC)764, a wireless communication chip766, and one or more power management integrated circuits (PMICs)768. In some examples, the wireless communication chip766may provide the logic for the Wi-Fi capabilities of the media streaming device702. The internal frame760may be coupled to the printed circuit board assembly736. The internal frame760may be a metal structure configured to surround the plurality of integrated circuits, and one or more walls that extend within the metal structure in order to separate one or more integrated circuits from other integrated circuits. For example, the internal frame760may include a shield wall763configured to separate the SOC764and the wireless communication chip766. Referring toFIG.12B, the cover shield769may be coupled to the internal frame760such that the integrated circuits are covered and protected by the two-layer shield defined by the internal frame760and the cover shield769. In some examples, the cover shield769may include a metal cover that is configured to be coupled to the internal frame762. The internal frame760and the cover shield769may form two or more separate metal enclosures configured to enclose and separate one or more integrated circuits from other integrated circuits. FIG.13A-13Billustrates the other side of the printed circuit board assembly736according to an implementation. For instance, the printed circuit board assembly736may include another two-layer shield (e.g., internal frame+cover shield) configured to protect the integrated circuits of a bottom side770of the printed circuit board assembly736.FIG.13Aillustrates a bottom side770of the printed circuit board assembly736depicting one layer (internal frame776) of the two-layer shield according to an implementation.FIG.13Billustrates the bottom side770of the printed circuit board assembly736depicting the other layer (cover shield778) of the two-layer shield according to an implementation. The bottom side770may be considered one surface of the printed circuit board assembly736. The bottom side770may be opposite to the top side761ofFIGS.12A-12B. The bottom side770may be considered the surface of the printed circuit board assembly736facing the bottom enclosure assembly738. Referring toFIG.13A, the bottom side770of the printed circuit board assembly736may include a plurality of integrated circuits coupled to the substrate including dynamic random access memory (DRAM) chips772, flash memory (NAND)774, and PMICs779. The internal frame776may be coupled to the bottom side770of the printed circuit board assembly736such that a perimeter of the internal frame776surrounds the integrated circuits. The internal frame776may be a metal structure configured to surround the plurality of integrated circuits. In some examples, the internal frame776may be a wall structure configured to provide support for the cover shield778. The internal frame776may include a shield wall777configured to separate the memory components (e.g., DRAM chips772, the flash memory774) from the other components such as the PMICs779. 
Referring toFIG.13B, the cover shield778may be coupled to the internal frame776such that the integrated circuits are covered and protected by the two-layer shield defined by the internal frame776and the cover shield778. In some examples, the cover shield778may include a metal cover that is configured to be coupled to the internal frame776. The internal frame776and the cover shield778may form two or more separate metal enclosures configured to enclose and separate one or more integrated circuits from other integrated circuits. FIG.14Aillustrates the output cord segment710having the HDMI connector707on one end portion of the output cord segment710and the LVDS connector732on the other end of the output cord segment710according to an implementation. The LVDS connector732may define a lip785configured to engage with the bottom enclosure assembly738that defines the LVDS connector slot747. The lip's engagement with the bottom enclosure assembly738ensures that the LVDS connector732will not become detached from the printed circuit board assembly736.FIG.14Billustrates an exploded view of the LVDS connector732according to an implementation. Referring toFIG.14B, the LVDS connector732may include a shield shell top780, an LVDS plug782, an LVDS receptacle784coupled to a substrate786, and a shield shell bottom788. FIG.15Aillustrates a perspective of a media streaming device802in a folded configuration according to an implementation.FIG.15Billustrates a perspective of the media streaming device802in an unfolded configuration according to an implementation.FIG.15Cillustrates another perspective of the media streaming device802in the folded configuration according to an implementation.FIG.15Dillustrates another perspective of the media streaming device802in the unfolded configuration according to an implementation. Referring toFIGS.15A-15D, the media streaming device802includes an output cord segment810coupled to the media streaming device802, where the output cord segment810includes an HDMI cable end portion811. The media streaming device802includes a bottom enclosure assembly838and a top enclosure assembly834. Referring toFIGS.15A and15C, in the folded configuration, the HDMI cable end portion811of the output cord segment810is coupled to the bottom enclosure assembly838of the media streaming device802. In some examples, the folded configuration is achieved by magnetic attraction between a magnet disposed within the HDMI cable end portion811and an internal metal heat spreader within the bottom enclosure assembly838. Referring toFIGS.15B and15D, the HDMI cable end portion811is uncoupled to the bottom enclosure assembly838of the output cord segment810. In some examples, the output cord segment810is biased to the unfolded configuration. In some examples, the unfolded configuration is a linear configuration. FIG.16illustrates an exploded view of the media streaming device802according to an implementation. Referring toFIG.16, the media streaming device802includes the top enclosure assembly834, the bottom enclosure assembly838, and a printed circuit board assembly836to be enclosed by the top enclosure assembly834and the bottom enclosure assembly838. The HDMI cable end portion811of the output cord segment810may include a magnet852. For example, the magnet852may be disposed within a structure of the HDMI cable end portion811. The top enclosure assembly834may be coupled to the bottom enclosure assembly838via thread forming fasteners850. 
FIG.17illustrates a partially exploded view of the printed circuit board assembly836according to another implementation. A first shield can869-1may be coupled to one surface of a substrate835of the printed circuit board assembly836, and a second shield can869-2may be coupled to the other surface of the substrate835of the printed circuit board assembly836. The substrate835may be any type of substrate capable of having mounted integrated circuits. In some examples, the substrate835is substantially circular. The first shield can869-1and the second shield can869-2may protect the circuit components on both sides of the printed circuit board assembly836. Also, one or more thermal gels854may be coupled to the first shield can869-1and the second shield can869-2. FIG.18Aillustrates an external view of the bottom enclosure assembly838according to an implementation.FIG.18Billustrates an internal view of the bottom enclosure assembly838according to another implementation. Referring toFIGS.18A and18B, the bottom enclosure assembly838may include a metal (e.g., steel) heat spreader855coupled to an internal surface851of the bottom enclosure assembly838. The heat spreader855may interact with the magnet852on the HDMI cable end portion811when in the folded configuration as shown inFIGS.15A and15C. Also, the bottom enclosure assembly838may include a thermal gel854coupled to the heat spreader855. In addition, a reset button829may be coupled to the bottom enclosure assembly838in order to allow a user to reset the media streaming device802. For example, the reset button829may protrude through an opening on a sidewall852of the bottom enclosure assembly838, and may be operatively coupled to the printed circuit board assembly836when the components of the media streaming device802are assembled together. In addition, a light pipe856may be coupled the bottom enclosure assembly838in order to allow a user to view light transmitted from the media streaming device802. For example, activation of the light via the light pipe856may indicate an operating status of the media streaming device802. The light pipe856may protrude through an opening on the sidewall852of the bottom enclosure assembly838, and may be operatively coupled to the printed circuit board assembly836when the components of the media streaming device802are assembled together. FIG.19Aillustrates a top view of the printed circuit board assembly836according to an implementation.FIG.19Billustrates the output cord segment810coupled to the printed circuit board assembly836without the shield can869according to an implementation.FIG.19Cillustrates the output cord segment110coupled to the printed circuit board assembly836with the shield can869according to an implementation. The printed circuit board assembly836may include an LVDS board connector833configured to be coupled to the LVDS connector of the output cord segment810. Also, the printed circuit board assembly836may include an internal frame860. The internal frame860may be a metal structure configured to surround the plurality of integrated circuits, and one or more walls that extend within the metal structure in order to separate one or more integrated circuits from other integrated circuits. In some examples, the printed circuit board assembly836may include a NAND flash874, and system on chip (SOC)864. Also, the printed circuit board assembly836may include a micro-USB connector875configured to be coupled to a micro-USB connector on the power cord segment. 
Referring toFIG.19C, the shield can869may be disposed on and surround the internal frame860in order to protect the NAND flash874and the SOC864, as well as other circuit components. FIG.20Aillustrates a bottom view of the printed circuit board assembly836without the shield can869according to an implementation.FIG.20Billustrates a bottom view of the printed circuit board assembly836with the shield can869according to an implementation. The printed circuit board assembly836may include a DDR memory877and a WiFi chip878. The printed circuit board assembly836may include an internal frame860configured to surround and separate the DDR memory877and the WiFi chip878. The shield can869may be disposed on and surround the internal frame860in order to protect the DDR memory877and the WiFi chip878, as well as other circuit components, on the other side of the printed circuit board assembly836. FIG.21illustrates an audio streaming device902configured to stream audio content according to an implementation. In some examples, the audio streaming device902streams the audio content, but not the video content. The audio streaming device902may seamlessly stream networked audio content to a wide variety of existing home speaker systems (e.g., one or more receiving devices104). In some examples, the audio streaming device902may receive AC or DC power, provide audio output using a common plug format or set of formats, and support wireless network connections for control and streaming media data. The user may be able to control the media playback on the audio streaming device902through one or more other computing devices that can use control protocols. Also, the audio streaming device902may provide a minimal user interface for resetting the device or initiating a setup mode, but the majority of the control and interaction may be driven by other devices that communicate with the audio streaming device902wirelessly. The audio streaming device902may include a housing903configured to support and enclose a computer processing unit (CPU)320such as any type of general purpose computing circuitry or special purpose logic circuitry configured to wirelessly connect the audio streaming device902with a media content source106. In some examples, the housing903may include a cylindrical or puck-shaped design. In some examples, the housing903may be any of the structures described with reference to the previous figures. The housing903may define a micro-USB connector slot906configured to receive a micro-USB connector of a power cord segment. Also, the audio streaming device902may include an audio jack905configured to receive an audio output cord segment910. The audio jack905may be an optical and analog audio jack. The audio output cord segment910may be a digital-type cord. In some examples, the audio output cord segment910may be an analog-type cord. The audio streaming device902may be removably coupled to the audio output cord segment910. In other examples, the audio output cord segment910may be fixedly coupled to the audio streaming device902. The audio output cord segment910may include a first connector907configured to be inserted and coupled to the audio jack905on the audio streaming device902, and a second connector914configured to be coupled to a receiving device (e.g., the receiving device104ofFIG.1). In some examples, the audio streaming device902includes features from the media streaming device302ofFIG.3(e.g., the CPU320, the memory326, the network interface328, and the audio output circuit330). 
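As a hedged illustration of the cord-type handling described for the audio output circuit330(and, below, the audio output circuit930), the following short Python sketch assumes the circuit can distinguish a digital-type cord from an analog-type cord (here modeled, purely as an assumption, by a sensed voltage at the jack) and then routes the content to the matching output path; the function names, threshold, and detection mechanism are invented for illustration only.

def detect_cord_type(jack_sense_voltage: float) -> str:
    # Assumption for illustration: an optical (digital-type) cord presents a different
    # sense voltage at the audio jack than an analog (RCA adaptor) cord.
    return "digital" if jack_sense_voltage > 1.5 else "analog"

def format_audio(samples: bytes, cord_type: str) -> str:
    # Route the content to the matching output path: digital content to the optical
    # interface, analog content through a digital-to-analog converter (DAC).
    if cord_type == "digital":
        return f"optical output: {len(samples)} bytes of digital audio"
    return f"analog output via DAC: {len(samples)} bytes converted"

# Example: a detected analog-type cord causes the content to be converted before output.
print(format_audio(b"\x00\x01" * 512, detect_cord_type(0.4)))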
FIG.22illustrates an exploded view of the audio streaming device902according to an implementation. The audio streaming device902may include a top enclosure assembly934, a printed circuit board assembly936and a bottom enclosure assembly938. The top enclosure assembly934may be coupled to the bottom enclosure assembly938via thread forming fasteners850such that the printed circuit board assembly936is disposed within the top enclosure assembly934and the bottom enclosure assembly938. FIG.23Aillustrates a top view of the printed circuit board assembly936according to an implementation.FIG.23Billustrates a bottom view of the printed circuit board assembly936according to an implementation. A first shield can969-1may be coupled to one surface of the printed circuit board assembly936, and a second shield can969-2may be coupled to the other surface of the printed circuit board assembly936. The first shield can969-1and the second shield can969-2may protect the circuit components on both sides of the printed circuit board assembly936. Also, one or more thermal gels954may be coupled to the first shield can969-1and the second shield can969-2. FIG.24Aillustrates an external view of the bottom enclosure assembly938according to an implementation.FIG.24Billustrates an internal view of the bottom enclosure assembly938according to another implementation. Referring toFIGS.24A and24B, the bottom enclosure assembly938may include a metal (e.g., steel) heat spreader955coupled to an internal surface of the bottom enclosure assembly938. Also, the bottom enclosure assembly938may include a thermal gel954coupled to the heat spreader955. In addition, a reset button929may be coupled to the bottom enclosure assembly938in order to allow a user to reset the audio streaming device902. For example, the reset button929may protrude through an opening on a sidewall of the bottom enclosure assembly938, and may be operatively coupled to the printed circuit board assembly936when the components of the audio streaming device902are assembled together. In addition, a light pipe956may be coupled to the bottom enclosure assembly938in order to allow a user to view light transmitted from the audio streaming device902. For example, activation of the light via the light pipe956may indicate an operating status of the audio streaming device902. The light pipe956may protrude through an opening on the sidewall of the bottom enclosure assembly938, and may be operatively coupled to the printed circuit board assembly936when the components of the audio streaming device902are assembled together. FIG.25Aillustrates a top view of the printed circuit board assembly936without a shield can969according to an implementation.FIG.25Billustrates a top view of the printed circuit board assembly936with the shield can969according to an implementation. The printed circuit board assembly936may include an internal frame960. The internal frame960may be a metal structure configured to surround the plurality of integrated circuits, and one or more walls that extend within the metal structure in order to separate one or more integrated circuits from other integrated circuits. In some examples, the printed circuit board assembly936may include a NAND flash974and a system on chip (SOC)964coupled to a substrate of the printed circuit board assembly936. Also, the printed circuit board assembly936may include a micro-USB connector975configured to be coupled to a micro-USB connector on the power cord segment. 
The shield can969may be disposed on and surround the internal frame960in order to protect the NAND flash974and the SOC964, as well as other circuit components, on one side of the printed circuit board assembly936. FIG.26Aillustrates a bottom view of the printed circuit board assembly936without the shield can969according to an implementation.FIG.26Billustrates a bottom view of the printed circuit board assembly936with the shield can969according to an implementation. The printed circuit board assembly936may include a DDR memory977and a WiFi chip978disposed on a substrate of the printed circuit board assembly936. The printed circuit board assembly936may include an internal frame960configured to surround and separate the DDR memory977and the WiFi chip978. The shield can969may be disposed on and surround the internal frame960in order to protect the DDR memory977and the WiFi chip978, as well as other circuit components, on the other side of the printed circuit board assembly936. The printed circuit board assembly936may include the audio jack905coupled to the bottom surface of the substrate at one end of the printed circuit board assembly936, and the micro-USB connector975coupled to the bottom surface of the substrate at the other end of the printed circuit board assembly936. The printed circuit board assembly936may include an audio output circuit930. In some examples, the audio output circuit930may be disposed on the substrate outside the internal frame960and outside the shield can969. In some examples, the audio output circuit930may be the audio output circuit330discussed with reference toFIG.3. The audio output circuit930may be configured to detect which type of audio output cord segment910is coupled to the audio streaming device902. In some examples, the audio output circuit930may be configured to detect whether the connected audio output cord segment910is a digital-type cord or an analog-type cord. Depending on the type of cord detected, the audio output circuit930is configured to format the audio content in the format corresponding to the detected cord type. For example, when the audio output circuit930detects that the audio output cord segment910is the digital-type cord, the audio output circuit930formats the audio content to a digital format. When the audio output circuit930detects that the audio output cord segment910is the analog-type cord, the audio output circuit930formats the audio content to an analog format. While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the embodiments. It should be understood that the described implementations have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The embodiments described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different embodiments described.
62,091
11943501
DETAILED DESCRIPTION In general, this disclosure describes techniques for dynamically switching between video resolutions when streaming video data. Streaming video data may be performed via unicast, broadcast, or multicast protocols. For example, Hypertext Transfer Protocol (HTTP) may be used to stream video data according to, e.g., Dynamic Adaptive Streaming over HTTP (DASH), HTTP Live Streaming (HLS), or other such protocols. As another example, broadcast or multicast protocols, such as enhanced Multimedia Broadcast Multicast Service (eMBMS), File Delivery over Unidirectional Transport (FLUTE), Real-time Object-delivery over Unidirectional Transport (ROUTE), Real-time Transport Protocol (RTP), Real Time Streaming Protocol (RTSP), or the like, may be used to deliver video data. Broadcast may also be performed over-the-air (OTA) via, e.g., radiowaves. In some examples, a broadcast or multicast protocol may deliver DASH or HLS formatted video data, such that a middleware unit of a client device may act as a client with respect to the broadcast or multicast session and as a proxy client to a DASH or HLS client. Digital Video Broadcasting (DVB) is a group of standards related to digital television. Standards for DVB are currently being developed for next generation video codecs. Document CM-AVC0620 for DVB provides the “Draft Commercial Requirements for Next Generation Video Codecs.” While the main objective of the work item addresses the addition of new codecs beyond existing codecs in TS 101 154 (MPEG-2 video, H.264/AVC, VC-1, H.265/HEVC), the commercial requirements also address new formats and new ways of encoding and consuming these formats. A draft of TS 101 154 mentions the possibility of dynamic changes of resolution only in the case of reference picture resampling using ITU-T H.266/Versatile Video Coding (VVC) coding standard. The use of a dynamic resolution for the encoding of contents has been used with existing codecs and is deployed by some streaming services for video on demand (VOD) streaming services. This disclosure recognizes that dynamic resolution of video data may also be deployed for live streaming or broadcast services. This disclosure describes techniques related to dynamic resolution changes that may be used for new encoding schemes. These techniques also allow for seamless resolution upscaling of decoded pictures in the MPEG-2 Systems layer of Integrated Receiver Decoders (IRDs) or in the rendering engine of displays (e.g., televisions). The resolution may change dynamically at sequence parameter set (SPS) boundaries, even without reference picture resampling, or within a sequence when reference picture resampling is used in VVC. These techniques may be employed in future codecs beyond VVC as well. This disclosure further describes certain constraints that may be applied to limit resolution changes in certain contexts. In video streaming applications, over-the-top (OTT) players may support a dynamic and seamless resolution change at segment boundaries. That is, in adaptive bitrate (ABR) encoding and streaming schemes, the switch of bitrate profile from one segment to the next may also be accompanied by a change of encoded resolution. Even within a bitrate profile, evolved video on demand (VOD) streaming services can dynamically change the resolution per scene to improve compression efficiency and save bandwidth. Dynamic resolution encoding tests have been performed on various standards and resolutions. 
For example, heuristic testing has been performed on ITU-T H.264/Advanced Video Coding (AVC) HD and ITU-T H.265/High Efficiency Video Coding (HEVC) UHD use cases. For H.264/AVC HD, tested resolutions included 1920×1080p, 1280×720p and 960×540p. For H.265/HEVC UHD, tested resolutions included 3840×2160, 2560×1440 and 1920×1080. These tests have shown a 20-25% bit rate savings by using lower resolutions on the most temporally complex scenes, yet such resolution reductions are not perceptible, and the benefits of the highest resolutions can be maintained when needed. In order for live broadcast applications to compete with streaming of pre-recorded content, it may be important to use the benefits of dynamic resolution encoding adaptation for video content that is broadcast live. While video decoders themselves support decoding of dynamic resolution encoded video data, pictures in the video decoder buffer may stay in the encoded resolution, and automatic upscaling to nominal resolution may be the responsibility of the IRD or display in the MPEG-2 Systems layer. The techniques of this disclosure may be applied to video files conforming to video data encapsulated according to any of ISO base media file format, Scalable Video Coding (SVC) file format, Advanced Video Coding (AVC) file format, Third Generation Partnership Project (3GPP) file format, and/or Multiview Video Coding (MVC) file format, or other similar video file formats. In HTTP streaming, frequently used operations include HEAD, GET, and partial GET. The HEAD operation retrieves a header of a file associated with a given uniform resource locator (URL) or uniform resource name (URN), without retrieving a payload associated with the URL or URN. The GET operation retrieves a whole file associated with a given URL or URN. The partial GET operation receives a byte range as an input parameter and retrieves a continuous number of bytes of a file, where the number of bytes corresponds to the received byte range. Thus, movie fragments may be provided for HTTP streaming, because a partial GET operation can get one or more individual movie fragments. In a movie fragment, there can be several track fragments of different tracks. In HTTP streaming, a media presentation may be a structured collection of data that is accessible to the client. The client may request and download media data information to present a streaming service to a user. In the example of streaming 3GPP data using HTTP streaming, there may be multiple representations for video and/or audio data of multimedia content. As explained below, different representations may correspond to different coding characteristics (e.g., different profiles or levels of a video coding standard), different coding standards or extensions of coding standards (such as multiview and/or scalable extensions), or different bitrates. The manifest of such representations may be defined in a Media Presentation Description (MPD) data structure. A media presentation may correspond to a structured collection of data that is accessible to an HTTP streaming client device. The HTTP streaming client device may request and download media data information to present a streaming service to a user of the client device. A media presentation may be described in the MPD data structure, which may include updates of the MPD. A media presentation may contain a sequence of one or more Periods. Each period may extend until the start of the next Period, or until the end of the media presentation, in the case of the last period.
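The partial GET operation described above can be sketched briefly. The following Python fragment, which uses only the standard library, issues HEAD, GET, and byte-range (partial GET) requests; the URL and byte range shown are hypothetical placeholders rather than values defined by this disclosure, and the server must support byte-range requests for the partial GET to succeed.

import urllib.request

SEGMENT_URL = "https://example.com/rep1/seg3.m4s"  # hypothetical segment URL

def http_head(url):
    """HEAD: retrieve only the headers of the file at the given URL."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return dict(resp.headers)

def http_get(url):
    """GET: retrieve the whole file associated with the URL."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def http_partial_get(url, first_byte, last_byte):
    """Partial GET: retrieve a continuous byte range of the file.

    A server that honors the Range header replies with 206 Partial Content.
    """
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={first_byte}-{last_byte}"})
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read()

# Example: fetch one movie fragment of a segment, assuming its byte range
# (0-4095 here, purely illustrative) was learned from a segment index or MPD.
# status, fragment = http_partial_get(SEGMENT_URL, 0, 4095)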
Each period may contain one or more representations for the same media content. A representation may be one of a number of alternative encoded versions of audio, video, timed text, or other such data. The representations may differ by encoding types, e.g., by bitrate, resolution, and/or codec for video data and bitrate, language, and/or codec for audio data. The term representation may be used to refer to a section of encoded audio or video data corresponding to a particular period of the multimedia content and encoded in a particular way. Representations of a particular period may be assigned to a group indicated by an attribute in the MPD indicative of an adaptation set to which the representations belong. Representations in the same adaptation set are generally considered alternatives to each other, in that a client device can dynamically and seamlessly switch between these representations, e.g., to perform bandwidth adaptation. For example, each representation of video data for a particular period may be assigned to the same adaptation set, such that any of the representations may be selected for decoding to present media data, such as video data or audio data, of the multimedia content for the corresponding period. The media content within one period may be represented by either one representation from group 0, if present, or the combination of at most one representation from each non-zero group, in some examples. Timing data for each representation of a period may be expressed relative to the start time of the period. A representation may include one or more segments. Each representation may include an initialization segment, or each segment of a representation may be self-initializing. When present, the initialization segment may contain initialization information for accessing the representation. In general, the initialization segment does not contain media data. A segment may be uniquely referenced by an identifier, such as a uniform resource locator (URL), uniform resource name (URN), or uniform resource identifier (URI). The MPD may provide the identifiers for each segment. In some examples, the MPD may also provide byte ranges in the form of a range attribute, which may correspond to the data for a segment within a file accessible by the URL, URN, or URI. Different representations may be selected for substantially simultaneous retrieval for different types of media data. For example, a client device may select an audio representation, a video representation, and a timed text representation from which to retrieve segments. In some examples, the client device may select particular adaptation sets for performing bandwidth adaptation. That is, the client device may select an adaptation set including video representations, an adaptation set including audio representations, and/or an adaptation set including timed text. Alternatively, the client device may select adaptation sets for certain types of media (e.g., video), and directly select representations for other types of media (e.g., audio and/or timed text). FIG.1is a block diagram illustrating an example system10that implements techniques for streaming media data over a network. In this example, system10includes content preparation device20, server device60, and client device40. Client device40and server device60are communicatively coupled by network74, which may comprise the Internet. 
In some examples, content preparation device20and server device60may also be coupled by network74or another network, or may be directly communicatively coupled. In some examples, content preparation device20and server device60may comprise the same device. Content preparation device20, in the example ofFIG.1, comprises audio source22and video source24. Audio source22may comprise, for example, a microphone that produces electrical signals representative of captured audio data to be encoded by audio encoder26. Alternatively, audio source22may comprise a storage medium storing previously recorded audio data, an audio data generator such as a computerized synthesizer, or any other source of audio data. Video source24may comprise a video camera that produces video data to be encoded by video encoder28, a storage medium encoded with previously recorded video data, a video data generation unit such as a computer graphics source, or any other source of video data. Content preparation device20is not necessarily communicatively coupled to server device60in all examples, but may store multimedia content to a separate medium that is read by server device60. Raw audio and video data may comprise analog or digital data. Analog data may be digitized before being encoded by audio encoder26and/or video encoder28. Audio source22may obtain audio data from a speaking participant while the speaking participant is speaking, and video source24may simultaneously obtain video data of the speaking participant. In other examples, audio source22may comprise a computer-readable storage medium comprising stored audio data, and video source24may comprise a computer-readable storage medium comprising stored video data. In this manner, the techniques described in this disclosure may be applied to live, streaming, real-time audio and video data or to archived, pre-recorded audio and video data. Audio frames that correspond to video frames are generally audio frames containing audio data that was captured (or generated) by audio source22contemporaneously with video data captured (or generated) by video source24that is contained within the video frames. For example, while a speaking participant generally produces audio data by speaking, audio source22captures the audio data, and video source24captures video data of the speaking participant at the same time, that is, while audio source22is capturing the audio data. Hence, an audio frame may temporally correspond to one or more particular video frames. Accordingly, an audio frame corresponding to a video frame generally corresponds to a situation in which audio data and video data were captured at the same time and for which an audio frame and a video frame comprise, respectively, the audio data and the video data that was captured at the same time. In some examples, audio encoder26may encode a timestamp in each encoded audio frame that represents a time at which the audio data for the encoded audio frame was recorded, and similarly, video encoder28may encode a timestamp in each encoded video frame that represents a time at which the video data for an encoded video frame was recorded. In such examples, an audio frame corresponding to a video frame may comprise an audio frame comprising a timestamp and a video frame comprising the same timestamp. 
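The timestamp-based correspondence between audio frames and video frames described above can be illustrated with a short sketch; the timestamps below are illustrative only and do not come from this disclosure.

from bisect import bisect_left

def find_corresponding_audio(video_ts, audio_timestamps):
    """Return the index of the audio frame whose capture timestamp is closest
    to video_ts. audio_timestamps must be sorted in increasing order."""
    i = bisect_left(audio_timestamps, video_ts)
    if i == 0:
        return 0
    if i == len(audio_timestamps):
        return len(audio_timestamps) - 1
    before, after = audio_timestamps[i - 1], audio_timestamps[i]
    return i if (after - video_ts) < (video_ts - before) else i - 1

# A video frame captured at t=0.40 s pairs with the audio frame captured at 0.40 s.
audio_ts = [0.00, 0.02, 0.04, 0.40, 0.42]
print(find_corresponding_audio(0.40, audio_ts))  # -> 3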
Content preparation device20may include an internal clock from which audio encoder26and/or video encoder28may generate the timestamps, or that audio source22and video source24may use to associate audio and video data, respectively, with a timestamp. In some examples, audio source22may send data to audio encoder26corresponding to a time at which audio data was recorded, and video source24may send data to video encoder28corresponding to a time at which video data was recorded. In some examples, audio encoder26may encode a sequence identifier in encoded audio data to indicate a relative temporal ordering of encoded audio data but without necessarily indicating an absolute time at which the audio data was recorded, and similarly, video encoder28may also use sequence identifiers to indicate a relative temporal ordering of encoded video data. Similarly, in some examples, a sequence identifier may be mapped or otherwise correlated with a timestamp. Audio encoder26generally produces a stream of encoded audio data, while video encoder28produces a stream of encoded video data. Each individual stream of data (whether audio or video) may be referred to as an elementary stream. An elementary stream is a single, digitally coded (possibly compressed) component of a representation. For example, the coded video or audio part of the representation can be an elementary stream. An elementary stream may be converted into a packetized elementary stream (PES) before being encapsulated within a video file. Within the same representation, a stream ID may be used to distinguish the PES-packets belonging to one elementary stream from the other. The basic unit of data of an elementary stream is a packetized elementary stream (PES) packet. Thus, coded video data generally corresponds to elementary video streams. Similarly, audio data corresponds to one or more respective elementary streams. Many video coding standards, such as ITU-T H.264/AVC and the upcoming High Efficiency Video Coding (HEVC) standard, define the syntax, semantics, and decoding process for error-free bitstreams, any of which conform to a certain profile or level. Video coding standards typically do not specify the encoder, but the encoder is tasked with guaranteeing that the generated bitstreams are standard-compliant for a decoder. In the context of video coding standards, a “profile” corresponds to a subset of algorithms, features, or tools and constraints that apply to them. As defined by the H.264 standard, for example, a “profile” is a subset of the entire bitstream syntax that is specified by the H.264 standard. A “level” corresponds to the limitations of the decoder resource consumption, such as, for example, decoder memory and computation, which are related to the resolution of the pictures, bit rate, and block processing rate. A profile may be signaled with a profile_idc (profile indicator) value, while a level may be signaled with a level_idc (level indicator) value. The H.264 standard, for example, recognizes that, within the bounds imposed by the syntax of a given profile, it is still possible to require a large variation in the performance of encoders and decoders depending upon the values taken by syntax elements in the bitstream such as the specified size of the decoded pictures. The H.264 standard further recognizes that, in many applications, it is neither practical nor economical to implement a decoder capable of dealing with all hypothetical uses of the syntax within a particular profile. 
Accordingly, the H.264 standard defines a “level” as a specified set of constraints imposed on values of the syntax elements in the bitstream. These constraints may be simple limits on values. Alternatively, these constraints may take the form of constraints on arithmetic combinations of values (e.g., picture width multiplied by picture height multiplied by number of pictures decoded per second). The H.264 standard further provides that individual implementations may support a different level for each supported profile. A decoder conforming to a profile ordinarily supports all the features defined in the profile. For example, as a coding feature, B-picture coding is not supported in the baseline profile of H.264/AVC but is supported in other profiles of H.264/AVC. A decoder conforming to a level should be capable of decoding any bitstream that does not require resources beyond the limitations defined in the level. Definitions of profiles and levels may be helpful for interpretability. For example, during video transmission, a pair of profile and level definitions may be negotiated and agreed for a whole transmission session. More specifically, in H.264/AVC, a level may define limitations on the number of macroblocks that need to be processed, decoded picture buffer (DPB) size, coded picture buffer (CPB) size, vertical motion vector range, maximum number of motion vectors per two consecutive MBs, and whether a B-block can have sub-macroblock partitions less than 8×8 pixels. In this manner, a decoder may determine whether the decoder is capable of properly decoding the bitstream. In the example ofFIG.1, encapsulation unit30of content preparation device20receives elementary streams comprising coded video data from video encoder28and elementary streams comprising coded audio data from audio encoder26. In some examples, video encoder28and audio encoder26may each include packetizers for forming PES packets from encoded data. In other examples, video encoder28and audio encoder26may each interface with respective packetizers for forming PES packets from encoded data. In still other examples, encapsulation unit30may include packetizers for forming PES packets from encoded audio and video data. Video encoder28may encode video data of multimedia content in a variety of ways, to produce different representations of the multimedia content at various bitrates and with various characteristics, such as pixel resolutions, frame rates, conformance to various coding standards, conformance to various profiles and/or levels of profiles for various coding standards, representations having one or multiple views (e.g., for two-dimensional or three-dimensional playback), or other such characteristics. A representation, as used in this disclosure, may comprise one of audio data, video data, text data (e.g., for closed captions), or other such data. The representation may include an elementary stream, such as an audio elementary stream or a video elementary stream. Each PES packet may include a stream_id that identifies the elementary stream to which the PES packet belongs. Encapsulation unit30is responsible for assembling elementary streams into video files (e.g., segments) of various representations. Encapsulation unit30receives PES packets for elementary streams of a representation from audio encoder26and video encoder28and forms corresponding network abstraction layer (NAL) units from the PES packets. 
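For illustration only, the level-based capability check described above may be sketched as follows; the limit values are hypothetical placeholders and are not taken from the H.264/AVC specification.

ILLUSTRATIVE_LEVEL_LIMITS = {
    # level: (max luma samples per picture, max luma samples per second, max bitrate in bits/s)
    "4.0": (2_097_152, 62_914_560, 25_000_000),
    "5.1": (8_912_896, 534_773_760, 300_000_000),
}

def decoder_supports(level, width, height, frame_rate, bitrate):
    """Return True if a decoder conforming to `level` can decode a stream
    with the given picture size, frame rate, and bitrate (illustrative check)."""
    max_pic, max_rate, max_bitrate = ILLUSTRATIVE_LEVEL_LIMITS[level]
    luma_samples = width * height
    return (luma_samples <= max_pic
            and luma_samples * frame_rate <= max_rate
            and bitrate <= max_bitrate)

# A 1920x1080 stream at 30 fps and 8 Mbit/s fits the illustrative "4.0" limits.
print(decoder_supports("4.0", 1920, 1080, 30, 8_000_000))  # -> True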
Coded video segments may be organized into NAL units, which provide a “network-friendly” video representation addressing applications such as video telephony, storage, broadcast, or streaming. NAL units can be categorized to Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL units may contain the core compression engine and may include block, macroblock, and/or slice level data. Other NAL units may be non-VCL NAL units. In some examples, a coded picture in one time instance, normally presented as a primary coded picture, may be contained in an access unit, which may include one or more NAL units. Non-VCL NAL units may include parameter set NAL units and SEI NAL units, among others. Parameter sets may contain sequence-level header information (in sequence parameter sets (SPS)) and the infrequently changing picture-level header information (in picture parameter sets (PPS)). With parameter sets (e.g., PPS and SPS), infrequently changing information need not to be repeated for each sequence or picture; hence, coding efficiency may be improved. Furthermore, the use of parameter sets may enable out-of-band transmission of the important header information, avoiding the need for redundant transmissions for error resilience. In out-of-band transmission examples, parameter set NAL units may be transmitted on a different channel than other NAL units, such as SEI NAL units. Supplemental Enhancement Information (SEI) may contain information that is not necessary for decoding the coded pictures samples from VCL NAL units, but may assist in processes related to decoding, display, error resilience, and other purposes. SEI messages may be contained in non-VCL NAL units. SEI messages are the normative part of some standard specifications, and thus are not always mandatory for standard compliant decoder implementation. SEI messages may be sequence level SEI messages or picture level SEI messages. Some sequence level information may be contained in SEI messages, such as scalability information SEI messages in the example of SVC and view scalability information SEI messages in MVC. These example SEI messages may convey information on, e.g., extraction of operation points and characteristics of the operation points. In addition, encapsulation unit30may form a manifest file, such as a media presentation descriptor (MPD) that describes characteristics of the representations. Encapsulation unit30may format the MPD according to extensible markup language (XML). Encapsulation unit30may provide data for one or more representations of multimedia content, along with the manifest file (e.g., the MPD) to output interface32. Output interface32may comprise a network interface or an interface for writing to a storage medium, such as a universal serial bus (USB) interface, a CD or DVD writer or burner, an interface to magnetic or flash storage media, or other interfaces for storing or transmitting media data. Encapsulation unit30may provide data of each of the representations of multimedia content to output interface32, which may send the data to server device60via network transmission or storage media. In the example ofFIG.1, server device60includes storage medium62that stores various multimedia contents64, each including a respective manifest file66and one or more representations68A-68N (representations68). In some examples, output interface32may also send data directly to network74. In some examples, representations68may be separated into adaptation sets. 
That is, various subsets of representations68may include respective common sets of characteristics, such as codec, profile and level, resolution, number of views, file format for segments, text type information that may identify a language or other characteristics of text to be displayed with the representation and/or audio data to be decoded and presented, e.g., by speakers, camera angle information that may describe a camera angle or real-world camera perspective of a scene for representations in the adaptation set, rating information that describes content suitability for particular audiences, or the like. Manifest file66may include data indicative of the subsets of representations68corresponding to particular adaptation sets, as well as common characteristics for the adaptation sets. Manifest file66may also include data representative of individual characteristics, such as bitrates, for individual representations of adaptation sets. In this manner, an adaptation set may provide for simplified network bandwidth adaptation. Representations in an adaptation set may be indicated using child elements of an adaptation set element of manifest file66. Server device60includes request processing unit70and network interface72. In some examples, server device60may include a plurality of network interfaces. Furthermore, any or all of the features of server device60may be implemented on other devices of a content delivery network, such as routers, bridges, proxy devices, switches, or other devices. In some examples, intermediate devices of a content delivery network may cache data of multimedia content64, and include components that conform substantially to those of server device60. In general, network interface72is configured to send and receive data via network74. Request processing unit70is configured to receive network requests from client devices, such as client device40, for data of storage medium62. For example, request processing unit70may implement hypertext transfer protocol (HTTP) version 1.1, as described in RFC 2616, “Hypertext Transfer Protocol—HTTP/1.1,” by R. Fielding et al, Network Working Group, IETF, June 1999. That is, request processing unit70may be configured to receive HTTP GET or partial GET requests and provide data of multimedia content64in response to the requests. The requests may specify a segment of one of representations68, e.g., using a URL of the segment. In some examples, the requests may also specify one or more byte ranges of the segment, thus comprising partial GET requests. Request processing unit70may further be configured to service HTTP HEAD requests to provide header data of a segment of one of representations68. In any case, request processing unit70may be configured to process the requests to provide requested data to a requesting device, such as client device40. Additionally or alternatively, request processing unit70may be configured to deliver media data via a broadcast or multicast protocol, such as eMBMS. Content preparation device20may create DASH segments and/or sub-segments in substantially the same way as described, but server device60may deliver these segments or sub-segments using eMBMS or another broadcast or multicast network transport protocol. For example, request processing unit70may be configured to receive a multicast group join request from client device40. 
That is, server device60may advertise an Internet protocol (IP) address associated with a multicast group to client devices, including client device40, associated with particular media content (e.g., a broadcast of a live event). Client device40, in turn, may submit a request to join the multicast group. This request may be propagated throughout network74, e.g., routers making up network74, such that the routers are caused to direct traffic destined for the IP address associated with the multicast group to subscribing client devices, such as client device40. As illustrated in the example ofFIG.1, multimedia content64includes manifest file66, which may correspond to a media presentation description (MPD). Manifest file66may contain descriptions of different alternative representations68(e.g., video services with different qualities) and the description may include, e.g., codec information, a profile value, a level value, a bitrate, and other descriptive characteristics of representations68. Client device40may retrieve the MPD of a media presentation to determine how to access segments of representations68. In particular, retrieval unit52may retrieve configuration data (not shown) of client device40to determine decoding capabilities of video decoder48and rendering capabilities of video output44. The configuration data may also include any or all of a language preference selected by a user of client device40, one or more camera perspectives corresponding to depth preferences set by the user of client device40, and/or a rating preference selected by the user of client device40. Retrieval unit52may comprise, for example, a web browser or a media client configured to submit HTTP GET and partial GET requests. Retrieval unit52may correspond to software instructions executed by one or more processors or processing units (not shown) of client device40. In some examples, all or portions of the functionality described with respect to retrieval unit52may be implemented in hardware, or a combination of hardware, software, and/or firmware, where requisite hardware may be provided to execute instructions for software or firmware. Retrieval unit52may compare the decoding and rendering capabilities of client device40to characteristics of representations68indicated by information of manifest file66. Retrieval unit52may initially retrieve at least a portion of manifest file66to determine characteristics of representations68. For example, retrieval unit52may request a portion of manifest file66that describes characteristics of one or more adaptation sets. Retrieval unit52may select a subset of representations68(e.g., an adaptation set) having characteristics that can be satisfied by the coding and rendering capabilities of client device40. Retrieval unit52may then determine bitrates for representations in the adaptation set, determine a currently available amount of network bandwidth, and retrieve segments from one of the representations having a bitrate that can be satisfied by the network bandwidth. In general, higher bitrate representations may yield higher quality video playback, while lower bitrate representations may provide sufficient quality video playback when available network bandwidth decreases. Accordingly, when available network bandwidth is relatively high, retrieval unit52may retrieve data from relatively high bitrate representations, whereas when available network bandwidth is low, retrieval unit52may retrieve data from relatively low bitrate representations. 
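The bitrate-based selection just described may be sketched as follows; the representation identifiers, bitrates, and safety margin are hypothetical and used only for illustration.

def select_representation(adaptation_set, available_bps, safety_factor=0.8):
    """adaptation_set: list of dicts with 'id' and 'bandwidth' (bits per second).
    Returns the highest-bitrate representation whose bandwidth requirement fits
    within the measured throughput, falling back to the lowest bitrate."""
    budget = available_bps * safety_factor
    candidates = sorted(adaptation_set, key=lambda r: r["bandwidth"])
    chosen = candidates[0]
    for rep in candidates:
        if rep["bandwidth"] <= budget:
            chosen = rep
    return chosen

video_set = [
    {"id": "video-1080p", "bandwidth": 6_000_000},
    {"id": "video-720p", "bandwidth": 3_000_000},
    {"id": "video-540p", "bandwidth": 1_500_000},
]
print(select_representation(video_set, available_bps=4_000_000)["id"])  # -> video-720p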
In this manner, client device40may stream multimedia data over network74while also adapting to changing network bandwidth availability of network74. Additionally or alternatively, retrieval unit52may be configured to receive data in accordance with a broadcast or multicast network protocol, such as eMBMS or IP multicast. In such examples, retrieval unit52may submit a request to join a multicast network group associated with particular media content. After joining the multicast group, retrieval unit52may receive data of the multicast group without further requests issued to server device60or content preparation device20. Retrieval unit52may submit a request to leave the multicast group when data of the multicast group is no longer needed, e.g., to stop playback or to change channels to a different multicast group. Network interface54may receive and provide data of segments of a selected representation to retrieval unit52, which may in turn provide the segments to decapsulation unit50. Decapsulation unit50may decapsulate elements of a video file into constituent PES streams, depacketize the PES streams to retrieve encoded data, and send the encoded data to either audio decoder46or video decoder48, depending on whether the encoded data is part of an audio or video stream, e.g., as indicated by PES packet headers of the stream. Audio decoder46decodes encoded audio data and sends the decoded audio data to audio output42, while video decoder48decodes encoded video data and sends the decoded video data, which may include a plurality of views of a stream, to video output44. Video encoder28, video decoder48, audio encoder26, audio decoder46, encapsulation unit30, retrieval unit52, and decapsulation unit50each may be implemented as any of a variety of suitable processing circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware or any combinations thereof. Each of video encoder28and video decoder48may be included in one or more encoders or decoders, either of which may be integrated as part of a combined video encoder/decoder (CODEC). Likewise, each of audio encoder26and audio decoder46may be included in one or more encoders or decoders, either of which may be integrated as part of a combined CODEC. An apparatus including video encoder28, video decoder48, audio encoder26, audio decoder46, encapsulation unit30, retrieval unit52, and/or decapsulation unit50may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone. Client device40, server device60, and/or content preparation device20may be configured to operate in accordance with the techniques of this disclosure. For purposes of example, this disclosure describes these techniques with respect to client device40and server device60. However, it should be understood that content preparation device20may be configured to perform these techniques, instead of (or in addition to) server device60. Encapsulation unit30may form NAL units comprising a header that identifies a program to which the NAL unit belongs, as well as a payload, e.g., audio data, video data, or data that describes the transport or program stream to which the NAL unit corresponds. For example, in H.264/AVC, a NAL unit includes a 1-byte header and a payload of varying size. 
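For illustration, the 1-byte H.264/AVC NAL unit header mentioned above may be parsed as in the following sketch, which splits the byte into its forbidden_zero_bit, nal_ref_idc, and nal_unit_type fields; a few well-known type values are listed for readability.

NAL_TYPE_NAMES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS"}

def parse_avc_nal_header(first_byte):
    """Decode the 1-byte H.264/AVC NAL unit header."""
    return {
        "forbidden_zero_bit": (first_byte >> 7) & 0x1,
        "nal_ref_idc": (first_byte >> 5) & 0x3,
        "nal_unit_type": first_byte & 0x1F,
    }

header = parse_avc_nal_header(0x67)  # 0x67 -> nal_ref_idc 3, nal_unit_type 7 (SPS)
print(header, NAL_TYPE_NAMES.get(header["nal_unit_type"]))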
A NAL unit including video data in its payload may comprise various granularity levels of video data. For example, a NAL unit may comprise a block of video data, a plurality of blocks, a slice of video data, or an entire picture of video data. Encapsulation unit30may receive encoded video data from video encoder28in the form of PES packets of elementary streams. Encapsulation unit30may associate each elementary stream with a corresponding program. Encapsulation unit30may also assemble access units from a plurality of NAL units. In general, an access unit may comprise one or more NAL units for representing a frame of video data, as well as audio data corresponding to the frame when such audio data is available. An access unit generally includes all NAL units for one output time instance, e.g., all audio and video data for one time instance. For example, if each view has a frame rate of 20 frames per second (fps), then each time instance may correspond to a time interval of 0.05 seconds. During this time interval, the specific frames for all views of the same access unit (the same time instance) may be rendered simultaneously. In one example, an access unit may comprise a coded picture in one time instance, which may be presented as a primary coded picture. Accordingly, an access unit may comprise all audio and video frames of a common temporal instance, e.g., all views corresponding to time X. This disclosure also refers to an encoded picture of a particular view as a “view component.” That is, a view component may comprise an encoded picture (or frame) for a particular view at a particular time. Accordingly, an access unit may be defined as comprising all view components of a common temporal instance. The decoding order of access units need not necessarily be the same as the output or display order. A media presentation may include a media presentation description (MPD), which may contain descriptions of different alternative representations (e.g., video services with different qualities) and the description may include, e.g., codec information, a profile value, and a level value. An MPD is one example of a manifest file, such as manifest file66. Client device40may retrieve the MPD of a media presentation to determine how to access movie fragments of various presentations. Movie fragments may be located in movie fragment boxes (moof boxes) of video files. Manifest file66(which may comprise, for example, an MPD) may advertise availability of segments of representations68. That is, the MPD may include information indicating the wall-clock time at which a first segment of one of representations68becomes available, as well as information indicating the durations of segments within representations68. In this manner, retrieval unit52of client device40may determine when each segment is available, based on the starting time as well as the durations of the segments preceding a particular segment. After encapsulation unit30has assembled NAL units and/or access units into a video file based on received data, encapsulation unit30passes the video file to output interface32for output. In some examples, encapsulation unit30may store the video file locally or send the video file to a remote server via output interface32, rather than sending the video file directly to client device40. 
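The segment availability computation described above (the advertised availability time of the first segment plus the durations of the preceding segments) may be sketched as follows; the start time and segment durations are illustrative.

from datetime import datetime, timedelta, timezone

def segment_available_at(first_segment_available, segment_durations, index):
    """first_segment_available: datetime at which the first segment becomes available.
    segment_durations: per-segment durations in seconds.
    index: zero-based index of the segment of interest."""
    preceding = sum(segment_durations[:index])
    return first_segment_available + timedelta(seconds=preceding)

start = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
durations = [2.0, 2.0, 2.0, 2.0]               # four 2-second segments
print(segment_available_at(start, durations, 3))  # -> 2024-01-01 12:00:06+00:00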
Output interface32may comprise, for example, a transmitter, a transceiver, a device for writing data to a computer-readable medium such as, for example, an optical drive, a magnetic media drive (e.g., floppy drive), a universal serial bus (USB) port, a network interface, or other output interface. Output interface32outputs the video file to a computer-readable medium, such as, for example, a transmission signal, a magnetic medium, an optical medium, a memory, a flash drive, or other computer-readable medium. Network interface54may receive a NAL unit or access unit via network74and provide the NAL unit or access unit to decapsulation unit50, via retrieval unit52. Decapsulation unit50may decapsulate a elements of a video file into constituent PES streams, depacketize the PES streams to retrieve encoded data, and send the encoded data to either audio decoder46or video decoder48, depending on whether the encoded data is part of an audio or video stream, e.g., as indicated by PES packet headers of the stream. Audio decoder46decodes encoded audio data and sends the decoded audio data to audio output42, while video decoder48decodes encoded video data and sends the decoded video data, which may include a plurality of views of a stream, to video output44. In accordance with the techniques of this disclosure, content preparation device20may prepare a media presentation including multiple representations in a switching set that have different spatial resolutions. For example, representation68A may have a resolution of 3840×2160 and representation68N may have a resolution of 1920×1080. Content preparation device20may further construct an addressable resource index (ARI) track for the switching set that advertises quality values for segments (or chunks of segments) of representations68(e.g., using quality_identifier values of the ARI track). ISO/IEC 23009-1:2021/Amd.1 includes one example definition of the ARI track including the quality_identifier. The quality values may generally represent a metric of quality other than spatial resolution and bitrate, as the bitrates for the two resolutions may be substantially similar or identical (i.e., the same). Allowing for a higher bitrate for lower resolution video data may, in some cases, yield a higher quality for the lower resolution video data than for higher resolution video data. For example, for a highly dynamic scene involving many changes in global camera perspective, video data encoded at a lower resolution at a given bitrate may have a higher quality than video data encoded at a higher resolution at the same bitrate. That is, when many global camera perspective changes occur for a video sequence, more intra-prediction encoded frames (I-frames) may be required, which may consume a relatively large amount of the bit budget allocated for the scene. An even larger number of bits would be required to encode higher resolution I-frames. Thus, for the same bit budget, more bits would be available for encoding subsequent inter-prediction encoded frames (e.g., P- and B-frames) for the lower resolution video than for the higher resolution video. Thus, a higher overall quality may be expected for such a scene at a lower resolution than at a higher resolution, assuming the same bitrate is used. Accordingly, retrieval unit52of client device40may obtain quality values for two segments or chunks corresponding to the same playback time from the two different representations. 
Retrieval unit52may also determine whether the quality value for the segment or chunk of the lower resolution representation is higher than the quality value for the segment or chunk of the higher resolution representation. If the segment or chunk of the lower resolution representation has a higher quality value, retrieval unit52may retrieve the segment or chunk of the lower resolution representation and provide the segment or chunk to video decoder48. On the other hand, if the segment or chunk of the higher resolution representation has a higher quality value, retrieval unit52may retrieve the segment or chunk of the higher resolution representation and provide the segment or chunk to video decoder48. Retrieval unit52may similarly analyze quality values for each segment or chunk throughout playback of the media presentation, and thus, may switch between the lower and higher resolution representations at various times. In other words, for some scenarios, such as H.266/VVC-encoded video data, the benefits of dynamic resolution encoding have been demonstrated by video on demand (VOD) services. This disclosure recognizes that these benefits may be realized using broadcast delivery. That is, according to the techniques of this disclosure, client device40may receive a media stream via broadcast and perform resolution changes between two different resolutions of video data included in the media stream, e.g., between representations including different spatial resolutions for encoded video data. For DVB DASH applications, such resolution changes may be supported within a profile at segment boundaries. This disclosure further recognizes that dynamic resolution changes need not be limited to the usage of reference picture resampling in VVC encoded streams. Likewise, there is no need to limit changes to being between only two encoded resolutions. In some examples, more than two resolutions may be supported. For example, a VVC HDR UHDTV-1 IRD may be configured to switch between any resolutions in the following list of resolutions: 3840×2160, 2560×1440, 1920×1080, 1280×720. As another example, a VVC HDR UHDTV-2 IRD may be configured to switch between any resolutions in the following list of resolutions: 7680×4320, 5120×2880, 3840×2160, 2560×1440, 1920×1080, 1280×720. This disclosure also recognizes that resolution changes may happen relatively frequently. For example, if picture resolution (e.g., as indicated by values for syntax elements sps_pic_width_in_luma_samples and sps_pic_height_in_luma_samples of sequence parameter sets (SPSs)) were to change at each DVB random access point (RAP), two successive changes of picture resolution may happen at a much higher frequency than once per two seconds' worth of playback time. The techniques of this disclosure may also be premised on video decoders being capable not only of correctly handling resolution changes for input encoded video data, but also of upsampling lower resolution video data to a predetermined maximum resolution for display (e.g., a resolution matching a display to which the video data is output). In this manner, displayed video data would have the same resolution, even though input encoded video data may vary in resolution, thereby preventing user observation of resolution changes.
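The display-side normalization described above may be sketched as follows; nearest-neighbor scaling is used only to keep the example dependency-free, and a practical IRD or display would apply a higher-quality scaling filter.

def upscale_nearest(picture, out_width, out_height):
    """picture: 2-D list of luma samples (rows). Returns an out_height x out_width
    picture, so every decoded picture is rendered at one nominal resolution."""
    in_height, in_width = len(picture), len(picture[0])
    return [
        [picture[y * in_height // out_height][x * in_width // out_width]
         for x in range(out_width)]
        for y in range(out_height)
    ]

# A 2x2 decoded picture rendered at a fixed 4x4 nominal resolution.
decoded = [[10, 20],
           [30, 40]]
for row in upscale_nearest(decoded, 4, 4):
    print(row)
# [10, 10, 20, 20]
# [10, 10, 20, 20]
# [30, 30, 40, 40]
# [30, 30, 40, 40]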
In this manner, client device40represents an example of a device for retrieving media data, the device including a memory configured to store video data; a video decoder configured to decode the video data; and one or more processors implemented in circuitry and configured to: determine that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receive a first portion of the first video data at the first spatial resolution for a first playback time; send the first portion of the first video data at the first spatial resolution to the video decoder; receive a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and send the second portion of the second video data at the second spatial resolution to the video decoder. FIG.2is a block diagram illustrating an example set of components of retrieval unit52ofFIG.1in greater detail. In this example, retrieval unit52includes digital video broadcast (DVB) middleware unit100, DASH client110, and media application112. In this example, DVB middleware unit100further includes DVB reception unit106, cache104, and proxy server unit102. In this example, DVB reception unit106is configured to receive data via DVB. That is, DVB reception unit106may receive files via broadcast from, e.g., server device60, which may act as a DVB broadcast server. As DVB middleware unit100receives data for files (e.g., segments, or chunks thereof), DVB middleware unit may store the received data in cache104. Cache104may comprise a computer-readable storage medium, such as flash memory, a hard disk, RAM, or any other suitable storage medium. Proxy server unit102may act as a server for DASH client110. For example, proxy server unit102may provide a MPD file or other manifest file to DASH client110. Proxy server unit102may advertise availability times for segments in the MPD file, as well as hyperlinks from which the segments can be retrieved. These hyperlinks may include a localhost address prefix corresponding to client device40(e.g., 127.0.0.1 for IPv4). In this manner, DASH client110may request segments from proxy server unit102using HTTP GET or partial GET requests. For example, for a segment available from link http://127.0.0.1/rep1/seg3, DASH client110may construct an HTTP GET request that includes a request for http://127.0.0.1/rep1/seg3, and submit the request to proxy server unit102. Proxy server unit102may retrieve requested data from cache104and provide the data to DASH client110in response to such requests. In accordance with the techniques of this disclosure, DVB reception unit106may receive a media presentation including two or more representations having different resolutions. DVB reception unit106may also receive an ARI track including quality values for segments or chunks of the representations. DVB reception unit106may store the received segments or chunks to cache104. DASH client110may retrieve the data of the ARI track via proxy server unit102and determine, for a given playback time, which segment or chunk has a higher quality value, then retrieve the segment or chunk having the higher quality value for that playback time. FIG.3is a conceptual diagram illustrating elements of example multimedia content120. 
Multimedia content120may correspond to multimedia content64(FIG.1), or another multimedia content stored in storage medium62. In the example ofFIG.3, multimedia content120includes media presentation description (MPD)122, a plurality of representations124A-124N (representations124), and addressable resource index (ARI) track134. Representation124A includes optional header data126and segments128A-128N (segments128), while representation124N includes optional header data130and segments132A-132N (segments132). The letter N is used to designate the last movie fragment in each of representations124as a matter of convenience. In some examples, there may be different numbers of movie fragments between representations124. MPD122may comprise a data structure separate from representations124. MPD122may correspond to manifest file66ofFIG.1. Likewise, representations124may correspond to representations68ofFIG.1. In general, MPD122may include data that generally describes characteristics of representations124, such as coding and rendering characteristics, adaptation sets, a profile to which MPD122corresponds, text type information, camera angle information, rating information, trick mode information (e.g., information indicative of representations that include temporal sub-sequences), and/or information for retrieving remote periods (e.g., for targeted advertisement insertion into media content during playback). Header data126, when present, may describe characteristics of segments128, e.g., temporal locations of random access points (RAPs, also referred to as stream access points (SAPs)), which of segments128includes random access points, byte offsets to random access points within segments128, uniform resource locators (URLs) of segments128, or other aspects of segments128. Header data130, when present, may describe similar characteristics for segments132. Additionally or alternatively, such characteristics may be fully included within MPD122. Segments128,132include one or more coded video samples, each of which may include frames or slices of video data. Each of the coded video samples of segments128may have similar characteristics, e.g., height, width, and bandwidth requirements. Such characteristics may be described by data of MPD122, though such data is not illustrated in the example ofFIG.3. MPD122may include characteristics as described by the 3GPP Specification, with the addition of any or all of the signaled information described in this disclosure. Each of segments128,132may be associated with a unique uniform resource locator (URL). Thus, each of segments128,132may be independently retrievable using a streaming network protocol, such as DASH. In this manner, a destination device, such as client device40, may use an HTTP GET request to retrieve segments128or132. In some examples, client device40may use HTTP partial GET requests to retrieve specific byte ranges of segments128or132. ARI track134includes, among other things, quality values136A-136N (quality values136). In general, quality values136include values representing qualities for corresponding (i.e., playback time aligned) segments of representations124. Thus, for example, quality values136A include quality values for segments128A and132A, quality values136B include quality values for segments128B and132B, and quality values136N include quality values for segments128N and132N.
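The client behavior enabled by quality values136may be sketched as follows; the data layout and quality numbers below are hypothetical stand-ins for the ARI track, not the normative syntax of ISO/IEC 23009-1.

def choose_segment(quality_entry, high_res_segment_url, low_res_segment_url):
    """quality_entry: quality values for a playback-time-aligned pair of segments
    from the higher-resolution and lower-resolution representations."""
    if quality_entry["low_res_quality"] > quality_entry["high_res_quality"]:
        return low_res_segment_url
    return high_res_segment_url

ari_quality = [  # one entry per aligned segment pair (hypothetical values)
    {"high_res_quality": 72, "low_res_quality": 65},   # segment 0: keep 4K
    {"high_res_quality": 58, "low_res_quality": 66},   # segment 1: switch to 1080p
]
playlist = [
    choose_segment(entry, f"rep4k/seg{i}.m4s", f"rep1080/seg{i}.m4s")
    for i, entry in enumerate(ari_quality)
]
print(playlist)  # -> ['rep4k/seg0.m4s', 'rep1080/seg1.m4s']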
In accordance with the techniques of this disclosure, representation124A may include video data at a first resolution (e.g., 4K) and representation124N may include video data at a second, different resolution (e.g., 1080p). Representations124A and124N may have similar or identical bitrates. Thus, aligned segments128and132may have similar or identical bitrates. For example, segment128A and segment132A may have similar or identical bitrates, but include video data encoded at different resolutions. Some of segments128may have higher qualities, as indicated by quality values136, than corresponding segments132(at a lower spatial resolution). However, in some cases, one or more of segments132may have higher quality values than corresponding segments128as indicated by quality values136. Thus, client device40(FIG.1) may determine which segment of a given set of corresponding segments has a higher quality value, using quality values136, and retrieve the one of segments128,132having the higher quality value. For example, if quality values136A indicate that segment128A has a higher quality value than segment132A, client device40(or retrieval unit52thereof) may retrieve segment128A. As another example, if quality values136B indicate that segment132B has a higher quality than segment128B, client device40or retrieval unit52may retrieve segment132B. FIG.4is a block diagram illustrating elements of an example video file150, which may correspond to a segment of a representation, such as one of segments128,132ofFIG.3. Each of segments128,132may include data that conforms substantially to the arrangement of data illustrated in the example ofFIG.4. Video file150may be said to encapsulate a segment. As described above, video files in accordance with the ISO base media file format and extensions thereof store data in a series of objects, referred to as “boxes.” In the example ofFIG.4, video file150includes file type (FTYP) box152, movie (MOOV) box154, segment index (sidx) boxes162, movie fragment (MOOF) boxes164, and movie fragment random access (MFRA) box166. AlthoughFIG.4represents an example of a video file, it should be understood that other media files may include other types of media data (e.g., audio data, timed text data, or the like) that is structured similarly to the data of video file150, in accordance with the ISO base media file format and its extensions. File type (FTYP) box152generally describes a file type for video file150. File type box152may include data that identifies a specification that describes a best use for video file150. File type box152may alternatively be placed before MOOV box154, movie fragment boxes164, and/or MFRA box166. In some examples, a segment, such as video file150, may include an MPD update box (not shown) before FTYP box152. The MPD update box may include information indicating that an MPD corresponding to a representation including video file150is to be updated, along with information for updating the MPD. For example, the MPD update box may provide a URI or URL for a resource to be used to update the MPD. As another example, the MPD update box may include data for updating the MPD. In some examples, the MPD update box may immediately follow a segment type (STYP) box (not shown) of video file150, where the STYP box may define a segment type for video file150. MOOV box154, in the example ofFIG.4, includes movie header (MVHD) box156, track (TRAK) box158, and one or more movie extends (MVEX) boxes160.
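The box-based layout ofFIG.4can be examined with a short, generic parser. The sketch below assumes only the published ISO base media file format box header layout (a 32-bit size followed by a four-character type, with an optional 64-bit largesize); it is offered as an illustration and is not specific to this disclosure.

    import struct

    def iter_boxes(data: bytes, offset: int = 0, end: int = None):
        # Walk top-level ISO BMFF boxes such as ftyp, moov, sidx, moof, and
        # mfra, yielding (type, payload_start, box_end) for each box found.
        end = len(data) if end is None else end
        while offset + 8 <= end:
            size, box_type = struct.unpack_from(">I4s", data, offset)
            header = 8
            if size == 1:  # 64-bit "largesize" follows the type field
                size = struct.unpack_from(">Q", data, offset + 8)[0]
                header = 16
            elif size == 0:  # box extends to the end of the file
                size = end - offset
            yield box_type.decode("ascii"), offset + header, offset + size
            offset += size

    # For a file laid out like video file 150, the top-level types might be
    # ["ftyp", "moov", "sidx", "moof", ..., "mfra"].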
In general, MVHD box156may describe general characteristics of video file150. For example, MVHD box156may include data that describes when video file150was originally created, when video file150was last modified, a timescale for video file150, a duration of playback for video file150, or other data that generally describes video file150. TRAK box158may include data for a track of video file150. TRAK box158may include a track header (TKHD) box that describes characteristics of the track corresponding to TRAK box158. In some examples, TRAK box158may include coded video pictures, while in other examples, the coded video pictures of the track may be included in movie fragments164, which may be referenced by data of TRAK box158and/or sidx boxes162. In some examples, video file150may include more than one track. Accordingly, MOOV box154may include a number of TRAK boxes equal to the number of tracks in video file150. TRAK box158may describe characteristics of a corresponding track of video file150. For example, TRAK box158may describe temporal and/or spatial information for the corresponding track. A TRAK box similar to TRAK box158of MOOV box154may describe characteristics of a parameter set track, when encapsulation unit30(FIG.3) includes a parameter set track in a video file, such as video file150. Encapsulation unit30may signal the presence of sequence level SEI messages in the parameter set track within the TRAK box describing the parameter set track. MVEX boxes160may describe characteristics of corresponding movie fragments164, e.g., to signal that video file150includes movie fragments164, in addition to video data included within MOOV box154, if any. In the context of streaming video data, coded video pictures may be included in movie fragments164rather than in MOOV box154. Accordingly, all coded video samples may be included in movie fragments164, rather than in MOOV box154. MOOV box154may include a number of MVEX boxes160equal to the number of movie fragments164in video file150. Each of MVEX boxes160may describe characteristics of a corresponding one of movie fragments164. For example, each MVEX box may include a movie extends header box (MEHD) box that describes a temporal duration for the corresponding one of movie fragments164. As noted above, encapsulation unit30may store a sequence data set in a video sample that does not include actual coded video data. A video sample may generally correspond to an access unit, which is a representation of a coded picture at a specific time instance. In the context of AVC, the coded picture include one or more VCL NAL units, which contain the information to construct all the pixels of the access unit and other associated non-VCL NAL units, such as SEI messages. Accordingly, encapsulation unit30may include a sequence data set, which may include sequence level SEI messages, in one of movie fragments164. Encapsulation unit30may further signal the presence of a sequence data set and/or sequence level SEI messages as being present in one of movie fragments164within the one of MVEX boxes160corresponding to the one of movie fragments164. SIDX boxes162are optional elements of video file150. That is, video files conforming to the 3GPP file format, or other such file formats, do not necessarily include SIDX boxes162. In accordance with the example of the 3GPP file format, a SIDX box may be used to identify a sub-segment of a segment (e.g., a segment contained within video file150). 
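As an illustration of the general characteristics carried by MVHD box156, the following sketch reads the timescale and duration fields from an 'mvhd' payload (the bytes following the box header). It assumes the published ISO base media file format layout for versions 0 and 1 of this box and is not mandated by this disclosure.

    import struct

    def parse_mvhd(payload: bytes) -> dict:
        # Read creation time, modification time, timescale, and duration from
        # an 'mvhd' full box payload (version byte, 3 flag bytes, then fields).
        version = payload[0]
        if version == 1:
            created, modified, timescale, duration = struct.unpack_from(
                ">QQIQ", payload, 4)
        else:
            created, modified, timescale, duration = struct.unpack_from(
                ">IIII", payload, 4)
        return {"timescale": timescale,
                "duration_seconds": duration / timescale}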
The 3GPP file format defines a sub-segment as “a self-contained set of one or more consecutive movie fragment boxes with corresponding Media Data box(es) and a Media Data Box containing data referenced by a Movie Fragment Box must follow that Movie Fragment box and precede the next Movie Fragment box containing information about the same track.” The 3GPP file format also indicates that a SIDX box “contains a sequence of references to subsegments of the (sub)segment documented by the box. The referenced subsegments are contiguous in presentation time. Similarly, the bytes referred to by a Segment Index box are always contiguous within the segment. The referenced size gives the count of the number of bytes in the material referenced.” SIDX boxes162generally provide information representative of one or more sub-segments of a segment included in video file150. For instance, such information may include playback times at which sub-segments begin and/or end, byte offsets for the sub-segments, whether the sub-segments include (e.g., start with) a stream access point (SAP), a type for the SAP (e.g., whether the SAP is an instantaneous decoder refresh (IDR) picture, a clean random access (CRA) picture, a broken link access (BLA) picture, or the like), a position of the SAP (in terms of playback time and/or byte offset) in the sub-segment, and the like. Movie fragments164may include one or more coded video pictures. In some examples, movie fragments164may include one or more groups of pictures (GOPs), each of which may include a number of coded video pictures, e.g., frames or pictures. In addition, as described above, movie fragments164may include sequence data sets in some examples. Each of movie fragments164may include a movie fragment header box (MFHD, not shown inFIG.4). The MFHD box may describe characteristics of the corresponding movie fragment, such as a sequence number for the movie fragment. Movie fragments164may be included in order of sequence number in video file150. MFRA box166may describe random access points within movie fragments164of video file150. This may assist with performing trick modes, such as performing seeks to particular temporal locations (i.e., playback times) within a segment encapsulated by video file150. MFRA box166is generally optional and need not be included in video files, in some examples. Likewise, a client device, such as client device40, does not necessarily need to reference MFRA box166to correctly decode and display video data of video file150. MFRA box166may include a number of track fragment random access (TFRA) boxes (not shown) equal to the number of tracks of video file150, or in some examples, equal to the number of media tracks (e.g., non-hint tracks) of video file150. In some examples, movie fragments164may include one or more stream access points (SAPs), such as IDR pictures. Likewise, MFRA box166may provide indications of locations within video file150of the SAPs. Accordingly, a temporal sub-sequence of video file150may be formed from SAPs of video file150. The temporal sub-sequence may also include other pictures, such as P-frames and/or B-frames that depend from SAPs. Frames and/or slices of the temporal sub-sequence may be arranged within the segments such that frames/slices of the temporal sub-sequence that depend on other frames/slices of the sub-sequence can be properly decoded. For example, in the hierarchical arrangement of data, data used for prediction for other data may also be included in the temporal sub-sequence. 
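The sub-segment information carried by SIDX boxes162can likewise be read with a short sketch. The layout below follows the published ISO base media file format definition of the 'sidx' box (versions 0 and 1); the helper assumes the box header has already been stripped and is offered only as an illustration, not as a normative parser.

    import struct

    def parse_sidx(payload: bytes) -> list:
        # Return (referenced_size, duration_seconds, starts_with_sap) for each
        # sub-segment reference in a 'sidx' payload (full box body).
        version = payload[0]
        pos = 4  # skip version and flags
        reference_id, timescale = struct.unpack_from(">II", payload, pos)
        pos += 8
        if version == 0:
            earliest_pt, first_offset = struct.unpack_from(">II", payload, pos)
            pos += 8
        else:
            earliest_pt, first_offset = struct.unpack_from(">QQ", payload, pos)
            pos += 16
        reference_count = struct.unpack_from(">HH", payload, pos)[1]
        pos += 4
        references = []
        for _ in range(reference_count):
            ref, duration, sap = struct.unpack_from(">III", payload, pos)
            pos += 12
            references.append((ref & 0x7FFFFFFF,      # referenced size in bytes
                               duration / timescale,  # sub-segment duration
                               bool(sap >> 31)))      # starts with a SAP
        return references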
FIG.5is a conceptual diagram illustrating examples of retrieval techniques according to the techniques of this disclosure. In particular, graph170depicts available bitrates over time during which segments of media data can be retrieved at times172A-172D (times172). In a first example shown inFIG.5, basic client174retrieves segments for each of times172at 4K resolution. Basic client174is considered “basic” in the sense that basic client174does not implement the techniques of this disclosure. Signaling of quality values in an ARI track according to the techniques of this disclosure is therefore backwards compatible with client devices that are not configured to perform these techniques. However, in another example, advanced client176is configured to perform the techniques of this disclosure. In this example, at times172A,172C, and172D, advanced client176determines that segments at 4K resolution have higher quality values. However, for time172B, advanced client176determines that a segment having 1080p resolution has a higher quality value (e.g., using an ARI track signaling quality values for the segments). Thus, advanced client176instead retrieves the segment from the representation having 1080p resolution. FIG.6is a flowchart illustrating an example method of retrieving media data according to the techniques of this disclosure. The method ofFIG.6is explained with respect to client device40and server device60ofFIG.1for purposes of example and explanation. Initially, client device40may request a manifest file for a media presentation (200). Client device40may, for example, send an HTTP GET request to server device60including a URL for the manifest file. In response, server device60may send the requested manifest file to client device40(202). Client device40may then receive the manifest file (204). Using the manifest file, client device40may determine a switching set having a matching display resolution for a display of client device40(e.g., video output44) (206). The switching set may have a matching display resolution in the sense that the switching set includes representations having encoded resolutions equal to or less than the display resolution of the display for client device40. Client device40may further receive and process an addressable resource index (ARI) track for the switching set. Throughout streaming and playback of media data for the selected switching set, client device40may use data of the ARI track to determine quality values for temporally aligned segments of a particular presentation time. Client device40may then, for a particular presentation time, determine a segment (or chunk) having a highest quality value using the quality values of the ARI track (208). Client device40may request the determined segment (210). For example, client device40may construct an HTTP GET request specifying a URL of the determined segment and send the HTTP GET request to server device60. Server device60may, in turn, receive the request (212) and send the requested segment to client device40(214). Alternatively, client device40may receive a broadcast media stream including two or more representations or other sets of video data having different spatial resolutions. As discussed with respect toFIG.2, DVB middleware unit100may cache the received video data. Actions attributed to server device60above may instead be performed by proxy server unit102.
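Tying the preceding sketches together, the loop below outlines the client behavior ofFIG.6(steps 200-218) in simplified form: for each playback time, consult the ARI quality values, request the winning segment, and hand the received bytes to the decoder. The fetch and decoder callables stand in for retrieval unit52and video decoder48and are hypothetical placeholders, not components defined by this disclosure.

    def stream(quality_map: dict, fetch, decoder_feed) -> None:
        # quality_map: playback-time-aligned quality values (see ari_quality).
        # fetch(rep, name): retrieves a segment, e.g., fetch_segment above.
        # decoder_feed(data): hands the segment's media data to the decoder.
        for segment_index in sorted(quality_map):
            per_rep = quality_map[segment_index]
            best_rep = max(per_rep, key=per_rep.get)       # step 208
            data = fetch(best_rep, f"seg{segment_index}")  # steps 210-216
            decoder_feed(data)                             # step 218

    # Example: stream(ari_quality, fetch_segment, my_decoder.feed), where
    # my_decoder is whatever decoder object the application provides.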
That is, DASH client110may retrieve a determined segment (having a highest quality value as indicated by the ARI track) from proxy server unit102. Client device40may then receive the segment (216) and send video data of the segment to video decoder48(218). In this manner, the method ofFIG.6represents an example of a method of retrieving video data including determining that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receiving a first portion of the first video data at the first spatial resolution for a first playback time; sending the first portion of the first video data at the first spatial resolution to a video decoder; receiving a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and sending the second portion of the second video data at the second spatial resolution to the video decoder. Various examples of the techniques of this disclosure are summarized in the following clauses: Clause 1: A method of receiving media data, the method comprising: determining that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receiving a first portion of the first video data at the first spatial resolution for a first playback time; sending the first portion of the first video data at the first spatial resolution to a video decoder; receiving a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and sending the second portion of the second video data at the second spatial resolution to the video decoder. Clause 2: The method of clause 1, further comprising: receiving an addressable resource index (ARI) track for the media presentation, the ARI track including data representing a first quality for the first video data at the first spatial resolution for the first playback time and a second quality of the second video data at the second spatial resolution for the first playback time; and determining that the first quality is higher than the second quality, wherein receiving the first portion of the first video data at the first spatial resolution for the first playback time comprises, in response to determining that the first quality is higher than the second quality, retrieving the first portion of the first video data at the first spatial resolution for the first playback time instead of the second video data at the second spatial resolution for the first playback time. Clause 3: The method of clause 2, wherein the second spatial resolution is higher than the first spatial resolution. Clause 4: The method of clause 2, wherein the first video data at the first spatial resolution for the first playback time has a bitrate that is equal to the second video data at the second spatial resolution for the first playback time. Clause 5: The method of clause 2, wherein the first quality and the second quality represent one or more quality measurements other than spatial resolution and bitrate.
Clause 6: The method of clause 2, wherein the ARI track further includes data representing a third quality for the first video data at the first spatial resolution at the second playback time and a fourth quality for the second video data at the second spatial resolution at the second playback time, the method further comprising determining that the fourth quality is higher than the third quality, wherein receiving the second portion of the second video data at the second spatial resolution for the second playback time comprises, in response to determining that the fourth quality is higher than the third quality, retrieving the second portion of the second video data at the second spatial resolution for the second playback time instead of the first video data at the first spatial resolution for the second playback time. Clause 7: The method of clause 6, wherein the first spatial resolution is higher than the second spatial resolution. Clause 8: The method of clause 6, wherein the first video data at the first spatial resolution for the second playback time has a bitrate that is equal to the second video data at the second spatial resolution for the second playback time. Clause 9: The method of clause 6, wherein the third quality and the fourth quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 10: The method of clause 1, wherein the media presentation includes a first representation including the first video data at the first spatial resolution and a second representation including the second video data at the second spatial resolution, and wherein the media presentation includes a switching set including both the first representation and the second representation. Clause 11: The method of clause 1, wherein receiving the first portion of the first video data at the first spatial resolution for the first playback time comprises receiving, by a middleware unit, the first portion of the first video data at the first spatial resolution for the first playback time via broadcast, the method further comprising: caching, by the middleware unit, the first portion of the first video data at the first spatial resolution for the first playback time; and retrieving, by a dynamic adaptive streaming over HTTP (DASH) client, the first portion of the first video data at the first spatial resolution for the first playback time from the middleware unit. Clause 12: The method of clause 1, wherein receiving the second portion of the second video data at the second spatial resolution for the second playback time comprises receiving, by a middleware unit, the second portion of the second video data at the second spatial resolution for the second playback time via broadcast, the method further comprising: caching, by the middleware unit, the second portion of the second video data at the second spatial resolution for the second playback time; and retrieving, by a dynamic adaptive streaming over HTTP (DASH) client, the second portion of the second video data at the second spatial resolution for the second playback time from the middleware unit. 
Clause 13: A device for retrieving media data, the device comprising: a memory configured to store video data; a video decoder configured to decode the video data; and one or more processors implemented in circuitry and configured to: determine that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receive a first portion of the first video data at the first spatial resolution for a first playback time; send the first portion of the first video data at the first spatial resolution to the video decoder; receive a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and send the second portion of the second video data at the second spatial resolution to the video decoder. Clause 14: The device of clause 13, wherein the one or more processors are further configured to: receive an addressable resource index (ARI) track for the media presentation, the ARI track including data representing a first quality for the first video data at the first spatial resolution for the first playback time and a second quality of the second video data at the second spatial resolution for the first playback time; and determine that the first quality is higher than the second quality, wherein the one or more processors are configured to, in response to determining that the first quality is higher than the second quality, retrieve the first portion of the first video data at the first spatial resolution for the first playback time instead of the second video data at the second spatial resolution for the first playback time. Clause 15: The device of clause 14, wherein the second spatial resolution is higher than the first spatial resolution. Clause 16: The device of clause 14, wherein the first video data at the first spatial resolution for the first playback time has a bitrate that is equal to the second video data at the second spatial resolution for the first playback time. Clause 17: The device of clause 14, wherein the first quality and the second quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 18: The device of clause 14, wherein the ARI track further includes data representing a third quality for the first video data at the first spatial resolution at the second playback time and a fourth quality for the second video data at the second spatial resolution at the second playback time, and wherein the one or more processors are further configured to determine that the fourth quality is higher than the third quality, wherein the one or more processors are configured to, in response to determining that the fourth quality is higher than the third quality, retrieve the second portion of the second video data at the second spatial resolution for the second playback time instead of the first video data at the first spatial resolution for the second playback time. Clause 19: The device of clause 18, wherein the first spatial resolution is higher than the second spatial resolution. Clause 20: The device of clause 18, wherein the first video data at the first spatial resolution for the second playback time has a bitrate that is equal to the second video data at the second spatial resolution for the second playback time. 
Clause 21: The device of clause 18, wherein the third quality and the fourth quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 22: The device of clause 13, wherein the media presentation includes a first representation including the first video data at the first spatial resolution and a second representation including the second video data at the second spatial resolution, and wherein the media presentation includes a switching set including both the first representation and the second representation. Clause 23: The device of clause 13, further comprising a middleware unit configured to receive the first portion of the first video data at the first spatial resolution for the first playback time via broadcast and to cache first portion of the first video data at the first spatial resolution for the first playback time via broadcast, wherein the one or more processors are configured to execute a dynamic adaptive streaming over HTTP (DASH) client to retrieve the first portion of the first video data at the first spatial resolution for the first playback time from the middleware unit. Clause 24: The device of clause 13, further comprising a middleware unit configured to receive the second portion of the second video data at the second spatial resolution for the second playback time via broadcast and to cache second portion of the second video data at the second spatial resolution for the second playback time via broadcast, wherein the one or more processors are configured to execute a dynamic adaptive streaming over HTTP (DASH) client to retrieve the second portion of the second video data at the second spatial resolution for the second playback time from the middleware unit. Clause 25: The device of clause 13, wherein the apparatus comprises at least one of: an integrated circuit; a microprocessor; and a wireless communication device. Clause 26: A computer-readable storage medium having stored thereon instructions that, when executed, cause a processor to: determine that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receive a first portion of the first video data at the first spatial resolution for a first playback time; send the first portion of the first video data at the first spatial resolution to a video decoder; receive a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and send the second portion of the second video data at the second spatial resolution to the video decoder. 
Clause 27: The computer-readable storage medium of clause 26, further comprising instructions that cause the processor to: receive an addressable resource index (ARI) track for the media presentation, the ARI track including data representing a first quality for the first video data at the first spatial resolution for the first playback time and a second quality of the second video data at the second spatial resolution for the first playback time; and determine that the first quality is higher than the second quality, wherein the instructions that cause the processor to receive the first portion of the first video data at the first spatial resolution for the first playback time comprise instructions that cause the processor to, in response to determining that the first quality is higher than the second quality, retrieve the first portion of the first video data at the first spatial resolution for the first playback time instead of the second video data at the second spatial resolution for the first playback time. Clause 28: The computer-readable storage medium of clause 27, wherein the second spatial resolution is higher than the first spatial resolution. Clause 29: The computer-readable storage medium of clause 27, wherein the first video data at the first spatial resolution for the first playback time has a bitrate that is equal to the second video data at the second spatial resolution for the first playback time. Clause 30: The computer-readable storage medium of clause 27, wherein the first quality and the second quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 31: The computer-readable storage medium of clause 27, wherein the ARI track further includes data representing a third quality for the first video data at the first spatial resolution at the second playback time and a fourth quality for the second video data at the second spatial resolution at the second playback time, the further comprising instructions that cause the processor to determine that the fourth quality is higher than the third quality, wherein the instructions that cause the processor to receive the second portion of the second video data at the second spatial resolution for the second playback time comprise instructions that cause the processor to, in response to determining that the fourth quality is higher than the third quality, retrieve the second portion of the second video data at the second spatial resolution for the second playback time instead of the first video data at the first spatial resolution for the second playback time. Clause 32: The computer-readable storage medium of clause 31, wherein the first spatial resolution is higher than the second spatial resolution. Clause 33: The computer-readable storage medium of clause 31, wherein the first video data at the first spatial resolution for the second playback time has a bitrate that is equal to the second video data at the second spatial resolution for the second playback time. Clause 34: The computer-readable storage medium of clause 31, wherein the third quality and the fourth quality represent one or more quality measurements other than spatial resolution and bitrate. 
Clause 35: The computer-readable storage medium of clause 26, wherein the media presentation includes a first representation including the first video data at the first spatial resolution and a second representation including the second video data at the second spatial resolution, and wherein the media presentation includes a switching set including both the first representation and the second representation. Clause 36: The computer-readable storage medium of clause 26, wherein the instructions that cause the processor to receive the first portion of the first video data at the first spatial resolution for the first playback time comprise instructions that cause the processor to retrieve the first portion of the first video data at the first spatial resolution for the first playback time from a middleware unit. Clause 37: The computer-readable storage medium of clause 26, wherein the instructions that cause the processor to receive the second portion of the second video data at the second spatial resolution for the second playback time comprise instructions that cause the processor to retrieve the second portion of the second video data at the second spatial resolution for the second playback time from a middleware unit. Clause 38: A device for retrieving media data, the device comprising: means for determining that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; means for receiving a first portion of the first video data at the first spatial resolution for a first playback time; means for sending the first portion of the first video data at the first spatial resolution to a video decoder; means for receiving a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and means for sending the second portion of the second video data at the second spatial resolution to the video decoder. Clause 39: A method of receiving media data, the method comprising: determining that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receiving a first portion of the first video data at the first spatial resolution for a first playback time; sending the first portion of the first video data at the first spatial resolution to a video decoder; receiving a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and sending the second portion of the second video data at the second spatial resolution to the video decoder. 
Clause 40: The method of clause 39, further comprising: receiving an addressable resource index (ARI) track for the media presentation, the ARI track including data representing a first quality for the first video data at the first spatial resolution for the first playback time and a second quality of the second video data at the second spatial resolution for the first playback time; and determining that the first quality is higher than the second quality, wherein receiving the first portion of the first video data at the first spatial resolution for the first playback time comprises, in response to determining that the first quality is higher than the second quality, retrieving the first portion of the first video data at the first spatial resolution for the first playback time instead of the second video data at the second spatial resolution for the first playback time. Clause 41: The method of clause 40, wherein the second spatial resolution is higher than the first spatial resolution. Clause 42: The method of any of clauses 40 and 41, wherein the first video data at the first spatial resolution for the first playback time has a bitrate that is equal to the second video data at the second spatial resolution for the first playback time. Clause 43: The method of any of clauses 40-42, wherein the first quality and the second quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 44: The method of clause 40, wherein the ARI track further includes data representing a third quality for the first video data at the first spatial resolution at the second playback time and a fourth quality for the second video data at the second spatial resolution at the second playback time, the method further comprising determining that the fourth quality is higher than the third quality, wherein receiving the second portion of the second video data at the second spatial resolution for the second playback time comprises, in response to determining that the fourth quality is higher than the third quality, retrieving the second portion of the second video data at the second spatial resolution for the second playback time instead of the first video data at the first spatial resolution for the second playback time. Clause 45: The method of clause 44, wherein the first spatial resolution is higher than the second spatial resolution. Clause 46: The method of any of clauses 44 and 45, wherein the first video data at the first spatial resolution for the second playback time has a bitrate that is equal to the second video data at the second spatial resolution for the second playback time. Clause 47: The method of any of clauses 44-46, wherein the third quality and the fourth quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 48: The method of any of clauses 39-47, wherein the media presentation includes a first representation including the first video data at the first spatial resolution and a second representation including the second video data at the second spatial resolution, and wherein the media presentation includes a switching set including both the first representation and the second representation. 
Clause 49: The method of any of clauses 39-48, wherein receiving the first portion of the first video data at the first spatial resolution for the first playback time comprises receiving, by a middleware unit, the first portion of the first video data at the first spatial resolution for the first playback time via broadcast, the method further comprising: caching, by the middleware unit, the first portion of the first video data at the first spatial resolution for the first playback time; and retrieving, by a dynamic adaptive streaming over HTTP (DASH) client, the first portion of the first video data at the first spatial resolution for the first playback time from the middleware unit. Clause 50: The method of any of clauses 39-49, wherein receiving the second portion of the second video data at the second spatial resolution for the second playback time comprises receiving, by a middleware unit, the second portion of the second video data at the second spatial resolution for the second playback time via broadcast, the method further comprising: caching, by the middleware unit, the second portion of the second video data at the second spatial resolution for the second playback time; and retrieving, by a dynamic adaptive streaming over HTTP (DASH) client, the second portion of the second video data at the second spatial resolution for the second playback time from the middleware unit. Clause 51: A device for retrieving media data, the device comprising: a memory configured to store video data; a video decoder configured to decode the video data; and one or more processors implemented in circuitry and configured to: determine that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receive a first portion of the first video data at the first spatial resolution for a first playback time; send the first portion of the first video data at the first spatial resolution to the video decoder; receive a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and send the second portion of the second video data at the second spatial resolution to the video decoder. Clause 52: The device of clause 51, wherein the one or more processors are further configured to: receive an addressable resource index (ARI) track for the media presentation, the ARI track including data representing a first quality for the first video data at the first spatial resolution for the first playback time and a second quality of the second video data at the second spatial resolution for the first playback time; and determine that the first quality is higher than the second quality, wherein the one or more processors are configured to, in response to determining that the first quality is higher than the second quality, retrieve the first portion of the first video data at the first spatial resolution for the first playback time instead of the second video data at the second spatial resolution for the first playback time. Clause 53: The device of clause 52, wherein the second spatial resolution is higher than the first spatial resolution. Clause 54: The device of any of clauses 52 and 53, wherein the first video data at the first spatial resolution for the first playback time has a bitrate that is equal to the second video data at the second spatial resolution for the first playback time. 
Clause 55: The device of any of clauses 52-54, wherein the first quality and the second quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 56: The device of any of clauses 52-55, wherein the ARI track further includes data representing a third quality for the first video data at the first spatial resolution at the second playback time and a fourth quality for the second video data at the second spatial resolution at the second playback time, and wherein the one or more processors are further configured to determine that the fourth quality is higher than the third quality, wherein the one or more processors are configured to, in response to determining that the fourth quality is higher than the third quality, retrieve the second portion of the second video data at the second spatial resolution for the second playback time instead of the first video data at the first spatial resolution for the second playback time. Clause 57: The device of clause 56, wherein the first spatial resolution is higher than the second spatial resolution. Clause 58: The device of any of clauses 56 and 57, wherein the first video data at the first spatial resolution for the second playback time has a bitrate that is equal to the second video data at the second spatial resolution for the second playback time. Clause 59: The device of any of clauses 56-58, wherein the third quality and the fourth quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 60: The device of any of clauses 51-59, wherein the media presentation includes a first representation including the first video data at the first spatial resolution and a second representation including the second video data at the second spatial resolution, and wherein the media presentation includes a switching set including both the first representation and the second representation. Clause 61: The device of any of clauses 51-60, further comprising a middleware unit configured to receive the first portion of the first video data at the first spatial resolution for the first playback time via broadcast and to cache first portion of the first video data at the first spatial resolution for the first playback time via broadcast, wherein the one or more processors are configured to execute a dynamic adaptive streaming over HTTP (DASH) client to retrieve the first portion of the first video data at the first spatial resolution for the first playback time from the middleware unit. Clause 62: The device of any of clauses 51-61, further comprising a middleware unit configured to receive the second portion of the second video data at the second spatial resolution for the second playback time via broadcast and to cache second portion of the second video data at the second spatial resolution for the second playback time via broadcast, wherein the one or more processors are configured to execute a dynamic adaptive streaming over HTTP (DASH) client to retrieve the second portion of the second video data at the second spatial resolution for the second playback time from the middleware unit. Clause 63: The device of any of clauses 51-62, wherein the device comprises at least one of: an integrated circuit; a microprocessor; and a wireless communication device. 
Clause 64: A computer-readable storage medium having stored thereon instructions that, when executed, cause a processor to: determine that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; receive a first portion of the first video data at the first spatial resolution for a first playback time; send the first portion of the first video data at the first spatial resolution to a video decoder; receive a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and send the second portion of the second video data at the second spatial resolution to the video decoder. Clause 65: The computer-readable storage medium of clause 64, further comprising instructions that cause the processor to: receive an addressable resource index (ARI) track for the media presentation, the ARI track including data representing a first quality for the first video data at the first spatial resolution for the first playback time and a second quality of the second video data at the second spatial resolution for the first playback time; and determine that the first quality is higher than the second quality, wherein the instructions that cause the processor to receive the first portion of the first video data at the first spatial resolution for the first playback time comprise instructions that cause the processor to, in response to determining that the first quality is higher than the second quality, retrieve the first portion of the first video data at the first spatial resolution for the first playback time instead of the second video data at the second spatial resolution for the first playback time. Clause 66: The computer-readable storage medium of clause 65, wherein the second spatial resolution is higher than the first spatial resolution. Clause 67: The computer-readable storage medium of any of clause 65 and 66, wherein the first video data at the first spatial resolution for the first playback time has a bitrate that is equal to the second video data at the second spatial resolution for the first playback time. Clause 68: The computer-readable storage medium of any of clauses 65-67, wherein the first quality and the second quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 69: The computer-readable storage medium of any of clauses 65-68, wherein the ARI track further includes data representing a third quality for the first video data at the first spatial resolution at the second playback time and a fourth quality for the second video data at the second spatial resolution at the second playback time, the further comprising instructions that cause the processor to determine that the fourth quality is higher than the third quality, wherein the instructions that cause the processor to receive the second portion of the second video data at the second spatial resolution for the second playback time comprise instructions that cause the processor to, in response to determining that the fourth quality is higher than the third quality, retrieve the second portion of the second video data at the second spatial resolution for the second playback time instead of the first video data at the first spatial resolution for the second playback time. 
Clause 70: The computer-readable storage medium of clause 69, wherein the first spatial resolution is higher than the second spatial resolution. Clause 71: The computer-readable storage medium of any of clauses 69 and 70, wherein the first video data at the first spatial resolution for the second playback time has a bitrate that is equal to the second video data at the second spatial resolution for the second playback time. Clause 72: The computer-readable storage medium of any of clauses 69-71, wherein the third quality and the fourth quality represent one or more quality measurements other than spatial resolution and bitrate. Clause 73: The computer-readable storage medium of any of clauses 65-72, wherein the media presentation includes a first representation including the first video data at the first spatial resolution and a second representation including the second video data at the second spatial resolution, and wherein the media presentation includes a switching set including both the first representation and the second representation. Clause 74: The computer-readable storage medium of any of clauses 65-73, wherein the instructions that cause the processor to receive the first portion of the first video data at the first spatial resolution for the first playback time comprise instructions that cause the processor to retrieve the first portion of the first video data at the first spatial resolution for the first playback time from a middleware unit. Clause 75: The computer-readable storage medium of any of clauses 65-74, wherein the instructions that cause the processor to receive the second portion of the second video data at the second spatial resolution for the second playback time comprise instructions that cause the processor to retrieve the second portion of the second video data at the second spatial resolution for the second playback time from a middleware unit. Clause 76: A device for retrieving media data, the device comprising: means for determining that a media presentation includes first video data at a first spatial resolution and second video data at a second spatial resolution, the second spatial resolution being different than the first spatial resolution; means for receiving a first portion of the first video data at the first spatial resolution for a first playback time; means for sending the first portion of the first video data at the first spatial resolution to a video decoder; means for receiving a second portion of the second video data at the second spatial resolution for a second playback time later than the first playback time; and means for sending the second portion of the second video data at the second spatial resolution to the video decoder. In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. 
Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. Various examples have been described. These and other examples are within the scope of the following claims.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures. DESCRIPTION OF EXAMPLE EMBODIMENTS Numerous details are described in order to provide a thorough understanding of the example embodiments shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example embodiments described herein. Overview Techniques for using a smartphone, or a similar end-user device, as a building block in a security enhanced conditional access (CA) system are described herein. Smartphones are common end-user devices. Typically, smartphones have more capable resources (e.g., CPUs and/or two-way transport paths) than set-top-boxes (STBs) and are more difficult to clone. Various embodiments disclosed herein use smartphones as a building block in a security enhanced CA system. In accordance with various embodiments, a method is performed at a headend. The method includes obtaining a security profile associated with a first device, a second device paired with the first device, and a user. The method further includes locating a first device key for the first device and a second device key for the second device. The method additionally includes regulating user access to a channel during an entitlement period, which further includes determining a first security ranking of the first device and a second security ranking of the second device based on the security profile, and assigning a first subset of service keys to be encrypted with the first device key and a second subset of service keys to be encrypted with the second device key based on the first security ranking and the second security ranking. The method also includes transmitting the first subset of service keys to the first device and the second subset of service keys to the second device. In accordance with various embodiments, a method is performed at a headend. The method includes scrambling media content associated with a channel during an entitlement period, which further includes encrypting the media content using at least one control word to generate encrypted media content, and selectively encrypting the at least one control word with a service key from a first subset of service keys assigned to a first device or a second subset of service keys assigned to a second device paired with the first device in order to generate at least one encrypted control word. The method also includes transmitting the at least one encrypted control word along with the encrypted media content to at least one of the first device or the second device. Example Embodiments As described above, many conditional access (CA) systems rely on set-top-boxes (STBs) for security.FIG.1illustrates a block diagram of an exemplary CA system100.
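Before turning toFIG.1, the key-assignment idea summarized in the overview above can be sketched as follows. The proportional split shown is one possible policy invented purely for this example; the disclosure does not prescribe a particular assignment rule, ranking scale, or key length.

    import os
    from typing import List, Tuple

    def assign_service_keys(service_keys: List[bytes],
                            first_ranking: int,
                            second_ranking: int) -> Tuple[List[bytes], List[bytes]]:
        # Give the more highly ranked (more trusted) device a proportionally
        # larger share of the service keys for the entitlement period; the
        # remainder goes to the other paired device. Each subset would then be
        # encrypted with that device's own device key before delivery.
        total = len(service_keys)
        first_share = round(total * first_ranking / (first_ranking + second_ranking))
        return service_keys[:first_share], service_keys[first_share:]

    # Example: ten service keys for an entitlement period, smartphone ranked 3
    # and STB ranked 1:
    # phone_keys, stb_keys = assign_service_keys(
    #     [os.urandom(16) for _ in range(10)], 3, 1)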
The exemplary CA system100includes a server side and a client side, as indicated by the dotted line. For example, the server side can include a headend110, and the client side can include a subscriber device160, such as an STB. In some embodiments, the headend110includes a device key generator112, which generates and delivers on demand unique device key(s)114to the subscriber device160. Once delivered, the subscriber device160stores the received device key(s)114in an internal or external storage162. In some embodiments, the subscriber device160includes a device key in the hardware and/or firmware of the subscriber device160. In such embodiments, as will be described below, the headend110obtains the device key from the client side, e.g., during registration and/or pairing. In some embodiments, the headend110also includes a service key generator122and an entitlement management message (EMM) generator124. Transmission of EMMs128is generally in response to a request from the subscriber160to a service provider. Further, at the request of the service provider, the service key generator122generates service keys126and provides the service keys126to the EMM generator124. In order to generate the EMMs128, the EMM generator124obtains subscriber data, including at least one entitlement118for the subscriber (e.g., payment for a particular channel), from an internal or external storage116and combines with the service keys126to form EMMs128. An EMM128typically includes fields such as the entitlement118for the subscriber, the service keys126encrypted with the device key(s)114, and/or a data integrity check field, among others. In some embodiments, the headend110further includes a control word (CW) generator132, a scrambler134, and an entitlement control message (ECM) generator136. The control word generator132generates control words138and provides the control words138to the scrambler134. The scrambler134obtains unencrypted media content142from an internal or external media content repository143and generates encrypted media content146by encrypting the media content142with the control words138. For further protection, the ECM generator136encrypts the control word138using the service keys126provided by the service key generator122. The encrypted control words are then included in the ECMs144to be transmitted together with the encrypted media content146. As used herein, the terms “scramble” and “encrypt” are used interchangeably, so are the terms “scrambler” and “encryptor” in some embodiments. On the client side, in some embodiments, the subscriber device160includes an EMM decoder166, an ECM decoder172, and a descrambler176in addition to the internal or external storage162. In some embodiments, once the subscriber device160receives the encrypted media content146, the descrambler176decrypts the encrypted media content146in preparation for rendering. In order to decrypt the media content146, in some embodiments, the EMM decoder166obtains the device key164from the storage162and applies the device key164to the EMMs128in order to derive an entitlement168for the subscriber and the decrypted service key169. The EMM decoder166further provides the entitlement168and the decrypted service key169to the ECM decoder172. The ECM decoder172then applies the decrypted service key169to the ECMs144to derive a decrypted control word174. The decrypted control word174is then used by the descrambler176to decrypt the encrypted media content146. 
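The key hierarchy ofFIG.1(the control word scrambles the media, the service key encrypts the control word into an ECM, and the device key encrypts the service key into an EMM) can be sketched as follows. AES-GCM from the third-party Python cryptography package is used here only as an illustrative cipher; the disclosure does not mandate any particular algorithm, message format, or key length, and only the key fields of the EMM and ECM are modeled.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal(key: bytes, plaintext: bytes) -> bytes:
        # Encrypt plaintext under key, prefixing the random nonce.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def unseal(key: bytes, blob: bytes) -> bytes:
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    device_key = os.urandom(16)    # device key 114 / 164
    service_key = os.urandom(16)   # service key 126
    control_word = os.urandom(16)  # control word 138

    # Headend side: build the key fields of the EMM and ECM, scramble media.
    emm_key_field = seal(device_key, service_key)
    ecm_key_field = seal(service_key, control_word)
    encrypted_media = seal(control_word, b"media content 142")

    # Subscriber device side: unwrap in the reverse order to descramble.
    recovered_sk = unseal(device_key, emm_key_field)
    recovered_cw = unseal(recovered_sk, ecm_key_field)
    assert unseal(recovered_cw, encrypted_media) == b"media content 142"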
As used herein, the terms "descramble" and "decrypt" are used interchangeably, so are the terms "descrambler" and "decryptor" in some embodiments. In some embodiments, in the key-based CA system100, the key in each level is encrypted or decrypted with the key of the previous level. For example, the device key(s)114are used for encrypting the service keys126in the headend110and for decrypting the service keys169on the subscriber device160. In some embodiments, the keys of the previous level are used for generating keys and/or decryption in other levels. For example, the device key(s)114are used as a seed for generating the service keys126and for deriving the service key169. Further, in some embodiments, one key can be generated as a function of multiple seeds (or keys) and vice-versa. As shown inFIG.1, the CA system100is exposed to various attacks. For example, in an attempt to view content that has not been paid for, a user may use an illegitimate device to duplicate the uniqueness of the subscriber device160for key and/or content sharing. In another example, a subscriber who purchased one service (e.g., service A) but has not paid for another service (e.g., service B) may attempt to manipulate the entitlement for service A (e.g., in the EMMs128) in order to gain access to service B. In yet another example, a user may modify the entitlement information in the EMMs128to prolong the service after the subscription expiration date. Typical off-the-shelf STBs have limited hardware and software resources to provide strong protection of the encrypted media content146, the ECMs144, the EMMs128, and/or the device keys114. As such, a CA system that merely relies on such STBs as security anchors is vulnerable to the above-mentioned attacks. Smartphones (and other end-user devices, such as tablets, wearable devices, computers, and/or portable multifunction devices, etc.) are becoming increasingly common and affordable around the globe. Typically, a smartphone is paired with its human owner and capable of two-way communication with a remote server. Further, in near range, the smartphone is capable of establishing a secure two-way communication channel with another device, e.g., via Wi-Fi and/or Bluetooth. In addition, relative to off-the-shelf STBs, smartphones have more capable resources (e.g., stronger CPUs) and are more difficult to clone. Accordingly, the smartphone-based CA system disclosed herein in accordance with embodiments leverages the above-mentioned properties of smartphones (e.g., affordability, communication capability, processing capability, and/or security) and uses the smartphone as a building block for security enhancement. FIG.2Aillustrates an exemplary smartphone-based CA system200A for secure key assignment and distribution in accordance with some embodiments. The CA system200A includes a server side and a client side as indicated by the dotted line. In some embodiments, the server side includes a headend210for transmitting media content, and the client side includes multiple receiving devices used by a subscriber260(or a user260), e.g., a smartphone270and an STB/TV280. In the smartphone-based CA system200A, the smartphone270and the STB/TV280are paired, so that the smartphone270becomes a building block of the CA system200A.
In some embodiments, the headend210includes a receiver220for receiving information from the smartphone270, a storage222for storing security profiles generated based on the information received from the receiver220, a device key generator230for locating and/or delivering device keys to receiving devices used by the subscriber260, a controller240, and a transmitter250for key distribution. In some embodiments, applications are installed on both the STB/TV280and the smartphone270. When the subscriber260registers and creates a user account with the service provider, e.g., via the application on the smartphone270, the profile of the smartphone270along with a user profile is transmitted from the smartphone270to the headend210, e.g., via a transceiver274of the smartphone and through the receiver220. In some embodiments, the profile of the smartphone270includes, but not limited to, the hardware, software, and/or firmware profile of the smartphone270. Further, in some embodiments, the STB/TV280displays (e.g., on a display of the STB and/or the TV) an identifier and other information of the STB/TV280(e.g., the hardware, software, and/or firmware profile of the STB/TV280). In some embodiments, when the smartphone270used by the subscriber260is within a threshold distance from the STB/TV280, the smartphone270obtains the displayed information, e.g., by scanning a QR code displayed on the STB/TV280in order to establish the pairing. In some embodiments, through one or more transceivers274of the smartphone270(e.g., WiFi and/or Bluetooth) and one or more transceivers284of the STB/TV280(e.g., WiFi and/or Bluetooth), the smartphone270and the STB/TV280exchange information and establish a secure near-range communication channel. Once the smartphone270and the STB/TV280are paired, the smartphone270sends the information of the STB/TV280along with the information exchanged with the STB/TV280to the headend210, e.g., via the transceiver274and through the receiver220. The headend210in turn stores the received information along with the information of the smartphone270and/or the user profile of the subscriber260received during registration in an internal or external storage222as a security profile for the subscriber260. In some embodiments, the security profile for the subscriber260changes over time. For example, in case of a breach of a particular phone model, the headend210(e.g., the controller240) updates security profiles stored in the storage222involving the particular phone model. Accordingly, in case the security profile for the subscriber260indicates that the subscriber260uses the particular phone model in the smartphone-based CA system200A, the updates affect the security profile for the subscriber260, e.g., indicating in the profile of the smartphone270that the smartphone270is less secure. In some embodiments, based on the security profiles, the device key generator230generates device key suites. In some embodiments, the device key suites include at least one unique device key for the smartphone270, denoted as KPHONE, and another unique device key for the STB/TV280(different from the device key for the smartphone270), denoted as KSTB. Once generated, the device keys are securely delivered to the smartphone270and the STB/TV280respectively, e.g., through the one or more transmitters250of the headend210, the one or more transceivers274of the smartphone270, the one or more transceivers284and/or the receiver288of the STB/TV280. 
Upon receiving the respective device key, the smartphone270stores KPHONEin a secure storage276associated with the smartphone270and the STB/TV280stores KSTBin a secure storage286associated with the STB/TV280. In some embodiments, KPHONEis included in the hardware and/or firmware of the smartphone270, e.g., by burning the key into the device at the factory. Likewise, KSTBis included in the hardware and/or firmware of the STB/TV280. In such embodiments, the headend210locates KPHONEand KSTBfrom the receiving devices during registration and/or pairing, e.g., the device key generator230obtains the device keys from the information received from the smartphone270and/or from the security profiles stored in the storage222. In some embodiments, for a channel C in an epoch E, the controller240obtains service keys, denoted as {SKC,E1, . . . SKC,ES}, e.g., from the service key generator122(FIG.1). Based on the security profiles stored in the storage222, the controller240determines security rankings of the smartphone270and the STB/TV280. Further, based on the security rankings, for the epoch E and for the channel C to which the subscriber260is entitled, the controller240decides how many service keys would be encrypted with KPHONEand how many service keys would be encrypted with KSTBand encrypts the service keys {SKC,E1, . . . SKC,ES}, accordingly. In some embodiments, instead of determining separate security rankings, e.g., one for the smartphone270and the other for the STB/TV280, the controller240calculates one security ranking for a tuple associated with the receiving devices, e.g., <STB ID, smartphone ID, communication type between the smartphone and the STB>. In some embodiments, the controller240determines the combined security ranking during the pairing and updates the combined security ranking when at least one of the profile of the smartphone270or the profile of the STB/TV280changes. For example, in accordance with a newly discovered security flaw in the STB/TV280, the controller240lowers the combined security ranking for the combination of the smartphone270and the STB/TV280. In some embodiments, as will be described below with reference toFIG.3, the controller240uses the combined security ranking for determining whether the receiving devices are secure enough to view the type of requested media content. In some embodiments, in the case that a service key is encrypted with KPHONE, the controller240directs the transmitter250to transmit the encrypted service key SKC,Eito the smartphone270. As used herein, the encrypted service key(s) that are transmitted to the smartphone270are denoted as SKPHONE. On the other hand, in the case that a service key is encrypted with KSTB, the headend system210transmits the encrypted service key SKC,Eito the STB/TV280. As used herein, the encrypted service key(s) that are transmitted to the STB/TV280are denoted as SKSTB. As will be described below, the encrypted service keys will then be used by the respective receiving device to decrypt the encrypted media content. In some embodiments, as described above with reference toFIG.1, the encrypted service keys are transmitted in the EMMs128along with the entitlements for the subscriber260. Upon entitlement renewal, the headend repeats the service key assignment and distribution process described herein.
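By way of a non-limiting sketch, the ranking-based assignment of the service keys {SKC,E1, . . . SKC,ES} described above may be pictured as follows. The proportional split rule and the function name assign_service_keys are assumptions made only for this illustration; the actual policy is whatever the controller240derives from the security profile.
def assign_service_keys(service_keys, phone_ranking, stb_ranking):
    # Split the per-channel, per-epoch service keys between the smartphone and the
    # STB/TV in proportion to their security rankings (rankings assumed positive).
    total = phone_ranking + stb_ranking
    n_phone = round(len(service_keys) * phone_ranking / total)
    sk_phone = service_keys[:n_phone]   # subset to be encrypted with K_PHONE
    sk_stb = service_keys[n_phone:]     # subset to be encrypted with K_STB
    return sk_phone, sk_stb

# Example: three service keys and a smartphone ranked twice as secure as the STB/TV
# yield the 2:1 split of FIG. 4A.
sk_phone, sk_stb = assign_service_keys(["SK1", "SK2", "SK3"], phone_ranking=2, stb_ranking=1)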
On the receiving end, a controller272of the smartphone270uses the device key and the service key(s) received from the headend210(e.g., KPHONEand SKPHONE) to perform the decryption, such as performing the functions by the descrambler176, the ECM decoder172, and the EMM decoder166inFIG.1. Likewise, a controller282of the STB/TV280uses the device key and the service key(s) received from the headend210(e.g., KSTBand SKSTB) to perform the decryption, such as performing the functions by the descrambler176, the ECM decoder172, and the EMM decoder166inFIG.1. Further, as will be described below with reference toFIG.3, the controllers272and282perform additional functions for security enhancement. FIG.2Billustrates an exemplary smartphone-based CA system200B for secure content delivery in accordance with some embodiments. In some embodiments, the exemplary smartphone-based CA system200B uses the service key assignment and distribution mechanism described above with reference toFIG.2A, and uses the service keys for content protection. As a result, elements common toFIGS.2A and2Binclude common reference numbers, and the differences are described herein for the sake of brevity. In some embodiments, the controller240obtains unencrypted media content from a media content repository292. In order to protect the media content, the controller240encrypts the media content with control words (CWs). In some embodiments, the controller240obtains control words from a control word generator294(e.g., the control word generator132,FIG.1). When encrypting media content associated with a channel C and for an epoch E, the controller240obtains control words from the control word generator294. Further, for a control word, e.g., CWi, the controller240encrypts the control word with a selected service key, e.g., calculating ECWi=F(CWi, SKC,E1), where F is a reversible function and typically an encryption function. In some embodiments, for each ECWi, there is an indicator regarding which SK is used for decrypting the respective ECWi. In some embodiments, the controller240directs the transmitter(s)250to transmit the indicator to the subscriber260to facilitate the control word decryption. It should be noted that in some embodiments, there are n CWs in a particular crypto period, while there are s SKs. In other words, the controller240can choose to use some of the service keys per crypto period. As such, the index of the CW and the index of the SK may be different, e.g., i is different from l. As used herein, an “epoch” refers to the length of the minimal entitlement period, e.g., 30 to 60 days or shorter. Thus, the terms “epoch” and “entitlement period” are used interchangeably in some embodiments. Further as used herein, a “crypto period” refers to the time span during which a specific cryptographic key is authorized for use. For example, in every crypto period, the control word is changed. The controller240then causes the transmitter250to transmit the encrypted control word, e.g., broadcasting the encrypted control word ECWi to be received by the STB/TV280. Further, for every media content packet, the controller240chooses a control word, e.g., choosing CWj, and encrypts the media content packet with CWj before causing the transmitter250to transmit the encrypted media content packet to one of the receiving devices, e.g., broadcasting the encrypted media content along with the encrypted control word. 
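For illustration, the per-control-word protection just described can be sketched as follows, with the Fernet cipher again standing in for the reversible function F. The per-message index indicating which service key was used is an assumption about the message layout, echoing the indicator discussed in the next paragraph.
import random
from cryptography.fernet import Fernet

def build_ecms(control_words, service_keys):
    # For each control word CW_i, the headend picks one of the service keys for the
    # channel and epoch, computes ECW = F(CW, SK), and records which service key the
    # receiver should apply.  control_words are bytes; service_keys are Fernet keys.
    ecms = []
    for cw in control_words:
        sk_index = random.randrange(len(service_keys))
        ecms.append({"sk_index": sk_index,
                     "ecw": Fernet(service_keys[sk_index]).encrypt(cw)})
    return ecms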
In some embodiments, for each encrypted media content packet, there is an indicator regarding which CW is used for decrypting the respective packet. In some embodiments, the controller240directs the transmitter(s)250to transmit the indicator to the subscriber260to facilitate the media content decryption. As described above with reference toFIG.2A, based on the security rankings, in some embodiments, the headend210distributes a subset of encrypted service keys to the smartphone270and another subset of encrypted service keys (different from the subset to the smartphone270) to the STB/TV280. As such, a subset of encrypted control words would be decryptable by the smartphone270and another subset of encrypted control words would be decryptable by the STB/TV280. It naturally follows that a subset of encrypted media content would be decryptable by the smartphone270and another subset of encrypted media content would be decryptable by the STB/TV280. Accordingly, the smartphone-based CA system200B disclosed herein in accordance with embodiments spreads the media content protection over two devices (e.g., the smartphone270and the STB/TV280). Relative to the reliance on a single STB/TV as a security anchor in previously existing CA systems, the multi-point protection in the smartphone-based CA system described herein avoids the single point of failure in previously existing CA systems. It should be noted that components are represented in the exemplary CA systems200A and200B for illustrative purposes. Other configurations can be used and/or included in the exemplary CA systems200A and200B. For example, in addition to storing the security profiles in the storage222, entitlements and/or other subscriber data can be stored in the storage222as well. In other words, components can be divided, combined, and/or re-configured to perform the functions described herein. Further, although not shown inFIGS.2A and2B, the exemplary CA systems200A and200B can include other components and/or subcomponents to facilitate the secure key and content distribution. For example, the device key generator230can include one or more transmitters to transmit the device keys to the smartphone270and/or the STB/TV280. The various features of implementations described herein with reference toFIGS.2A and2Bmay be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. In particular, although the CA systems200A and200B include the smartphone270, those of ordinary skill in the art will appreciate that various other types of end-user devices, including but not limited to tablets, wearable devices, computers, and/or portable multifunction devices, can be used in place of the smartphone270in order to facilitate the security enhancement described herein. FIG.3is a block diagram illustrating media content decryption in an exemplary smartphone-based CA system300in accordance with some embodiments. In some embodiments, the smartphone-based CA system300includes a first receiving device, e.g., a smartphone310, and a second receiving device, e.g., an STB/TV320. AlthoughFIG.3illustrates the STB/TV320as the receiving device for receiving the encrypted media content from the headend (not shown), one or more of the smartphone310and the STB/TV320are capable of receiving and decrypting packets. As such, in some embodiments, the smartphone310can be the receiving device for receiving the encrypted media content from the headend.
As explained above with reference toFIGS.2A and2B, the smartphone310can include the transceiver(s)274(FIGS.2A and2B) for receiving packets from the headend210(FIGS.2A and2B). Further, the smartphone310can include the controller272(FIGS.2A and2B) to facilitate the decryption. In some embodiments, as shown inFIG.3, the smartphone310includes a descrambler312to decrypt media content packets, an ECM decoder314to decrypt control words, an EMM decoder316for decrypting service keys as well as deriving entitlements for a subscriber305, and a storage318for storing device keys. In some embodiments, the controller272shown inFIGS.2A and2Bperforms the function of the descrambler312, the ECM decoder314, and/or the EMM decoder316. As such, in some embodiments, the functions performed by the descrambler312, the ECM decoder314, and/or the EMM decoder316are carried out by the controller272(FIGS.2A and2B). Also as explained above with reference toFIGS.2A and2B, the STB/TV320can include the receiver288(FIGS.2A and2B) for receiving packets from the headend210(FIGS.2A and2B). Further, the STB/TV320can include the controller282(FIGS.2A and2B) to facilitate the decryption. InFIG.3, the STB/TV320includes a descrambler322for decrypting media content packets, an ECM decoder324for decrypting control words, an EMM decoder326for decrypting service keys as well as deriving entitlements for the subscriber305, and a storage328for storing device keys. As such, in some embodiments, the controller282shown inFIGS.2A and2Bperforms the functions of the descrambler322, the ECM decoder324, and/or the EMM decoder326. In some embodiments, the subscriber305makes a purchase through the smartphone310, e.g., the subscriber305chooses a channel C to purchase through the application on the smartphone310. The smartphone310sends the purchase request to the headend. In some embodiments, following the process described above with reference toFIG.2A, based on the security profiles, the headend decides whether the receiving devices (e.g., the smartphone310and/or the STB/TV320) are secure enough for viewing the media content, e.g., calculating security rankings for the receiving devices based on the security profiles for the subscriber305. Other methods for analyzing the security features of the receiving devices and for determining whether the receiving devices are secure enough for viewing the content can be used in place of or supplementing the calculation of the security rankings. In the case that the receiving devices are secure enough, e.g., one or both of the security rankings are above a threshold associated with a type of media content, the subscriber305submits payment (e.g., by a payment processing application on the smartphone310) and the entitlement for the subscriber305is recorded at the headend (e.g., in the subscriber data storage116inFIG.1and/or the security profile inFIGS.2A and2B). In some embodiments, the subscriber305submits the payment and the entitlement for the subscriber305is recorded when a combined security ranking of the smartphone310and the STB/TV320is above the threshold. Moreover, following the key assignment and distribution process described above with reference toFIG.2A, the headend transmits the device keys and/or service keys to the smartphone310and/or the STB/TV320. On the other hand, in the case that one or more of the receiving devices have weak security capabilities, the headend would not send the device keys and/or service keys to the insecure device(s). 
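A minimal sketch of this gating decision follows; the threshold values per content type and the function name may_distribute_keys are assumptions for illustration, and the 4K example in the next paragraph gives a concrete instance.
CONTENT_THRESHOLDS = {"4k": 5, "hd": 3, "sd": 1}   # example thresholds only

def may_distribute_keys(content_type, phone_ranking, stb_ranking):
    # Keys are assigned and transmitted only if at least one receiving device (or,
    # in some embodiments, the combined ranking) meets the threshold associated
    # with the requested type of media content.
    threshold = CONTENT_THRESHOLDS[content_type]
    return phone_ranking >= threshold or stb_ranking >= threshold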
For instance, a threshold associated with high value media content such as 4K content may be higher than a threshold associated with lower resolution media content. Based on the security profile for the subscriber305, including the type of media content the subscriber305wants to view, the profile of the smartphone310and/or the profile of the STB/TV320, the headend determines that the STB/TV320and/or the smartphone310are not secure enough to watch high value 4K media content and would not distribute the keys for content viewing. On the other hand, in the case that the subscriber305requests to watch lower resolution media content, based on the entitlement information and the profiles of the smartphone310and the STB/TV320, the headend determines that the STB/TV320and/or the smartphone310are secure enough to watch such media content and would distribute the keys accordingly. In order to view the media content, in some embodiments, the subscriber305chooses the media content to view, e.g., by selecting a channel to view from a package the subscriber305has purchased. Utilizing the pairing between the smartphone310and the STB/TV320(e.g., through the transceiver(s)284of the STB/TV280and the transceiver(s)274of the phone270,FIGS.2A and2B), the smartphone310generates a request to the STB/TV320and instructs the STB/TV320to receive media content corresponding to the selected channel from the headend, e.g., using the smartphone310as a remote control to signal the STB/TV320. In response to the channel viewing request, in some embodiments, the STB/TV320tunes to the requested channel and receives from the headend encrypted media content packets associated with the channel through a one-way communication channel (e.g., via satellite communication). For every received encrypted media content packet, the STB/TV320determines whether it is encrypted with a control word that it can derive. For instance, in the case that SKC,E1that is used for decrypting the control word CW1associated with a first media content packet Packet1is part of SKSTB, the STB/TV320decrypts Packet1. On the other hand, in the case that SKC,E2, which is used for decrypting the control word CW2associated with a second media content packet Packet2, is part of SKPHONE, the STB/TV320forwards the packet (while still encrypted) to the smartphone310along with the encrypted control word CW2, e.g., via a secure communication channel between the smartphone310and the STB/TV320. In particular, on the STB/TV320side, to decrypt Packet1, the EMM decoder326of the STB/TV320determines that the EMM decoder326has received the encrypted service key SKC,Exthat is part of SKSTBin the EMM messages from the headend. The EMM decoder326obtains the device key KSTBfrom the storage328and applies the device key KSTBto the encrypted service key SKC,Ex, e.g., in the EMMs, in order to decrypt SKC,Ex. The decrypted SKC,Exis provided to the ECM decoder324for decrypting the encrypted CW1received from the headend, e.g., in the ECMs. In order to decrypt CW1, the ECM decoder324applies the decrypted SKC,Exto the encrypted CW1to derive the unencrypted CW1, e.g., by executing a reverse function of ECW1=F(CW1, SKC,Ex). The ECM decoder324then provides the unencrypted CW1to the descrambler322, which uses the unencrypted CW1to decrypt Packet1. On the smartphone310side, the smartphone310receives the encrypted Packet2forwarded by the STB/TV320.
In response to receiving the encrypted Packet2, the EMM decoder316of the smartphone310determines whether the EMM decoder316has received the encrypted service key SKC,Eythat is part of SKPHONEin the EMMs from the headend. In the case that the encrypted service key SKC,Eythat is part of SKPHONEhas been received in the EMMs from the headend, the EMM decoder316obtains the device key KPHONEfrom the storage318and applies the device key KPHONEto SKC,Eyto decrypt SKC,Ey. The decrypted SKC,Eyis then provided to the ECM decoder314and the ECM decoder314applies the decrypted SKC,Eyto the encrypted CW2from the headend, e.g., in the ECMs, in order to derive the unencrypted CW2, e.g., by executing a reverse function of ECW2=F(CW2, SKC,Ey). The ECM decoder314then provides the unencrypted CW2to the descrambler312, which uses the unencrypted CW2to decrypt Packet2. The smartphone310thus decrypts the encrypted media content packet and sends back the unencrypted Packet2to the STB/TV320over the secure channel established during the pairing with the STB/TV320. In some embodiments, for performance consideration, the smartphone310derives the unencrypted service keys that are part of SKPHONEand transmits the decrypted service keys to the STB/TV320. In such embodiments, instead of transmitting the encrypted media content packets and/or the encrypted control words to the smartphone310, the STB/TV320receives the decrypted service keys from the smartphone310and uses the decrypted service keys to decrypt the encrypted media content packets and/or the encrypted control words. In some embodiments, for security, the decrypted service keys are transmitted over a secure channel between the STB/TV320and the smartphone310. For example, the decrypted service keys may be locally encrypted prior to transmission. In some embodiments, to further protect the media content, the smartphone310embeds watermarking in the packets sent back to the STB/TV320. In some embodiments, the embedded watermarking includes a unique identity, so that the media content is identifiable, thus providing deterrence against pirated copies. In such embodiments, the smartphone310receives the encrypted media content, decrypts it, embeds watermarking, and transmits the decrypted media content back to the STB/TV320. As such, the secure CA system300utilizes the communication capability and/or processing capability of the smartphone310to enhance security. Though not shown inFIG.3, in some embodiments, the STB/TV320and the smartphone310include additional subcomponents to facilitate the device pairing. For example, the STB/TV320can include a display (e.g., a display of TV and/or a display of an STB) connectable to the controller282(FIGS.2A and2B) and the smartphone310can include an image capturing device (e.g., a scanner or a camera). When being directed by the controller282(FIGS.2A and2B), the display displays an identifier of the STB/TV320that is scannable by the smartphone310, e.g., a QR code with embedded information of the STB/TV320. The smartphone310scans the displayed information and extracts the information to facilitate the pairing in accordance with some embodiments. The smartphone-based CA system disclosed herein in accordance with embodiments improves security over previously existing CA systems.
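Before turning to the resulting security benefits, the two-device handling just described can be summarized with the sketch below, assuming the same Fernet stand-in cipher as in the earlier sketches. The packet layout and the helper forward_to_phone are hypothetical placeholders for the ECM/packet indicators and for the secure near-range channel between the paired devices.
from cryptography.fernet import Fernet

def decrypt_chain(device_key, encrypted_service_key, encrypted_cw, encrypted_packet):
    service_key = Fernet(device_key).decrypt(encrypted_service_key)   # EMM decoder
    control_word = Fernet(service_key).decrypt(encrypted_cw)          # ECM decoder
    return Fernet(control_word).decrypt(encrypted_packet)             # descrambler

def handle_packet_on_stb(packet, sk_stb, k_stb, forward_to_phone):
    # packet is assumed to carry {"sk_index": ..., "ecw": ..., "payload": ...};
    # sk_stb maps service key indices to the encrypted service keys held by the STB/TV.
    if packet["sk_index"] in sk_stb:
        return decrypt_chain(k_stb, sk_stb[packet["sk_index"]], packet["ecw"], packet["payload"])
    # Otherwise the service key belongs to SK_PHONE: the still-encrypted packet and
    # control word are forwarded to the smartphone, which decrypts (and may embed a
    # watermark) and returns the clear packet over the secure pairing channel.
    return forward_to_phone(packet)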
By including the smartphone as a building block in the smartphone-based CA system, the smartphone-based CA system leverages a more capable receiving device to protect the media content against various attacks and avoids having a single point of failure as in previously existing CA systems. For instance, cloning is preventable, since cloning a smartphone is more difficult to accomplish than cloning an off-the-shelf STB. In another example, because each service has its own unique entitlements and the encrypted service keys included in the EMMs are specific for a channel in an epoch, the entitlement cannot be reused for another service. Nor can the entitlement be reused for the same service and another epoch. As such, the key assignment and distribution mechanism described above with reference toFIG.2Aprotects the media content against potential CA service manipulation and entitlement manipulation. In yet another example, because a subset of packets is decryptable by the smartphone, a hacker would have to distribute the encrypted control words to the smartphone in order to obtain the decrypted control words for content decryption. As such, control word sharing is complicated. Further, because the smartphone is capable of embedding watermarks in the decrypted media content packets to the STB/TV, content sharing can be detected. As such, the secure content delivery mechanism described above with reference toFIG.2Bprotects the media content against potential key sharing and content sharing. In some embodiments, the key and content delivery mechanisms described herein are dynamic.FIGS.4A-4Dare block diagrams illustrating dynamic key and content delivery in exemplary smartphone-based CA systems400A-400D in accordance with some embodiments. Elements common to these figures include common reference numbers, and the differences are described herein for the sake of brevity. To that end,FIG.4Ais a block diagram400A illustrating service key assignment and distribution in an exemplary CA system400A, where a smartphone420used by a subscriber405is more secure than an STB/TV 1430-1(e.g., an off-the-shelf STB) in accordance with some embodiments. InFIG.4A, the subscriber405uses the smartphone420to register with a headend410(e.g., the headend system210,FIGS.2A and2B). The headend410receives a profile of the smartphone420(e.g., hardware serial number, model number, configuration, firmware version, software installed, etc.) and a user profile of the subscriber405(e.g., user information and/or user entitlement). Once the smartphone420is paired with the STB/TV 1430-1, the smartphone420obtains a profile of the STB/TV 1430-1and transmits the profile of the STB/TV 1430-1to the headend410. As explained above with reference toFIGS.2A and2B, based on the profiles, the headend410(e.g., the controller240,FIG.2) stores a security profile for the subscriber405, locates two distinct device keys, and/or distributes one device key to the smartphone420, e.g., KPHONE,and the other device key to the STB/TV 1430-1, e.g., KSTB1. 
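The "locating" of the two device keys mentioned above can be pictured with the following sketch. The field names device_id and factory_key are hypothetical; they stand for a key provisioned in the device hardware or firmware and reported during registration or pairing, versus a key generated on demand by the device key generator230and delivered securely.
import os

def locate_device_key(device_profile, key_store):
    device_id = device_profile["device_id"]
    if "factory_key" in device_profile:
        # Key already present in the device hardware/firmware; the headend records it.
        key_store[device_id] = device_profile["factory_key"]
    else:
        # Otherwise generate a unique device key (length illustrative) for secure delivery.
        key_store.setdefault(device_id, os.urandom(16))
    return key_store[device_id]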
In some embodiments, as explained above with reference toFIGS.2A and2B, upon establishing the security profile for the subscriber405, e.g., including the profile of the smartphone420, the profile of STB/TV 1430-1, and the user profile of the subscriber405, the headend410calculates a first security ranking for the smartphone420and a second security ranking for the STB/TV 1430-1, e.g., calculating the respective security ranking as a function of attributes assigned to security features in hardware, software, and/or firmware of the respective receiving device. Further, based on the security rankings, the headend410determines how many of the service keys would be distributed and used by the smartphone420and how many would be distributed and used by the STB/TV 1430-1. For instance, in the case that the first security ranking is higher than the second security ranking, e.g., the smartphone420has more hardware, software, and/or firmware capabilities than the STB/TV 1430-1to protect media content from hacking, the headend410assigns more service keys to the smartphone420than the STB/TV 1430-1. As shown inFIG.4A, for instance, the headend410obtains three service keys, e.g., {SKC,E1, SKC,E2, SKC,E3}. For a specific epoch E and for every channel C to which the subscriber405is entitled, based on the security rankings, the headend410assigns two service keys to the smartphone420, e.g., SKPHONE={SKC,E1, SKC,E2} and one service key to the STB/TV 1430-1, e.g., SKSTB1=SKC,E3. The headend410then encrypts the service key set SKPHONEwith the device key KPHONEand encrypts the service key SKSTB1with the device key KSTB1. The encrypted service keys are then securely transmitted to the respective receiving devices, e.g., transmitting the encrypted SKPHONEset to the smartphone420and transmitting the encrypted SKSTB1to the STB/TV 1430-1. The ratio of 2:1 with respect to the service key assignment and distribution shown inFIG.4Ais merely illustrative. Those of ordinary skill in the art will appreciate that other ratios based on the calculation of the security rankings, which is further determined based on the security profile of the subscriber405, can be determined and/or configured through the headend410. FIG.4Bis a block diagram illustrating service key assignment and distribution in an exemplary CA system400B, where an STB/TV 2430-2(e.g., an STB with a secure chip) is approximately as secure as the smartphone420in accordance with some embodiments. InFIG.4B, the smartphone420is paired with the STB/TV 2430-2, e.g., through near-range communication transceivers such as the transceiver(s)274of the smartphone270and the transceiver(s)284of the STB/TV280(FIG.2). Once the smartphone420is paired with the STB/TV 2430-2, the smartphone420obtains a profile of the STB/TV 2430-2and transmits the profile of the STB/TV 2430-2to the headend410. Based on the profiles, the headend410(e.g., the controller240,FIG.2) stores a security profile for the subscriber405, which includes, but is not limited to, the profile of the smartphone420, the profile of the STB/TV 2430-2, and/or the user profile of the subscriber405. The headend410further locates and/or distributes one device key to the smartphone420, e.g., KPHONE, and another device key to the STB/TV 2430-2, e.g., KSTB2.
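The ranking calculation mentioned above (and detailed further with reference to block542ofFIG.5below) can be sketched as follows. The feature names and the values associated with them are example configuration only, and a simple sum is used as the combining function.
FEATURE_VALUES = {
    "standard_chip": 1, "older_os": 1,
    "secure_chip": 2, "newer_firmware": 2,
    "crypto_engine": 3,
}

def security_ranking(extracted_features):
    # Associate values with the security features extracted from a device profile and
    # combine them; the values are configurable and may be lowered over time.
    return sum(FEATURE_VALUES.get(feature, 0) for feature in extracted_features)

stb_ranking = security_ranking(["standard_chip", "older_os"])          # 2
phone_ranking = security_ranking(["crypto_engine", "newer_firmware"])  # 5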
In some embodiments, as explained above with reference toFIGS.2A and2B, based on the security profile for the subscriber405, the headend410calculates a first security ranking for the smartphone420and a second security ranking for the STB/TV 2430-2, e.g., calculating the respective security ranking as a function of attributes assigned to security features in hardware, software, and/or firmware of the respective receiving device. Further, based on the security rankings, the headend410determines how many of the service keys would be distributed and used by the smartphone420and how many would be distributed and used by the STB/TV 2430-2. In the case that the first security ranking is the same or approximately the same as the second security ranking, e.g., the smartphone420has sufficient CPU capacity to perform encryption and decryption and the STB/TV 2430-2has a security enhanced chip, the headend410assigns an equal (or approximately equal) number of service keys to the smartphone420and the STB/TV 2430-2. For instance, inFIG.4B, the headend410obtains two service keys, e.g., {SKC,E4, SKC,E5}. For a specific epoch E and for every channel C to which the subscriber405is entitled, based on the security rankings, the headend410assigns one service key to the smartphone420, e.g., SKPHONE=SKC,E4and one service key to the STB/TV 2430-2, e.g., SKSTB2=SKC,E5. The headend410then encrypts the service key SKPHONEwith the device key KPHONEand encrypts the service key SKSTB2with the device key KSTB2. The encrypted service keys are then securely transmitted to the respective receiving devices, e.g., transmitting the encrypted SKPHONEto the smartphone420and transmitting the encrypted SKSTB2to the STB/TV 2430-2. The ratio of 1:1 with respect to the service key assignment and distribution shown inFIG.4Bis merely illustrative. Those of ordinary skill in the art will appreciate that other ratios based on the calculation of the security rankings, which is further determined based on the security profile of the subscriber405, can be determined and/or configured through the headend410. As shown inFIGS.4A and4B, the secure key assignment and distribution mechanism disclosed herein in accordance with embodiments is dynamic. Based on the security profiles, the headend adjusts the number of keys and/or the amount of content delivered to a receiving device, e.g., periodically or in response to requests. If a receiving device is more capable of protecting the media content, the headend would deliver more keys and content to the more secure receiving device. In some embodiments, in the case that one receiving device is compromised, the headend can deliver the keys and the content to the uncompromised pairing device as shown inFIG.4C. InFIG.4C, initially, the smartphone420was paired with an STB/TV430. In some embodiments, the headend410obtains an indication that the STB/TV430is compromised. For instance, the headend410can periodically obtain the profile of the smartphone420, the profile of the STB/TV430, and/or the user profile of the subscriber405. Based on the analysis of the profiles, e.g., by comparing signatures of the hardware, software, and/or firmware of the smartphone420and/or the STB/TV430, the headend410determines whether any tampering of the receiving device(s) has occurred. In some embodiments, leveraging its two-way communication capability, the smartphone420periodically obtains the profile of the STB/TV430through the pairing and reports any compromise to the headend410.
In response to obtaining the indication of compromise, the headend410ceases transmitting the keys, including but not limited to the service keys, the control words, and/or the device key, and/or the media content to the STB/TV430. In some embodiments, the smartphone420ceases the communication path between the smartphone420and the STB/TV430, e.g., in response to detecting the compromise of the STB/TV430by the smartphone420and/or in response to being directed by the headend410to cease the pairing with the STB/TV430. As such, without the pairing, the compromised STB/TV430would not be able to send a portion of the encrypted media content to the smartphone420for decryption. In conjunction with ceasing the transmission of keys and content to the STB/TV430, the headend410increases the number of keys assigned to the smartphone420, encrypts the service keys with the device key assigned to the smartphone420, and transmits the encrypted service keys to the smartphone420. In other words, in the case that one of the receiving devices is compromised, e.g., the STB/TV430as shown inFIG.4C, the CA system described herein in accordance with embodiments encrypts and sends all of the encrypted keys along with the encrypted media content to the uncompromised device. Without the keys, even if the compromised receiving device obtains the encrypted media content, the compromised receiving device would not be able to decrypt and derive the media content. In addition to being able to dynamically adjust the number of keys and the amount of media content delivered, the secure CA system disclosed herein in accordance with embodiments is flexible. In particular, the pairing information included in the security profile gives a subscriber the flexibility to watch entitled media content on different receiving devices. For instance, inFIG.4A, for high value media content, such as 4K videos, the off-the-shelf STB/TV 1430-1may not have hardware, firmware, and software capable of adequately protecting the 4K videos. As such, in order to view the high value media content, the subscriber405may switch to a security enhanced receiving device, e.g., the STB/TV 2430-2with the secure chip inFIG.4B.FIG.4Dillustrates receiving device switching in a smartphone-based CA system400D in accordance with some embodiments. InFIG.4D, the smartphone420has been disconnected from the STB/TV 1, e.g., when the subscriber405with the smartphone420moves away from the STB/TV 1430-1and out of a threshold distance from the STB/TV 1430-1or when the subscriber405disconnects the connection through the application on the smartphone420. Further, the smartphone420is paired with the STB/TV 2430-2, e.g., when the subscriber405with the smartphone420moves within a threshold distance from the STB/TV 2430-2or when the subscriber405establishes the connection through the application on the smartphone420. Utilizing the near-range communication transceivers (e.g., the transceiver(s)274of the smartphone270and the transceiver(s)284of the STB/TV280,FIGS.2A and2B), a secure near-range communication channel is established between the smartphone420and the STB/TV 2430-2. In some embodiments, once the smartphone420is paired with the STB/TV 2430-2, the smartphone420obtains the profile of the STB/TV 2430-2and transmits the profile of the STB/TV 2430-2to the headend410. As explained above with reference toFIG.4B, based on the profiles, the headend410assigns and distributes keys to the STB/TV 2430-2.
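Both adjustments described above, namely dropping a compromised device as inFIG.4Cand re-splitting after a pairing change as inFIG.4D, can be pictured as a single re-assignment step driven by the updated security profile. The sketch below assumes the proportional split rule used in the earlier sketch; giving a compromised device a ranking of zero routes all service keys to its pairing partner.
def reassign_service_keys(service_keys, phone_ranking, stb_ranking):
    # A compromised device is given a ranking of zero and therefore receives no
    # service keys (FIG. 4C); a pairing change simply updates the rankings and the
    # split follows (e.g., 2:1 before the switch in FIG. 4D, 1:1 after).
    total = phone_ranking + stb_ranking
    n_phone = round(len(service_keys) * phone_ranking / total) if total else 0
    return service_keys[:n_phone], service_keys[n_phone:]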
In some embodiments, the headend410re-locates and/or re-distributes a different device key to the smartphone420in response to a profile update, e.g., the pairing change. Also as explained above with reference toFIG.4A, when the smartphone420was paired with the off-the-shelf STB/TV 1430-1, the headend410delivers more keys and content to the smartphone420. When the pairing changes such that the smartphone420is paired with the security enhanced STB/TV 2430-2, based on the updated security profile, the headend410adjusts the number of keys and content assigned to the receiving devices accordingly. For instance, prior to the switching, the headend410splits the service keys according to a ratio of 2:1 between the smartphone420and the STB/TV 1 with the smartphone420being more secure and receiving more service keys. After the switching, with the STB/TV 2430-2having the secure chip, the headend410splits the service keys evenly between the smartphone420and the STB/TV 2430-2. As such, the CA system disclosed herein allows content viewing on different receiving devices with continued security. FIG.5is a flowchart representation of a method500for service key assignment and distribution in a CA system, in accordance with some embodiments. In some embodiments, the method500is performed at a headend (e.g., the headend210inFIGS.2A and2Band/or the headend410inFIGS.4A-4D), which includes a controller (e.g., the controller240,FIGS.2A and2B), at least one non-transitory storage for storing security profiles (e.g., the storage222inFIGS.2A and2B), a device key generator (e.g., the device key generator230,FIG.2A), and a transmitter (e.g., the transmitter250,FIGS.2A and2B). Briefly, the method500includes obtaining a security profile including a profile of a first device, a profile of a second device paired with the first device, and a user profile; locating a first device key for the first device and a second device key for the second device; and regulating user access to a channel during an entitlement period, including determining a first security ranking of the first device and a second security ranking of the second device based on the security profile, and assigning a first subset of service keys to be encrypted with the first device key and a second subset of service keys to be encrypted with the second device key based on the first security ranking and the second security ranking, and transmitting the first subset of service keys to the first device and the second subset of service keys to the second device. To that end, as represented by block510, the method500includes the controller obtaining a security profile including a profile of a first device, a profile of a second device paired with the first device, and a user profile. In some embodiments, the headend further includes a receiver (e.g., the receiver220,FIGS.2A and2B) to facilitate obtaining the security profile. For instance, the first device can be a smartphone, e.g., the smartphone270inFIGS.2A and2B, the smartphone310inFIG.3, or the smartphone420inFIGS.4A-4D; and the second device can be an STB or a smart TV, e.g., the STB/TV280inFIGS.2A and2B, the STB/TV320inFIG.3, the STB/TV 1430-1inFIGS.4A and4D, or the STB/TV 2430-2inFIG.4B. The profile of the smartphone can include hardware and subcomponent model number(s), type(s) and version(s) of operating system, application(s) installed, SIM information, device identifier(s), and/or serial number(s), etc. 
The profile of the STB/TV can include make and model, type of chip(s), identifier(s), and/or firmware update(s), etc. As represented by block512, in some embodiments, the receiver receives, from the first device, the user profile, the profile of the first device including an identifier of the first device, the profile of the second device including an identifier of the second device, and data exchanged during pairing of the first device and the second device. The controller then establishes the security profile based on the profile of the first device, the profile of the second device, and the user profile. In some embodiments, upon establishing the security profile for the subscriber, the headend stores the security profile in the non-transitory storage. For example, during registration or account setup, the smartphone sends the subscriber's information to the headend. Further, the STB/TV can display an identifier of the STB/TV for the smartphone to scan in near range. The smartphone scans the identifier of the STB/TV and utilizes the near-range communication device(s) to pair with the STB/TV. Once paired, the smartphone obtains further information about the STB/TV through the communication between the smartphone and the STB/TV. In some embodiments, the smartphone sends to the headend the information about the STB/TV, the smartphone, and the subscriber, as well as the communication exchanged between the smartphone and the paired STB/TV. Such information is then used by the headend (e.g., the controller240,FIGS.2A and2B) to establish the security profile for the subscriber and to store the security profile in the storage. The method500continues, as represented by block520, with the device key generator locating a first device key for the first device and a second device key for the second device. In some embodiments, the first device key and the second device key are transmitted by the transmitter of the headend to the first device and the second device. For example, inFIG.2A, the device key generator230generates the device key KSTBfor the STB/TV280and the device key KPHONEfor the smartphone270. Further, the device key generator230transmits (e.g., through the transmitter(s)250) KSTBto the STB/TV280and KPHONEto the smartphone270. In some embodiments, the headend obtains the first device key and the second device key from the first device and the second device, e.g., once the first device and the second device are paired. In some embodiments, the device key generator locates and manages the first device key and the second device key for encryption at the headend. The method500continues, as represented by block530, with the controller regulating user access to a channel during an entitlement period. In some embodiments, in order to regulate user access, as represented by block540, the headend first determines a first security ranking of the first device and a second security ranking of the second device based on the security profile. Further, as represented by block550, the headend regulates user access by assigning a first subset of service keys to be encrypted with the first device key and a second subset of service keys to be encrypted with the second device key based on the first security ranking and the second security ranking. Additionally, as represented by block560, the headend regulates user access by transmitting the first subset of service keys to the first device and the second subset of service keys to the second device.
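Blocks540,550, and560can be pictured end to end with the following sketch, which reuses the Fernet stand-in cipher from the earlier sketches; the split rule, the return format, and the assumption that the service keys are byte strings are made only for this illustration. The FIG.4A discussion in the next paragraph gives a concrete 2:1 instance.
from cryptography.fernet import Fernet

def regulate_access(service_keys, phone_ranking, stb_ranking, k_phone, k_stb):
    # Block 550: assign subsets in proportion to the rankings determined in block 540.
    n_phone = round(len(service_keys) * phone_ranking / (phone_ranking + stb_ranking))
    sk_phone, sk_stb = service_keys[:n_phone], service_keys[n_phone:]
    # Block 560: encrypt each subset with the corresponding device key for transmission.
    emm_phone = [Fernet(k_phone).encrypt(sk) for sk in sk_phone]
    emm_stb = [Fernet(k_stb).encrypt(sk) for sk in sk_stb]
    return emm_phone, emm_stb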
For example, inFIG.4A, for a channel C in an epoch E, the headend410obtains three service keys {SKC,E1, SKC,E2, SKC,E3} from a service key generator (e.g., the service key generator122,FIG.1). For the epoch E and for the channel C to which the subscriber405is entitled, the headend410determines that the smartphone420has a higher security ranking than the STB/TV 1430-1. As such, based on the security rankings, the headend410assigns a subset of the service keys, e.g., two service keys to the smartphone420, denoted as SKPHONE={SKC,E1, SKC,E2} and another subset of service keys, e.g., one service key to the STB/TV 1430-1, denoted as SKSTB1=SKC,E3. The headend410then encrypts the service key set SKPHONEwith the device key KPHONEand encrypts the service key SKSTB1with the device key KSTB1. The encrypted service keys are then securely transmitted to the respective receiving devices, e.g., transmitting the encrypted SKPHONEset to the smartphone420and transmitting the encrypted SKSTB1to the STB/TV 1430-1. In some embodiments, as represented by block542, determining the first security ranking of the first device and the second security ranking of the second device based on the security profile includes associating values to security features extracted from the profile of the first device and the profile of the second device; and calculating the first security ranking of the first device and the second security ranking of the second device based on a function of the values associated with the security features. For instance, the headend extracts security features such as the processor type and speed, types of encryption and decryption software, storage capacity, etc. In some embodiments, the headend calculates the security rankings by assigning values to the security features, e.g., assigning 1 to a standard chip or an older version operating system in an off-the-shelf STB, assigning 2 to a secure chip in a security enhanced STB or a newer version firmware, assigning 3 to a crypto engine on a smartphone, etc. The headend then calculates the security rankings as a function of the values associated with the security features. In some embodiments, the values assigned to the security features are configurable, e.g., decreasing the value assigned to one type of chip over time as more capable chips are developed, or decreasing the value assigned to one version of software in response to discovering security flaws. Still referring toFIG.5, in some embodiments, as represented by block552, assigning the first subset of service keys to be encrypted with the first device key and assigning the second subset of service keys to be encrypted with the second device key based on the first and the second security ranking includes assigning an equal number of service keys in the first and the second subset of service keys in accordance with a determination that the first security ranking of the first device is approximately the same as the second security ranking of the second device. On the other hand, if the first security ranking and the second security ranking are not approximately the same, the headend would assign more service keys to be transmitted to the receiving device with the higher security ranking. For instance, if a user has a secure smartphone, the security ranking of the smartphone is higher. As a result, the headend sends more service keys to the smartphone, e.g., distributing two service keys to the smartphone420and one service key to the off-the-shelf STB/TV 1430-1as shown inFIG.4A.
In contrast, if an STB has a secure chip, the security ranking of the STB is higher than or approximately the same as that of the smartphone. As a result, the headend sends more service keys to the STB or evenly splits the number of service keys distributed to the smartphone and the STB, e.g., distributing one service key to the smartphone420and one service key to the STB/TV 2430-2with a secure chip as shown inFIG.4B. In some embodiments, as represented by block562, the method500further includes: (a) receiving a request from the first device to access the channel; (b) determining whether or not at least one of the first device, the second device, or a combination of the first device and the second device is secure to access the channel based on the security profile in response to the request; and (c) performing assigning and transmitting of the first and the second subset of service keys in accordance with a determination that at least one of the first device or the second device is secure to access the channel. In other words, in some embodiments, in response to a request from the first device (e.g., channel purchasing and/or subscribing to a service package), the headend analyzes the security features in the profiles and determines a security score for the first device and a security score for the second device. If the headend determines that at least one of the first or the second device is secure, e.g., the security score for the first device and/or the security score for the second device is above a threshold, the headend assigns and transmits the first subset of service keys and/or the second subset of service keys. On the other hand, if the first device and/or the second device do not have the capacity to provide adequate protection of the media content, e.g., when the subscriber requests high value media content, such as 4K content, the headend may determine that the subscriber cannot watch the 4K content using the respective weak security receiving device and therefore will not send the corresponding service keys (possibly the respective device key) to the respective receiving device. In some embodiments, as represented by block570, the method500further includes detecting an update to the security profile, including at least one update to the profile of the first device, the profile of the second device, or the user profile, and adjusting a number of service keys assigned to at least one of the first subset or the second subset of service keys based on the update. In other words, a subscriber may switch to a different smartphone or a different STB for viewing subscribed media content. In some embodiments, the headend can detect the changes, e.g., receiving an update to the pairing information or receiving a report from the smartphone that the STB has been compromised. In response to detecting the update, the headend dynamically adjusts the service key assignments accordingly to utilize the more secure receiving device for content protection. For instance, as shown inFIG.4D, once the pairing changes, based on the updated security profile for the subscriber405, the headend410determines that the STB/TV 2430-2is as secure as the smartphone420, which is more secure than the STB/TV 1430-1. Accordingly, the headend changes the service key distribution ratio from 2:1 to 1:1 between the two receiving devices. In another example, as shown inFIG.4C, once the STB/TV430is compromised, the security ranking of the STB/TV430decreases.
As a result, the headend410sends 100% of the service keys to the uncompromised receiving device. In some embodiments, as represented by block580, the method500further includes regulating user access to the channel during a next entitlement period. In some embodiments, regulating user access to the channel during the next entitlement period includes: (a) determining whether or not a user is entitled to the channel during the next entitlement period based on the user profile; and (b) in accordance with a determination that the user is entitled to the channel, determining a third security ranking of the first device and a fourth security ranking of the second device based on the security profile, and assigning a third subset of service keys to be encrypted with the first device key and a fourth subset of service keys to be encrypted with the second device key based on the third and the fourth security rankings, and transmitting the third subset of service keys to the first device and the fourth subset of service keys to the second device. As such, when it is time to renew the entitlements (for the next epoch), the headend performs the steps in block530for every subscriber. In some embodiments, as represented by block590, the method500further includes: (a) encrypting at least one control word with at least one of the first subset of service keys or the second subset of service keys; (b) encrypting media content associated with the channel with the at least one control word; and (c) transmitting the encrypted media content and the at least one control word to the first device or the second device. For example, as described above with reference toFIG.2B, one set of service keys SKPHONEis distributed to the smartphone270and the other set SKSTBis distributed to the STB/TV280. The controller240uses the service keys to encrypt the control words, e.g., calculating ECWi=F(CWi, SKC,Ej). Further, for every media content packet, the controller240chooses a control word, e.g., choosing CWj, and encrypts the media content packet with CWj. As such, the media content is transmitted in encrypted form and the smartphone270is a building block in the smartphone-based CA system. FIG.6is a flowchart representation of a method600for secure content delivery in a CA system, in accordance with some embodiments. In some embodiments, the method600is performed at a headend (e.g., the headend210inFIGS.2A and2Band/or the headend410inFIGS.4A-4D), which includes a controller (e.g., the controller240,FIGS.2A and2B), at least one non-transitory storage for storing security profiles (e.g., the storage222inFIGS.2A and2B), a device key generator (e.g., the device key generator230,FIG.2A), and a transmitter (e.g., the transmitter250,FIGS.2A and2B). Briefly, the method600includes scrambling media content associated with a channel during an entitlement period, including encrypting the media content using at least one control word to generate encrypted media content, and selectively encrypting the at least one control word with a service key from a first subset of service keys assigned to a first device or a second subset of service keys assigned to a second device paired with the first device in order to generate at least one encrypted control word; and transmitting the at least one encrypted control word along with the encrypted media content to at least one of the first device or the second device.
To that end, as represented by block 610, the method 600 begins with the controller scrambling media content associated with a channel during an entitlement period. As represented by block 612, in some embodiments, the scrambling includes encrypting the media content using at least one control word to generate encrypted media content. Further, as represented by block 614, in some embodiments, the scrambling includes selectively encrypting the at least one control word with a service key from a first subset of service keys assigned to a first device or a second subset of service keys assigned to a second device paired with the first device in order to generate at least one encrypted control word. The method 600 continues, as represented by block 620, with the controller instructing the transmitter to transmit the at least one encrypted control word along with the encrypted media content to at least one of the first device or the second device. For example, as explained above with reference to FIG. 2B, when encrypting media content associated with a channel C, for an epoch E, the controller 240 encrypts a control word CW_i and calculates the encrypted control word ECW_i as a function of CW_i and SK_{C,E,i}, e.g., ECW_i = F(CW_i, SK_{C,E,j}). Further, for every media content packet, the controller 240 chooses a control word, e.g., choosing CW_j, and encrypts the media content packet with CW_j before instructing the transmitter 250 to broadcast the encrypted media content packet along with the encrypted control word. In some embodiments, as represented by block 630, the method 600 further includes receiving, from the first device or the second device, a user profile, a profile of the first device including an identifier of the first device, a profile of the second device including an identifier of the second device, and data exchanged during pairing of the first device and the second device; and establishing a security profile based on the profile of the first device, the profile of the second device, and the user profile for storage. In some embodiments, as represented by block 640, the method 600 further includes generating and delivering a first device key to the first device and a second device key to the second device; encrypting the first subset of service keys with the first device key and encrypting the second subset of service keys with the second device key; and transmitting the encrypted first subset of service keys to the first device and the encrypted second subset of service keys to the second device. In some embodiments, as represented by block 650, the method 600 further includes adjusting a number of service keys assigned to at least one of the first or the second subsets of service keys based on an update to at least one of a profile of the first device, a profile of the second device, or a user profile. In some embodiments, as represented by block 660, the method 600 further includes detecting compromise of the second device; and ceasing transmitting the at least one encrypted control word and the encrypted media content to the second device. For instance, in FIG. 4C, once the headend 410 obtains an indication that the STB/TV 430 is compromised, the headend 410 ceases transmitting the keys (e.g., the service keys, the control words, and/or the device key) and/or the media content to the STB/TV 430. In some embodiments, as represented by block 670, the first subset of service keys is assigned to the first device and the second subset of service keys is assigned to the second device based on a security profile.
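Before turning to how the rankings themselves are determined (detailed in the next paragraph), a minimal sketch of a profile-driven key split may help. Everything in this Python fragment is assumed for illustration only: the DeviceProfile fields, the toy ranking heuristic, and the proportional split rule are not taken from the disclosure.

```python
# Illustrative sketch: one way a headend *might* split an epoch's service keys
# between two paired receiving devices according to security rankings derived
# from a security profile. The ranking heuristic and the 2:1 / 1:1 / 1:0 split
# rules are assumptions, not the patent's algorithm.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DeviceProfile:
    name: str
    has_secure_chip: bool = False
    compromised: bool = False


def security_rank(profile: DeviceProfile) -> int:
    """Toy ranking: compromised devices rank 0, secure-chip devices rank 2."""
    if profile.compromised:
        return 0
    return 2 if profile.has_secure_chip else 1


def assign_service_keys(service_keys: List[bytes],
                        first: DeviceProfile,
                        second: DeviceProfile) -> Tuple[List[bytes], List[bytes]]:
    """Split the epoch's service keys between the two devices by rank."""
    r1, r2 = security_rank(first), security_rank(second)
    if r1 == 0:
        return [], list(service_keys)      # send 100% to the uncompromised device
    if r2 == 0:
        return list(service_keys), []
    # Proportional split, e.g. 2:1 when one device outranks the other, 1:1 when equal.
    cut = round(len(service_keys) * r1 / (r1 + r2))
    return service_keys[:cut], service_keys[cut:]


phone = DeviceProfile("smartphone", has_secure_chip=True)
stb = DeviceProfile("STB/TV", has_secure_chip=False)
keys = [bytes([i]) * 16 for i in range(6)]
phone_keys, stb_keys = assign_service_keys(keys, phone, stb)
print(len(phone_keys), len(stb_keys))      # 4 and 2 with this toy ranking
```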
In some embodiments, the service key assignment is performed by determining a first security ranking of the first device and a second security ranking of the second device based on the security profile; and assigning the first subset of service keys to be encrypted with a first device key associated with the first device and assigning the second subset of service keys to be encrypted with a second device key associated with the second device based on the first security ranking and the second security ranking. In such embodiments, as represented by block 672, the method 600 further includes scrambling the media content associated with the channel during a next entitlement period. In some embodiments, the content scrambling of the channel during the next entitlement period includes determining whether or not a user is entitled to the channel based on the security profile; and in accordance with a determination that the user is entitled to the channel, encrypting the media content using one or more control words to generate the encrypted media content, and selectively encrypting the one or more control words with another service key from the first subset of service keys or the second subset of service keys, wherein a number of service keys assigned to the first subset of service keys and the second subset of service keys is adjusted based on updates to the security profile. While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure, one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein. It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first device could be termed a second device, and, similarly, a second device could be termed a first device, without changing the meaning of the description, so long as all occurrences of the "first device" are renamed consistently and all occurrences of the "second device" are renamed consistently. The first device and the second device are both devices, but they are not the same device. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the embodiments and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting”, that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
11943503
DETAILED DESCRIPTION Mechanisms for generating a media quality score associated with the presentation of a content item are provided. Generally speaking, the mechanisms described herein can determine whether a video content item, such as a video ad impression, was viewed as intended by the creator or provider of the video content item. Moreover, the mechanisms can generate a media quality score that combines multiple score components on a frame-by-frame basis and then aggregates the frame scores into the media quality score for the presented video content item. In some embodiments, the mechanisms can include causing monitoring code to load in connection with a video content item on a page being presented by a web browsing application. It should be noted that the video content item can be presented in any suitable manner, such as using a web browsing application, using a mobile application, using a mobile web application, etc. In some embodiments, the mechanisms can include receiving, from the monitoring code, a viewability measurement stream, wherein the viewability measurement stream includes a viewability score for each frame of the video content item based on a percentage of a frame of the video content item that is in view. For example, the mechanisms can present a user interface for providing multiple criteria for determining a media quality score. In a more particular example, the multiple criteria can indicate that the media quality score is based on a viewability measurement stream, a screen size measurement stream, and an audio level measurement stream. In response to receiving the multiple criteria, browser code or any other suitable monitoring code can be generated, where the browser code is configured to, when transmitted to and loaded by a web browsing application, monitor the measurement streams corresponding to the multiple criteria associated with a video content item being presented using the web browsing application. In continuing this example, the browser code can be transmitted to the web browsing application and a subset of measurement streams corresponding to the presentation of the video content item can be received from the web browsing application. In another suitable example, the browser code can be configured to collect and transmit multiple measurement streams, where measurements from a subset of the measurement streams that are selected based on the multiple criteria can be used. In yet another suitable example, the browser code can be updated to collect a measurement stream that was not previously collected based on the multiple criteria. For example, in response to receiving a request to determine a media quality score based on audio level and in response to determining that audio level measurements have not been received, the browser code can be modified to collect audio level measurements associated with the presentation of a video content item using a web browsing application. Additionally or alternatively, the browser code can be modified to collect audio level measurement associated with the presentation of a video content item and comparing the audio level measurement with an intended audio level. It should be noted that any suitable measurement stream can be received from the monitoring or browser code. 
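As a rough illustration of the criteria-driven collection just described, the following Python sketch shows a backend-side helper that decides which measurement collectors the monitoring code should run. The criteria names and the notion of shipping a collector list to the page are assumptions made here for illustration, not details of the disclosed browser code.

```python
# Hedged sketch: given the criteria a user selected for the media quality
# score, decide which per-frame collectors the monitoring (browser) code
# should run, and which ones must be newly added to the browser code.
ALL_COLLECTORS = {
    "viewability": "collect percent-in-view per frame",
    "screen_size": "collect window vs. screen diagonal per frame",
    "audio_level": "collect audio level vs. maximum level per frame",
}


def collectors_for_criteria(criteria, currently_enabled):
    """Return (to_enable, to_add) so the browser code can be updated when a
    requested stream (e.g., audio level) was not previously collected."""
    requested = {c for c in criteria if c in ALL_COLLECTORS}
    to_add = sorted(requested - set(currently_enabled))
    return sorted(requested), to_add


enabled, newly_added = collectors_for_criteria(
    ["viewability", "screen_size", "audio_level"],
    currently_enabled=["viewability", "screen_size"],
)
print(enabled, newly_added)    # audio_level must be added to the browser code
```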
In some embodiments, the mechanisms can include receiving, from the monitoring code, a screen diagonal measurement stream, wherein the screen diagonal measurement stream includes a screen diagonal score for each frame of the video content item that compares a window diagonal of a video window presenting the frame of the video content item with an available screen diagonal. In some embodiments, the mechanisms can include receiving, from the monitoring code, an audio level measurement stream during the presentation of the video content item, wherein the audio level measurement stream includes an audio level score for each frame of the video content item that compares an audio level of an audio portion of the frame of the video content item with a maximum available audio level. It should be noted that the media quality score can include any suitable metric component. For example, other metric components can include brand safety metrics (e.g., content risk scores, environment layout scores, etc.), device type (e.g., physical device type, operating system type, browser type, etc.), device settings, ad skipability, fraud scores (e.g., how likely the view is being conducted by a bot or non-human entity), user-dependent scores (e.g., behavior metrics for user viewing impression), scrolling scores (e.g., a user scrolling during portions of the playback, reducing viewability score in response to determining that scrolling has occurred during the duration of the playback of the video ad), autoplaying scores (e.g., a score that reflects whether the video ad is set to autoplay, a score that reflects a default volume for the video ad, etc.), auto-refreshing scores, in-stream score (e.g., whether the video ad appears in stream with other content, a score that reflects the time fraction of the video ad relative to the total content), a creative-dependent metric (e.g., front loaded branding, a score proportional to human perception or attention, etc.), a player orientation score (e.g., landscape versus portrait), a load time score, a presence of objections obstructing or overlaying the video ad, clutter metrics (e.g., a score that reflects how many other advertisements or other pieces of content that are on the page), surrounding content metrics (e.g., a score that reflects the context of the ads, such as other videos on the page or other part of the page taking up resources; a score that reflects whether the video ad is integrated into the content of the page or ancillary to it; etc.), purchasing quality metrics (e.g., a score that reflects how the video ad was purchased), any suitable combination thereof, and/or any other suitable metric component. 
In a more particular example, the measurement stream can include: a viewability measurement stream, where the viewability measurement stream includes a viewability score for each frame of the video content item based on a percentage of a frame of the video content item that is in view; a duration measurement stream, where the duration measurement stream includes a duration score of the presentation of the video content item compared with one or more intended durations; a screen diagonal measurement stream, where the screen diagonal measurement stream includes a screen diagonal score for each frame of the video content item that compares a window diagonal of a video window presenting the frame of the video content item with an available screen diagonal; an audio level measurement stream, where the audio level measurement stream includes an audio level score for each frame of the video content item that compares an audio level of an audio portion of the frame of the video content item with a maximum available audio level; a contextual classification measurement stream, where the contextual classification measurement stream includes a contextual classification score for each frame of the video content item; a brand safety measurement stream, where the brand safety measurement stream includes a brand safety measurement score for each frame of the video content item; and/or a content adjacency stream, where the content adjacency stream includes a content adjacency score for each frame of the video content item based on other content items presented along with the frame of the video content item. It should be noted that measurement streams that correspond to, for example, brand safety, contextual classifications, and content adjacencies can be defined as dynamically changing over the presentation of the video content item. In some embodiments, the mechanisms can extract measurements from the multiple measurement streams and associate each measurement with a particular time position of the presentation of the video content item, where groups of measurements are associated with each time position. For example, each frame of a video content item can be associated with multiple measurements that are extracted from the multiple measurements streams. In some embodiments, the mechanisms can determine that a measurement stream provides measurements that are relatively constant over the presentation of the video content item. In response, the mechanisms can then determine that the measurement stream should no longer be collected by the monitoring or browser code. For example, the browser code can be modified to receive a single measurement associated with the measurement stream, where the measurement stream can be inhibited or otherwise prevent from being collected. Alternatively, the browser code can be modified to reduce a sampling rate associated with the measurement stream. This can, for example, reduce the amount of memory and network resources needed to obtain measurements associated with the presentation of a video content item. In response, the mechanisms can determine an interaction score between each measurement in a group of measurements at each particular time position of the measurement streams to obtain multiple interaction scores for the plurality of measurement streams at each time position. 
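A small sketch may clarify the grouping step. Assuming, for illustration only, that each measurement stream arrives as sparse (timestamp, value) update events that hold their value until the next update, the following Python fragment samples every stream at each frame time position and groups the sampled measurements per time position. The stream names and frame grid are invented for the example.

```python
# Sketch under assumptions: measurement streams are step functions delivered as
# sparse update events. Sample each stream at every frame time, holding the
# last reported value between updates, and group the samples per time position.
import bisect


def sample_stream(events, frame_times):
    """events: sorted list of (t, value); returns one value per frame time."""
    times = [t for t, _ in events]
    samples = []
    for ft in frame_times:
        idx = bisect.bisect_right(times, ft) - 1   # last update at or before ft
        samples.append(events[idx][1] if idx >= 0 else 0.0)
    return samples


def group_by_time_position(streams, frame_times):
    """streams: dict name -> event list; returns one measurement group per frame."""
    sampled = {name: sample_stream(ev, frame_times) for name, ev in streams.items()}
    return [{name: sampled[name][i] for name in streams} for i in range(len(frame_times))]


streams = {
    "viewability": [(0.0, 1.0), (2.0, 0.5)],
    "audio_level": [(0.0, 0.8)],
    "screen_diag": [(0.0, 0.6), (1.0, 1.0)],
}
frames = [0.0, 1.0, 2.0, 3.0]
for group in group_by_time_position(streams, frames):
    print(group)
```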
In some embodiments, in the instance where there are three metric components of viewability, screen size, and audio level, the mechanisms can determine an interaction between the viewability measurement stream, the screen diagonal measurement stream, and the audio level measurement stream by combining the viewability score, the screen diagonal score, and the audio level score at each frame to generate a plurality of frame scores for the plurality of frames in the video content item. In some embodiments, the mechanisms can determine that a measurement has not been extracted from a measurement stream for a particular time position of the presentation of the video content item. For example, the mechanisms can determine that multiple measurements have been extracted and associated with a particular time position with the exception of one measurement (e.g., an audio level measurement). In response, the mechanisms can assign a last known measurement to the missing measurement using a last observation carried forward approach. Alternatively, the mechanisms can determine the measurement from a time window of the video content item (e.g., an average of the measurements in the first minute of the video content item, an average of the measurement in the last ten seconds of the video content item, etc.). In some embodiments, the mechanisms can combine the plurality of frame scores to generate an overall quality score for the presentation of the video content item. In some embodiments, the mechanisms can transmit a notification of the overall quality score to an entity associated with the video content item. It should be noted that video ads pose the challenge in that ad is evolving as a viewer watches or does not watch the ad. For example, video ads can have a height and a width, but this is less static as player size is frequently variable. In another example, a user may interact with the ad similar to display ads by scrolling, or the ad may appear before other video content or mid-stream in other video content. In yet another example, a player is also frequently only a component of a page and when evaluating the context of a page, one not only needs to consider the content surrounding the player, but the other video content before and/or after the ad. Moreover, a video ad can have an intended duration, which may or may not be reached. It should be noted that, in some embodiments, intended audio duration and intended video duration can be components of the media quality score. It should also be noted that, in some embodiments, intended audio duration of a video ad impression can be set as being generally equivalent to intended video duration of the video ad impression. In each of these cases, the quality of the impression can vary as the video ad is being played back, and, moreover, individual, independently measured aspects of the playback can interact continuously in time rather than remaining static for the duration. For at least this reason, the mechanisms described herein can consider the time series of measurements in each category and combine them at each point in time to yield a quality function that varies with time. In another way, this quality function can be used to evaluate the quality of the video ad impression frame by frame. The quality function can then be aggregated to a single number. In some embodiments, assuming that the media quality score incorporates at least three metrics, where each metric varies with time. 
As used herein, these three metric components of the media quality score can be denoted as A(t), B(t), and C(t). Each metric component can be normalized so that A(t), B(t), C(t) \in [0, 1]. Furthermore, the media quality score can be defined by the following quality function: Q(t) = A(t) \times B(t) \times C(t). In order to aggregate Q(t), the quality function can be integrated with respect to time: Q = \int_0^{T_f} Q(t)\,dt, where T_f can denote the final time for which Q(t) is defined. Should the metrics be aggregated in time first and then combined to yield one score, an alternative metric can be: \tilde{Q} = \left(\int_0^{T_f} A(t)\,dt\right) \times \left(\int_0^{T_f} B(t)\,dt\right) \times \left(\int_0^{T_f} C(t)\,dt\right). As such, it can be seen that \tilde{Q} \neq Q. For at least this reason, the quality function can be created by first combining the metrics frame by frame and then using an aggregation function. An illustrative example of such a media quality score is: S = \frac{1}{T_D D_S A_M} \int_0^{T_f} P(t)\,D(t)\,A(t)\,dt, where T_D can denote the intended duration, D_S can denote the screen diagonal, A_M can denote the maximum audio level, P(t) is the percent in view at time t, D(t) is the diagonal of the ad at time t, and A(t) is the audio level at time t. It should be noted that the media quality score, S, can be rewritten as: S = \frac{1}{T_D} \int_0^{T_f} P(t)\,\frac{D(t)}{D_S}\,\frac{A(t)}{A_M}\,dt, or S = \frac{1}{T_D} \int_0^{T_f} P(t)\,\tilde{D}(t)\,\tilde{A}(t)\,dt, where \tilde{D}(t) = D(t)/D_S and \tilde{A}(t) = A(t)/A_M. It should be noted that, as shown in the representation above, each component can range between 0 and 1. Accordingly, in order to yield a high media quality score, each component needs to be near 1 during the same time period. Moreover, this media quality score is an improvement over approaches that combine measurements first and marginalize second (aggregating over a variable). For example, consider an instance where a video ad impression is audible and not in view for the first half, and in view but not audible for the second half. The above-mentioned approach can yield a score of zero for this video ad impression because for no portion of the video ad impression was the video ad experienced as it was intended (e.g., both seen and heard). The conventional approach, instead, would yield some score above zero, because it considers these components independently and not interactively. In some embodiments, weights can be applied to particular metric components. As such, multiple frame-by-frame media quality scores can be generated using different schemes. For example, should an audio metric not be as influential as a viewability metric (e.g., percent in view), the mechanisms can modify the audio metric with weights, such as: \tilde{A}(t) = 0.5 + 0.5\,\frac{A(t)}{A_M}. In another example, should the metric components not be combined multiplicatively, the mechanisms can modify the frame-by-frame media quality score to be of the form: Q(t) = P(t) + \tilde{D}(t) + \tilde{A}(t). This quality function can, for example, create a combined time-based quality function first and then aggregate over time. It should be noted that the frame-by-frame metric can include any suitable number of components or take other functional forms. Nevertheless, it should be noted that the quality function can determine media quality at any point during the impression and can be aggregated to obtain a single value over the entirety of the impression. It should be noted that any suitable approach can be used to create a combined time-quality function. FIG. 1 shows an illustrative example in which each component of the media quality function is measured by a series of discrete events in accordance with some embodiments of the disclosed subject matter.
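The quality function above can be illustrated with a short numerical sketch. The following Python fragment assumes the component streams have already been sampled per frame (consistent with the discrete update events of FIG. 1); the frame rate, durations, and values are made up, and the code is only a worked example of combining first and aggregating second, not a reference implementation.

```python
# Worked sketch of the quality function: combine the normalized components
# frame by frame, Q(t) = P(t) * (D(t)/D_S) * (A(t)/A_M), then aggregate by
# integrating over time and normalizing by the intended duration T_D.
def media_quality_score(p, d, a, dt, intended_duration, screen_diag, max_audio):
    """p, d, a: per-frame percent-in-view, ad diagonal, audio level (same length);
    dt: frame duration in seconds."""
    q_frames = [
        p_t * (d_t / screen_diag) * (a_t / max_audio)   # Q(t) for one frame
        for p_t, d_t, a_t in zip(p, d, a)
    ]
    return sum(q * dt for q in q_frames) / intended_duration   # (1/T_D) * integral of Q(t)


# 10 s of playback sampled at 1 s: fully in view, full screen, full volume.
score_15 = media_quality_score(
    p=[1.0] * 10, d=[13.3] * 10, a=[1.0] * 10,
    dt=1.0, intended_duration=15.0, screen_diag=13.3, max_audio=1.0,
)
score_30 = media_quality_score(
    p=[1.0] * 10, d=[13.3] * 10, a=[1.0] * 10,
    dt=1.0, intended_duration=30.0, screen_diag=13.3, max_audio=1.0,
)
print(round(score_15, 2), round(score_30, 2))   # ~0.67 vs ~0.33 for the same playback
```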
It should be noted that the state of each measured quantity may not change between update events. As shown in FIG. 1, a measurement stream of a component of the media quality function is provided, where the jumps in the measurement stream correspond to a change at an update event. FIG. 2 shows an illustrative example in which measurement streams are combined (e.g., multiplicatively) to generate a media quality score in accordance with some embodiments of the disclosed subject matter. It should be noted that, in order to obtain the media quality score as a single value, the combined measurement stream can be integrated over time (and, in some embodiments, normalized by the intended duration or total time of the video ad impression). It should also be noted that this is generally equivalent to finding the (normalized) area under the quality curve. It should further be noted that normalization can remove the effect of varying intended durations. For example, a video ad that is viewable, audible, and presented in a full screen mode for 10 seconds can have a different quality if the video ad creative is supposed to run for an intended duration of 15 seconds (a 0.66 score) or for an intended duration of 30 seconds (a 0.33 score). FIG. 3 shows an illustrative flow diagram for a scoring methodology in which each measurement stream is collected independently, averaged over time, and combined after removing the time component from each. As shown, this can, for example, ignore how elements of a video ad impression interact and does not measure whether the video ad creative was viewed as it was intended by the creator or provider of the video ad. In accordance with some embodiments of the disclosed subject matter, FIG. 4 shows an illustrative flow diagram for the media quality scoring mechanisms described herein in which a plurality of measurement streams that individually evaluate the state of a component of a video ad impression is received, the plurality of measurement streams are combined to generate a time-varying quality function on a frame-by-frame basis that varies during the duration of the video ad, and the frame scores from the time-varying quality function are aggregated to obtain a single media quality score. These mechanisms can be used in any suitable application. For example, in addition to obtaining a single media quality score for a video ad impression, the mechanisms can aggregate the media quality scores over different sets of impressions and transmit the aggregated score to a suitable entity. In a more particular example, the aggregated score can be an average score for a given host and day for a given ad campaign. In another example, in addition to obtaining a single media quality score for a video ad impression, the mechanisms can be used in connection with determining optimal exposure frequency and/or optimal exposure time. These and other features for inhibiting the transmission of media content based on frequency and exposure measurements are further described in commonly owned, commonly assigned U.S. Provisional Patent Application No. 62/502,436, which was filed on May 5, 2017. In yet another example, in addition to obtaining a single media quality score for a video ad impression, the mechanisms can be used to determine a bid price for ad inventory. In a further example, in addition to obtaining a single media quality score for a video ad impression, the mechanisms can be used to determine desired ad placements based on media quality scores.
In another further example, multiple media quality scores based on the presentation of video content items can be generated. In continuing this example, outcome information associated with each of the video content items can be received, which can include lift information, conversion information, sales information, etc. Each of the plurality of media quality scores can be associated with a predicted outcome. This can, for example, allow a content provider to optimize for an outcome over the provision of a content item or campaign of content items by optimizing the media quality score for the given campaign. In addition, this can allow a content provider to target inventory having a higher predicted media quality score. In a more particular example, the outcome information can correspond to relevant advertising outcomes. It should be noted that such outcome information can be measured digitally. It should also be noted that, in some embodiments, such outcome information can be measured concurrently with the presentation of the video content item (e.g., an ad impression). For example, a relevant advertising outcome can include information indicating that a viewer has selected an advertisement impression and directed to a homepage using a browsing application. In another example, a relevant advertising outcome can include information indicating that a viewer has visited a landing page related to a previously viewed content item (e.g., an ad impression). It should further be noted that, in some embodiments, outcome information can correspond to offline events, such as an in-store transaction or brand recall that is measured using a survey. In some embodiments, the mechanisms can associate outcomes to users with groups of impressions or to individual ad impressions. FIG.5shows an example500of a generalized schematic diagram of a system on which the mechanisms for determining a media quality score associated with the presentation of a video content item as described herein can be implemented in accordance with some embodiments of the disclosed subject matter. As illustrated, system500can include one or more user devices510. User devices510can be local to each other or remote from each other. User devices510can be connected by one or more communications links508to a communication network506that can be linked to a server502via a communications link504. System500can include one or more servers502. Server502can be any suitable server or servers for providing access to the mechanisms described herein for determining a media quality score associated with the presentation of a video content item, such as a processor, a computer, a data processing device, or any suitable combination of such devices. For example, the mechanisms for determining a media quality score associated with the presentation of a video content item can be distributed into multiple backend components and multiple frontend components and/or user interfaces. In a more particular example, backend components, such as mechanisms for extracting measurements from measurement streams, associating measurements with time positions, determining an interaction score between each measurement in groups of measurements, combining the interaction scores to generate a media quality score, determining whether the video content item was viewed as intended, transmitting notifications regarding the media quality score, etc., can be performed on one or more servers502. 
In another more particular example, frontend components, such as presentation of a user interface for receiving criteria for obtaining measurement streams, etc., can be performed on one or more user devices510. In some embodiments, each of user devices510, and server502can be any of a general purpose device such as a computer or a special purpose device such as a client, a server, etc. Any of these general or special purpose devices can include any suitable components such as a hardware processor (which can be a microprocessor, digital signal processor, a controller, etc.), memory, communication interfaces, display controllers, input devices, etc. For example, user device510can be implemented as a personal computer, a laptop computer, a smartphone, a tablet computer, a mobile telephone, a wearable computer, any other suitable computing device, or any suitable combination thereof. Communications network506can be any suitable computer network or combination of such networks including the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a Wi-Fi network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), an intranet, etc. Each of communications links504and508can be any communications links suitable for communicating data among user devices510and server502, such as network links, dial-up links, wireless links, hard-wired links, any other suitable communications links, or any suitable combination of such links. Note that, in some embodiments, multiple servers502can be used to provide access to different mechanisms associated with the mechanisms described herein for determining a media quality score associated with the presentation of a video content item. FIG.6shows an example600of hardware that can be used to implement one or more of user devices510, and servers502depicted inFIG.5in accordance with some embodiments of the disclosed subject matter. Referring toFIG.6, user device510can include a hardware processor612, a display614, an input device616, and memory618, which can be interconnected. In some embodiments, memory618can include a storage device (such as a non-transitory computer-readable medium) for storing a computer program for controlling hardware processor612. Hardware processor612can use the computer program to execute the mechanisms described herein for determining a media quality score and/or for performing any other suitable task associated with the mechanisms described herein. For example, some of the above-mentioned features can be performed by hardware processor612(e.g., extracting measurements from the measurement streams, modifying the collection of measurements relating to the presentation of a video content item, etc.), while other features can be performed by a hardware processor executing on a server device (e.g., determining an interaction between the measurements at particular time positions, determining the media quality score, etc.). In some embodiments, hardware processor612can send and receive data through communications link508or any other communication links using, for example, a transmitter, a receiver, a transmitter/receiver, a transceiver, or any other suitable communication device. Display614can include a touchscreen, a flat panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices. 
Input device 616 can be a computer keyboard, a computer mouse, a touchpad, a voice recognition circuit, a touchscreen, and/or any other suitable input device. Server 502 can include a hardware processor 622, a display 624, an input device 626, and memory 628, which can be interconnected. In some embodiments, memory 628 can include a storage device (such as a non-transitory computer-readable medium) for storing data received through communications link 1404 or through other links. The storage device can further include a server program for controlling hardware processor 622. In some embodiments, memory 628 can include information stored as a result of user activity (e.g., user instructions to specify one or more advertising management techniques for particular advertising placements, etc.), and hardware processor 622 can receive information about advertising placements from user devices 510. In some embodiments, the server program can cause hardware processor 622 to, for example, receive a plurality of measurement streams associated with a presentation of a video content item comprising a plurality of frames, extract a plurality of measurements from the plurality of measurement streams, associate each measurement of the plurality of extracted measurements with a particular time position of the presentation of the video content item, wherein groups of measurements are associated with each time position of the presentation of the video content item, determine an interaction score between each measurement in a group of measurements at each particular time position of the plurality of measurement streams to obtain a plurality of interaction scores for the plurality of measurement streams at each time position, combine the plurality of interaction scores to generate a media quality score for the presentation of the video content item, determine whether the video content item was presented as intended by a content provider, transmit a notification that includes the media quality score and that includes an indication of the determination of whether the video content item was presented as intended by the content provider, and/or perform any other suitable task associated with the mechanisms described herein. Hardware processor 622 can use the server program to communicate with user devices 510 as well as provide access to and/or copies of the mechanisms described herein. It should also be noted that data received through communications link 504 or any other communications links can be received from any suitable source. In some embodiments, hardware processor 622 can send and receive data through communications link 504 or any other communications links using, for example, a transmitter, a receiver, a transmitter/receiver, a transceiver, or any other suitable communication device. In some embodiments, hardware processor 622 can receive commands and/or values transmitted by one or more user devices 510 and/or one or more users of server 502. Display 624 can include a touchscreen, a flat panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices. Input device 626 can be a computer keyboard, a computer mouse, a touchpad, a voice recognition circuit, a touchscreen, and/or any other suitable input device. In some embodiments, server 502 can be implemented in one server or can be distributed as any suitable number of servers.
For example, multiple servers502can be implemented in various locations to increase reliability and/or increase the speed at which the server can communicate with user devices510. Additionally or alternatively, as described above in connection withFIG.6, multiple servers502can be implemented to perform different tasks associated with the mechanisms described herein. It should be understood that the mechanisms described herein can, in some embodiments, include server-side software, server-side hardware, client-side software, client-side hardware, or any suitable combination thereof. For example, the mechanisms described herein can encompass a computer program written in a programming language recognizable by server502and/or by user device510(e.g., a program written in a programming language, such as, Java, C, Objective-C, C++, C#, JavaScript, Visual Basic, or any other suitable approaches). As another example, the mechanisms described herein can encompass one or more Web pages or Web page portions (e.g., via any suitable encoding, such as Hyper Text Markup Language (“HTML”), Dynamic Hyper Text Markup Language (“DHTML”), Extensible Markup Language (“XML”), JavaServer Pages (“JSP”), Active Server Pages (“ASP”), Cold Fusion, or any other suitable approaches). It should be noted that any suitable hardware and/or software can be used to perform the mechanisms described herein. For example, a general purpose device such as a computer or a special purpose device such as a client, a server, etc. can be used to execute software for performing the mechanisms described herein. Any of these general or special purpose devices can include any suitable components such as a hardware processor (which can be a microprocessor, digital signal processor, a controller, etc.), memory, communication interfaces, display controllers, input devices, etc. This hardware and/or software can be implemented as part of other equipment or can be implemented as stand-alone equipment (which can be coupled to other equipment). In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as non-transitory forms of magnetic media (such as hard disks, floppy disks, and/or any other suitable magnetic media), non-transitory forms of optical media (such as compact discs, digital video discs, Blu-ray discs, and/or any other suitable optical media), non-transitory forms of semiconductor media (such as flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and/or any other suitable semiconductor media), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media. Accordingly, methods, systems, and media for generating a media quality score associated with the presentation of a content item are provided. 
Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention. Features of the disclosed embodiments can be combined and rearranged in various ways.
11943504
DETAILED DESCRIPTION OF THE INVENTION Instead of relying on monolithic media automation systems to provide all or most of the backend services that broadcast systems need for automated broadcast of media content, various embodiments of media management and/or broadcast systems disclosed herein use multiple microservice applications accessed through a common application programming interface (API). The microservice applications provide limited functionality individually, but in combination provide at least as much functionality as that provided by conventional monolithic media automation systems. The microservices operate independently of one another, and communicate amongst themselves and with one or more broadcast systems in real time via an application programming interface. The independent operation of the microservice applications allows media management operations for multiple broadcast systems to be more easily distributed, while the common, or shared, API allows coordination of the independent microservice applications and facilitates ease of access by multiple broadcast stations. The near-real-time nature of the data provided by the individual microservices improves the effectiveness of automated media delivery techniques, including programmatic advertisement insertion techniques, especially when media content scheduled for delivery requires modification. Additionally, various embodiments of media management and/or broadcast systems employing one or more of the microservice configurations and/or associated API(s) disclosed herein provide improved disaster recovery capabilities in comparison to conventional, monolithic media automation systems. Microservices can be implemented using a Command and Query Responsibility Segregation (CQRS) architecture, an Event Driven architecture, sometimes referred to as an Event Sourcing architecture, or a combination of the two. The microservices themselves can be stateful, i.e., the microservices store session information associated with service requests, or stateless, i.e., the microservices do not maintain session states between requests. In some embodiments, one or more of the microservices may be implemented as a persistence microservice, which can be used to provide durable storage, or as an aggregator microservice, which is used for workflows. In various embodiments, an application program interface (API), implemented by a processing device including a processor and associated memory, is used to facilitate asynchronous communication between one or more microservices and one or more broadcast systems. The API receives broadcast-related information from multiple media automation applications. Receiving the broadcast-related information can include, but is not limited to, the following: receiving a traffic log from a first media automation application, and receiving spot information from at least a second media automation application. The first media automation application is configured to generate a traffic log associated with a media station, and the spot information is associated with spots included in the traffic log. The API stores the broadcast-related information received from the microservices in one or more databases, and transmits the broadcast-related information to a media playout system in response to a request from that media playout system. In various embodiments, the API receives, from the media playout system, an affidavit including as-played information, which indicates a playout status of the media content.
The API then transmits the affidavit to the at least a second media automation application. In some implementations, the API receives, from the media playout system, a request for the traffic log, and in response to the request, transmits the traffic log from the API to the media playout system. The API can also receive, from the first media automation application, asset identifiers for media items associated with the traffic log, store the asset identifiers, and transmit the asset identifiers from storage to the media playout system. In various implementations, the API receives, from the at least a second media automation application, media address information indicating a location from which to obtain media items associated with the at least a second media automation application. The API obtains the media items using the media address information, stores the media items, and transmits an affidavit including the media address information from the API to a third media automation application. The API may transmit, to the at least a second media automation application, a first affidavit indicating a time at which the media playout system begins playout of a spot, and a second affidavit indicating the time at which the media playout system began playout of the spot, a time at which the media playout system finished playout of the spot, and a uniform resource locator (URL) associated with the spot. In other implementations, the API transmits one or more requests for broadcast information to one or more traffic and continuity applications, wherein the broadcast information includes information related to media broadcasts of a plurality of media stations. The API receives the broadcast information, and stores it to generate cached broadcast information. In response to receiving, from a media playout system, a request for at least a portion of the cached broadcast information, the API transmits, to the media playout system, the at least a portion of the cached broadcast information. Additionally, the API can receive updated broadcast information from the one or more traffic and continuity applications, store the updated broadcast information to generate updated cached broadcast information, and transmit the updated cached broadcast information to the media playout system. The API may also receive a request for spot information. In response to the request for spot information, the API transmits a response including spot schedule information and an address from which media content associated with the spot can be requested. Additionally, the API can receive, from the one or more traffic and continuity applications, media address information associated with media assets stored in a media vault. The API can store the media address information as stored media address information, and transmit the stored media address information to the media playout system. In some embodiments, the media playout system interfaces with other media playout systems associated with other media stations. The API can, in various implementations, receive a trigger message 590 generated by a log manager. The trigger message identifies at least one broadcast log ready for dissemination to one or more of the individual media stations. In response to receiving the trigger message, the API automatically obtains spot information associated with the at least one broadcast log.
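To make the caching flow concrete, here is a non-authoritative Python sketch of an in-memory stand-in for the API-side cache of broadcast information. The class and method names, the station identifier, and the field layout are invented for illustration; a real deployment would sit behind an HTTP layer and persistent storage as described above.

```python
# Hedged sketch: cache broadcast information received from traffic and
# continuity applications, and serve all or part of it to a playout system.
class BroadcastInfoCache:
    def __init__(self):
        self._cache = {}          # station_id -> broadcast information

    def store(self, station_id, broadcast_info):
        """Store (or update) broadcast info received from traffic/continuity apps."""
        self._cache[station_id] = broadcast_info

    def fetch(self, station_id, fields=None):
        """Serve a playout system's request for all or part of the cached info."""
        info = self._cache.get(station_id, {})
        if fields is None:
            return info
        return {k: v for k, v in info.items() if k in fields}


api_cache = BroadcastInfoCache()
api_cache.store("WXYZ-FM", {
    "traffic_log": ["spot-1", "spot-2"],
    "spot_schedule": {"spot-1": "08:00:00"},
    "media_urls": {"spot-1": "https://example.invalid/spot-1.wav"},
})
print(api_cache.fetch("WXYZ-FM", fields=["spot_schedule", "media_urls"]))
```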
The API can also obtain, from a programmatic advertisement application, a spot schedule indicating advertisement content to be inserted into a media broadcast, and transmit, to the programmatic advertisement application, a first affidavit indicating that playout of the advertisement content has begun; and transmit, from the API to the programmatic advertisement application, a second affidavit indicating that playout of the advertisement content has been completed. A processing system configured to implement various methods described herein includes a processor configured to implement an application program interface (API). In some embodiments, an API includes an ingestion module configured to: receive a traffic log from a log manager, wherein the traffic log includes spot breaks; receive, from a plurality of media automation applications, spot break information including spot metadata associated with the spot breaks; retrieve, based on the spot metadata, a media item to be inserted in a spot break; cache the media item at a storage location within a storage database; and store the spot break information in a relational database, wherein the relational database links the spot break information to the storage location of the media item within the storage database. The API further includes a break information module configured to transmit the spot break information to a media playout system, and an affidavit module configured to receive playout affidavits posted by the media playout system and forward the playout affidavits to one or more media automation applications of the plurality of media automation applications. In some implementations, the break information module is further configured to receive, from the media playout system, a request for at least a portion of the spot break information. In at least one embodiment, the media playout system is implemented as a cloud-based playout system configured to interface with a plurality of individual media playout systems associated with individual media stations. In response to receiving a request for spot break information, the break information module transmits the at least a portion of the spot break information, and the storage location of the media item within the storage database, to the media playout system. The ingestion module can be further configured to receive a trigger message generated by the log manager, wherein the trigger message identifies at least one broadcast log ready for dissemination to one or more individual media stations. The ingestion module can automatically obtain spot information associated with the broadcast log in response to receiving the trigger message. The ingestion module can also receive, from a programmatic advertisement application, a spot schedule indicating advertisement content to be inserted into a media broadcast. In some implementations, the ingestion module retrieves a media item from a media repository, and caches the media item in the storage database. The affidavit module can receive, from the media playout system, a first playout affidavit indicating that playout of the advertisement content has begun, which the affidavit module transmits to the programmatic advertisement application. Additionally, the affidavit module may receive a second playout affidavit from the media playout system, where the second affidavit indicates that playout of the advertisement content has been completed.
The affidavit module automatically transmits the second playout affidavit to the programmatic advertisement application. Various embodiments of a media broadcast system described herein include disaster recovery capabilities. In some such embodiments, an edge device including a processor and associated memory remotely accesses cloud-based media automation services by pointing an interface of the edge device to a cloud-based sequencer. The term "sequencer," as used herein, refers generally to a device or application software that can obtain, organize, arrange, schedule, and/or assemble media items for playback via over-the-air broadcast, streaming, or the like. From the cloud-based sequencer, the edge device obtains broadcast logs defining a broadcast schedule, and media files specified in the broadcast logs. The edge device stores local versions of the broadcast logs and local versions of the media files in a storage device locally accessible to the edge device. The edge device monitors availability of the cloud-based sequencer, and in response to determining that the cloud-based sequencer is unavailable to the edge device, points the interface of the edge device to a local sequencer included in the edge device. The local sequencer emulates one or more media automation services normally provided by the cloud-based sequencer. In response to determining that the availability of the cloud-based sequencer has been restored, the edge device re-points its interface to the cloud-based sequencer. In at least one embodiment, storing local versions of the broadcast logs and media files includes storing media files referenced in a plurality of broadcast logs covering a plurality of upcoming broadcast periods. The edge device provides the local versions of the media files for broadcast in accordance with the local versions of the broadcast logs during periods of time when the interface of the edge device is pointed to the local sequencer. In some such embodiments, during periods of time when the interface of the edge device is pointed to the local sequencer, the local sequencer maintains offline reconcile logs, which record any discrepancies between an actual broadcast and a broadcast schedule defined by the local versions of the broadcast logs. In response to determining that the availability of the cloud-based sequencer has been restored, the edge device transmits the reconcile logs to the cloud-based sequencer. In some implementations, in response to determining that the availability of the cloud-based sequencer has been restored, a local sequencer associated with one edge device negotiates with local sequencers associated with other edge devices to determine a local sequencer priority. In some embodiments, an edge device continually monitors messages from an edge-device manager, and determines the availability of the cloud-based sequencer based on the messages from the edge-device manager. In some such embodiments, the availability of the cloud-based sequencer is determined based on the content of messages received from the edge-device manager, the fact that one or more messages have been received from the edge-device manager, failure to receive one or more messages from the edge-device manager, or some combination thereof.
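The failover behavior described in this passage can be sketched as a small state machine. The Python fragment below is an assumption-laden illustration, not the disclosed offline controller: the heartbeat-timeout rule, the method names, and the reconcile-log handling are invented here to show how an edge device might point its interface at the cloud-based sequencer, fall back to the local sequencer when manager messages stop arriving, and re-point (uploading reconcile logs) once availability is restored.

```python
# Illustrative sketch only: a heartbeat-driven failover loop for an edge device.
import time


class EdgeDevice:
    def __init__(self, heartbeat_timeout_s=30.0):
        self.timeout = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()
        self.active_sequencer = "cloud"
        self.reconcile_logs = []

    def on_manager_message(self, says_available: bool):
        """Called whenever the edge-device manager sends a status message."""
        if says_available:
            self.last_heartbeat = time.monotonic()

    def tick(self):
        """Periodic check: decide which sequencer the interface should point to."""
        cloud_available = (time.monotonic() - self.last_heartbeat) < self.timeout
        if self.active_sequencer == "cloud" and not cloud_available:
            self.active_sequencer = "local"     # emulate cloud services locally
        elif self.active_sequencer == "local" and cloud_available:
            self.upload_reconcile_logs()
            self.active_sequencer = "cloud"     # availability restored: re-point

    def record_discrepancy(self, entry):
        if self.active_sequencer == "local":
            self.reconcile_logs.append(entry)   # offline reconcile log

    def upload_reconcile_logs(self):
        print("uploading", len(self.reconcile_logs), "reconcile log entries")
        self.reconcile_logs.clear()


edge = EdgeDevice(heartbeat_timeout_s=0.1)
edge.tick()                      # still pointed to the cloud-based sequencer
time.sleep(0.2)                  # no heartbeats arrive
edge.tick()                      # falls back to the local sequencer
edge.record_discrepancy({"spot": "spot-1", "status": "played late"})
edge.on_manager_message(True)    # heartbeat resumes
edge.tick()                      # uploads reconcile logs, re-points to the cloud
print(edge.active_sequencer)
```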
In at least one embodiment, the edge device provides playout services for multiple radio stations both during periods of time when the interface of the edge device is pointed to the cloud-based sequencer, and during periods of time when the interface of the edge device is pointed to the local sequencer. In some embodiments, an edge device includes a processor and associated memory configured to implement a local sequencer, an offline controller, a media playout system, and a content replication service. The offline controller is configured to: monitor availability of a cloud-based sequencer providing media automation services to the edge device, wherein the media automation services include content, schedule, traffic, and advertisement insertion services; point an interface of the edge device to the cloud-based sequencer during periods of time when the cloud-based sequencer is available; and point the interface of the edge device to a local sequencer included in the edge device during periods of time when the cloud-based sequencer is unavailable. During periods of time when the edge device is pointed to the cloud-based sequencer: the edge device is configured to receive, from the cloud-based sequencer, broadcast logs defining a broadcast schedule and media files specified in the broadcast logs; the content replication service is configured to store local versions of the broadcast logs and local versions of the media files in a local storage device accessible to the media playout system; and the media playout system obtains the media automation services from the cloud-based sequencer. During periods of time when the edge device is pointed to the local sequencer: the local sequencer emulates the media automation services normally provided by the cloud-based sequencer; and the offline controller routes requests by the media playout system for media automation services to the local sequencer. In some embodiments, during the periods of time when the edge device is pointed to the cloud-based sequencer, the content replication service stores sufficient logs and broadcast content to allow the media playout system to broadcast media content for a plurality of days without access to the cloud-based sequencer. In at least one embodiment, the local sequencer is configured to: store reconcile logs during periods of time when the interface of the edge device is pointed to the local sequencer; and in response to the edge device determining that the availability of the cloud-based sequencer has been restored, transmit the reconcile logs to the cloud-based sequencer. In some implementations, the offline controller can include a negotiation module, configured to negotiate with local sequencers associated with other edge devices to determine a local sequencer priority in response to determining that the availability of the cloud-based sequencer has been restored. The offline controller can also include a status module configured to: continually monitor messages from an edge-device manager; and determine the availability of the cloud-based sequencer based on the messages from the edge-device manager. In some implementations, the status module is configured to determine the availability of the cloud-based sequencer based on at least one of: content of a message received from the edge-device manager; the fact that one or more messages have been received from the edge-device manager; or failure to receive one or more messages from the edge-device manager. 
In various implementations, the media playout system provides playout services for a plurality of radio stations during periods of time when the interface of the edge device is pointed to the cloud-based sequencer, and during periods of time when the interface of the edge device is pointed to the local sequencer. A system in accordance with some embodiments includes a processing system configured to implement a cloud-based sequencer serving media streaming end user devices and at least one edge device serving a plurality of media broadcast stations; backend systems and services, accessible through the cloud-based sequencer, wherein the cloud-based sequencer delivers media automation services provided by the backend systems and services; and an edge device. The edge device includes a processor and associated memory configured to implement a local sequencer, an offline controller, a media playout system, and a content replication service. The offline controller, in at least one embodiment, is configured to: monitor availability of a cloud-based sequencer delivering media automation services to the edge device, wherein the media automation services include content, schedule, traffic, and advertisement insertion services; point an interface of the edge device to the cloud-based sequencer during periods of time when the cloud-based sequencer is available; and point the interface of the edge device to a local sequencer included in the edge device during periods of time when the cloud-based sequencer is unavailable. In some such embodiments, during periods of time when the edge device is pointed to the cloud-based sequencer: the edge device is configured to receive, from the cloud-based sequencer, broadcast logs defining a broadcast schedule and media files specified in the broadcast logs; the content replication service is configured to store local versions of the broadcast logs and local versions of the media files in a local storage device accessible to the media playout system; and the media playout system obtains the media automation services from the cloud-based sequencer. During periods of time when the edge device is pointed to the local sequencer: the local sequencer emulates the media automation services normally provided by the cloud-based sequencer; and the offline controller routes requests by the media playout system for media automation services to the local sequencer. In some embodiments, during the periods of time when the edge device is pointed to the cloud-based sequencer, the cloud-based sequencer configured to deliver sufficient logs and broadcast content to the edge device to allow the edge device to broadcast media content for a plurality of days without access to the cloud-based sequencer. The cloud-based sequencer is, in at least one implementation, configured to: in response to the edge device reconnecting to the cloud-based sequencer after a period of time when the edge device has been disconnected from the cloud-based sequencer, receive reconcile logs from the edge device. The cloud-based sequencer can also include an edge device manager associated with the cloud-based sequencer, wherein the edge device manager is configured to transmit status messages to the edge device, wherein the status messages indicate availability of the cloud-based sequencer. 
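As a purely illustrative sketch of the reconcile-log handling described above, the following shows an edge device accumulating discrepancy records while disconnected and transmitting them to the cloud-based sequencer once availability is restored. The data shapes and names (ReconcileEntry, accept_reconcile_logs) are assumptions introduced here, not elements of the disclosure.

```python
# Sketch of reconcile-log handling on reconnection; all names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReconcileEntry:
    station: str
    scheduled_item: str
    played_item: str            # what actually aired; empty if nothing aired
    timestamp: str

@dataclass
class CloudSequencerService:
    received: List[ReconcileEntry] = field(default_factory=list)

    def accept_reconcile_logs(self, entries: List[ReconcileEntry]) -> int:
        """Called when an edge device reconnects; returns the count accepted."""
        self.received.extend(entries)
        return len(entries)

@dataclass
class EdgeReconciler:
    pending: List[ReconcileEntry] = field(default_factory=list)

    def record_discrepancy(self, entry: ReconcileEntry) -> None:
        self.pending.append(entry)          # accumulated while in local mode

    def on_cloud_restored(self, cloud: CloudSequencerService) -> None:
        if self.pending:
            cloud.accept_reconcile_logs(self.pending)
            self.pending = []

edge = EdgeReconciler()
edge.record_discrepancy(ReconcileEntry("WXYZ", "spot-42", "", "2023-06-01T10:05:00"))
cloud = CloudSequencerService()
edge.on_cloud_restored(cloud)
assert len(cloud.received) == 1 and not edge.pending
```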
In some embodiments, end-user devices can be used to determine that the cloud-based sequencer is unavailable, and in response to at least one media streaming end-user device determining that the cloud-based sequencer is unavailable, the at least one media streaming end-user device communicates directly with the backend systems and services. In other implementations, at least one of the backend systems and services determines that the cloud-based sequencer is unavailable; and in response to determining that the cloud-based sequencer is unavailable, the at least one of the backend systems and services initiates direct communication with the media streaming end user devices. Further explanations regarding the above-mentioned embodiments, and others, are provided with reference toFIGS.1-17. Referring first toFIG.1, a media broadcast system100will be discussed in accordance with embodiments of the present disclosure. Media broadcast system100includes cloud-based media management system45, edge device/media playout system50, streaming media station70, and media broadcast stations95. Streaming media station70provides streaming media to first end user80via communications network75, and to second end user85via communications network75and telecommunications carrier90. Cloud-based media management system45includes master application program interface (API)20, which communicates with media automation services/applications30, through service/application APIs25. Although illustrated by a single box, media automation services/applications30can be, and often are, implemented in a distributed manner, with different processing devices implementing one or more individual services or applications. Cloud-based media management system45optionally includes disaster recovery service40, and edge device manager35. In various embodiments, edge device/media playout system50is part of a media playout system, is associated with a media playout system, or includes a media playout system. Edge device/media playout system50transmits first media content65to streaming media station70, which converts the first media content65into one or more media streams to be delivered to end user80and/or end user85. Edge device/media playout system50transmits second media content60and third media content55to one or more media broadcast stations95, which can be, for example, over-the-air radio broadcast stations. Media broadcast stations95broadcast second media content60and/or third media content55. In various embodiments, first media content65, second media content60and third media content55include substantially all primary and advertising content to be broadcast or streamed, so that media broadcast stations95and streaming media station70need only modulate or otherwise format the content for transmission, without altering the content itself. For example, third media content55may include a fully assembled broadcast, which is modulated onto a carrier frequency for over-the-air transmission by one of the media broadcast stations95. In some such embodiments, the media broadcast stations95and the streaming media station70make no decisions about what content is to be broadcast. In other embodiments, the media broadcast stations95and the streaming media station70can insert additional or replacement content, for example station identification content required of broadcasters, local advertisements, or local primary content.
However, in at least one embodiment, media broadcast stations95and the streaming media station70are not configured to insert such additional or replacement content. In an example of operation, edge device/media playout system50obtains schedules, primary content, advertising content, and tertiary content from media automation services/applications30via master API20, which are included in cloud-based media management system45. Edge device/media playout system50assembles the content into first media content65, second media content60and third media content55, which is eventually delivered to end users via streaming media station70or media broadcast stations95. As used herein, the term “primary content” refers to non-advertisement programming, shows, entertainment content, news, traffic, weather, music, or the like. The term “advertising content” refers to advertisements inserted into designated spots within a media broadcast schedule. “Tertiary content” refers to content such as filler material, station identifiers, liners or other content generated by on-air personalities, and the like. In some embodiments, discussed further with reference toFIGS.11-16, edge device/media playout system50operates to provide first media content65, second media content60and third media content55, both during times when cloud-based media management system45is available, and during times when it is not. During times when cloud-based media management system45is available, it provides logs, advertisements, content, and other media automation services by routing communications between media automation services/applications30and edge device/media playout system50through master API20. For example, media automation services/applications30can include one or more of a log management service or application, an advertisement service or application, a media management service or application, and the like. In an example of operation, master API20obtains information from media automation services/applications30, and provides that information to disaster recovery service40, which stores the information and provides it to assist edge device/media playout system50to recover after a failure or loss of network communication, and to edge device/media playout system50, which uses the information to generate the media content delivered to end users via streaming media station70or media broadcast stations95. As used herein, the term “log,” or “broadcast log” refers to an ordered broadcast schedule that includes information identifying particular items to be broadcast or streamed at particular times. A log normally includes time slots for insertion of primary content, time slots for insertion of advertising content, and time slots for insertion of certain tertiary content; although the tertiary content is sometimes inserted into slots designated for primary or advertising content. The information in the log that identifies particular items can include titles, addresses from which the items can be obtained, or the like. Although a log can be said to include the media items to be broadcast, logs usually include information identifying the content or a location from which the content can be obtained. A log is generally created from a “clock,” which can include time slots and content type identifiers. Once particular items are designated to fill particular time slots, the clock with populated time slots is referred to as a log.
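The relationship between a “clock” and a “log” described above can be sketched as a simple data structure: a clock holds time slots tagged with content-type identifiers, and once every slot is populated with a particular item it is referred to as a log. The field names below are illustrative assumptions only.

```python
# Sketch of the clock-to-log relationship described above; field names are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Slot:
    start: str                    # e.g. "10:00:00"
    content_type: str             # "primary", "advertising", or "tertiary"
    item: Optional[str] = None    # identifier/address of the scheduled item

def is_log(slots: List[Slot]) -> bool:
    """A clock with all of its slots populated is referred to as a log."""
    return all(s.item is not None for s in slots)

clock = [
    Slot("10:00:00", "primary"),
    Slot("10:03:30", "advertising"),
    Slot("10:04:00", "tertiary"),
]
assert not is_log(clock)          # still a clock: no items designated yet

clock[0].item = "song-123"
clock[1].item = "spot-42"
clock[2].item = "liner-7"
assert is_log(clock)              # populated clock is now a broadcast log
```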
Note that in some cases, the term “log” is also used herein to refer to a record of something that has happened. The meaning of the term “log” will be apparent by its context, and if not, the specific meaning will be indicated. Note that the term “spot” is sometimes used to refer to advertisements themselves, and at other times used to refer to a time slot in a log that is reserved for an advertisement. Thus, a spot (a position in a log) can be filled by a spot (an advertisement). The meaning of the term “spot” will be apparent by its context, and if not, the specific meaning will be indicated. Edge device manager35can be used to determine a change in the operational status of master API20, disaster recovery service40, media automation services/applications30, and/or edge device/media playout system50. Edge device manager35can also be used to perform other management functions related to edge device/media playout system50. In at least one embodiment, edge device manager35monitors communications between master API20and edge device/media playout system50. If no messages are transmitted from edge device/media playout system50to master API20within a threshold period of time, but messages are being transmitted from master API20to edge device/media playout system50, edge device manager35can conclude that edge device/media playout system50is offline. In response to the offline determination, edge device manager35can transmit a message notifying master API20of the offline determination. In some such embodiments, master API20continues to transmit information intended for edge device/media playout system50to disaster recovery service40. In other embodiments, if no messages are transmitted from master API20to edge device/media playout system50within a threshold period of time, but messages are being transmitted from edge device/media playout system50to master API20, edge device manager35can conclude that master API20is offline, and notify edge device/media playout system50to enter a local mode in which a local sequencer emulates the media automation services/applications30. In some embodiments, edge device manager35monitors the status of master API20and edge device/media playout system50independently of communications between the two. For example, edge device manager35can employ heartbeat messages with either or both of the master API20and edge device/media playout system50. In other embodiments, edge device manager35can evaluate the quality of one or more transmission paths, and whether the master API is available to edge device/media playout system50based on the quality of the one or more transmission paths. In some embodiments, edge device manager35can receive messages from other edge device/media playout systems (not illustrated) that indicate unavailability of media automation services/applications30and/or unavailability of master API20. In various embodiments, edge device manager35controls an operating mode of one or more edge device/media playout systems, each of which can be associated with one or more media stations. 
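One possible, purely illustrative way to implement the threshold-based offline determination described above is sketched below; the 30-second threshold and the names are assumptions, not requirements of the disclosure.

```python
# Sketch of threshold-based offline detection; threshold value and names are assumptions.
class LinkMonitor:
    def __init__(self, threshold: float = 30.0):
        self.threshold = threshold
        self.last_from_edge = 0.0
        self.last_from_api = 0.0

    def saw_message_from_edge(self, t: float):
        self.last_from_edge = t

    def saw_message_from_api(self, t: float):
        self.last_from_api = t

    def status(self, now: float) -> str:
        edge_silent = (now - self.last_from_edge) > self.threshold
        api_silent = (now - self.last_from_api) > self.threshold
        if edge_silent and not api_silent:
            return "edge_offline"     # notify the master API
        if api_silent and not edge_silent:
            return "api_offline"      # edge device should enter local mode
        return "ok"

m = LinkMonitor()
m.saw_message_from_edge(t=100.0)
m.saw_message_from_api(t=125.0)       # API is still transmitting
print(m.status(now=140.0))            # edge silent for 40 s -> "edge_offline"
```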
The operating modes can include an “online” mode, in which edge device/media playout system50obtains broadcast information originating from media automation services/applications30via master API20in near real time; a “disaster recovery mode” in which edge device/media playout system50obtains information from disaster recovery service during recovery from “local” mode; and a “local” mode, in which edge device/media playout system50obtains broadcast information stored locally, and emulates the services provided by media automation services/applications30. Edge device manager35can issue commands that command, instruct, or force control edge device/media playout system50to enter certain states, or transmit messages that edge device/media playout system50can use as input to determine whether it should change states. Other management functions that can be performed by edge device manager35include, but are not limited to, re-direction of all messages from edge device/media playout system50to an alternate master API (not illustrated), load balancing during disaster recovery to ensure that current content is provided from media automation services/applications30while still allowing backup content to be recovered from disaster recovery service40, caching information to account for variations in network latency and/or bandwidth, or the like. Referring next toFIG.2, a media management system425will be discussed in accordance with various embodiments of the present disclosure. Media management system425includes cloud-based playout control system235; Dynamic Ad API (DAAPI)105, which includes spot schedule information module110, change notification module120, DAAPI database125, and Affidavits module/API245; programmatic advertisement system130; log manager150; Media Placement System145; traffic system205; integrated services layer170; message queue240; inventory management hub175; syndicated content service180; paperless production workflow system (PPO)225; media editor230; Enterprise Copy module220; multiple instances of local broadcast systems185, each of which is associated with one or more separate broadcast, streaming, or other playout systems; and media vault (MV)190, which includes MV database195, and Dynamic Database200, which can function as an event manager in some implementations. Cloud-based playout control system235can be implemented using one or more processing systems that may, but need not be, hosted remotely from some or all of the other elements of media management system425. In various embodiments, Cloud-based playout control system235provides cloud-based control of local broadcast systems185, and can include edge device manager35(FIG.1), and edge device media playout system50(FIG.1), and in some embodiments disaster recovery service40(FIG.1). Cloud-based playout control system235obtains information used by local broadcast systems185to broadcast and/or stream media content as specified in one or more broadcast logs. In embodiments where cloud-based playout control system235includes disaster recovery capabilities, cloud-based playout control system235can obtain, through Dynamic Ad API (DAAPI)105, information to be provided to local broadcast systems185, to allow each of the local broadcast systems to recover to a current state. 
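Returning to the operating modes described at the beginning of this passage, the following sketch shows one possible transition rule among the “online,” “local,” and “disaster recovery” modes. The transition logic shown is an assumption for illustration; the disclosure does not mandate any particular state machine.

```python
# Illustrative (assumed) transition rule among the three operating modes described above.
from enum import Enum, auto

class Mode(Enum):
    ONLINE = auto()             # broadcast info from media automation services via master API
    LOCAL = auto()              # locally stored info; local sequencer emulates the services
    DISASTER_RECOVERY = auto()  # catching back up via the disaster recovery service

def next_mode(current: Mode, cloud_available: bool) -> Mode:
    if not cloud_available:
        return Mode.LOCAL
    if current is Mode.LOCAL:
        # cloud restored: recover missed state before returning to normal operation
        return Mode.DISASTER_RECOVERY
    return Mode.ONLINE

mode = Mode.ONLINE
mode = next_mode(mode, cloud_available=False)   # -> LOCAL
mode = next_mode(mode, cloud_available=True)    # -> DISASTER_RECOVERY
mode = next_mode(mode, cloud_available=True)    # -> ONLINE
print(mode)
```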
As used in this context, a “current state” refers to the state in which a local broadcast system should be operating at the time of recovery from a network outage or other failure that prevents local broadcast systems185from receiving information necessary for media broadcast through DAAPI105. For example, if a network outage occurs at 10:00 am, and is resolved a 10:20 am, the “current state” refers to a state local broadcast system would have been in at 10:20 am if no network outage had occurred. Disaster recovery implementations are discussed in greater detail with respect toFIGS.11-16. In various embodiments, cloud-based playout control system235transmits spot information request250to spot schedule information module110, which is included in Dynamic Ad API (DAAPI)105. The spot information request250can include, for example, a request for information related to a single spot referenced in a broadcast log, information related to all spots referenced in a broadcast log, information for new, or changed spots, a request for a broadcast log, or the like. The spots can be local spots scheduled for broadcast on one or more particular local broadcast systems, national spots, or the like. In response to spot information request250, spot schedule information module110transmits a spot information response255to Cloud-based playout control system235. In at least one embodiment, the spot information response255includes information indicating a location at which a spot (media item) referenced in a broadcast log can be retrieved from DAAPI database125, all or a portion of a broadcast log associated with the spot, information identifying a local broadcast system scheduled to broadcast the spot, or the like. For example, spot information response255can include a uniform resource locator (URL) that can be used to send a download request265to DAAPI database125. The download request265includes, but is not limited to a request for the DAAPI database125to transmit a spot to cloud-based playout control system235, and can include an address within DAAPI database125from which the spot can be retrieved. In response to the download request, DAAPI database125transmits a download response261that includes the requested spot to Cloud-based playout control system235. Cloud-based playout control system235can also provide and/or receive affidavits270from affidavits module/API245, which is included in Dynamic Ad API (DAAPI)105. Affidavits can include messages from local broadcast systems185, edge devices, advertisement/media insertion systems, or other media playout systems, where the affidavits indicate a playout status of one or more media items. For example, a media station associated with a particular local broadcast system can generate and transmit an affidavit indicating that a media item scheduled for transmission/broadcast has been transmitted successfully, that transmission/broadcast of a media item has been only partially completed, that transmission/broadcast of a media item has failed, a time at which transmission/broadcast of a media item has been started or completed, or the like. In addition to providing affidavits to, and receiving affidavits from, cloud-based playout control system235, affidavits module/API245also transmits various affidavits to log manager150, media placement system (MPS)145, and programmatic ad system (PAS)130. 
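The playout affidavits described above can be illustrated with a short sketch: an affidavit reporting the playout status of a media item is posted to an affidavits module and passed along to the log manager, the media placement system, and the programmatic ad system. The field names and handler interface are assumptions for illustration.

```python
# Illustrative sketch of a playout affidavit and its distribution; names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlayoutAffidavit:
    station: str
    media_id: str
    status: str                      # e.g. "completed", "partial", "failed"
    started_at: Optional[str] = None
    completed_at: Optional[str] = None

class AffidavitsModule:
    """Receives affidavits and passes them to subscribed services."""
    def __init__(self):
        self.subscribers = []        # e.g. log manager, MPS, PAS

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def post(self, affidavit: PlayoutAffidavit):
        for handler in self.subscribers:
            handler(affidavit)

received = []
module = AffidavitsModule()
module.subscribe(lambda a: received.append(("log_manager", a.status)))
module.subscribe(lambda a: received.append(("mps", a.status)))
module.subscribe(lambda a: received.append(("pas", a.status)))

module.post(PlayoutAffidavit("WXYZ", "spot-42", "partial", "10:30:02"))
print(received)     # each service sees the same playout status
```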
Dynamic Ad API (DAAPI)105is an example of a master API20, as illustrated inFIG.1, and serves as a central interface through which various microservices, broadcast systems, management systems, disaster recovery systems, obtain information needed to provide media content and advertising content customized for transmission by individual media stations. As noted above, Cloud-based playout control system235obtains broadcast and schedule information, including logs, spots, media items, changes to logs, affidavits, and the like, from DAAPI105. DAAPI105also provides some or all of the broadcast and schedule information, either directly or indirectly, to local broadcast systems185and other subsystems of media management system425. For example, spot schedule information module110transmits a spot information request280to programmatic ad system (PAS)130. The spot information request280includes information about spots (positions in a log) that PAS130is scheduled fill with advertisement items. In response to spot information request280, PAS130provides a spot information response255indicating which spots (media items) PAS130has selected to fill the spots (log positions), and an address indicating a location of the media items in DAAPI database125. In some embodiments, PAS system130transfers, e.g., in media item transfer message345, the media item to DAAPI database125for storage. In other embodiments, the media item may have been previously stored in DAAPI database125from another source. When local broadcast systems185need to obtain spots provided by PAS130, they can obtain the spots by requesting them from DAAPI105, and direct communication between local broadcast systems185and a processing system implementing PAS130is not necessary. DAAPI105can provide spot schedules305to log manager150, which provides spot break information310to traffic/billing system205based on the spot schedules305. In various embodiments log manager150provides completed or partial logs to DAAPI105for storage and later transmission to other microservice applications, local broadcast systems, and Cloud-based playout control system235. Completed logs can include one or more broadcast schedules with all slots, including slots for spots, primary content, liners, perishable content (e.g., traffic, weather, news) filled. Partial logs can include one or more blank unscheduled log positions to be filled with spots indicated by PAS130, with primary content provided by media placement system (MPS)145, or another media automation service or subsystem. Particular logs are generally associated with one or more media stations, markets, users, or devices. DAAPI105can also receive media items from media placement system (MPS)145. For example, DAAPI105can transmit a media information request300to MPS145. Media information request300includes information about positions in a log MPS145has been assigned to fill. Media information request300can include, but is not limited to, a request for a uniform resource locator (URL) or other information indicating a location from which one or more media items can be obtained, scheduled start and end times for broadcast of the one or more media items, media file identifiers, or the like. In response to media information request300, MPS145can generate a media placement response340in which the one or more media items are transferred to DAAPI database125for storage, and later retrieval by other microservice applications or local broadcast systems185. 
In other embodiments, the one or more media items may have been previously stored in DAAPI database125from another source. When local broadcast systems185need to obtain media items provided by MPS145, they can obtain the media items by requesting them from DAAPI105, and direct communication between local broadcast systems185and a processing system implementing MPS145is not necessary. Programmatic advertisement system (PAS)130can implement a microservice providing programmatic insertion of advertisements into one or more media broadcasts. In particular, PAS130can be used to provide “bulk advertisements” to be inserted into local broadcasts. In some embodiments, for example, national and local advertisements are used to fill up as much available spot inventory on a media station as practical. Often, programmatic advertising is sold with no guarantee of exact timing of when programmatic advertisements will be aired. Thus, programmatic advertisements are often inserted into unscheduled spots without much advance notice. For example, in some embodiments traffic/billing system205is used to identify advertisements to be inserted into most available spots, but where there is unsold inventory, PAS130can be permitted to choose which advertisements are to be inserted into the unsold schedule positions. Log manager150provides information to traffic/billing system205to allow traffic/billing system205to generate broadcast logs for local broadcast systems185. Log manager150also provides logs to DAAPI105. Media placement system (MPS)145is an automation import manager that places media items into automation systems associated with local broadcast systems185. The media provided by MPS145can include, but is not limited to, traffic, weather, and news information. However, rather than placing the media items directly into local broadcast systems185, MPS145transmits the media items to DAAPI105for storage, and later retrieval by local broadcast systems185, e.g., via an edge device50. Traffic/billing system205is a processing system configured to control traffic and billing functions such as editing orders and scheduling spots. Traffic/billing system205receives spot break information310from log manager150, syndicated content address and spot information290from integrated services layer (ISL)170, and metadata and asset ID information365from Enterprise Copy module220. Syndicated content address and spot information290is transmitted by syndicated content service180to inventory management hub175, which forwards spot information290to ISL170via message queue240. The syndicated content address and spot information290can include a uniform resource locator (URL) specifying a location in media vault database195from which the syndicated content can be obtained, as well as spot information. The spot information can include targeting information associated with the syndicated content, where the targeting information can be used by traffic/billing system205to select appropriate media spots to be scheduled for broadcast in any unfilled spots associated with the syndicated content. Traffic/billing system205provides metadata and asset ID information365received from Enterprise Copy module220to local broadcast systems185. In some embodiments, the metadata and asset ID information365allows local broadcast systems185to obtain media items for broadcast from media vault190, either directly or through an edge device50(FIG.1), when local broadcast systems185are unable to access DAAPI105.
In some embodiments, local broadcast systems185obtain media items from storage in DAAPI database125. In various embodiments, inventory management hub175, retrieves syndicated content media items from syndicated content service using the URLs included in syndicated content address and spot information, and transmits the syndicated content media items to media vault190for storage in MV database195. In response to receiving the syndicated content media items, media vault190generates an asset ID420, and stores the asset ID420. In at least one embodiment, at the time an object, e.g., a syndicated content media item or some other media item, is requested from MV database195, dynamic database200locates that object within MV database195, allowing media vault190to respond to the request by providing the requested media item. In addition to storing syndicated content media items, MV database195can also store creatives and other media items generated by various media editors, such as media editor230. In some such embodiments, media editor230generates and uploads creative media and associated metadata410to paperless production workflow system (PPO)225. The creative media and associated metadata410is then transmitted from PPO225to enterprise copy module220, which uploads creative media and associated metadata410to media vault190. In some cases, the creative media is newly generated, while in other cases the creative media is an edited version of another media item obtained from media vault190. Where the creative media is an edited version of another media, enterprise copy module220obtains the asset ID400of a media item to be edited, and transmits the asset ID to PPO225. PPO225uses the asset ID400to obtain the media item to be edited, and transmits the media item to be edited to media editor230. Media vault190serves as a central repository for some or all previously broadcast media items, some or all media items currently scheduled for broadcast, and at least some media items anticipated to be used by media management system425. Thus, in at least some embodiments, the number of media items stored in media vault190far exceeds the number of media items currently scheduled for broadcast. In various embodiments, media vault190transmits currently scheduled media items415to DAAPI database125, which stores the currently scheduled media items for retrieval by local broadcast systems185, and Cloud-based playout control system235. While some such embodiments incur a storage penalty by maintaining more than one copy of media items, allowing local broadcast systems185to obtain the media items via DAAPI105can provide an improvement in delivery performance, because DAAPI105can retrieve only needed media items in advance, which limits the time needed to search for and deliver media items to the local broadcast systems185. Dynamic Ad API (DAAPI) also includes change notification module120, which can be used to assist Cloud-based playout control system235in maintaining necessary media items and system states that will be needed when recovering from a failure. Additionally, the change notifications can be used to generate the affidavits provided to log manager150, MPS145, and PAS130. Consider, for example, a case where one of the local broadcast systems185fails to stream, transmit, or otherwise broadcasts a media item included in its broadcast log. The local broadcast system can notify change notification module120of the discrepancy. 
Change notification module120can pass the information to affidavits module/API245, which generates an affidavit that can be sent to log manager150, MPS145, and/or PAS130, which can each take appropriate action. By way of example, and without limitation, log manager150may add the missed spot to a pool of items having a high priority for being re-selected. Media placement system (MPS)145can re-submit the media item for inclusion in another available spot. Programmatic ad system (PAS)130can remove the skipped spot from one or more reports with or without attempting to reinsert the advertisement. Referring next toFIG.3, an ad management system430will be discussed in accordance with embodiments of the present disclosure. Ad management system430includes Ad-Tech system435, Dynamic Ad API (DAAPI)105, and broadcast system472. Ad-Tech system435can be implemented on one or more processors, each of which can be programmed to provide one or more of the following modules that provide advertisement automation micro-services: log manager150, programmatic ad system (PAS)130, media placement system (MPS)145, and media manager440. Ad-tech system435provides advertisement automation services to broadcast system472via DAAPI105. The advertisement automation micro-services are also referred to herein as “traffic and continuity applications.” Log manager150, programmatic ad system (PAS)130, and media placement system (MPS)145each function generally as described with respect toFIG.2. For example, log manager150transmits traffic logs and associated metadata485to DAAPI105. Examples of metadata provided by log manager150include title information, system identifiers, genre, style, tempo, length, date recorded, artist information, industry identifiers, source information, and the like. PAS130transmits and/or receives spot information and media items480to DAAPI105. MPS145transmits and/or receives media information and media items475to DAAPI105. Examples of media information provided by MPS145can include customer information, customer schedule information, media items, and media item metadata. The media items can include spots (advertisements) and/or primary content. Customer information can include, but is not limited to, customer identifiers and affiliate names. Customer schedule information can include, but is not limited to, customer identifiers, airtime, record time, length, report name, delivery method, service type, market, and a Sponsored indicator, which includes information indicating whether a particular media item is sponsored. In response to a request for a media item from DAAPI105, MPS145can return information including a range of acceptable start times for the media item, a range of acceptable end times for the media item, a customer identifier, airtime, a copy identifier, a broadcast management system key, a traffic transmitter, a script, a service type, comments, a length of the media item, a prerecorded indicator, an advertiser identifier, an advertiser short name, makegood criteria, and/or the requested media item itself. Media manager440provides media items470to DAAPI105. DAAPI105can determine whether items scheduled for broadcast within a particular period of time are available from DAAPI Database125. DAAPI105transmits, to media manager440, a request for scheduled media items that are not currently available in DAAPI Database125. Media manager440returns the requested media items, along with metadata associated with those media items.
In some embodiments, media manager440is part of media vault190, as illustrated inFIG.2. In other embodiments, media manager440is implemented as an interface between DAAPI105and media vault190. Each of log manager150, PAS130, MPS145, and media manager440also receives one or more affidavit messages465from DAAPI105. Dynamic Ad API (DAAPI)105includes ingestion/update module445; DAAPI Database125, which includes relational database service (RDS)450and media storage database128; BreakInfo and Skipped Spot API455; and Affidavits module/API245. DAAPI105receives information used for automated insertion of advertisements in over-the-air media broadcasts, streaming media broadcasts, podcasts, or the like, and provides that information to broadcast system472. DAAPI105can obtain information necessary to cover a relatively long period of broadcasting, while still providing the ability to make last-minute changes to broadcast content. For example, DAAPI105may receive 24 hours' worth of broadcast content from Ad-Tech435, store that content, and then stream or otherwise transmit a fully assembled broadcast to broadcast system472for over-the-air or streaming broadcast to end-users. As used herein, the term “fully formed broadcast” refers to one or more files that include all, or substantially all, media items to be broadcast, i.e., primary content, spots (advertisements), and tertiary content, with the possible exception of station identifiers, liners, sweepers, or the like. A fully formed broadcast will be arranged to allow sequential playout of media items in accordance with a broadcast log. Note that in some embodiments, fully assembled broadcast logs are generated by broadcast system472, and DAAPI105simply provides broadcast logs and content needed for broadcast system472to generate the fully assembled broadcast logs. DAAPI105also serves as a coordinator, and as its name implies, an interface, between Ad-Tech435and broadcast system472. Assume, for example, that twenty-four hours in advance of a broadcast, DAAPI105transmits a log and a fully assembled broadcast based on that log to multiple broadcast systems. Further assume that 2 hours before the fully assembled broadcast is scheduled to air, one of those broadcast systems, e.g., broadcast system472, sends a request to DAAPI105indicating that a new advertisement is to be inserted in place of a scheduled advertisement already included in the fully formed broadcast. In this example, broadcast system472can notify DAAPI105of the requested change. DAAPI105can obtain an updated broadcast log and the new advertisement from Ad-Tech435, and generate a new broadcast to transmit to broadcast system472without having to rely on broadcast system472to make the changes to the log, and without having to retransmit any extra data to the other broadcast systems. This allows ad management system430to provide improved performance relative to conventional systems, first by allowing last minute changes to a broadcast, and second by minimizing the amount of data that needs to be transmitted to accommodate those last-minute changes. Other improvements to existing automated scheduling systems, which include ad management systems, will also be apparent upon consideration of the present disclosure. Ingestion/update module445is the portion of DAAPI105that receives traffic logs and associated metadata485from log manager150, spot information and media items480from PAS130, media information and media items475from MPS145, and media items470from media manager440.
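The last-minute replacement scenario described above can be illustrated as follows: only the changed item is swapped in the affected log, and only the affected broadcast system receives a rebuilt playout list. The data shapes are assumptions introduced for illustration.

```python
# Sketch of the last-minute spot replacement scenario described above; shapes are assumed.
def assemble(log):
    """Produce an ordered playout list (a stand-in for a fully assembled broadcast)."""
    return [slot["item"] for slot in log]

def replace_spot(log, old_item, new_item):
    changed = False
    for slot in log:
        if slot["item"] == old_item:
            slot["item"] = new_item
            changed = True
    return changed

log_for_one_station = [
    {"start": "14:00:00", "item": "song-9"},
    {"start": "14:03:10", "item": "spot-17"},     # originally scheduled advertisement
    {"start": "14:03:40", "item": "song-10"},
]

# Two hours before air, the broadcast system asks for spot-17 to be replaced.
if replace_spot(log_for_one_station, "spot-17", "spot-99"):
    rebuilt = assemble(log_for_one_station)       # retransmit only to this system
    print(rebuilt)                                # ['song-9', 'spot-99', 'song-10']
```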
In various embodiments, ingestion/update module445requests traffic logs from log manager150. In other embodiments, log manager150transmits a notification to ingestion update module445when original or updated logs are ready to be retrieved. So, for example, if log manager150generates an updated log in response to information in an affidavit indicating that an originally scheduled spot has been skipped or replaced, log manager150can modify a future log to add the originally scheduled spot into an upcoming broadcast. Ingestion/update module445processes and routes the information received from log manager150, PAS130, from MPS145, and media manager440for storage and later delivery to a disaster recovery service40(FIG.1), such as Cloud-based playout control system235(FIG.2), and/or broadcast system472. In some embodiments, ingestion/update module445can also transmit requests for updated media and/or logs based on requests received from broadcast system472. In at least one embodiment, ingestion/update module445divides received data into log-related information447and media data449. Log-related information can include spot (advertisement) identifiers, metadata, broadcast logs, log metadata, broadcast system identifiers, information used to link the log-related information to particular media data449, and the like. Media data449can include actual media files, information used to link the media files to the appropriate log-related information, and media-item metadata. In some such embodiments, ingestion/update module445routes log-related information447to relational database service (RDS)450for storage, while routing media items to media storage database128. Either or both of database service450and media storage database128can be implemented using stand-alone storage devices local to the same processing device used to implement ingestion/update module445, using dedicated machines connected to ingestion/update module445via a communications network, or as cloud-based services accessible to ingestion/update module445. BreakInfo and Skipped Spot API455provides break information256to broadcast system472, and receives skipped-spot information260from broadcast system472. In at least one embodiment, BreakInfo and Skipped Spot API455provides the interface between broadcast system472and both spot schedule information module110(FIG.2) and change notification module120(FIG.2). In some embodiments, spot schedule information module110and change notification module120are implemented as a single module that uses BreakInfo and Skipped Spot API455as its interface. In response to receiving skipped-spot information260, BreakInfo and Skipped Spot API455transmits skipped-spot information260to RDS450, which stores the information as log-related information, or uses the information to update previously stored log-related information associated with the skipped spot (slot or position in the log). BreakInfo and Skipped Spot API455also transmits skipped-spot information260to media storage database128, which can store the information in conjunction with a stored media item, or update media metadata associated with the skipped spot (media item). Affidavits module/API245receives affidavits270(sometimes referred to herein as playout affidavits) posted by broadcast system472, and transmits those affidavits to Ad-Tech435for delivery to one or more of the traffic and billing continuity applications, or microservices, included in Ad-Tech435. Additionally, affidavits module/API245transmits the affidavits for storage by RDS450. 
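A minimal sketch of the ingest-and-route split described above appears below: log-related information447is routed to the relational store while media data449goes to media storage, with a linking key kept on both sides. The field names are illustrative assumptions.

```python
# Sketch of splitting an inbound message into log-related information and media data.
# Field names are illustrative assumptions.
def ingest(message, rds, media_store):
    """Route one inbound message from a traffic/continuity application."""
    log_related = {
        "spot_id": message["spot_id"],
        "log": message.get("log"),
        "metadata": message.get("metadata", {}),
        "media_key": message["spot_id"],           # link to the stored media item
    }
    rds.append(log_related)                         # relational storage
    if "media_bytes" in message:
        media_store[message["spot_id"]] = message["media_bytes"]   # media storage

rds_rows, media_db = [], {}
ingest({"spot_id": "spot-42", "log": "WXYZ-2023-06-01",
        "metadata": {"title": "Ad 42"}, "media_bytes": b"..."},
       rds_rows, media_db)
assert rds_rows[0]["media_key"] in media_db
```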
In various embodiments, ingestion/update module445can use the affidavits when deciding to request updated media items and/or logs from Ad-Tech435. Broadcast system472is, in at least one embodiment, one of the local broadcast systems185, as illustrated inFIG.2, which serves one or more streaming media stations, such as streaming media station70(FIG.1), one or more media broadcast stations95(FIG.1), or some combination thereof. In some embodiments, broadcast system472can be implemented in an edge device50. Note that broadcast system472can be controlled by cloud-based playout control system235. Referring next toFIG.4, a log management system490will be discussed in accordance with various embodiments of the present disclosure. Log management system490includes Dynamic Ad API (DAAPI)105; cloud-based playout system236, which is a cloud-based implementation of broadcast system472; log manager150, which includes log generation module610that generates trigger message590, get/break info interface615, and post/affidavits module635; and media vault190, which includes media repository620. Cloud-based playout system236can include some or all components of cloud-based playout control system235(FIG.2), broadcast system472(FIG.3), or both. The API can, in various implementations, receive a trigger message590generated by a log manager. The illustrated embodiment of Dynamic Ad API (DAAPI)105includes playout-system-facing endpoints495, including break info and skip a spot API455(FIG.3) and affidavits module/API245. Break info and skip a spot API455includes spot schedule information module110and change notification module120. DAAPI105also includes cache500; DAAPI log database402, which is an example of RDS database450(FIG.3) and is used to store broadcast logs and log-related information and metadata obtained from log manager150; read logs module505; push log changes module510; break info queue service575; log changes queue service532; fetch queue service572; post/log trigger module595; LM get break info module605; fetch media and metadata module555; and DAAPI media database403, which is an example of media storage database128(FIG.3), and is used to store media items obtained from media vault190, along with metadata associated with the media items. Spot schedule information module110provides cloud-based playout system236with information about spots to be broadcast and/or streamed. For example, spot schedule information module110receives a spot information request250from cloud-based playout control system235, and returns a spot information response255. Spot information request250includes a request for break information, and spot information response255includes the requested break information. Break information can include spot break information about log positions and media items used to fill those log positions. For example, break information can include information indicating an address, e.g., a uniform resource locator (URL), at which a media item referenced in a broadcast log can be retrieved, all or a portion of a broadcast log associated with the media item, information identifying a local broadcast system to which the break information pertains, or the like. Change notification module120obtains skipped-spot information260from cloud-based playout control system235. Skipped-spot information260includes information indicating discrepancies between media items (spots) scheduled for broadcast according to a broadcast log, and media items actually broadcast.
Skipped-spot information can include, but is not limited to, spot identifiers of fully or partially skipped spots, scheduled broadcast station and time, an indicator specifying whether a spot was fully skipped or partially skipped, or the like. Skipped spot information260can be used by DAAPI105to determine whether a future log may need to be updated to include the skipped media item, to update the metadata of stored media items and/or broadcast logs to reflect the fact that the media item has not been fully broadcast, to provide billing and/or tracking systems information needed to accurately reflect fulfillment of campaign requirements, or the like. Affidavits module/API245receives affidavits270from cloud-based playout system236, and transmits the affidavits to Post/affidavits module635, which is included in log manager150. Affidavits270, like skipped spot information, include information indicating discrepancies between media items (spots) scheduled for broadcast according to a broadcast log, and media items actually broadcast. Unlike skipped spot information, however, affidavits270can also include playout information indicating that there are no discrepancies between scheduled and actually played-out media items. Information included in the affidavits can be used by log manager150in constructing or adjusting future broadcast logs to include the skipped media item. Cache500exchanges log-related information520with break info and skip a spot API455. Log-related information520can include broadcast logs, break information, spot break information, or the like. For example, spot schedule information module110can receive a request for break information associated with a particular media station from cloud-based playout system236. In response to the request for break information, spot schedule information module110can transmit a log request via path515to read logs module505. Read logs module505obtains the requested log, via path517, from DAAPI log database402, and transmits the requested log to cache500via path515. Cache500caches the requested log for subsequent transmission to spot schedule information module110as log-related information520. There may be cases where a log previously provided to cloud-based playout system236is changed by log manager150. In some embodiments, push log changes module510retrieves the changed log, or in some cases the log changes, from DAAPI log database402via path570. In at least some implementations, push log changes module510pushes the changed log and/or the log changes to cache500via path525, without waiting for a log request. Cache500caches the changed log and/or the log changes for later transmission to spot schedule information module110as log-related information520. DAAPI log database402can transmit the changed log and/or the log changes to cache500via log changes queue service532. In some cases, log changes queue service532is not a receipt-ordered queue, such as a first-in-first-out (FIFO) queue or a last-in-first-out (LIFO) queue. Instead, log changes queue service532can transmit the changed log and/or the log changes based on a priority of each item in the queue, based on a “hold time” associated with each item, based on a station identifier or station status indicator associated with each item, or on some other basis.
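One possible, purely illustrative realization of a non-receipt-ordered queue such as log changes queue service532is sketched below: items are held for a period, a newer change to the same log can supersede the older held item, and delivery order is based on priority rather than arrival order. The parameter values and names are assumptions.

```python
# Sketch of a hold-time, priority-ordered change queue; parameters and names are assumed.
class LogChangesQueue:
    def __init__(self, hold_seconds: float = 10.0):
        self.hold_seconds = hold_seconds
        self.held = {}                 # log_id -> (priority, ready_time, change)

    def push(self, log_id, change, priority, now):
        # A newer change for the same log replaces the older held item.
        self.held[log_id] = (priority, now + self.hold_seconds, change)

    def pop_ready(self, now):
        """Deliver held items whose hold time has elapsed, highest priority first."""
        ready = sorted((prio, log_id)
                       for log_id, (prio, t, _) in self.held.items() if t <= now)
        out = []
        for _, log_id in ready:
            out.append(self.held.pop(log_id)[2])
        return out

q = LogChangesQueue(hold_seconds=10.0)
q.push("WXYZ-log", {"rev": 1}, priority=5, now=0.0)
q.push("WXYZ-log", {"rev": 2}, priority=5, now=3.0)   # supersedes the held rev 1
q.push("KABC-log", {"rev": 7}, priority=1, now=4.0)
print(q.pop_ready(now=20.0))   # [{'rev': 7}, {'rev': 2}] - priority order, rev 1 never delivered
```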
In some implementations, log changes queue service532can hold an item in queue for a given period of time, and if another change impacting the held item is received before the given period of time elapses, the older of the two items can be deleted from the queue before being delivered to push log changes module510. Post/log trigger module595provides a trigger notification585to break information queue service575, which queues the notification for delivery to log manager (LM) get break information module605. Trigger notification585triggers LM get break information module605to send a get break info request600to get/break information interface615, which is part of log manager150. Get/break information interface615can respond by transmitting requested break information565, which can include complete logs, partial logs, log metadata, or the like, to LM get break information module605. LM get break information module605transmits the requested break information565to DAAPI log database402for storage and later delivery to cloud-based playout system236via break info and skip a spot API455. In response to receiving requested break information565, DAAPI log database402stores the requested break information565, transmits log changes included in the break information to log changes queue service532, and transmits fetch information to fetch queue service572. The fetch information provides information allowing media items identified in the requested break information to be obtained from media vault190. The fetch information can include, but is not limited to, media identifiers, address information identifying a location from which media items can be obtained, a creative identifier that uniquely identifies the media item to be fetched, or the like. Fetch queue service572transmits the fetch information to fetch media and metadata module555. In response to receiving the fetch information from fetch queue service572, fetch media and metadata module555transmits a fetch request625, which includes the fetch information, to get/media repository620, which is included in media vault190. Get/media repository620returns a fetch response630. Fetch response630can include the media item, metadata associated with the media item, an address of the media item if it is already stored within DAAPI media database403, or some combination thereof. In response to receiving fetch response630, fetch media and metadata module555transmits the media item, and optionally the media metadata, to DAAPI media database403for storage and later transmission to cloud-based playout system236. In conjunction with transmitting the media item to DAAPI media database403, fetch media and metadata module555transmits media metadata and an address at which the media has been stored to DAAPI log database402. Consider the following example. Log manager150transmits a log that is retrieved by LM get break info605, and stored in DAAPI log database402. Fetch media and metadata module555retrieves the media specified in the log, and stores that media at a particular location in DAAPI media database403. In that case, fetch media and metadata module555will send an address, for example a URL specifying the address at which the media has been stored in DAAPI media database403, to be stored in DAAPI log database402. The URL will be stored as log metadata, and associated with the log that specifies the media.
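The fetch-and-link step in the example above can be sketched as follows: the fetched media item is stored in the media database, and the address at which it was stored is written back to the log database as log metadata, so the playout system can later issue a download request against that address. Store layouts and key names are assumptions for illustration.

```python
# Sketch of fetching a media item and linking its storage address to the log metadata.
# Store layouts and key names are assumptions.
log_db = {"WXYZ-2023-06-01": {"slots": ["spot-42"], "media_urls": {}}}
media_db = {}

def fetch_and_store(log_id, media_id, media_vault):
    media_bytes = media_vault[media_id]                  # stands in for the fetch response
    url = f"/daapi/media/{media_id}"
    media_db[url] = media_bytes                          # stored for later download
    log_db[log_id]["media_urls"][media_id] = url         # URL kept as log metadata

media_vault = {"spot-42": b"...audio..."}
fetch_and_store("WXYZ-2023-06-01", "spot-42", media_vault)
url = log_db["WXYZ-2023-06-01"]["media_urls"]["spot-42"]
assert media_db[url] == b"...audio..."                   # a download request would use this URL
```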
In that way, when the log associated with the media item is later provided to cloud-based playout system236, cloud-based playout system236knows the address of the media, and can include that address of the media in download request265, which is sent to DAAPI media database403. In response to receiving download request265, DAAPI media database403can return the requested media item to cloud-based playout system236in a download response261. Log manager150includes log generation module610, which generates new broadcast logs and/or updates existing broadcast logs. The new broadcast logs can be generated using input from various media automation and/or traffic and billing systems, including without limitation Viero, Aquira, GSelector, or the like. Log generation module610can also use affidavits270received at post/affidavits module635from cloud-based playout system236to identify some items to be inserted into new or updated broadcast logs. In some embodiments, log manager150can also use inputs from other traffic and continuity applications to assist in generating logs and break information. Referring next toFIG.5, a programmatic advertisement insertion system545will be discussed in accordance with embodiments of the present disclosure. Programmatic advertisement insertion system545includes Dynamic Ad API (DAAPI)105; cloud-based playout system236; and programmatic ad system (PAS)130, which includes PAS media storage670and Post PAS affidavits module665. The illustrated embodiment of Dynamic Ad API (DAAPI)105includes playout-system-facing endpoints495, including break info and skip a spot API455and affidavits module/API245. Break info and skip a spot API455includes spot schedule information module110and change notification module120. DAAPI105also includes cache500; DAAPI log database402, which is used to store broadcast logs and log-related information and metadata obtained from log manager150; read logs module505; push log changes module510; and DAAPI media database403, which is used to store media items obtained from PAS130, along with metadata associated with the media items. Each of these elements functions in a way substantially similar to those same elements, as discussed with respect toFIG.4. The illustrated embodiment of Dynamic Ad API (DAAPI)105also includes media download service541, monitoring service694, lambda function710, station message queue service575, Programmatic Ad system (PAS) spot schedule get module685, spot information queue service737, real time query module730, comparison table720, and PAS affidavits module650. PAS130provides advertisements for insertion into spots (log positions) reserved for programmatic ad insertion by log generation module610(FIG.4). At the time the broadcast log is generated, the log manager150(FIG.4) does not know which specific spots (advertisements) are to be inserted into the log positions reserved for programmatic ad insertion. Likewise, PAS130does not know which log positions the log manager150has reserved for PAS130to use. Consequently, PAS spot schedule get module685transmits a request for PAS media items675to PAS media storage670. The request for PAS media items675can specify information about log positions available for insertion of programmatic advertisements.
For example, request for PAS media items675can specify a start time of an available spot (log position), an end time of an available spot, metadata associated with the available spot, media content surrounding the available spot, a full or partial broadcast log, and/or other information that PAS130can use to select programmatic ads for insertion into the available spot. In response to receiving the request for PAS media items675, PAS media storage670returns a PAS response680to PAS spot schedule get module685. The PAS response680can include, but is not limited to, spot information including a URL or other address specifying a location from which a PAS media item can be obtained, a start time specifying when the PAS expects the PAS media item to broadcast, a duration of the PAS media item, metadata associated with the PAS media item, a unique system identifier that can be used to identify the PAS media item, or the like. PAS spot schedule get module685transmits PAS spot information message725to DAAPI log database402for storage. PAS spot information message725includes PAS spot information, which can include all of the information included in PAS response680, including the address of the PAS media item. In some embodiments, the PAS spot information may include a limited subset of the information included in PAS response680. DAAPI log database402associates the address of the PAS media item with the appropriate spot (log position) in the appropriate log, and stores the association, along with information included in the PAS spot information message725, as log metadata, or as an element of a broadcast log associated with the PAS media item. In at least one embodiment, DAAPI log database402can flag a particular broadcast log position as “filled” in response to receipt of a PAS spot to be inserted into that particular broadcast log position. DAAPI log database402transmits download information540, including the addresses specifying locations from which PAS media items can be obtained, e.g., URLs associated with PAS media items, to media download service541. Media download service541uses download information540to obtain the PAS media items, and transmits the PAS media items to DAAPI media database403for storage, and later retrieval by cloud-based playout system236. A comparison between the information provided by PAS media storage670and information already stored in DAAPI log database402can be triggered in response to receipt of PAS spot information message725. In some embodiments, the comparison can be triggered in response to DAAPI log database402determining that PAS spot information message725includes information about previously filled log positions. The determination about whether a particular log position has been previously filled can be made based on a “filled” flag, or based on a check of stored association metadata. In some such embodiments, trigger715initiates formation of table720, which includes a time-ordered list of PAS media items and metadata previously associated with the log position indicated in PAS spot information message725. For example, assume that PAS spot information message725includes metadata and addresses of PAS media items to be broadcast between 2:05:15 am and 2:08:30 am on Station A the following day. In this example, table720will be populated with information related to PAS media items that have already been associated with spots (log positions) in Station A's broadcast log between 2:05:15 am and 2:08:30 am the next day.
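The comparison described above, between PAS media items already associated with a window of log positions (the contents of table720) and newly received PAS spot information, can be sketched as follows; the record shapes are illustrative assumptions, and rows that differ would then be reflected back into the stored log-related information.

```python
# Sketch of building the comparison table and detecting changed PAS associations.
# Record shapes are illustrative assumptions.
def build_table(stored_log_info, station, start, end):
    """Time-ordered list of PAS items already associated with the window."""
    rows = [r for r in stored_log_info
            if r["station"] == station and start <= r["start"] <= end]
    return sorted(rows, key=lambda r: r["start"])

def changed_rows(table, incoming):
    """Return the incoming rows that differ from what is already stored."""
    existing = {(r["station"], r["start"]): r["media_url"] for r in table}
    return [r for r in incoming
            if existing.get((r["station"], r["start"])) != r["media_url"]]

stored = [{"station": "A", "start": "02:05:15", "media_url": "/media/pas-1.mp3"}]
incoming = [
    {"station": "A", "start": "02:05:15", "media_url": "/media/pas-1.mp3"},  # unchanged
    {"station": "A", "start": "02:07:00", "media_url": "/media/pas-9.mp3"},  # new association
]
table = build_table(stored, "A", "02:05:15", "02:08:30")
print(changed_rows(table, incoming))   # only the new/changed association needs an update
```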
If PAS media items have been associated with those spots (log positions), table720can be populated with information about those PAS media items. If there are no PAS media items associated with those spots (log positions), table720can be empty. In some embodiments, rather than populating an empty table720, the empty table need not be generated at all. A comparison745can be made between the contents of table720and the content of PAS spot information message725to determine if any changes to previously stored log-related information need to be made. Comparison745can be made by using real-time query module730and spot information queue service737to extract comparison information from PAS response680received by PAS spot schedule get module685. In other embodiments, real-time query module730and spot information queue service737can be used to populate table720, and log-related information from DAAPI log database402can be compared to the contents of table720. Alternatively, two tables can be generated and compared. Regardless of which data is used to populate table720, if comparison745indicates differences between log-related information stored in DAAPI log database402and spot information included in PAS spot information message725, update notification750can be delivered to DAAPI log database402. In response to receiving update notification750, DAAPI log database402updates its stored log-related information, updates the download information540, and transmits download information540to media download service541. In some embodiments, monitoring service694continually monitors communications between media stations served by cloud-based playout system236to identify the occurrence of an event indicating that a media station is approaching a time when broadcasting of programmatic advertisements is to begin. In response to detecting such an event, monitoring service694transmits an event notification695to lambda function710, which obtains a list of media stations and/or log-related data690from DAAPI log database402. Lambda function710transmits the list of media stations and/or log-related data690to station message queue service575, which in turn passes the list of media stations and/or log-related data690to PAS spot schedule get module685. In response to receiving the list of media stations and/or log-related data690, PAS spot schedule get module685transmits the request for PAS media items675to PAS media storage670. Affidavits270are received at affidavits module/API245, which is included in DAAPI105. Affidavits270include PAS affidavits360, to be delivered to PAS system130. Affidavits module/API245transmits PAS affidavits360to PAS affidavits module650, which delivers PAS affidavit post message655to POST/PAS affidavits module665. POST/PAS affidavits module665responds to receipt of PAS affidavit post message655by transmitting a PAS affidavit response660back to PAS affidavits module650. The information included in PAS affidavits360includes, but is not limited to, the following: a status code; a spot instance identifier; a uniform resource locator (URL), such as an aircheck URL indicating a location at which a record demonstrating actual playout of the media is stored; information included in PAS affidavit post message655; or the like. The status code and associated spot instance identifier can be used to notify PAS system130about a playout status of a particular spot (media item) selected and/or provided by PAS system130for broadcast/streaming by cloud-based playout system236. 
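The comparison and update-notification path described above can be illustrated with the following Python sketch, in which a previously populated table is compared against incoming spot information and any differences trigger an update; the apply_updates and enqueue calls are hypothetical stand-ins for DAAPI log database402and media download service541.

```python
def compare_spot_info(table_720: dict[str, dict],
                      incoming_spots: dict[str, dict]) -> list[str]:
    """Return the start times whose stored spot information differs from the
    spot information carried in the PAS spot information message.

    Keys are broadcast start times; values are spot-information dicts
    (address, duration, metadata, and so on)."""
    changed = []
    for start_time, new_info in incoming_spots.items():
        stored = table_720.get(start_time)
        if stored != new_info:   # new spot, removed spot, or changed fields
            changed.append(start_time)
    return changed

def on_spot_info_message(log_db, table_720, incoming_spots, download_service):
    """Emulates the trigger/comparison path: if anything differs, deliver an
    update notification so the log database refreshes its stored information
    and re-issues download information to the media download service."""
    differences = compare_spot_info(table_720, incoming_spots)
    if differences:
        log_db.apply_updates({t: incoming_spots[t] for t in differences})
        download_service.enqueue(
            [incoming_spots[t]["media_url"] for t in differences]
        )
```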
PAS affidavit post message655can include, but is not limited to, a playout start time, a playout end time, an aircheck URL, one or more system identifiers associated with the media item, or the like. In some embodiments, the information included in PAS affidavits360, PAS affidavit post message655, and/or PAS affidavit response660, can be stored in DAAPI log database402. Referring next toFIG.6, a method of moving broadcast related information between a media playout system and multiple media automation applications will be discussed in accordance with embodiments of the present disclosure. As used herein, the term “broadcast related information” can include, without limitation, any or all of the following: spot information, media data, media information, log-related information, metadata, media items, logs, scheduling information, preferences, targeting information, station specific information, asset identifiers, addresses, or other information used in the broadcasting and scheduling process by media automation systems, broadcast disaster recovery systems, ad insertion systems, media management systems, playout systems, or services and modules used by those systems. As illustrated by block755, an application programming interface (API) included in a media broadcast system receives broadcast related information from multiple media automation applications. These applications can be implemented as microservice applications, each of which provides specific types of functionality used to generate logs, populate logs with media items, and provide those logs and media items to media stations for over-the-air or streaming broadcast. The API can receive the broadcast related information in response to a request for broadcast related information received from an edge device and/or a media playout system associated with one or more media stations. As illustrated by block760, the API stores the broadcast related information in one or more databases, for later transmission to requesting media playout systems, media broadcast systems serving local stations, disaster recovery systems, or the like. In various embodiments, the API stores current, recently past, and upcoming broadcast information for use by requesting systems and/or applications. For example, the API can store broadcast data that is expected to be used by broadcast stations within the next 24-48 hours, broadcast data currently being used by broadcast stations, and broadcast data for the previous 48 hours. Keeping a limited amount of broadcast data stored in the API's databases allows the API to quickly service requests for broadcast related information from requesting applications and/or systems. In some embodiments, different databases are used to store different types of broadcast-related information, e.g., log-related information and media-related information. In other embodiments, broadcast-related information can be stored in a single database. The database, or databases, used to store the broadcast related information can be implemented using the same processing device(s) used to implement the API. In other implementations, one or more of the databases used to store the broadcast information can be implemented as a cloud-based data storage system. As illustrated by block765, the API receives requests for stored broadcast related information from one or more media automation applications, including media automation services applications, applications included in edge devices and media playout systems, and the like. 
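The bounded retention described above (roughly the previous 48 hours of broadcast data plus the next day or two of upcoming data) might be enforced with a simple pruning pass such as the following Python sketch; the scheduled_time field and the exact window sizes are assumptions.

```python
from datetime import datetime, timedelta

# Illustrative retention policy: keep roughly the previous 48 hours and the
# upcoming 48 hours of broadcast-related records; exact windows are
# configurable in practice.
PAST_WINDOW = timedelta(hours=48)
FUTURE_WINDOW = timedelta(hours=48)

def prune_broadcast_records(records: list[dict], now: datetime) -> list[dict]:
    """Drop records that fall outside the retention window so the API's
    databases stay small enough to service requests quickly."""
    earliest = now - PAST_WINDOW
    latest = now + FUTURE_WINDOW
    return [r for r in records if earliest <= r["scheduled_time"] <= latest]
```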
By receiving and servicing requests for broadcast-related information obtained from media automation applications, the media automation applications can be isolated from the local media automation and/or media playout systems that use services provided by the media automation applications. Additionally, using the API allows the media automation services to provide media broadcast information well in advance of the time the media broadcast information is provided to the local broadcast stations. In modern broadcasting systems, some broadcast information is relevant to multiple different media stations. If widely distributed broadcast information is transmitted to a large number of local media stations first, any changes to that information would also have to be provided to those local systems, requiring the local systems to have the capability of inserting the changes into previously provided broadcast logs, etc. In various embodiments described herein, changes to the broadcast information can be provided to the API before the information is transmitted to the local stations, thereby decreasing the amount of data that must be transmitted to accommodate changes in previously scheduled broadcasts, and decreasing the computing complexity needed by local broadcast systems to deal with changes in broadcast-related information. As illustrated by block770, in response to receiving the request, the API transmits the stored broadcast related information to the requesting media playout system. In various embodiments, the media playout system plays out media items by broadcasting them via a stream or an over-the-air broadcast, and transmits to the API an affidavit, which includes information about the playout of the media items. As illustrated by block775, the API receives the affidavits from the playout systems. The affidavits include as-played information indicating a playout status of the media content. For example, an affidavit can include information indicating that a particular media item was fully played out on a particular media station, information indicating a start and/or end time of playout of that particular media item, information indicating partial or skipped playout of a media item, status codes, spot instance identifiers, aircheck URLs, media address information, or the like. As illustrated by block780, playout affidavits received by the API are forwarded to appropriate media automation applications. The API can decide which media automation application is to receive a particular affidavit based on address information included in message headers used to transmit the affidavits, or based on the content of the affidavits. For example, an affidavit can include information specifying that a particular affidavit is to be forwarded to log manager150, PAS130, MPS145, or media manager440(FIG.3). In some embodiments, a first affidavit can be sent to notify a media automation service/application when playout of a media item has started, and a second affidavit can be sent to that same media automation service when playout of the media item has been completed. In various embodiments, the API transmits an affidavit generated by a first media automation application to a second media automation application. Referring next toFIG.7, a method of handling change requests in a media broadcast system will be discussed in accordance with embodiments of the present disclosure. 
As illustrated by block785, An API, for example DAAPI105, receives a change request from a media playout system, such as broadcast system472(FIG.3), Cloud-based playout system236(FIGS.4-5), or edge device/media playout system50(FIG.1). The change request can indicate a requested change in one or more media items scheduled for broadcast/streaming by one or more media stations. The requested change includes a request to substitute, add, or remove primary content, advertising content, and/or tertiary content. As illustrated by block790, the API transmits the change request to appropriate media automation application/service. For example, if the request is for modification of scheduled programmatic advertisement, the change request can be transmitted to PAS130(FIG.3) and/or log manager150. If the request is for a change to a news, weather or traffic media item, the change request can be delivered to MPS145(FIG.3) and/or log manager150. If the request is for a change to a primary content media item, the change request can be delivered to media manager440(FIG.3) and/or log manager150. As illustrated by block795, in response to providing the change request to one or more of the media automation applications/services, updated broadcast-related information is received from the appropriate media automation applications/services. As illustrated by block800, the API stores the updated broadcast-related information in one or more databases, for example DAAPI database125(FIG.3). Either subsequent to, or concurrent with, storing the updated broadcast-related information, the API transmits the updated broadcast related information to the requesting media playout system, as illustrated by block805. In various embodiments, if the change request is denied by one of the media automation applications/services, a notification can be sent to the requesting media playout system in place of the updated broadcast-related information. In other embodiments, failure to receive updated broadcast-related information at the requesting playout system will result in the requesting playout system playing out the media as originally scheduled. Referring next toFIG.8, a method of disseminating broadcast information in a media broadcast system will be discussed in accordance with embodiments of the present disclosure. As illustrated by block810, an API receives a trigger message from one or more media automation applications. The trigger message indicates that broadcast-related information is ready for dissemination to playout systems and/or other media automation systems that provide media broadcast and/or streaming services associated with one or more media stations. In at least one embodiment, the trigger message is received from a log manager, and identifies at least one broadcast log ready for dissemination to one or more of the individual media stations. In various embodiments, the API can also receive trigger messages from media automation application/services other than the log manager, for example, if an updated weather report to be inserted into a particular broadcast log has not yet been generated, MPS145can transmit a trigger message once the weather report becomes available to MPS145. A trigger message can include an application/service identifier that identifies the media automation application/service that transmits the trigger message. 
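As an illustration of the routing described above, the following Python sketch forwards a change request to the automation services responsible for the affected content type and returns any updated broadcast-related information, or None when the request is denied; the service names and handle_change method are assumptions, not actual interfaces of PAS130, MPS145, media manager440, or log manager150.

```python
from enum import Enum, auto

class ContentType(Enum):
    PROGRAMMATIC_AD = auto()
    NEWS_WEATHER_TRAFFIC = auto()
    PRIMARY = auto()

# Hypothetical service handles standing in for the PAS, MPS, media manager,
# and log manager; in the real system these would be microservice endpoints.
def route_change_request(change_request: dict, services: dict) -> dict | None:
    """Forward a playout system's change request to the appropriate media
    automation application(s) and return any updated broadcast-related
    information, or None if the request was denied."""
    targets = {
        ContentType.PROGRAMMATIC_AD: ["pas", "log_manager"],
        ContentType.NEWS_WEATHER_TRAFFIC: ["mps", "log_manager"],
        ContentType.PRIMARY: ["media_manager", "log_manager"],
    }[change_request["content_type"]]

    updated = None
    for name in targets:
        response = services[name].handle_change(change_request)
        if response is None:
            return None     # denied: caller notifies the playout system instead
        updated = response  # last responder's update wins in this sketch
    return updated
```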
Additionally, the trigger message can include a media item identifier, a log identifier, a station identifier indicating one or more media stations impacted by the trigger message, or the like. In various embodiments, however, a trigger message includes information instructing the API to request broadcast information from the transmitting media automation application/service, without specifying that information. For example, if a log manager application/service transmits a trigger message to the API, the trigger message may not include information about which log is ready to be disseminated, only that the API should transmit a request for broadcast information to the media automation application/service. As illustrated by block815, in response to receiving a trigger message from a media automation application/service, the API obtains the media item, log, or other broadcast-related information from the media automation application/service that transmitted the trigger message. Obtaining the broadcast-related information can include transmitting a request for the media automation application/service to transmit specific broadcast related information to the API, a request for the media automation application/service to transmit all broadcast-related information ready for dissemination, a request to provide only updates to previously transmitted broadcast information, or some combination thereof. As illustrated by block820, the API stores the broadcast-related information in a database. The API transmits the stored broadcast-related information to one or more media playout systems, edge devices, and/or other media automation applications/services, as illustrated by block825. Referring next toFIG.9, a method of storing and transmitting media files in a media broadcast system will be discussed in accordance with embodiments of the present disclosure. As illustrated by block830, an API receives information indicating a storage location in a media vault from which media items included in a broadcast log can be obtained. The address can include, but is not limited to, a uniform resource locator (URL) to the media item itself, a URL to a table from which the address of the media item can be retrieved, a record identifier, or the like. In some embodiments, the message in which the API receives the information indicating the storage location can also include metadata associated with that URL. As illustrated by block835, a check can be made to determine if the API has previously retrieved the media item associated with a URL, and stored that media item in an API database, such as DAAPI database125(FIG.3). This check can be performed by, for example, comparing metadata stored in the API database with metadata included in the message received by the API. In other implementations, the URL itself provides sufficient information for the API to determine whether the media item has already been stored in the API database. In yet further embodiments, the API can maintain a log of previously accessed URLs, and a comparison of the received URL to the log of previously accessed URLs can be used as a basis for the decision. As illustrated by block840, if the media item identified by the URL is not already stored in the database, meaning that the media file for that media item is to be retrieved and stored in the API database, the API obtains the media file from storage in the media vault using the received address. 
As illustrated by block845, after obtaining the media file, the API caches or stores the media file within the API database. In this context, caching refers to storing the media file within the API database for a limited amount of time corresponding to hours, days, weeks, or months. Caching in this context is not intended to imply storing the data only for milliseconds or minutes. In at least one embodiment, the API will maintain a media file in storage within the API database at least as long as a media item corresponding to the media file is referenced in a current or upcoming broadcast log. In various embodiments, the API can automatically remove the media file from storage in the API database a predetermined amount of time, for example 1 week, after the media item is no longer referenced in a current or upcoming broadcast log. By caching the media file in the API database for a limited period of time, instead of storing it permanently, the API database can be maintained at a manageable size, while still making the media file quickly accessible to the API. As illustrated by block850, the API can transmit the addresses of media files stored within the API database to playout systems, media automation systems, edge devices, and the like. The addresses can be transmitted to the playout systems in conjunction with transmission of full or partial broadcast logs to those playout systems. For example, a partial broadcast log that specifies spots (advertisements) to be broadcast by a local media station during a given daypart can include address metadata indicating the location in the API database from which the spots can be retrieved. In some cases, the address metadata can be stored as media metadata associated with a media item, and the address metadata can be linked to particular spots (log positions) rather than being included in the log. The media files corresponding to the media items referenced in the broadcast logs can include primary content, advertising content, or tertiary content. In an example embodiment, a partial broadcast log including the address metadata for the spots (advertisements), or a partial broadcast log and separate media metadata linked to the partial broadcast log, is sent to a playout system 24 hours before the local media station is scheduled to broadcast the spots (advertisements). Note that the media files need not be transmitted at this time; just the partial broadcast log and the address metadata. As illustrated by block855, the API receives from the playout system a media item download request asking the API to transmit the media files included in the full or partial broadcast log. In various embodiments, the request will include the address of the media file within the API database. Continuing with the previous example, at some point after receiving the address of the media files included in the partial broadcast log, but before the media items identified in the partial broadcast log are scheduled to be broadcast and/or streamed, the playout system sends a download request to the API. The download request can include the address provided to the playout system by the API. As illustrated by block860, in response to receiving the media item download request from the playout system, the API facilitates transfer of media files from the API database, for example DAAPI media database403(FIG.4), to the requesting playout system. 
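The caching behavior described above (fetch from the media vault only when the media file is not already stored, serve download requests from the cache, and evict files about a week after they stop being referenced) can be sketched as follows in Python; the class, method names, and grace period are illustrative assumptions.

```python
import time

class MediaCache:
    """Illustrative cache for media files keyed by their media-vault URL.
    Names and the one-week grace period mirror the description above but are
    assumptions, not the actual DAAPI implementation."""

    GRACE_PERIOD_S = 7 * 24 * 3600   # keep files ~1 week after last log reference

    def __init__(self, vault_fetch):
        self._vault_fetch = vault_fetch      # callable: url -> bytes
        self._store: dict[str, bytes] = {}
        self._last_referenced: dict[str, float] = {}

    def ensure_cached(self, url: str) -> None:
        """Fetch the media file from the media vault only if it is not
        already stored."""
        if url not in self._store:
            self._store[url] = self._vault_fetch(url)
        self._last_referenced[url] = time.time()

    def serve_download(self, url: str) -> bytes:
        """Return the cached media file in response to a playout system's
        download request."""
        return self._store[url]

    def evict_stale(self, now: float | None = None) -> None:
        """Remove files no longer referenced by a current or upcoming log."""
        now = time.time() if now is None else now
        for url, last in list(self._last_referenced.items()):
            if now - last > self.GRACE_PERIOD_S:
                self._store.pop(url, None)
                self._last_referenced.pop(url, None)
```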
Facilitating the transfer of the media files can include passing the media item download request to the API database, which will process the request and transmit the requested media files to the requesting playout system. As illustrated by block865, if the check at block835indicates that the media files are not to be stored in the API database, retrieval of the media files from the media vault can be bypassed by foregoing execution of blocks840-850. Referring next toFIG.10, operation of an application programming interface (API) in a media broadcast system will be discussed in accordance with embodiments of the present disclosure. As illustrated inFIG.10, blocks870,875,880,885, and890can be performed by an ingestion/update module445(FIG.3), blocks895,900, and905can be performed by a break information module455(FIG.3), and blocks910and915can be performed by affidavits module245. Although illustrated as being performed by the listed modules, other suitable module arrangements can be used to provide the same or similar functionality. As illustrated by block870, an API receives a traffic log from a log manager. The traffic log, which is an example of a broadcast log, includes spot breaks. The term “spot break” refers to a group of one or more log positions reserved for insertion of media items that may or may not be advertising content. For example, a spot break may include log positions reserved for insertion of advertising content, log positions reserved for insertion of weather, traffic, or other primary content, and in some embodiments, log positions reserved for insertion of tertiary content. The traffic log may specify particular media items to be used to fill the spot breaks, or it may specify types or sources of media items. For example, a traffic log may specify that a first spot (log position) of a spot break is to be filled with programmatic advertisements obtained from PAS130(FIG.5), specify that a second spot (log position) is to be filled with a weather report obtained from MPS145(FIG.3), and preassign a particular media item to a third spot (log position) of the spot break. As illustrated by block875, the API receives spot break information, including spot metadata or other media item metadata, from multiple media automation applications. For example, the API can provide all or part of the traffic log to one or more media automation applications/services. The traffic log can be a partial or complete traffic log, and can include metadata that the media automation applications/services can use to select media items for insertion into the traffic log. If the traffic log includes information indicating that a first spot (log position) of the spot break is to be filled with a weather report for the Dallas, Texas metroplex, the API can receive spot break metadata from MPS145(FIG.3) that identifies a media file including a weather report for north Texas. If the traffic log indicates that another spot (log position) in another spot break is reserved for an advertisement targeted to college-educated women between the ages of 26 and 29, then the API can receive spot break metadata from PAS130(FIG.3) that identifies a media file including an advertisement for a restaurant appealing to that demographic. In at least one embodiment, the spot break information includes an address from which a media item can be obtained, a creative identifier, and the like. As illustrated by block880, the API can store the spot break information in a relational database, such as RDS450(FIG.3). 
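A simplified Python sketch of filling a spot break, under the assumptions that each reserved position names its source and that the automation services expose a select_spot call, is shown below; none of these names come from the actual system.

```python
# Hypothetical representation of a spot break: each reserved log position names
# the source that should fill it ("pas", "mps", or a preassigned media id).
def fill_spot_break(spot_break: list[dict], services: dict, rds) -> None:
    """Collect spot break metadata from the responsible media automation
    application for each reserved position and store it in the relational
    database."""
    for position in spot_break:
        source = position["source"]
        if source == "preassigned":
            spot_meta = {"media_id": position["media_id"]}
        else:
            # e.g. a programmatic ad system for ads, a production system
            # for weather/traffic/news
            spot_meta = services[source].select_spot(position["targeting"])
        rds.store_spot_info(position["log_position_id"], spot_meta)
```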
As illustrated by block885, the API can use the spot metadata included in the spot break information to retrieve the appropriate media files, for example from media manager440(FIG.3). As illustrated by block890, the API can cache the media items at a storage location within an API database, such as media storage database128. The API can also transmit some or all of the spot break information to a media playout system, such as broadcast system472, as illustrated by block895, which uses the spot break information to obtain media items for broadcast. As illustrated by block900, the playout system reports any differences between actually broadcast/streamed media items and media items scheduled for broadcast/streaming by the traffic log in a change notification. The API receives the change notification at Break Info & Skip a Spot API455(FIG.3). As illustrated by block905, Break Info & Skip a Spot API455can provide the change notification to ingestion/update module445(FIG.3), provide the change notification to affidavits module/API245, store portions of the information included in the change notification to RDS450(FIG.3), store portions of the information included in the change notification to media storage database128, or some combination of these actions. Although not specifically illustrated inFIG.10, ingestion/update module445and/or affidavits module245can forward some or all of the information included in the change notification to one or more of the advertisement automation micro-services included in Ad-Tech system435(FIG.3). As illustrated by block910, the API, e.g., DAAPI105, can receive playout affidavits from the media playout system. In at least one embodiment, affidavits differ from change notifications because change notifications can be used to report discrepancies in media item playout, while affidavits can include information reporting both discrepancies in media item playout and successful playout of media items. As illustrated by block915, the API forwards received affidavits to one or more media automation applications/services, which can use the affidavits for proof of playout, billing, future scheduling, calculating impressions, or the like. Referring next toFIG.11a disaster recovery system920will be discussed in accordance with embodiments of the present disclosure. Disaster recovery system920includes media provider back-end systems and services930, end-user digital device85, cloud-based sequencer945, system-wide recovery content955, local edge device50, and media broadcast/streaming stations995. During periods when cloud-based sequencer is operating normally, for example when there are no network disruptions or outages, cloud-based sequencer945communicates with media provider back-end systems and services930via communications path940, and provides access to services provided by media provider back-end systems and services930through an API, such as master API20(FIG.1) or DAAPI105(FIGS.2-5), included in cloud-based sequencer945. During normal operation, cloud-based sequencer945communicates with end-user digital device85via communications path950, with system-wide recovery content955via communications path947, and with local edge device50via communications path935. During periods of time when cloud-based sequencer945is unavailable, end-user digital device85accesses media provider back-end systems and services930via communications path925. 
In some embodiments, when cloud-based sequencer945is unavailable media provider back-end systems and services930can access system-wide recovery content955via communications path924. In at least one embodiment, during periods of time when cloud-based sequencer945is unavailable, local edge device50has no access to media provider back-end systems and services930or to system-wide recovery content955. In some embodiments, alternate communication pathways (not illustrated) can be provided between local edge device50and either or both of media provider back-end systems and services930and system-wide recovery content955. However, in various embodiments described herein, alternate pathways are not necessary, because local edge device50can continue to provide broadcast automation services to media broadcast/streaming stations995, even during periods when local edge device50is unable to access media provider back-end systems and services930. In various embodiments described herein, media provider back-end systems and services930include, but are not limited to: selection of media content including primary content, advertising content, and tertiary content; scheduling of the media content; generating broadcast logs designating particular media items for playout at particular times; obtaining media items designated by the broadcast logs; and maintaining records of actually broadcast or streamed media items. The services provided by media provider back-end systems and services930can be implemented using microservice applications implemented on one or more processing systems, or using other suitably configured processing systems. The media management system425illustrated inFIG.2shows various elements of media provider back-end systems and services930, for example traffic/billing system205, log manager150, MPS145, programmatic ad system (PAS)130, syndication service180, inventory management hub175, media vault190, media editor230, and enterprise copy module220, among others. End-user digital device85can be smart phone, tablet, smart television, laptop, desktop, wearable computer, radio, set-top box, or any device capable of receiving and presenting digital media streams or broadcasts to an end user via a media presentation application being executed on the end-user digital device85. In some embodiments, end-user digital device85can receive media items and assembly instructions from cloud-based sequencer945or media provider back-end systems and services930, and fully or partially assemble a collection of media items that are presented by end-user digital device85to an end-user. In other embodiments, end-user digital device85can receive a media stream from cloud-based sequencer945or a streaming server included in media provider back-end systems and services930, and present the media stream to the end user. Cloud-based sequencer945provides API access to content, for example advertisements, songs, music, and video content; access to scheduling, traffic, and advertisement insertion services, as well as edge device management services. Cloud-based media management system45(FIG.1) is an example of a cloud-based sequencer945illustrated inFIG.11. System-wide recovery content955can be implemented as one or more databases, such as DAAPI database125, which can include RDS450and media storage database128as shown inFIG.3. 
In various embodiments, system-wide recovery content955can be populated with broadcast-related information that is transmitted to one or more local edge devices50when access to cloud-based sequencer945has been restored after having been previously unavailable. Data can be stored in system-wide recovery content955by cloud-based sequencer945during normal operation, and by media provider back-end systems and services930during periods of time when cloud-based sequencer945is unavailable due to failure of one or more systems, network outages, or the like. The data stored in system-wide recovery content955can include, but is not limited to, broadcast-related information transmitted to local edge device50by cloud-based sequencer945, and broadcast information transmitted to end-user digital device85from either cloud-based sequencer945or media provider back-end systems and services930. In some embodiments, broadcast-related information that would otherwise have been transmitted from cloud-based sequencer945to local edge device50but for a system failure or network outage that prevents local edge device50from receiving that broadcast information can be stored in system-wide recovery content955. Upon recovery from the failure or network outage, the broadcast-related information stored in system-wide recovery content955can be provided to local edge device50to bring local edge device50back to a current state. In effect, the broadcast-related information stored in system-wide recovery content955allows local edge device50to find out “what it missed” while local edge device50was operating on its own. The broadcast-related information stored in system-wide recovery content955can include broadcast-related information for multiple edge devices, multiple playout systems, and multiple media stations. Local edge device50streams or otherwise provides media broadcasts to one or more media broadcast/streaming stations995. In some embodiments, during normal operation a fully assembled media broadcast is received from cloud-based sequencer945at local edge device50, and transmitted to one or more media broadcast/streaming stations995. In some of those same embodiments, when local edge device50does not have access to cloud-based sequencer945, local edge device50still provides a fully assembled media broadcast to media broadcast/streaming stations995, but generates the fully assembled media broadcast itself based on locally stored broadcast-related information. In effect, local edge device50temporarily performs the functions normally provided by cloud-based sequencer945during periods when access to cloud-based sequencer945is unavailable. In various embodiments, local edge device50includes, among other things, a local sequencer, a local player, local content storage, a local messaging service, and an offline controller. The local sequencer has functionality similar to cloud-based sequencer945, and allows local edge device50to manage logs, obtain media items from local content storage, maintain a record of affidavits and change notifications, and manage messages. In at least one embodiment, when local edge device50determines that access to cloud-based sequencer945has been lost, the offline controller repoints communication endpoints from the cloud-based sequencer945to the local sequencer. Referring next toFIG.12, an edge device50will be discussed in accordance with embodiments of the present disclosure. 
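The catch-up behavior described above might be sketched as follows in Python: broadcast-related information that could not be delivered is queued per edge device in a stand-in for system-wide recovery content955, then replayed in order when access is restored; the class and method names are assumptions.

```python
from collections import defaultdict

class RecoveryContentStore:
    """Illustrative stand-in for system-wide recovery content: broadcast-related
    information that could not be delivered to an edge device is queued per
    device and replayed once access to the cloud-based sequencer is restored."""

    def __init__(self):
        self._missed = defaultdict(list)   # edge_device_id -> [broadcast records]

    def record_missed(self, edge_device_id: str, record: dict) -> None:
        """Called while the edge device is unreachable."""
        self._missed[edge_device_id].append(record)

    def replay(self, edge_device_id: str, send) -> int:
        """Bring the edge device back to a current state by delivering, in
        order, everything it missed; `send` is a delivery callable."""
        pending = self._missed.pop(edge_device_id, [])
        for record in pending:
            send(record)
        return len(pending)
```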
Edge device50includes local sequencer/content manager960, offline controller1005, media playout module985, content replication service990, emulators for traffic and continuity applications998, local media database991, and local log database992. In various embodiments, when cloud-based sequencer945is on line and accessible to edge device50, edge device50communicates with cloud-based sequencer945, through various automation service requests1030, to obtain one or more of the following: media items, broadcast logs, other broadcast-related information, fully or partially assembled broadcasts. Edge device also provides cloud-based sequencer945with change notification messages, affidavits, and other playout-related information received from media broadcast/streaming stations995, or generated internally by edge device50. Edge device50streams or otherwise transmits fully assembled broadcasts to one or more of the media broadcast/streaming stations995based on information received from cloud-based sequencer945. The fully assembled broadcasts transmitted from edge device50to media broadcast/streaming stations995can be the same fully assembled broadcasts received from cloud-based sequencer945, or edge device50can generate the fully assembled broadcasts using locally stored broadcast information and media items received from cloud-based sequencer945. In some embodiments the operational state of cloud-based sequencer945affects a determination by edge device50whether to generate fully assembled broadcasts for transmission to media broadcast/streaming stations995. In other embodiments, edge device50generates the fully assembled broadcasts regardless of the operational state of cloud-based sequencer945. In various embodiments, offline controller1005controls whether automation service requests1030are routed to cloud-based sequencer945or to local sequencer/content manager960. During periods of time when edge device50has access to cloud-based sequencer945, automation service requests1030are routed to cloud-based sequencer945. Automation service requests include requests for services provided by any of the media automation applications/services that can be accessed through an API included in cloud-based sequencer945(e.g., master API20ofFIG.1and DAAPI105ofFIGS.2-5). Examples of media application/services accessed through the API are discussed with reference toFIGS.1-10. Automation service requests can include, but are not limited to, requests for media files of primary media content, requests for original or updated logs, requests for media files of advertising content to be inserted into spot breaks, change requests/notifications, playout affidavits, or the like. During periods of time when edge device50does have access to cloud-based sequencer945, automation service requests1030are routed to cloud-based sequencer945. During periods of time when edge device50does not have access to cloud-based sequencer945, automation service requests1030are routed to local sequencer/content manager960, which emulates, or replicates, the media automation services/applications. 
Emulating the media automation services/applications includes emulating the functions of the API included in cloud-based sequencer945, so that local sequencer/content manager960receives the same automation service requests1030that would otherwise be received by cloud-based sequencer945, and provides responses that are locally indistinguishable from responses that would have been provided by cloud-based sequencer945, had cloud-based sequencer945been available. For example, assume an automation service request1030for an advertisement to be inserted into a spot block of a broadcast log is routed to cloud-based sequencer945during a period of time when edge device50has access to cloud-based sequencer945. Assume further that, in response to the automation service request1030, cloud-based sequencer945returns the following information: an address specifying a location from which the advertisement can be obtained, a unique system identifier associated with the advertisement, and metadata associated with the advertisement. If that same automation service request1030is routed to local sequencer/content manager960during a period of time when edge device50does not have access to cloud-based sequencer945, the response from local sequencer/content manager960will also include: an address specifying a location from which the advertisement can be obtained, a unique system identifier associated with the advertisement, and metadata associated with the advertisement. The information included in the response to the automation service request1030routed to local sequencer/content manager960may be different from the information that would have been included in the response if the automation service request1030had been routed to cloud-based sequencer945, but the potential differences will be transparent to the requestor, because the requestor has received the information it requested. The reason for the potential difference is that, in at least one embodiment, local sequencer/content manager960will be using locally stored information to emulate the response of the intended advertisement insertion application/service, which in this example cannot currently be accessed through cloud-based sequencer945. The locally stored content used by local sequencer/content manager960to generate the fully assembled media broadcasts can be obtained by content replication service990during periods of time when cloud-based sequencer945is accessible to edge device50. In at least one embodiment, content replication service990obtains broadcast-related information, including media items and broadcast logs, in response to automation service requests1030generated by media playout module985or content replication service990, and stores the information received in one or more local databases, such as local media DB991and local log DB992. In at least one embodiment, content replication service990stores 14 days of broadcast-related information, including logs and media items: 7 days of broadcast-related information before a current time, and 7 days of broadcast-related information after a current time. Data older than 7 days can be deleted from storage. In various embodiments, the length of time past data is stored can be varied based on storage capacity, and the amount of data stored for upcoming time periods can vary based on how far in advance broadcast-related information is available. 
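A minimal Python sketch of the local emulation described above is shown below: the local sequencer answers an advertisement-insertion request with the same fields the cloud-based sequencer would return, but sourced from locally replicated content; the lookup_spot and pick_ad helpers and the field names are hypothetical.

```python
def handle_ad_request_locally(request: dict, local_media_db, local_log_db) -> dict:
    """Sketch of local emulation: answer an advertisement-insertion request
    with the same fields the cloud-based sequencer would return, but sourced
    from locally replicated content. Field names are assumptions."""
    spot_meta = local_log_db.lookup_spot(request["log_position_id"])
    media_item = local_media_db.pick_ad(spot_meta)   # simplified local selection
    return {
        "media_address": media_item["local_url"],    # points into the local media database
        "system_identifier": media_item["id"],
        "metadata": media_item["metadata"],
    }
```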
In implementations where cloud-based sequencer945and local sequencer/content manager960provide fully assembled media broadcasts, media playout module985transmits the fully assembled media broadcasts to media broadcast/streaming stations995. In some implementations, however, cloud-based sequencer945and local sequencer/content manager960do not always provide fully assembled broadcasts, but instead can provide some combination of logs or log-related information, media items, metadata, or partially assembled media broadcasts sufficient to allow media playout module985to generate a fully assembled media broadcast. In those implementations, when edge device50has access to cloud-based sequencer945, media playout module985can generate fully assembled media broadcasts based on the information received from cloud-based sequencer945, and transmit the fully assembled media broadcasts to media broadcast/streaming stations995. Conversely, when edge device50does not have access to cloud-based sequencer945, media playout module985can generate fully assembled media broadcasts based on the information received from local sequencer/content manager960, and transmit the fully assembled media broadcasts to media broadcast/streaming stations995. In some embodiments, a single media playout module985can serve more than one media broadcast/streaming station995. In other embodiments, a single media playout module985can provide dedicated service to a single media broadcast/streaming station. Local sequencer/content manager960includes spot selection module965and log management module970. Log management module970includes change/affidavits module975and reconciliation module980. In various embodiments, spot selection module965emulates the combined functions of ingestion/update module445, PAS130, and MPS145. In those same embodiments, log management module970emulates the combined functions of ingestion/update module445and log manager150. In other embodiments, separate emulators, such as emulators for traffic and continuity applications998, are used to emulate one or more traffic and continuity applications, such as PAS130, MPS145, and log manager150. For example, during periods of time when automation service requests1030are routed to local sequencer/content manager960, spot selection module965receives from media playout module985spot information requests, which can include a request for spots (media items) to be inserted into spot breaks of a broadcast log. In response to receiving a spot information request, spot selection module965can forward the spot information request to emulators for traffic and continuity applications998, which select a spot for inclusion in the log, and provide a spot information response to spot selection module965. Spot selection module965returns the spot information response to media playout module985. The spot information response includes information indicating a location at which a spot (media item) referenced in a broadcast log can be retrieved from local media database991. By way of contrast, a spot information response provided to media playout module985by cloud-based sequencer945would, in at least one embodiment, include information indicating a location at which the spot (media item) could be retrieved from DAAPI database125. 
In another example, during periods of time when automation service requests1030are routed to local sequencer/content manager960, log management module receives from media playout module985spot information requests, which can include a request for log-related information, such as broadcast logs and metadata associated with the broadcast logs. In response to receiving a spot information request including a request for log-related information, spot selection module965can forward the request to emulators for traffic and continuity applications998, which selects log-related information to be returned to media playout module985, and transmits the log-related information to spot selection module965. Spot selection module965returns the log-related information to media playout module985. In at least some embodiments, automation service requests1030can also include affidavits and change notifications generated by media playout module985. Log management module970uses change/affidavits module975, to perform/emulate the functions that would otherwise be performed by change notification module120(FIG.2) and affidavits module/API245(FIG.2). One difference between the functionality provided by change notification module120and affidavits module/API245, which are included in cloud-based sequencer945, and change/affidavits module975, which is included in local sequencer/content manager960, is that in some embodiments, change/affidavits module975stores change notifications and affidavits for later transmission by reconciliation module980. In at least one embodiment, change/affidavits module975stores change notifications to local log database992, until reconciliation module980transmits the change notification to cloud-based sequencer945in response to restoration of access to cloud-based sequencer945. Although illustrated as a single module, change/affidavit module975can be implemented as separate modules and/or systems. In some embodiments, emulators for traffic and continuity applications998and/or local sequencer/content manager960can be implemented as local versions of the various backend systems, media automation applications, or microservices to which cloud-based sequencer945provides access. In at least one embodiment, the functionality being emulated is more limited than that provided by the backend systems, media automation applications, or microservices being emulated. For example, simplified versions of advertisement targeting may be implemented to limit the processing resources needed. Or, in some implementations, there is a natural limitation imposed by the fact that edge device50is using locally stored media and log content, and may not have access to the full universe of potential media items available to the backend systems, media automation applications, or microservices being emulated. In some implementations, emulators for traffic and continuity applications998can use simplified or different selection and scheduling algorithms than those used by the backend systems, media automation applications, or microservices being emulated. Assume, as noted previously, that 14 days of media content and logs are stored in the local databases available to edge device50. Assume further that the broadcast logs for the current day are fully assembled broadcast logs, but that broadcast logs for the following day are incomplete. 
Given the above assumptions, if access to cloud-based sequencer945is lost, media playout module985can simply transmit the current day's broadcast logs to media broadcast/streaming stations995, because they are already fully assembled. However, if access to cloud-based sequencer945has not been restored by midnight, the following day's logs are not yet fully assembled, so media playout module985will send automation service requests1030to obtain media items needed to complete the following day's logs. In response to those automation service requests1030, local sequencer/content manager960can query emulators for traffic and continuity applications998. The emulators can obtain the appropriate log and spot metadata from local log database992, select locally available media items from local media database991using the spot metadata, and provide the selection information to media playout module985. Media playout module985can retrieve the media items from local media database991, generate a fully assembled broadcast log, and transmit that fully assembled broadcast log to media broadcast/streaming stations995. Information about media items selected for inclusion by emulators for traffic and continuity applications998can be stored by reconciliation module980, for later reporting to cloud-based sequencer945. When access to cloud-based sequencer945is restored, reconciliation module980transfers information stored during the period of time cloud-based sequencer945was offline to cloud-based sequencer945, for delivery to the appropriate traffic and continuity applications, microservices, backend systems, etc. For example, any advertisements, primary content items, or tertiary items selected for insertion into broadcasts by local sequencer/content manager960can be transmitted to DAAPI105(FIG.3) for delivery to log manager150(FIG.3). Affidavits related to playout of programmatic advertisements during the time cloud-based sequencer945was unavailable can be transmitted to DAAPI105for delivery to both PAS130(FIG.3) and log manager150. Reconciliation module980can also coordinate retrieval of broadcast-related information, such as media items and logs, which would have been received by edge device50but for the inability of edge device50to access cloud-based sequencer945. In at least one embodiment, access to cloud-based sequencer945may be restored during a current media broadcast based on a log that was fully assembled during a time when cloud-based sequencer945was unavailable. In that case, upon reconnection with cloud-based sequencer945, a new fully assembled media broadcast, which differs from the current media broadcast, might be received by edge device50. In such a case, media playout module985can continue transmitting the current media broadcast, and report any differences between the current media broadcast and the new fully assembled media broadcast as change notifications. In some embodiments, the switchover from the current media broadcast to the new fully assembled media broadcast can be delayed until a spot break in the current media broadcast substantially aligns with a spot break in the new fully assembled media broadcast. Offline controller1005includes a control module1015, which controls a switch module966to route automation service requests1030to either cloud-based sequencer945or local sequencer/content manager960, based on input from a negotiation module1020and a status module1025. 
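The buffering and later flush performed by reconciliation module980might look like the following Python sketch, in which locally generated selections, affidavits, and change notifications are recorded while offline and delivered once access returns; the post_reconciliation call is an assumed, not actual, endpoint.

```python
class ReconciliationModule:
    """Sketch of the reconciliation behavior described above: buffer locally
    generated selections, affidavits, and change notifications while offline,
    then flush them to the cloud-based sequencer when access returns. The
    `cloud_api` methods used here are assumptions, not actual DAAPI endpoints."""

    def __init__(self):
        self._pending: list[dict] = []

    def record(self, event: dict) -> None:
        """Called while offline, e.g. for locally selected spots or affidavits."""
        self._pending.append(event)

    def flush(self, cloud_api) -> None:
        """Deliver buffered events for routing to the appropriate backend
        applications (e.g. log manager, programmatic ad system), oldest first."""
        while self._pending:
            event = self._pending.pop(0)
            cloud_api.post_reconciliation(event)
```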
Status module1025detects whether edge device50can access cloud-based sequencer945, and notifies control module1015to generate control signal1010in response to a total or partial loss of access to cloud-based sequencer945. In response to detecting a total lack of access to cloud-based sequencer945, control module1015can instruct switch module966to reroute all automation service requests1030to local sequencer/content manager960. A partial loss of access includes an inability to access one or more of the media automation applications/services used by cloud-based sequencer945. In the event of a partial loss of access, control module1015can reroute all automation service requests1030to local sequencer/content manager960, or reroute only those automation service requests1030that rely on inaccessible media automation applications/services. Determining whether cloud-based sequencer945is totally or partially inaccessible can be based on the content of messages received from cloud-based sequencer945or an edge device manager35(FIG.1), the fact that one or more messages have been received from cloud-based sequencer945or an edge device manager35, a failure to receive one or more messages from cloud-based sequencer945or an edge device manager35, or some combination thereof. In some embodiments, network availability, status, and/or quality messages received from other devices can also be used by status module1025to determine when one or more automation service requests1030should be rerouted. In cases where multiple edge devices have been unable to access cloud-based sequencer945, each of the multiple edge devices can include a negotiation module1020, to allow an orderly recovery of the edge devices to a current state. Note that the term “current state” refers to a state in which edge device50would be if access to cloud-based sequencer945had not been lost. In this context, the term “current state” does not mean any state in which edge device50happens to be at the current time. Thus, for example, for edge device50to be in a current state, local media database991and local log database992will have stored all broadcast information, including media and logs, which would have been transmitted to edge device50by cloud-based sequencer945during the time that cloud-based sequencer945was partially or wholly unavailable, or inaccessible. To bring edge device50into a current state, media items, logs, metadata, and other broadcast-related information may need to be obtained from cloud-based sequencer945. In some implementations, cloud-based sequencer945may not be able to provide all broadcast information to every edge device at the same time. To prevent the case in which performance of cloud-based sequencer945is impacted by excessive requests for “missing” information, negotiation module1020can negotiate with other edge devices to determine a priority in which each edge device will transmit requests for cloud-based sequencer945to provide the missing information, or in some cases an order in which automation service requests1030are rerouted from local sequencer/content manager960to cloud-based sequencer945. Negotiations among edge devices may or may not include the cloud-based sequencer. 
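The routing decision made by the offline controller can be illustrated with the following Python sketch, which sends requests to the cloud-based sequencer when it is reachable, falls back entirely on a total outage, and reroutes only service-specific requests on a partial outage; the Access enum, the service field, and the handler objects are assumptions.

```python
from enum import Enum, auto

class Access(Enum):
    FULL = auto()
    PARTIAL = auto()
    NONE = auto()

def route_request(request: dict, access: Access,
                  unavailable_services: set[str],
                  cloud_sequencer, local_sequencer):
    """Sketch of the offline controller's switch: send a request to the
    cloud-based sequencer when it is reachable, fall back to the local
    sequencer on a total outage, and on a partial outage reroute only the
    requests that depend on an inaccessible service."""
    if access is Access.NONE:
        return local_sequencer.handle(request)
    if access is Access.PARTIAL and request["service"] in unavailable_services:
        return local_sequencer.handle(request)
    return cloud_sequencer.handle(request)
```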
Negotiation module1020of edge device50can establish a priority of reconnection based on an order in which the edge devices regained access to cloud-based sequencer; based on a collision handling process, similar to those implemented in Ethernet protocols; based on passing of a token, similar to token ring protocols; based on how much broadcast related information is stored in a particular edge device; based on how much extra broadcast information will need to be provided by cloud-based sequencer; based on a priority of stations served by a particular edge device; based on a number of stations served by a particular edge device; based on network statistics associated with each particular edge device; based on a time zone; or based on some other factor distinguishing one edge device from another. In at least one embodiment, negotiation between currently on-line sequencers, including local sequencers coming on-line after recovering from loss of access to the cloud-based sequencer, assists in rebalancing sequencer loads during reconnection of the local sequencers coming back on-line. Referring next toFIG.13, a disaster recovery method implemented by an edge device will be discussed in accordance with embodiments of the present disclosure. As illustrated by block1075, a check is made to determine if a cloud-based sequencer is available. As illustrated by block1080, if the check at block1075indicates that the cloud-based sequencer is available, an edge device interface is pointed to a cloud-based sequencer. Pointing the edge device interface to the cloud-based sequencer can include addressing automation service requests, affidavits, and/or other messages related to obtaining broadcast content to the cloud-based sequencer. As illustrated by block1082, a media playout system served by the edge device requests media automation services from the cloud-based sequencer whenever the interface of the edge device is pointed to the cloud-based sequencer. Media automation services can include, but are not limited to, the following: media content selection services, including selection of primary content, advertising content, and tertiary content; scheduling services, including log generation services; media access services, including the ability to obtain selected media content; programmatic advertisement services; media creation services; traffic and billing services; or some combination of these and other services. As illustrated by block1085, the edge device receives broadcast-related content, such as broadcast logs, media files, and metadata from the cloud-based sequencer in response to requests sent to the cloud-based sequencer by the edge device. In some embodiments, broadcast-related content can be pushed to the edge device by the cloud-based sequencer whenever the interface of the edge device is pointed to the cloud-based sequencer. As illustrated by block1090, local versions of broadcast logs, media files, and metadata are stored local to the edge device by a content replication services module. The method returns to block1075, after content replication services module stores the broadcast-related content local to the edge device. As illustrated by block1105, if the check at block1075indicates that the cloud-based sequencer is unavailable, the edge device repoints its interface to a local sequencer. 
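One possible way to compute such a reconnection priority is sketched below in Python, ordering edge devices by when they regained access, the priority of the stations they serve, and how much broadcast information they are missing; the record fields are assumptions, and any of the other factors listed above could be added to the sort key.

```python
def reconnection_order(edge_devices: list[dict]) -> list[str]:
    """Illustrative priority computation for the negotiation described above:
    devices that regained access earlier, serve higher-priority stations, and
    are missing more broadcast information reconnect sooner."""
    ranked = sorted(
        edge_devices,
        key=lambda d: (
            d["access_restored_at"],      # earlier restoration first
            -d["station_priority"],       # higher-priority stations first
            -d["missing_records"],        # devices missing more data first
        ),
    )
    return [d["device_id"] for d in ranked]
```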
Pointing the edge device interface to the local sequencer can include addressing automation service requests, affidavits, and/or other messages related to obtaining broadcast content to the local sequencer, instead of to the cloud-based sequencer. As illustrated by block1110, whenever the edge device interface is pointed to the local sequencer, the local sequencer emulates one or more media automation services that would normally be provided by the cloud-based sequencer. As illustrated by block1115, whenever the edge device interface is pointed to the local sequencer, an offline controller routes requests for media automation services from a media playout system served by the edge device to the local sequencer. The method returns to block1075, where a check is made to determine whether the cloud-based sequencer has become available. As illustrated by the combination of blocks1075,1080, and1105, as long as the cloud-based sequencer is available, the interface of the edge device is pointed to the cloud-based sequencer, but when the cloud-based sequencer is unavailable, the interface of the edge device is pointed to a sequencer local to the edge device.
Referring next toFIG.14, a disaster recovery method implemented by an edge device manager, for example edge device manager35(FIG.1), will be discussed in accordance with embodiments of the present disclosure. As illustrated by block1120, an edge device manager determines a status of a cloud-based sequencer providing media automation services to an edge device. In some embodiments, the edge device manager can determine the status of the cloud-based sequencer based on at least one of the following: content of a message received from the cloud-based sequencer; content of a message received from one or more edge devices being served by the cloud-based sequencer; the presence or absence of a message from the cloud-based sequencer; network outage messages; or failure of the cloud-based sequencer to respond, within a threshold amount of time, to a query transmitted to the cloud-based sequencer. Other suitable techniques can be used. For example, the edge device manager can transmit a log request from an edge device to the cloud-based sequencer. If the edge device manager does not receive a response within a predetermined period of time, the edge device manager can determine that the status of the cloud-based sequencer is "unavailable." If the cloud-based sequencer responds to requests for logs, but not requests for programmatic ad insertion, the edge device manager can determine that the status of the cloud-based sequencer is "partially unavailable." If the edge device manager receives a status message from the cloud-based sequencer indicating that the cloud-based sequencer is fully functional, the edge device manager can determine that the status of the cloud-based sequencer is "available." If the edge device manager receives a network message from a network router indicating that communication with the cloud-based sequencer has been lost, the edge device manager can determine that the status of the cloud-based sequencer is "unavailable." And so forth. As illustrated by block1125, the edge device manager transmits a status message to the edge device indicating the status of the cloud-based sequencer, as determined by the edge device manager. As indicated by block1130, a check is made to determine whether the message sent by the edge device manager in block1125has been received by the managed edge device.
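A minimal Python sketch of the status-classification examples given above follows. The function names determine_status and fake_transport, the probe services used, and the string labels are illustrative assumptions; a practical implementation could probe any combination of the signals enumerated in connection with block1120.

```python
from typing import Optional

UNAVAILABLE = "unavailable"
PARTIAL = "partially unavailable"
AVAILABLE = "available"

def determine_status(send_request) -> str:
    """Probe the cloud-based sequencer and classify its status.

    `send_request(service)` stands in for whatever transport the edge device
    manager uses; it is assumed to return a response object, or None on
    timeout/failure.
    """
    logs = send_request("log_request")
    if logs is None:
        return UNAVAILABLE            # no response to a log request within the timeout
    ads = send_request("programmatic_ad_insertion")
    if ads is None:
        return PARTIAL                # logs work, programmatic ad insertion does not
    return AVAILABLE

# Example transport stub: logs respond, programmatic ad insertion times out.
def fake_transport(service: str) -> Optional[dict]:
    return {"ok": True} if service == "log_request" else None

print(determine_status(fake_transport))   # -> 'partially unavailable'
```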
Whether the message sent at block1125has been received can be determined, for example, by verifying that an acknowledgment message from the managed edge device has been received by the edge device manager. If the edge device manager determines at block1130that the edge device manager is no longer capable of communicating with the managed edge device, a notification is sent to the cloud-based sequencer, as indicated by block1144, and the method returns to block1130, where communications between the edge device manager and the edge device being managed are monitored until communication has been reestablished. As illustrated by block1135, if the edge device is still receiving messages from the edge device manager, and the status of the cloud-based sequencer has been determined to be "unavailable," or in some embodiments "partially unavailable," the edge device manager transmits a message to the edge device, as shown in block1145, that causes the edge device to point its interface(s) to a local sequencer. Note that in some embodiments employing an edge device manager, copies of messages and requests routed to the local sequencer can also be sent to the edge device manager to be placed in a queue when the status of the cloud-based sequencer changes back to available. As illustrated by block1140, if the edge device is still receiving messages from the edge device manager, and the status of the cloud-based sequencer has been determined to be "available," the edge device manager transmits a message to the edge device that causes the edge device to point its interface(s) to the cloud-based sequencer.
Referring next toFIG.15, a disaster recovery method implemented by an end-user device will be discussed in accordance with embodiments of the present disclosure. As illustrated by block1150, a media streaming end-user device, such as end-user digital device85(FIG.11), is connected to a cloud-based sequencer, such as cloud-based sequencer945(FIG.11), which provides a media stream for playout by the end-user device. As illustrated by block1155, the media streaming end-user device detects that the cloud-based sequencer is unavailable. In at least one embodiment, unavailability of the cloud-based sequencer is indicated by a lack of streaming media content being received by the media streaming end-user device. As illustrated by block1160, in response to detecting that the cloud-based sequencer is unavailable, the media streaming end-user device bypasses the cloud-based sequencer and connects directly to the backend systems and services used by the cloud-based sequencer, e.g., media provider back-end systems and services930. For example, if the streaming end-user device is receiving a media stream through a cloud-based streaming service, and the cloud-based streaming service goes offline and stops transmitting a media stream to the streaming end-user device, requests for media streams formerly being sent to the cloud-based streaming service can be sent directly to the backend systems that are providing the media streams to the cloud-based streaming service. In some embodiments, connecting directly to the back-end systems may result in some loss of customization, but will still allow the streaming end-user device to receive a "basic" media stream. For example, if a user is listening to an 80's radio station customized to include customized news, weather, and traffic, the user may still be able to receive the 80's station without the customized news, weather, and traffic.
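The manager-side flow of blocks1120through1145discussed above can be sketched as follows. StubCloud, StubEdge, and the method names called on them are stand-ins invented for this illustration and are not an API defined by the disclosure.

```python
class StubCloud:
    def determine_status(self) -> str:
        return "unavailable"
    def notify(self, msg: str) -> None:
        print("cloud notified:", msg)

class StubEdge:
    def send_status(self, status: str) -> bool:
        print("status sent to edge:", status)
        return True                                  # pretend the edge device acknowledged
    def point_interface(self, target: str) -> None:
        print("edge interface pointed to:", target)

def manage_edge_device(edge, cloud, max_cycles: int = 3) -> None:
    """One illustrative pass over the manager logic of blocks 1120-1145."""
    for _ in range(max_cycles):
        status = cloud.determine_status()            # block 1120: determine sequencer status
        acked = edge.send_status(status)             # blocks 1125/1130: transmit and check ack
        if not acked:
            cloud.notify("edge device unreachable")  # block 1144: tell the cloud sequencer
            continue                                 # keep monitoring until comms return
        if status in ("unavailable", "partially unavailable"):
            edge.point_interface("local")            # block 1145: repoint to local sequencer
        else:
            edge.point_interface("cloud")            # block 1140: repoint to cloud sequencer
        break

manage_edge_device(StubEdge(), StubCloud())
```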
However, in other embodiments, the backend systems can request stream customization information directly from the media streaming end-user device. Furthermore, in embodiments where the streaming end-user device is capable of locally assembling a customized stream using broadcast-related information obtained from the cloud-based sequencer, the streaming end-user device can repoint its interfaces to the backend systems, and receive substantially the same broadcast-related information it would otherwise have received from the cloud-based sequencer. As illustrated by block1165, in at least one embodiment, even when receiving a stream and/or broadcast-related information from the backend systems, the streaming end-user device can continue sending requests to the cloud-based sequencer at various intervals to detect when the cloud-based sequencer comes back online. As illustrated by block1170, when the streaming end-user device detects that the cloud-based sequencer is once again available, the streaming end-user device disconnects from the backend systems and reconnects to the cloud-based sequencer.
Referring next toFIG.16, a disaster recovery method implemented by backend systems and an end-user device will be discussed in accordance with embodiments of the present disclosure. As illustrated by block1175, a media streaming end-user device, such as end-user digital device85(FIG.11), is connected to a cloud-based sequencer that provides a media stream for playout by the end-user device. As illustrated by block1180, backend systems, such as media provider back-end systems and services930used by a cloud-based sequencer, such as cloud-based sequencer945(FIG.11), detect that the cloud-based sequencer is unavailable. The backend systems can detect that the cloud-based sequencer is unavailable if they stop receiving service requests from the cloud-based sequencer, if the cloud-based sequencer fails to respond to a message from the backend systems, or the like. As illustrated by block1185, in response to the backend systems detecting that the cloud-based sequencer is unavailable, the backend systems can initiate direct communication with the media streaming end-user device, thereby ensuring that the media streaming end-user device receives the same, or a similar, media stream as it was receiving from the cloud-based sequencer. Initiating direct communication with the media streaming end-user device can include transmitting a query to the last known address of the media streaming end-user device. The query can include a request for the media streaming end-user device to provide the backend systems with information identifying the last media item received from the cloud-based sequencer, information about stream customization parameters, or the like. In other embodiments, the customization information can be obtained from previous requests received from the cloud-based sequencer. This customization information allows the backend systems to provide a substantially fully customized media stream to the media streaming end-user device. In some embodiments, for example where the media streaming end-user device is capable of assembling the stream locally using received broadcast-related information, the backend systems can receive and respond to requests for broadcast-related information received directly from the media streaming end-user device.
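Blocks1150through1170ofFIG.15, discussed above, can be summarized by the following Python sketch of the end-user device behavior. The stub classes, the polling interval, and the stream_available probe are assumptions introduced for illustration only.

```python
import time

class StubCloudSequencer:
    def __init__(self):
        self._calls = 0
    def stream_available(self) -> bool:
        self._calls += 1
        return self._calls > 2                # comes back online on the second poll

class StubBackend:
    def connect(self):    print("connected directly to backend systems")
    def disconnect(self): print("disconnected from backend systems")

def stream_with_fallback(cloud, backend, poll_interval_s: float = 0.1,
                         max_polls: int = 5) -> str:
    """Bypass the cloud-based sequencer when it stops delivering a stream,
    poll it at intervals, and reconnect once it returns."""
    if cloud.stream_available():
        return "cloud"                        # stream still arriving: nothing to do
    backend.connect()                         # block 1160: bypass, go direct to backend
    for _ in range(max_polls):                # block 1165: keep probing the cloud sequencer
        time.sleep(poll_interval_s)
        if cloud.stream_available():
            backend.disconnect()              # block 1170: reconnect to the cloud sequencer
            return "cloud"
    return "backend"

print(stream_with_fallback(StubCloudSequencer(), StubBackend()))  # -> 'cloud'
```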
As illustrated by block1190, in at least one embodiment, the backend systems continually monitor the availability of the cloud-based sequencer to detect when the availability of the cloud-based sequencer has been restored. As illustrated by block1195, when the backend systems detects that the cloud-based sequencer is once again available, the backend systems notify the media streaming end-user device to disconnect from the backend systems and reconnect to the cloud-based sequencer. Referring now toFIG.17, a high-level block diagram of a processing system is illustrated and discussed. Methods and processes and other embodiments discussed previously may be implemented in a processing system executing a set of instructions stored in memory, or on a removable computer readable medium. An example of a processing system according to some embodiments is illustrated inFIG.17. Computing system1200includes one or more central processing units, such as CPU A1205and CPU B1207, which may be conventional microprocessors interconnected with various other units via at least one system bus1208. CPU A1205and CPU B1207may be separate cores of an individual, multi-core processor, or individual processors connected via a specialized bus1206. In some embodiments, CPU A1205or CPU B1207may be a specialized processor, such as a graphics processor, other co-processor, or the like. Computing system1200includes random access memory (RAM)1220; read-only memory (ROM)1215, wherein the ROM1215could also be erasable programmable read-only memory (EPROM) or electrically erasable programmable read-only memory (EEPROM); and input/output (I/O) adapter1225, for connecting peripheral devices such as disk units1230, optical drive1236, or tape drive1237to system bus1208; a user interface adapter1240for connecting keyboard1245, mouse1250, speaker1255, microphone1260, or other user interface devices to system bus1208; communications adapter1265for connecting processing system1200to an information network such as the Internet or any of various local area networks, wide area networks, telephone networks, or the like; and display adapter1270for connecting system bus1208to a display device such as monitor1275. Mouse1250has a series of buttons1280,1285and may be used to control a cursor shown on monitor1275. It is noted that terminologies as may be used herein such as bit stream, stream, signal sequence, etc. (or their equivalents) have been used interchangeably to describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, text, graphics, audio, etc. any of which may generally be referred to as ‘data’). As may be used herein, the terms “substantially” and “approximately” provides an industry-accepted tolerance for its corresponding term and/or relativity between items. For some industries, an industry-accepted tolerance is less than one percent and, for other industries, the industry-accepted tolerance is 10 percent or more. Other examples of industry-accepted tolerance range from less than one percent to fifty percent. Industry-accepted tolerances correspond to, but are not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, thermal noise, dimensions, signaling errors, dropped packets, temperatures, pressures, material compositions, and/or performance metrics. Within an industry, tolerance variances of accepted tolerances may be more or less than a percentage level (e.g., dimension tolerance of less than +/−1%). 
Some relativity between items may range from a difference of less than a percentage level to a few percent. Other relativity between items may range from a difference of a few percent to magnitudes of differences. As may also be used herein, the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to.” As may even further be used herein, the term “configured to,” “operable to,” “coupled to,” or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with,” includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably,” indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1. As may be used herein, the term “compares unfavorably,” indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship. As may be used herein, one or more claims may include, in a specific form of this generic form, the phrase “at least one of a, b, and c” or of this generic form “at least one of a, b, or c,” with more or less elements than “a,” “b,” and “c.” In either phrasing, the phrases are to be interpreted identically. In particular, “at least one of a, b, and c” is equivalent to “at least one of a, b, or c” and shall mean a, b, and/or c. As an example, it means: “a” only, “b” only, “c” only, “a” and “b,” “a” and “c,” “b” and “c,” and/or “a,” “b,” and “c.” As may also be used herein, the terms “processing module,” “processing circuit,” “processor,” “processing circuitry,” and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. 
The processing module, module, processing circuit, processing circuitry, and/or processing unit may be, or may further include memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, processing circuitry, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, processing circuitry, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, processing circuitry and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, processing circuitry and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture. The term “module” is used in the description of one or more of the embodiments. A module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions. A module may operate independently and/or in conjunction with software and/or firmware. As also used herein, a module may contain one or more sub-modules, each of which may be one or more modules. One or more embodiments have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. 
One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules, and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with one or more other routines. In addition, a flow diagram may include an “end” and/or “continue” indication. The “end” and/or “continue” indications reflect that the steps presented can end as described and shown or optionally be incorporated in or otherwise used in conjunction with one or more other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained. The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones. One or more functions associated with the methods and/or processes described herein can be implemented via a processing module that operates via the non-human “artificial” intelligence (AI) of a machine. Examples of such AI include machines that operate via anomaly detection techniques, decision trees, association rules, expert systems and other knowledge-based systems, computer vision models, artificial neural networks, convolutional neural networks, support vector machines (SVMs), Bayesian networks, genetic algorithms, feature learning, sparse dictionary learning, preference learning, deep learning and other machine learning techniques that are trained using training data via unsupervised, semi-supervised, supervised and/or reinforcement learning, and/or other AI. The human mind is not equipped to perform such AI techniques, not only due to the complexity of these techniques, but also due to the fact that artificial intelligence, by its very definition— requires “artificial” intelligence—i.e., machine/non-human intelligence. One or more functions associated with the methods and/or processes described herein can be implemented as a large-scale system that is operable to receive, transmit and/or process data on a large-scale. As used herein, a large-scale refers to a large number of data, such as one or more kilobytes, megabytes, gigabytes, terabytes or more of data that are received, transmitted and/or processed. 
Such receiving, transmitting and/or processing of data cannot practically be performed by the human mind on a large-scale within a reasonable period of time, such as within a second, a millisecond, microsecond, a real-time basis, or other high speed required by the machines that generate the data, receive the data, convey the data, store the data and/or use the data. One or more functions associated with the methods and/or processes described herein can require data to be manipulated in different ways within overlapping time spans. The human mind is not equipped to perform such different data manipulations independently, contemporaneously, in parallel, and/or on a coordinated basis within a reasonable period of time, such as within a second, a millisecond, microsecond, a real-time basis or other high speed required by the machines that generate the data, receive the data, convey the data, store the data and/or use the data. One or more functions associated with the methods and/or processes described herein can be implemented in a system that is operable to electronically receive digital data via a wired or wireless communication network and/or to electronically transmit digital data via a wired or wireless communication network. Such receiving and transmitting cannot practically be performed by the human mind because the human mind is not equipped to electronically transmit or receive digital data, let alone to transmit and receive digital data via a wired or wireless communication network. One or more functions associated with the methods and/or processes described herein can be implemented in a system that is operable to electronically store digital data in a memory device. Such storage cannot practically be performed by the human mind because the human mind is not equipped to electronically store digital data. One or more functions associated with the methods and/or processes described herein may operate to cause an action by a processing module directly in response to a triggering event—without any intervening human interaction between the triggering event and the action. Any such actions may be identified as being performed “automatically,” “automatically based on” and/or “automatically in response to” such a triggering event. Furthermore, any such actions identified in such a fashion specifically preclude the operation of human activity with respect to these actions—even if the triggering event itself may be causally connected to a human activity of some kind. Unless specifically stated to the contra, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art. As may further be used herein, a computer readable memory includes one or more memory elements. A memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device. 
Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, a quantum register or other quantum memory and/or any other device that stores data in a non-transitory manner. Furthermore, the memory device may be in a form a solid-state memory, a hard drive memory or other disk storage, cloud memory, thumb drive, server memory, computing device memory, and/or other non-transitory medium for storing data. The storage of data includes temporary storage (i.e., data is lost when power is removed from the memory element) and/or persistent storage (i.e., data is retained when power is removed from the memory element). As used herein, a transitory medium shall mean one or more of: (a) a wired or wireless medium for the transportation of data as a signal from one computing device to another computing device for temporary storage or persistent storage; (b) a wired or wireless medium for the transportation of data as a signal within a computing device from one element of the computing device to another element of the computing device for temporary storage or persistent storage; (c) a wired or wireless medium for the transportation of data as a signal from one computing device to another computing device for processing the data by the other computing device; and (d) a wired or wireless medium for the transportation of data as a signal within a computing device from one element of the computing device to another element of the computing device for processing the data by the other element of the computing device. As may be used herein, a non-transitory computer readable memory is substantially equivalent to a computer readable memory. A non-transitory computer readable memory can also be referred to as a non-transitory computer readable storage medium. While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
DETAILED DESCRIPTION
The ensuing description provides preferred exemplary embodiment(s) only, and is not intended to limit the scope, applicability or configuration of the disclosure. Rather, the ensuing description of the preferred exemplary embodiment(s) will provide those skilled in the art with an enabling description for implementing a preferred exemplary embodiment of the disclosure. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth in the appended claims. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Also, it is noted that the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
Various embodiments will now be discussed in greater detail with reference to the accompanying figures, beginning withFIG.1. FIG.1illustrates a content distribution system100, in accordance with disclosed embodiments of the present disclosure. For brevity, system100is depicted in a simplified and conceptual form, and may generally include more or fewer systems, devices, networks, and/or other components as desired. Further, the number and types of features or elements incorporated within the system100may or may not be implementation-specific, and at least some of the aspects of the system100may be similar to a cable television distribution system, an IPTV (Internet Protocol Television) content distribution system, and/or another type of media or content distribution system. The system100may include a content provider system102(e.g., a television service provider system), satellite uplink104, a plurality of orbiting (e.g., geosynchronous) satellites106, satellite receiver108, one or more television receivers110, one or more content sources112(e.g., online content sources), computing devices116-1,116-2,116-3,116-4(referenced generally herein as116), and service provider systems103. In some embodiments, each of the television receivers110may include a content composite subsystem111. Additionally or alternatively, the content provider102may include a content composite subsystem111in whole or in part. Additionally or alternatively, one or more service provider systems103may include a content composite subsystem111in whole or in part.
Additionally or alternatively, one or more computing devices116may include a content composite subsystem111in whole or in part. The content composite subsystem111may be configured to facilitate various content composite generation/control features in accordance with various embodiments disclosed herein. The extent to which the media devices116may be configured to provide features of the subsystem111(e.g., by way of software updates and communications from the system102-1) may depend on the processing power and storage capabilities of a given device116. The system102-1may communicate with a given device116to pull specifications and current device capability information from the device116. Based on such communications, the system102-1may determine the extent to which the device116can be configured to provide features of the subsystem111and may operate accordingly. For example, the system102-1may push one or more software packages to the device116to configure the device116to provide a set of one or more features of the subsystem111. In instances where the device116lacks sufficient processing and/or storage capabilities, the subsystem111may operate on the system102-1. As one example with respect to many features disclosed herein, the filtering of composites180may be performed on the backend at system102-1when the device116lacks sufficient resources to perform the filtering itself. Further, in some embodiments, additionally or alternatively, one or more service provider systems103-1may include a content composite subsystem111in whole or in part. The content composite subsystem111may be configured to facilitate various content adaptation features in accordance with various embodiments disclosed herein. In general, the system100may include a plurality of networks120that can be used for bi-directional communication paths for data transfer between components of system100. Disclosed embodiments may transmit and receive data, including video content, via the networks120using any suitable protocol(s). The networks120may be or include one or more next-generation networks (e.g., 5G wireless networks and beyond). Further, the plurality of networks120may correspond to a hybrid network architecture with any number of terrestrial and/or non-terrestrial networks and/or network features, for example, cable, satellite, wireless/cellular, or Internet systems, or the like, utilizing various transport technologies and/or protocols, such as radio frequency (RF), optical, satellite, coaxial cable, Ethernet, cellular, twisted pair, other wired and wireless technologies, and the like.
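As a purely illustrative sketch of the capability-based provisioning decision described above, the following Python fragment decides whether to push subsystem111features to a device116or keep composite filtering on the backend system102-1. The thresholds, field names, and the plan_provisioning function are assumptions made for the sketch, not values defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class DeviceCapabilities:
    device_id: str
    cpu_cores: int
    free_storage_mb: int

# Illustrative thresholds; the disclosure does not specify concrete limits.
MIN_CORES_FOR_LOCAL_FILTERING = 2
MIN_STORAGE_MB_FOR_SUBSYSTEM = 512

def plan_provisioning(caps: DeviceCapabilities) -> dict:
    """Decide which composite-subsystem features to push to the device and
    which to keep on the backend (system 102-1)."""
    can_host_subsystem = caps.free_storage_mb >= MIN_STORAGE_MB_FOR_SUBSYSTEM
    can_filter_locally = can_host_subsystem and caps.cpu_cores >= MIN_CORES_FOR_LOCAL_FILTERING
    return {
        "push_software_package": can_host_subsystem,
        "composite_filtering": "device" if can_filter_locally else "backend",
    }

print(plan_provisioning(DeviceCapabilities("116-2", cpu_cores=8, free_storage_mb=4096)))
# {'push_software_package': True, 'composite_filtering': 'device'}
print(plan_provisioning(DeviceCapabilities("116-4", cpu_cores=1, free_storage_mb=128)))
# {'push_software_package': False, 'composite_filtering': 'backend'}
```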
In various instances, the networks120may be implemented with, without limitation, satellite communication with a plurality of orbiting (e.g., geosynchronous) satellites, a variety of wireless network technologies such as 5G, 4G, LTE (Long-Term Evolution), 3G, GSM (Global System for Mobile Communications), another type of wireless network (e.g., a network operating under Bluetooth®, any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 suite of protocols, and/or any other wireless protocol), a wireless local area network (WLAN), a HAN (Home Area Network) network, another type of cellular network, the Internet, a wide area network (WAN), a local area network (LAN) such as one based on Ethernet, Token-Ring, and/or the like, a gateway, and/or any other appropriate architecture or system that facilitates the wireless and/or hardwired packet-based communication of signals, data, and/or messages in accordance with embodiments disclosed herein. In various embodiments, the networks120and their various components may be implemented using hardware, software, and communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing and/or the like. In some embodiments, the networks120may include a telephone network that may be circuit switched, packet switched, or partially circuit switched and partially packet switched. For example, the telephone network may partially use the Internet to carry phone calls (e.g., through VoIP). In various instances, the networks120may transmit data using any suitable communication protocol(s), such as TCP/IP (Transmission Control Protocol/Internet Protocol), SNA (systems network architecture), IPX (Internet packet exchange), UDP, AppleTalk, and/or the like. Many embodiments may include a large number of content provider systems102, data source systems103, and/or such media devices. The content provider systems102may distribute broadcast video content to the endpoint devices116and receivers110via one or more networks of the networks120. For example, a content provider system102may be configured to stream, via the networks120, television channels, live sporting events and other competitions, on-demand programming, movies, other shows, television programs or portions of television programs following and/or during a live broadcast of the television programs, announcement content and commercials, programming information (e.g., table data, electronic programming guide (EPG) content, etc.), and/or other services to endpoint devices116and receivers110via satellite, 5G, 4G, and/or LTE wireless communication protocols and network components, in accordance with embodiments disclosed herein. The content provider system102may include one or more content server systems configured to stream television programming, including televised events such as sports events, to the computing devices116via the network120. When the streaming content servers stream content to the computing devices116, the streamed content may be processed and displayed by the computing devices116using one or more applications installed on the computing devices116. Some such streaming services may require a subscription and may require user authentication, e.g., with a username and/or password which may or may not be associated with an account mapped to the television receiver110.
Accordingly, the streaming services may make a television program available for streaming or download during the live broadcast of the television program. The content provider system102may include one or more adaptable content object repositories176. The content provider system102may store adaptable content objects in a repository176. The one or more adaptable content object repositories176may be implemented in various ways. For example, one or more data processing systems may store adaptable content objects. One or more relational or object-oriented databases, or flat files on one or more computers or networked storage devices, may store adaptable content objects. In some embodiments, a centralized system stores adaptable content objects; additionally or alternatively, a distributed/cloud system or network-based system, such as one implemented with a peer-to-peer network or the Internet, may store adaptable content objects. Content objects176and/or content objects177may correspond to any one or combination of raw data, unstructured data, structured data, information, and/or content which may include media content, text, documents, files, instructions, code, executable files, images, video, audio, audio video, and/or any other content suitable for embodiments of the present disclosure. For example, the adaptable content objects176may correspond to visual and/or audiovisual announcements with graphical and/or audio components particularized to certain types of services. In some embodiments, the announcements may correspond to commercials to be presented during commercial breaks of television programming, such as televised events, and/or streamed events. In some instances, the content objects176may be sourced by one or more of the service provider systems103. In some embodiments, the adaptable content objects176may correspond to video and/or audio video file structures with one or more transition points, hooks, frames, windows, and/or the like for merging with one or more particularized content objects, content objects177, particularized to certain services (e.g., services of online gambling/betting platforms, content, and features) of the service provider systems103. As disclosed herein, the adaptable content objects176may be merged, blended, joined, overlaid, customized, and/or the like in any suitable manner with other particularized content objects177in order to create electronic content composites180particularized to certain types of services. In various embodiments, as further disclosed herein, the adaptable content objects176and/or the other content objects may be adjusted and/or otherwise prepared to facilitate the merging, blending, joining, overlaying, customizing, and/or the like, and insertion into a content stream, as disclosed further herein. In some embodiments, the content provider system102may provide the adaptable content objects176and, in some embodiments, the particularized content objects177, to the devices116via one or more of the networks120. Additionally or alternatively to providing the adaptable content objects176and/or particularized content objects177, the content provider system102may provide content composites180to the television receiver110through such means. In some embodiments, the content provider system102may provide the adaptable content objects176and, in some embodiments, the particularized content objects177, to the television receiver110as part of a data transfer that is sent through the satellite106.
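One way to picture the merging of an adaptable content object176with particularized content objects177at transition points to form a content composite180, as described above, is the following Python sketch. The dataclass fields, the build_composite logic, and the example payloads are illustrative assumptions rather than a definition of the file structures involved.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptableContentObject:          # analogous to an object 176
    object_id: str
    segments: list                     # e.g. segments of a template announcement
    transition_points: list            # indices where particularized content may be merged

@dataclass
class ParticularizedContentObject:     # analogous to an object 177
    object_id: str
    payload: str                       # e.g. a service-specific graphic or audio reference

@dataclass
class ContentComposite:                # analogous to a composite 180
    composite_id: str
    timeline: list = field(default_factory=list)

def build_composite(adaptable: AdaptableContentObject,
                    particularized: list) -> ContentComposite:
    """Merge particularized objects into the adaptable object at its transition points."""
    timeline, inserts = [], iter(particularized)
    for index, segment in enumerate(adaptable.segments):
        timeline.append(segment)
        if index in adaptable.transition_points:
            nxt = next(inserts, None)          # one particularized object per transition
            if nxt is not None:
                timeline.append(nxt.payload)
    return ContentComposite(f"{adaptable.object_id}+composite", timeline)

template = AdaptableContentObject("176-A", ["intro", "body", "outro"], transition_points=[0, 1])
particulars = [ParticularizedContentObject("177-1", "odds-overlay"),
               ParticularizedContentObject("177-2", "promo-audio")]
print(build_composite(template, particulars).timeline)
# ['intro', 'odds-overlay', 'body', 'promo-audio', 'outro']
```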
For example, in some embodiments, the television receiver110may receive a downlinked satellite signal that includes the data for adaptable content objects176and/or particularized content objects177transferred on a designated line in the vertical blanking interval (VBI) or other portion of the broadcast service communication that does not interfere with other transmitted content. Additionally or alternatively, the content provider102may provide adaptable content objects176and/or particularized content objects177to the television receiver110via the one or more data networks120. The television receiver110may store the adaptable content objects176and/or particularized content objects177in an adaptable content object176repository and/or a particularized content objects177repository included in the television receiver110or otherwise local to the television receiver110. Consequently, the television receiver110may use one or more of the adaptable content objects176and one or more of the particularized content objects177to create electronic content composites180in accordance with embodiments disclosed herein. Additionally or alternatively to providing the adaptable content objects176and/or particularized content objects177, the content provider system102may provide content composites180to the television receiver110through such means. The content provider system102and satellite transmitter equipment (which may include the satellite uplink104) may be operated by a content provider. A content provider may distribute television channels, on-demand programing, programming information, and/or other services to users via satellite and one or more of the networks120. The content provider system102may receive feeds of such content from various sources. The television channels may include multiple television channels that contain the same content (but may be in different formats, such as high-definition and standard-definition). To distribute such video content to endpoint devices116, feeds of the video content may be relayed to endpoint equipment and the endpoint devices116via one or more satellites in the form of transponder streams. Satellite transmitter equipment may be used to transmit a feed of one or more television channels from the content provider system102to one or more satellites106. While a single content provider system102and satellite uplink104are illustrated as part of the television distribution system100, it should be understood that multiple instances of transmitter equipment may be used, possibly scattered geographically to communicate with satellites106. Such multiple instances of satellite transmitting equipment may communicate with the same or with different satellites106. The data source systems103may correspond to any suitable sources of data such as one or more computer systems, databases, websites, portals, any repositories of data in any suitable form, server systems, other endpoint devices like endpoint devices116but functioning as data sources, and/or the like. In some instances, the data source systems103may include one or more mobile computing device locator services that provide information regarding the location of one or more of the endpoint devices116and/or the adaptive routers110. 
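For illustration, location details obtained from different data source systems103might be pulled and normalized through per-source adapters in the manner of the following Python sketch. The source names, response fields, and the acquire_location and TRANSLATION_PROFILES identifiers are assumptions for the sketch rather than interfaces defined by the disclosure.

```python
# Each "translation profile" maps a generic query onto a source-specific call.
# The source names and field names below are made up for illustration.

def query_locator_service(device_id: str) -> dict:
    return {"deviceId": device_id, "lat": 39.74, "lon": -104.99}   # stubbed response

def query_wifi_registry(device_id: str) -> dict:
    return {"id": device_id, "ap_location": (39.75, -105.00)}      # stubbed response

TRANSLATION_PROFILES = {
    "locator_service": {
        "call": query_locator_service,
        "normalize": lambda r: {"device": r["deviceId"], "location": (r["lat"], r["lon"])},
    },
    "wifi_registry": {
        "call": query_wifi_registry,
        "normalize": lambda r: {"device": r["id"], "location": r["ap_location"]},
    },
}

def acquire_location(source: str, device_id: str) -> dict:
    """Pull location data from a data source 103 through its translation profile."""
    profile = TRANSLATION_PROFILES[source]
    raw = profile["call"](device_id)          # source-specific API call
    return profile["normalize"](raw)          # translate into the system's common shape

print(acquire_location("locator_service", "116-3"))
print(acquire_location("wifi_registry", "116-3"))
```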
In various instances, the data source systems103may provide various details relating to IP addresses, cellular tower identification and location data, mobile device triangulation data, LAN identification data, Wi-Fi identification data, access point identification and location data, and/or like data that facilitates location of one or more of the endpoint devices116and/or the adaptive routers110. In various embodiments, the data from one or more of the data source systems103may be retrieved and/or received by a content provider system102via one or more data acquisition interfaces through network(s)120and/or through any other suitable means of transferring data. In various embodiments, the content provider system102and the data source systems103could use any suitable means for direct communication. In various embodiments, the content provider system102may actively gather and/or pull data from one or more of the data source systems103. Additionally or alternatively, the content provider system102may wait for updates from one or more of the data source systems103. The data collected (location data, IP address, content objects177, etc.) may be curated so that only the data necessary for the transaction is collected. The one or more data acquisition interfaces may include one or more application programming interfaces (APIs) that define protocols and routines for interfacing with the data source systems103. The APIs may specify API calls to/from data source systems103. In some embodiments, the APIs may include a plug-in to integrate with an application of a data source system103. The data acquisition interfaces, in some embodiments, could use a number of API translation profiles configured to allow interfacing with the one or more additional applications of the data sources to access data (e.g., a database or other data store) of the data source systems103. The API translation profiles may translate the protocols and routines of the data source systems103to integrate at least temporarily with the system and allow communication with the system by way of API calls. The television receivers110, as described throughout, may generally be any type of television receiver (such as an STB (set-top box), for example) configured to decode signals received for output and presentation via a display device160. In another example, television receiver110(which may include another remote television receiver110) may be integrated as part of or into a television, a DVR, a computing device, such as a tablet computing device, or any other computing system or device, as well as variations thereof. In some embodiments, a television receiver110may be a component that is added into the display device160, such as in the form of an expansion card. A television receiver110and network120, together with television receivers110and/or one or more computing devices116, may form at least a portion of a particular home computing network, and may each be respectively configured so as to enable communications in accordance with any particular communication protocol(s) and/or standard(s) including, for example, TCP/IP (Transmission Control Protocol/Internet Protocol), DLNA/DTCP-IP (Digital Living Network Alliance/Digital Transmission Copy Protection over Internet Protocol), HDMI/HDCP (High-Definition Multimedia Interface/High-Bandwidth Digital Content Protection), etc. While only a limited number of television receivers110, display devices160, computing devices116, etc.
are illustrated inFIG.1, it should be understood that multiple (e.g., tens, thousands, millions) instances of such equipment, corresponding to various users in various geolocations, may be included the system100. In some embodiments, broadcast televised events may be delivered to television receivers, including a television receiver110, via satellite according to a schedule. On-demand content may also be delivered to a television receiver110via satellite. Satellites106may be configured to receive uplink signals122from satellite uplink104. In this example, uplink signals122may contain one or more transponder streams of particular data or content, such as particular television channels, each of which may be supplied by content provider102. For example, each of uplink signals122may contain various media content such as HD (High Definition) television channels, SD (Standard Definition) television channels, on-demand programming, programming information (e.g., table data), and/or any other content in the form of at least one transponder stream, and in accordance with an allotted carrier frequency and bandwidth. In this example, different media content may be carried using different satellites of satellites106. Further, different media content may be carried using different transponders of a particular satellite (e.g., satellite106-1); thus, such media content may be transmitted at different frequencies and/or different frequency ranges. For example, a first television channel and a second television channel may be carried on a first carrier frequency over a first transponder (as part of a single transponder stream) of satellite106-1, and a third, fourth, and fifth television channel may be carried on a second carrier frequency (as part of another transponder stream) over a transponder of satellite106-3, or, the third, fourth, and fifth television channel may be carried on a second carrier frequency over a second transponder of satellite106-1, etc. The satellites106may be further configured to relay uplink signals122to the satellite receiver108as downlink signals124. Similar to the uplink signals122, each of the downlink signals124may contain one or more transponder streams of particular data or content, such as various encoded and/or at least partially scrambled television channels, on-demand programming, etc., in accordance with an allotted carrier frequency and bandwidth. The satellite receiver108, which may include a satellite dish, a low noise block (LNB), and/or other components, may be provided for use to receive television channels, such as on a subscription basis, distributed by the content provider102via the satellites106. For example, the satellite receiver108may be configured to receive particular transponder streams as downlink signals124, from one or more of the satellites106. Based at least in part on the characteristics of a given television receiver110and/or satellite receiver108, it may only be possible to capture transponder streams from a limited number of transponders of the satellites106concurrently. For example, a tuner of the television receiver110may only be able to tune to a single transponder stream from a transponder of a single satellite, such as the satellite106-1, at a time. The television receiver110, which may be communicatively coupled to the satellite receiver108, may subsequently select, via a tuner, decode, and relay television programming to a television for display thereon. 
Broadcast television programming or content may be presented "live," or from a recording as previously stored on, by, or at the television receiver110. For example, an HD channel may be output to a television by the television receiver110in accordance with the HDMI/HDCP content protection technologies. Other embodiments are possible. For example, in some embodiments, an HD channel may be output to a television in accordance with the MoCA® (Multimedia over Coax Alliance) home entertainment networking standard. Other embodiments are possible. The television receiver110may select, via a tuner, decode, and relay particular transponder streams to one or more of the television receivers110, which may in turn relay particular transponder streams to one or more display devices160,160-1. For example, the satellite receiver108and the television receiver110may, respectively, be configured to receive, decode, and relay at least one television channel to a television by way of a television receiver110. Similar to the above example, a television channel may generally be presented "live," or from a recording as previously stored by the television receiver110, and may be output to the display device160,160-1by way of the television receiver110in accordance with a particular content protection technology and/or networking standard. Other embodiments are possible. In various embodiments, the content resources126may be used to provide the television receiver110with content (e.g., televised and streamed events). The content resources126may be used to retrieve televised and/or otherwise streamed events or portions thereof following and/or during a live broadcast of the televised and/or otherwise streamed events. The content resources126may include the content provider102, the service provider systems103, the online content sources112, one or more other television receivers110, and/or the like. The content provider102, which may distribute broadcast televised and/or otherwise streamed events to the television receivers110via a satellite-based television programming distribution arrangement (or some other form of television programming distribution arrangement, such as a cable-based network, fiber-based network, or IP-based network), may use an alternate communication path, such as via one or more of the networks120, to provide televised and/or otherwise streamed events to the television receivers110. The television receivers110may be permitted to request various television programs or portions of televised and/or otherwise streamed events from the content provider102via the network120. For instance, the content provider102may be permitted to transmit a portion of a television program or an entire television program during and/or after a time at which the television program was broadcast live by the content provider via a satellite-based television programming distribution arrangement. In some embodiments, the content provider102may provide a televised and/or otherwise streamed event via on-demand content. Such on-demand content may be provided via the satellite-based distribution arrangement and/or via the network120. On-demand content provided via the satellite-based distribution arrangement may be stored locally by the television receiver110to allow on-demand access. On-demand content may also be retrieved via the network120from the content provider102.
The computing devices116represent various computerized devices that may be associated with a user of the television receiver110and that may be configured to facilitate various adaptive content features disclosed in various embodiments herein. As indicated by116a, the computing devices116may include a laptop computer, a desktop computer, a home server, or another similar form of computerized device. As indicated by116band116c, the computing devices116may include a cellular phone and/or smartphone, a tablet computer, or another similar form of mobile device. As indicated by116d, the computing devices116may include smart glasses or another similar form of wearable computing device. In various embodiments, the television receiver110may be provided with access credentials that allow access to content stored and/or accessible through one or more of the computing devices116. Likewise, in various embodiments, one or more of the computing devices116may be provided with access credentials that allow access to content stored and/or accessible through the television receiver110and/or account associated therewith and/or associated with an application installed on the one or more of the computing devices116. It should be understood that computing devices116are exemplary in nature. Content may be accessible through a lesser or greater number of computerized devices associated with a user of the television receiver110. In some embodiments, the online content sources112may represent content resources through which televised and/or otherwise streamed events may be retrieved by the television receiver110via the network120. Each of the online content sources112may represent different websites available via the Internet. Periodically, the television receiver110may poll online content sources112to determine which televised and/or otherwise streamed events are available and/or which television programs are scheduled to be available in the future. In some embodiments, the television receiver110may poll online content sources112regarding the availability of at least a portion of a specific televised and/or otherwise streamed event. The service provider systems103may correspond to one or more data sources112that are any suitable source of data to facilitate embodiments disclosed further herein. In various embodiments, the service provider systems103may include one or more computer systems, a database, a website, a portal, any repository of data in any suitable form, a server system, and/or the like. With some embodiments, the data sources112may include one or more mobile computing device locator services that provide information regarding the location of one or more computing devices116. With some embodiments, the data sources112may provide various details relating to IP addresses, cellular tower identification and location data, mobile device triangulation data, LAN identification data, Wi-Fi identification data, access point identification and location data, and/or the like data that facilitates location of one or more computing devices116. With some embodiments, the data sources112may provide demographic data about an area encompassing the location of one or more computing devices116. In various embodiments, the data from the one or more data sources112may be retrieved and/or received by the content provider system102and/or the subsystem(s)111via one or more data acquisition interfaces through network(s)120and/or through any other suitable means of transferring data. 
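For illustration, the periodic polling of online content sources112for event availability described above might resemble the following Python sketch. The poll_content_sources function, the stub sources, and the event identifiers are assumptions for the sketch; real polling would occur over the network120.

```python
def poll_content_sources(sources, wanted_event: str) -> dict:
    """Ask each online content source which events it can currently serve and
    whether a specific event (or a portion of it) is available.

    `sources` maps a source name to a callable returning that source's list of
    available event identifiers; both are stand-ins for real network queries.
    """
    availability = {}
    for name, list_events in sources.items():
        events = list_events()
        availability[name] = {
            "available_events": events,
            "has_wanted_event": wanted_event in events,
        }
    return availability

stub_sources = {
    "source-112-1": lambda: ["game-123", "concert-77"],
    "source-112-2": lambda: ["game-456"],
}
print(poll_content_sources(stub_sources, "game-123"))
```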
Data, as referenced herein, may correspond to any one or combination of raw data, unstructured data, structured data, information, and/or content which may include media content, text, documents, files, instructions, code, executable files, images, video, audio, and/or any other suitable content suitable for embodiments of the present disclosure. In various embodiments, the content provider system102and/or the subsystem(s)111and the data sources112could use any suitable means for direct communication. In various embodiments, content objects176and/or177may be actively gathered and/or pulled from one or more data sources112, for example, by accessing a repository and/or by “crawling” various repositories. Additionally or alternatively, the content provider system102and/or the subsystem(s)111may wait for updates from one or a combination of the content source systems112. Content objects176and/or177pulled and/or pushed from the one or more data sources112may be transformed, and the transformed content objects176and/or177and/or other data generated based thereon may be made available by the content provider system102and/or the subsystem(s)111for use by the subsystem(s)111in creating content composites180. The television receiver110may be able to retrieve at least a portion of a television program through other television receivers110, which can function as content resources. Similarly, the television receiver110may be able to cast at least a portion of a television program through other television receivers110and/or to computing devices116. For instance, a Slingbox® (or other form of media streaming device) functioning in concert with a television receiver110may permit television programs to be captured and streamed over the network120. In some embodiments, the television receivers110may have such media streaming capabilities integrated. In some embodiments, the television receivers110may cast programming content to the computing devices116via wireless signals. For example, the programming content from the television receiver110may be indirectly transmitted via a local network (e.g., via Wi-Fi) or directly transmitted to the computing device116via a casting device integrated with the television receiver110or coupled to the television receiver110(e.g., via a dongle). In some embodiments, the programming content may be cast to the computing device116via a wired connection (e.g., via one or more of HDMI, USB, lightning connector, etc.). Some embodiments of the television receivers110may provide for simulcasting such that the same programming that is being displayed on the display device160is being displayed on one or more of the computing devices116simultaneously or substantially simultaneously. While network configuration data may be broadcast repeatedly via satellite to television receivers110, it should be understood that a similar arrangement may be used in cable-based television programming broadcast networks to broadcast network configuration. For any of the various type of network, various other forms of data may be transmitted via an Internet-based network120connection rather than using the content provider's proprietary network. For instance, EPG data may be transmitted to television receivers via the network120(e.g., Internet) connection. As another example, firmware and/or software updates may be transmitted on demand to a television receiver via the Internet rather than the television receiver receiving the update via the television programming broadcast network. 
The system102with the content composite subsystem111may be configured to perform one or more methods for facilitating adaptive content composite generation and interaction with respect to digitally distributed content corresponding to an event, as disclosed herein. The one or more methods may include containerizing and adapting content objects as content composites, as disclosed herein. The event may correspond to a live event. The digital distribution of content corresponding to the event may include one or a combination of streaming, live streaming, other online delivery, over the air delivery, cable-television distribution, satellite television distribution, and/or the like. Thus, as one example, the event may correspond to a live, televised event. In various embodiments, one or more media devices (e.g., one or more of devices110and/or116and/or the system102) may perform all or part of the methods, with a single media device or multiple media devices performing the methods. In various embodiments, part or all of the methods may be performed while an endpoint media device (e.g., one or more of device110and/or116) is receiving televised, streamed and/or otherwise digitally distributed content and/or is outputting the content for display. In various embodiments, at least part of the methods may be performed in advance of the televised, streamed and/or otherwise digitally distributed event and, thus, may be performed while the content is scheduled but before the content is transmitted to endpoint media devices and/or before the content is output by an endpoint media device for display. Teachings of the present disclosure may be implemented in a variety of configurations that may correspond to the configurations disclosed herein. As such, certain aspects of the methods disclosed herein may be omitted, and the order of the steps may be shuffled in any suitable manner and may depend on the implementation chosen. Moreover, while the aspects of the methods disclosed herein, may be separated for the sake of description, it should be understood that certain steps may be performed simultaneously or substantially simultaneously. FIG.6illustrates an example method600for content composite generation with respect to digitally distributed content corresponding to an event, in accordance with embodiments of the present disclosure. One or a combination of the aspects of the method600may be performed in conjunction with one or more other aspects disclosed herein, and the method600is to be interpreted in view of other features disclosed herein and may be combined with one or more of such features in various embodiments. As indicated by block602, one or more sets of one or more electronic communications may be received via one or more interfaces (e.g., of a media device) and detected. As indicated by block604, one or more indicators of one or more events of a set of one or more events for which corresponding content is specified for digital distribution may be detected from the one or more sets of one or more electronic communications. For example, the one or more events may include televised events, and the set may include a set of one or more televised events. The corresponding content may be specified for upcoming digital distribution (e.g., for the next 24 hours or any suitable upcoming time period) and/or current, ongoing digital distribution (e.g., a live event is currently in progress and being televised, streamed, etc.). 
The indicators may include or otherwise correspond to metadata regarding the events, as disclosed further herein. The metadata may be received on-demand (e.g., it may be pulled when needed or on scheduled basis), periodically, and/or on a scheduled basis, in various embodiments. For example, the metadata may be pulled responsive to user activation of a device116, which may correspond to opening of an app installed on the device116. As indicated by block606, in various embodiments, a set of observation data may be processed, the set of observation data corresponding to indications of detected media device operations associated with a set of one or more media devices110and/or116in a particular geolocation and mapped to the set of one or more events. The set of observation data may correspond to indications of detected media device operations associated with a plurality of media devices110and/or116in the particular geolocation. Additionally or alternatively, the set of observation data may correspond to indications of detected media device operations associated with a particular endpoint identifier. Such detected media device operations may include one or more previous interactions with one or more previous content composites180. Additionally or alternatively, such detected media device operations may include one or more previous operational settings of the plurality of media devices110and/or116and/or of one or more media devices110and/or116associated with the particular endpoint identifier. As indicated by block608, based at least in part on the set of observation data, an identifier of a particular event of the set of one or more events may be selected. Accordingly, the event may be identified for content composite creation. As indicated by block610, a content composite180corresponding to the particular event may be created, as disclosed further herein. For example, the content composite180may be created at least in part by one or a combination of the following. As indicated by block612, an adaptable content object176corresponding to the particular event may be identified based at least in part on the set of observation data. This may include selecting the adaptable content object176from a plurality of adaptable content objects176. As disclosed further herein, such selection may be based at least in part on matching specifications of one or more adaptable content objects176with specifications (e.g., the one or more indicators) of the particular event. In various embodiments, the adaptable content object176may be received from a system (e.g.,102or103) that is remote from the one or more processing devices (which, in various embodiments, may be included in the devices116,110, and/or system102). As indicated by block614, a particularized content object177corresponding to the particular event may be processed. The content object177may be received by the one or more processing devices from a system (e.g.,102or103) that is remote from the one or more processing devices. As disclosed further herein, the content object177may be pushed from the remote system or pulled from the remote system by the one or more processing devices responsive to the one or more processing devices transmitting one or more specifications of the particular event, the adaptable content object176, and/or the content object177. 
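By way of a non-limiting illustration of blocks 606 through 612, the following Python sketch scores candidate events against a set of observation data and matches an adaptable content object 176 to the selected event. The data structures, field names, and scoring heuristic shown here are hypothetical assumptions for purposes of illustration only and do not limit the disclosed embodiments.

from dataclasses import dataclass, field

@dataclass
class ObservationRecord:
    event_type: str      # e.g., "basketball"; event type tied to a detected media device operation
    interacted: bool     # whether a previous content composite 180 was interacted with

@dataclass
class AdaptableObject176:
    object_id: str
    event_types: set = field(default_factory=set)   # event types the object can be adapted to

def select_event(events, observations):
    # Block 608: pick the event identifier whose type best matches the observation data.
    if not events:
        return None
    def score(event):
        matches = [o for o in observations if o.event_type == event["type"]]
        return len(matches) + sum(1 for o in matches if o.interacted)
    return max(events, key=score)["event_id"]

def match_adaptable_object(event, candidates):
    # Block 612: choose an adaptable content object whose specifications match the event.
    for obj in candidates:
        if event["type"] in obj.event_types:
            return obj
    return None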
In some embodiments, the creating the content composite180may be a function of the current geolocation of the media device110and/or116associated with the particular endpoint identifier and a set of rules mapped to the current geolocation. In various embodiments, one or both of the adaptable content object176and the content object177are identified as a function of a current geolocation mapped to the media device110and/or116associated with the particular endpoint identifier. As indicated by block616, the adaptable content object176may be configured with the content object177so that the content composite180causes presentation of the adaptable content object176adapted with the content object177for at least part of a presentation time when the content composite180is presented, as disclosed further herein. As indicated by block618, subsequently, the content composite180may be output by the one or more processing devices for presentation. Consequent to providing the content composite to an endpoint media device110and/or116, the endpoint media device110and/or116may perform at least one operation relating to the content composite180, as disclosed further herein. As indicated by block620, at least one signal corresponding to the at least one operation relating to the content composite180that is triggered by the content composite180and that is indicative of activation of an interface option caused by the content composite180may be processed, also as disclosed further herein. FIG.7illustrates an example method700for state-based content composite generation with respect to digitally distributed content corresponding to an event, in accordance with embodiments of the present disclosure. One or a combination of the aspects of the method700may be performed in conjunction with one or more other aspects disclosed herein, and the method700is to be interpreted in view of other features disclosed herein and may be combined with one or more of such features in various embodiments. In various embodiments, one or more of the methods may additionally or alternatively include one or a combination of the following. As indicated by block702, a first set of one or more electronic communications may be received via one or more interfaces and detected. As indicated by block704, one or more indicators of one or more events of a set of one or more events for which corresponding content is specified for digital distribution may be detected from the one or more sets of one or more electronic communications. As indicated by block706, the one or more processing devices may detect one or more additional electronic communications received via the one or more interfaces. As indicated by block708, when listening for data changes in the one or more additional electronic communications with respect to one or more events of a set of one or more events (e.g., the first set, the second set, or another set), the one or more processing devices may detect one or more data changes that are generated based at least in part on at least one event of the set of one or more events. As indicated by block710, the one or more processing devices may identify a specification of criteria that apply to the one or more data changes. As indicated by block712, the one or more processing devices may determine whether the one or more data changes correspond to one or more state changes specified in the specification of criteria. 
As indicated by block714, consequent to determining that the one or more data changes correspond to the one or more state changes, a content composite180corresponding to the at least one event may be created. As indicated by block716, the content composite180may be created at least in part by identifying an adaptable content object176corresponding to the at least one event. As indicated by block718, the content composite180may be created at least in part by processing a content object177received from a system102or103that is remote from the one or more processing devices. As indicated by block720, the content composite180may be created at least in part by configuring the adaptable content object176with the content object177so that the content composite180causes presentation of the adaptable content object176adapted with the content object177for at least part of a presentation time when the content composite180is presented. As indicated by block722, the content composite180may be output by the one or more processing devices for presentation, where, consequent to providing the content composite180to an endpoint media device110and/or116, the endpoint media device110and/or116may perform at least one operation relating to the content composite180. As indicated by block722, at least one signal corresponding to the at least one operation relating to the content composite180that is triggered by the content composite180and that is indicative of activation of an interface option caused by the content composite180may be processed. As disclosed further herein, the content object177may be based at least in part on the one or more state changes. The one or more state changes may correspond to one or more changes occurring within the at least one event. One or more indicators of the one or more state changes with respect to the event may indicate real-time, real-world, and/or physical state changes such as those disclosed further herein. Additionally or alternatively, the one or more state changes may correspond to one or more changes from at least one previous parameter corresponding to the at least one event to at least one updated parameter corresponding to the at least one event, as disclosed further herein. Thus, the content composite180may cause display of an interface element that allows user selection to cause communication to the system102or103of an instruction to configure an executable function in accordance with a set of parameters, where the set of parameters includes the at least one updated parameter. This may correspond to placing a bet with a process-performing system103in accordance with a set of parameters that may specify the event, the type of bet, odds, amount placed, and/or the like. Further, the determining whether the one or more data changes correspond to one or more state changes specified in the specification of criteria may include scoring the at least one event according to the criteria. The one or more data changes may correspond to a plurality of data changes, the at least one event may correspond to a plurality of events, and each event may be scored according to the specification of criteria. The plurality of events may be ranked according to the scoring, and the at least one event may be the top-ranked event according to the ranking. Further, the one or more state changes may include upcoming state changes, such as commercial breaks, that are upcoming within a time threshold (e.g., a number of seconds and/or minutes). 
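By way of a non-limiting illustration of blocks 708 through 714, the following Python sketch checks detected data changes against a specification of criteria and ranks the events so that the top-ranked event is selected for composite creation. The criteria fields and weights are assumptions made solely for illustration.

def qualifies_as_state_change(data_change, criteria):
    # Block 712: a data change qualifies if its kind is named in the specification of criteria.
    return data_change["kind"] in criteria["state_change_kinds"]

def score_event(event, data_changes, criteria):
    # Score one event by the weighted number of qualifying state changes it generated.
    changes = [c for c in data_changes if c["event_id"] == event["event_id"]]
    return sum(criteria["weights"].get(c["kind"], 1)
               for c in changes if qualifies_as_state_change(c, criteria))

def top_ranked_event(events, data_changes, criteria):
    # Rank the plurality of events by score and return the top-ranked event (block 714 trigger).
    ranked = sorted(events, key=lambda e: score_event(e, data_changes, criteria), reverse=True)
    return ranked[0] if ranked else None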
In some embodiments, the operations of the methods may begin with or may be initiated by the media device116or110receiving and processing an electronic communication from a user interface, the electronic communication corresponding to an indicator of content. For example, the indicator may correspond to a selection corresponding to a televised/streamed event, an initialization/powering up of the device116or110, a channel and/or stream selection such as a selection to tune to a channel that is streaming the event or that is scheduled to stream the event, an application selection such as a selection to download or otherwise stream the event which may be by way of an application installed on the device116or110, a selection to view and/or record a particular event, and/or the like. Additionally or alternatively, the operations of the methods may begin with or may be initiated separately from such communications. For example, one or more of the operations may be performed as background processes. The media device116or110may receive content corresponding to an event and may output the content for display with a display device160and/or with a display component of a device116. The content may be determined to include the televised/streamed event, as preceding the event, and/or as being delivered within a temporal proximity to the event. This may include identifying one or more specifications of the event from the programming content; metadata associated with the programming content; EPG or other schedule data received by the receiver110and/or device116from the content provider system102and mapping such data to the content, channel, and/or current or upcoming time/time period; and/or the like. Some embodiments may further include identifying the televised/streamed event as likely to be viewed by a particular viewer based at least in part on viewer pattern data, even though the viewer has not yet made a selection to view and/or record the event. For example, as disclosed herein, the pattern data may indicate a preference for a particular type of event. The subsystem111may determine that the event corresponds to the preference and that temporal specifications for the event satisfy one or more temporal thresholds. In some instances, the subsystem111may determine that the event is currently ongoing and available for viewing on another channel, stream, or other viewing options that the viewer has not yet selected. Likewise, in some instances, the subsystem111may determine that the event is scheduled to be available within a suitable time threshold (e.g., a number of minutes, hours, days, weeks, and/or the like) for viewing on the same channel, stream, or other viewing option that the viewer has selected or on a different channel, stream, or other viewing option that the viewer has not yet selected. FIG.2illustrates a functional diagram of an adaptive content composite generation/control system200, in accordance with disclosed embodiments of the present disclosure. In various embodiments, the content composite system200may be included in whole or in part in the content provider system102and/or an endpoint media device116. In some embodiments, the content composite system200may be separate from, and provide content to, the content provider system102. In some embodiments, the content composite system200may be included in the end-user system and may be included in the television receiver110and/or one or more of the computing devices116. 
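By way of a non-limiting illustration of the pattern-based identification discussed above, the following Python sketch flags an event as likely to be viewed when it matches a preferred event type and is live or starts within a temporal threshold. The field names and threshold value are hypothetical assumptions for illustration only.

from datetime import datetime, timedelta

def likely_to_be_viewed(event, pattern_data, now=None, threshold=timedelta(hours=2)):
    # The event satisfies the viewer preference and one of the temporal conditions.
    now = now or datetime.utcnow()
    type_preferred = event["type"] in pattern_data.get("preferred_types", set())
    starts_soon = timedelta(0) <= (event["start_time"] - now) <= threshold
    currently_live = event["start_time"] <= now <= event["end_time"]
    return type_preferred and (starts_soon or currently_live)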
In some embodiments, various features of the content composite generation/control system200may be distributed between the television receiver110and upstream of the television receiver110. Likewise, in some embodiments, various features of the content composite generation/control system200may be distributed between one or more of the computing devices116and upstream of the one or more computing devices116. While not all components of the adaptive content composite generation/control system200are shown, the system200may include one or a combination of such components. As depicted, the content composite generation/control system200includes a content composite subsystem111. The content composite subsystem111may include or otherwise correspond to an audiovisual control engine that, as with disclosed embodiments of the other engines, may include instructions retained in processor-readable media and to be executed by one or more processors. The content composite subsystem111may be communicatively coupled with interface components and communication channels (e.g., of the television receiver110and/or the computing device116, which may take various forms in various embodiments as disclosed herein) configured to receive programming content202, which may correspond to televised sporting events, other competitions, television programs, portions thereof, and/or the like. In various embodiments, the programming content202may include audiovisual content broadcast and/or otherwise transmitted by the content provider102and/or one or more other service providers103by way of one or a combination of streaming, live streaming, other online delivery, over the air delivery, cable-television distribution, satellite television distribution, and/or the like. The programming content202may include various components, including without limitation, one or more video tracks, audio tracks, audio video tracks, metadata tracks, close captioning information, and/or the like. In some embodiments, the content composite generation/control system200may retain received programming content202in content storage222. The content storage222may include any suitable form of storage media, such as any suitable form disclosed herein. The content composite subsystem111may be further configured to receive adaptable content objects176and particularized content objects177. The content composite subsystem111may include a harvesting engine236configured to aggregate adaptable content objects176, particularized content objects177, and/or programming content202in order to facilitate content composite generation/control features disclosed herein. The content composite subsystem111may include a matching engine238, which, in various embodiments, may be configured to analyze, classify, categorize, characterize, tag, and/or annotate adaptable content objects176, particularized content objects177, and/or programming content202. The content composite subsystem111may include a content composite splicing engine242. In some embodiments, the content composite splicing engine242may include a multiplexer. In various embodiments, the multiplexer may create a digital stream of data packets containing the video, audio, and, in some embodiments, the metadata to output the programming content202, adaptable content objects176, and/or the composites180created with selected adaptable content objects176. In various embodiments, the content composite splicing engine242may be implemented at the receiver110, the device116, and/or the service provider system102. 
In embodiments where the content composite splicing engine242is implemented at the service provider system102, the multiplexed data stream may be transmitted via the one or more networks124for provisioning to computing devices116or via a particular transponder stream via a transponder of a satellite for provisioning to receivers110. The multiplexer may create a digital stream of data packets containing the video, audio, and entitlement control messages (ECMs), to be transmitted on the transponder data stream. The data stream, which includes video and/or audio data packets that are not scrambled, may be passed to a scrambling engine, which may use a control word to scramble video or audio present in a data packet. Some audio and video packets may also pass through with no scrambling, if desired by the content provider102. A control word generator may generate the control word that is used by the scrambling engine to scramble the video or audio present in the data packet. Control words generated by the control word generator may be passed to a security system, which may be operated by the content provider or by a third-party security provider. The control words generated by the control word generator may be used by the security system to generate an ECM. Each ECM may indicate two control words. The control words indicated may be the current control word being used to scramble video and audio, and the control word that will next be used to scramble video and audio. The security system may output an ECM to the multiplexer for communication to subscribers' set-top boxes. Each data packet, whether it contains audio, video, an ECM, or some other form of data, may be associated with a particular PID. This PID may be used by the set-top box in combination with a networking information table to determine to which television channel the data contained within the data packet corresponds. Accordingly, the transponder data streams may contain a scrambled video packet stream and audio packet stream and also an encrypted ECM packet stream which contains the control words necessary to descramble the scrambled video and audio packets. In some embodiments, the harvesting engine236may be configured to receive, pull, process, buffer, organize, rank, and/or store adaptable content objects176, particularized content objects177, programming content202, and/or data source input212. In various embodiments, the content provider system102, the television receiver110, and/or the computing device116may include one or more applications to facilitate the subsystem111analyzing and consolidating data source input212(e.g., data feeds and/or event updates) received from various data sources112, which may or may not be included in the service provider systems103. As an example, data source input212may include, but is not limited to, updates (real-time and/or otherwise) and/or continuous data streams received from one or more data sources112, which may include real-time events related to bookmakers, bookies, sportsbooks, oddsmakers, sports event and/or other competition information, gambling/betting, Twitter® feeds, Instagram® posts, Facebook® updates, and/or the like. As disclosed above, the adaptable content objects176may be particularized to certain services. In some embodiments, the adaptable content objects176may correspond to commercials to be presented during commercial breaks of the programming content202. 
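By way of a non-limiting illustration of the ECM and PID handling described above, the following Python sketch models an ECM that carries the current and next control words and tags each packet with the PID mapped to its channel. This is a toy model for illustration only, not a conformant conditional-access or DVB implementation; the scrambling placeholder and table layout are assumptions.

import os

def generate_control_word():
    # Stand-in for the control word generator.
    return os.urandom(8)

def build_ecm(current_cw, next_cw):
    # Each ECM indicates two control words: the one in use and the one used next.
    # A real security system would encrypt the ECM; it is shown unencrypted for clarity.
    return {"type": "ECM", "current": current_cw, "next": next_cw}

def scramble(payload, control_word):
    # Placeholder scrambler (XOR with the control word), illustrative only.
    return bytes(b ^ control_word[i % len(control_word)] for i, b in enumerate(payload))

def packetize(streams, network_info_table):
    # Tag each elementary-stream payload with the PID mapped to its channel.
    packets = []
    for channel, payload in streams:
        packets.append({"pid": network_info_table[channel], "payload": payload})
    return packets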
Additionally or alternatively, the adaptable content objects176may correspond to announcements or other content to be presented as overlays, in windows/frames and/or in pop-ups during, before, and/or after events, in various embodiments. Additionally or alternatively, the adaptable content objects176may correspond to announcements or other content to be transmitted and presented as text messages, push notifications, email notifications, and other notifications which would typically be received via one or more apps on the device116and cause user-detectable notifications in response during, before, and/or after events, in various embodiments. Additionally, the adaptable content objects176may allow for invoking, waking up, opening, and/or otherwise activating an application of the endpoint media device116, in some instances, when the application is offline and/or otherwise not online with respect to the system102,200, and/or another system103. In various embodiments, the content objects176may include audiovisual content broadcast and/or otherwise transmitted by the content provider102. In some embodiments, adaptable content objects176may be pushed by the content provider102to the subsystem111. In addition or in the alternative, adaptable content objects176may be pulled by the subsystem111(e.g., by the harvesting engine236) from the content provider102. The particularized content objects177may correspond to content that is particularized to certain types of services and that is sourced by one or more of the service provider systems103. In various embodiments, the service provider systems103may correspond to process-performing systems that may receive instructions to configure executable functions in accordance with a set of parameters and, in response, configure the executable functions in accordance with the set of parameters. In various embodiments, the service provider systems103may correspond to one or more sources of data and/or services corresponding to bookmakers, bookies, sportsbooks, oddsmakers, sports information, event information, gambling/betting, social media websites, and/or the like, and particularized content objects177may correspond to the specific data and/or services sourced by a specific service provider system103for a specific event. For example, the data may correspond to odds information with respect to a particular sporting event and a particular outcome of the sporting event and/or of certain potential results/actions that could occur within the event. The data may correspond to particular digital content, a matrix code such as a QR code, and/or the like. The services may, for example, correspond to the bookmaker/sportsbook services offered to facilitate betting with respect to the sporting event. In some embodiments, particularized content objects177may include content that is particularized to one or more viewers based at least in part on observation data learned about the one or more viewers, as disclosed further herein. As disclosed above, the adaptable content objects176and/or the content objects177may correspond to any one or combination of raw data, unstructured data, structured data, information, and/or content which may include media content, text, documents, files, instructions, code, executable files, images, video, audio, audio video, and/or any other content suitable for embodiments of the present disclosure. 
In various embodiments, sets of one or more adaptable content objects176and/or sets of one or more content objects177may be transmitted to the subsystem111in batches. For example, sets of one or more adaptable content objects176and/or sets of one or more content objects177may be transmitted to the subsystem111on a periodic or otherwise scheduled basis. The subsystem111may store the adaptable content objects176locally and subsequently select one or more of the adaptable content objects176when needed for presentation during an upcoming break in the programming content202corresponding to an event and/or when needed for presentation during the programming content202corresponding to the event based at least in part on the subsystem111determining specifications of the event, a temporal progression in the event (e.g., the fourth quarter, the second round, etc.), a state change in the event (e.g., a score change, one team or competitor leading or falling behind, etc.), and/or the like. In various embodiments, sets of one or more adaptable content objects176and/or sets of one or more content objects177may be transmitted to the subsystem111on an as-needed basis when the subsystem111is receiving programming content202corresponding to a certain type (e.g., a televised sporting event for which sports betting information and services are available), is scheduled to receive such programming content202, is predicted to receive programming content202based at least in part on a detected viewing pattern of past viewing of previous programming content202(e.g., of a certain type of event, at certain times, on certain days, etc.), and/or is predicted to receive programming content202based at least in part on a detected pattern of past viewer responses to content composites for previous programming content202of that type. Additionally or alternatively, in some embodiments, sets of one or more adaptable content objects176and/or sets of one or more content objects177may be selected (e.g., by the service provider system102) as tailored for particular event viewing habits, betting patterns, and inferred interests of viewers. In various embodiments, sets of one or more adaptable content objects176may be selected (e.g., by the service provider system102) for particular time periods and may be transmitted to the subsystem111with an assignment (e.g., by way of tag data or other metadata) for the designated time period. Additionally or alternatively, in some embodiments, sets of one or more adaptable content objects176may be selected (e.g., by the service provider system102) for particular channels and/or television programs and may be transmitted to the subsystem111with an assignment (e.g., by way of tag data or other metadata) for the designated channels and/or television programs. The communication of the sets of one or more adaptable content objects176may be in response to the subsystem111pulling the sets of one or more adaptable content objects176from the service provider system102. 
For example, the subsystem111may pull adaptable content objects176based at least in part on detecting programming content202currently being viewed via a television receiver110or computing device116, detecting programming content202scheduled to be viewed or recorded, predicting programming content202of interest to a viewer based on detected viewing and/or betting patterns (e.g., patterns of interacting with content objects176,177), determining upcoming programming content202based on electronic programming guide information received, and/or the like. In a similar manner, sets of one or more content objects177may be pulled from or pushed by one or more service provider systems103, in various embodiments using one or more of the various methods disclosed, to the subsystem111directly or indirectly (e.g., by way of the content provider system102, which may then transmit the content objects177to the subsystem111) for particular time periods, with assignments for designated channels and/or television programs. For example, in conjunction with the selection of sets of one or more adaptable content objects176, sets of one or more content objects177that match the one or more adaptable content objects176may be pulled from one or more service provider systems103. In various examples, the matching may be based at least in part on specifications of the event, a temporal progression in the event (e.g., the fourth quarter, the second round, etc.), a state change in the event (e.g., a score change, one team or competitor leading or falling behind, etc.), and/or the like. In disclosed embodiments, the content provider102and/or the subsystem111may selectively aggregate content. By way of example, FIG.3is a simplified illustration of a portion of the content composite generation/control system200with aggregation and transformation features, in accordance with disclosed embodiments of the present disclosure. In some embodiments, the subsystem111may correspond at least in part to the content provider102and may include one or more data management servers. The subsystem111may include one or more aggregation and/or transformation engines231, which may correspond to the content harvesting engine236in some embodiments. In various embodiments, the aggregation and/or transformation engine231may correspond to an integral engine or separate engines working in conjunction. The aggregation/transformation engines231may translate, transform, or otherwise adjust data collected. The aggregation and transformation engines231may provide a pipeline that processes data input from regulatory data sources, applies rules, transforms the data into jurisdiction-specific regulatory rules218, and uses the rules218to adaptively control content composite creation, the delivery of such content, and interactions with such content. In various embodiments, the harvesting engine236may include or otherwise operate in conjunction with the matching engine238, which may operate at least partially as a consolidation engine. The matching engine238may process manifold data sets that may, for instance, come from different sources112or the same source112, for example, by way of one or more updates to data previously provided by a particular source112, and the matching engine238may consolidate the data sets to form a composite data set. 
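By way of a non-limiting illustration of the aggregation/transformation pipeline described above, the following Python sketch normalizes raw regulatory records into jurisdiction-keyed rule sets 218. The record field names and rule fields are hypothetical assumptions for illustration only.

from collections import defaultdict

def transform_to_rules(raw_records):
    # Normalize pulled/pushed regulatory records into jurisdiction-specific rule sets 218.
    rules_218 = defaultdict(list)
    for record in raw_records:
        rule = {
            "event_types": set(record.get("event_types", [])),      # event types covered
            "allowed_hours": tuple(record.get("allowed_hours", (0, 24))),  # time-of-day window
            "max_per_event": record.get("max_per_event"),            # per-event frequency cap
        }
        rules_218[record["jurisdiction"]].append(rule)
    return dict(rules_218)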
The consolidation may include organizing, categorizing, qualifying, and/or comparing the sets of information; detecting, identifying, and/or handling errors/discrepancies; identifying redundancies; removing redundancies; and/or otherwise processing the data sets. In some embodiments, regulatory objects are consolidated to form set of consolidated rules218. The objects may correspond to structured data, text, files, documents, and/or the like specifying conditions, criteria, and requirements of jurisdiction-specific regulations. In some embodiments, objects are consolidated and transformed into organized, indexed, categorized, and qualified rules, workflows, and/or decision trees. In some embodiments, the matching engine238may identify a subset of data, regulatory rules, and/or one or more data sources112(e.g., regulatory authority) that are more important than the rest, and may process the subset first. In some embodiments, the matching engine238may follow an order of precedence in consolidating the data, rules, and/or data sources. In some embodiments, the consolidation may include removing and/or minimizing redundancy of information and requirements of rules to form a compact set of composite information, requirements, and/or restrictions for a particular location, a particular type of event, a particular type of betting, a particular type of bookmaker/sportsbook, a particular type of device, and/or the like. The matching engine238may operate to build one or more sets of data, content, rules, one or more indexes, one or more workflows, one or more decision trees, and/or one or more files particularized to one or more locations and based at least in part on selectively aggregated rules218. In some embodiments, the matching engine238may build multiple sets that relate to one or more rules, but are tailored for different geolocations, jurisdictions, types of events, types of betting, types of bookmakers/sportsbooks, and/or the like. In some embodiments, the matching engine238may translate the data into understandable data, information, and/or content. The transformed data, information, and/or content may be directed to certain tables, indexes, and/or storages based on one or more particular rules, geolocations, jurisdictions, types of events, types of betting, types of bookmakers/sportsbooks, and/or the like. In some embodiments, the selective aggregation, consolidation, and/or feed actions may be performed on an as-needed basis. For example, the selective aggregation, consolidation, and/or feed actions may be triggered when a change to rules is detected consequent to periodic polling of data source systems103for updates to rules and/or comparing newly harvested information to previously harvested information. In some embodiments, the selective aggregation, consolidation, and/or feed actions may be performed on a periodic basis based on any suitable time period. The service provider systems103may include manifold content source systems112, including, for example, sources112of objects corresponding to federal information, state information, local information, and/or the like. The harvesting engine236may include logic for implementing content aggregation features in various embodiments. In some embodiments, the harvesting engine236may be configured to gather data about rules from one or more service provider systems103and/or other data source systems sourcing information (e.g., government systems) through one or more networks120. 
By way of example without limitation, the engine(s), with one or more of the processors, may utilize one or more network interfaces to pull and/or push code from various entities. As disclosed herein, content may be actively gathered by accessing a repository that corresponds to such entities, and content could be gathered by “crawling” the various repositories in some embodiments. Updates for content source systems112may be periodically found. Additionally or alternatively, the content provider system102and/or the subsystem(s)111may wait for updates from one or a combination of the content source systems112. With some embodiments, any one or combination of the content source systems112may provide notifications to the content provider system102and/or the subsystem(s)111of data to be transferred, such as updated information not previously pulled/pushed to the content provider system102and/or the subsystem(s)111. Certain embodiments may also include data being pre-loaded and/or directly transferred to the content provider system102and/or the subsystem(s)111(e.g., via a storage medium) in addition to or in lieu of transferring data via a network120. The harvesting engine236could handle processing, extracting, formatting, and/or storing, in the content storage222, data including data for code portions. The harvested data may then be analyzed to determine one or more attributes of the code portions. Various sets of rules218may provide for various types of restrictions and/or specifications on creating and/or provisioning content composites180. In addition to geolocation restrictions/specifications, the various types of restrictions and/or specifications may include time restrictions, such as limits on a time of day when content composites180may be presented, limits on the time in advance of a particular event (e.g., days, hours, etc.) and/or a portion thereof (e.g., round, quarter, period, etc.) ahead of which content composites180may be presented, and the like. Additionally or alternatively, the various types of restrictions and/or specifications may include restrictions on and/or specifications of types of events (e.g., football, soccer, martial arts, types of racing, etc.) for which content composites180may or may not be presented and the manner in which content composites180may be presented for the different types of events. In some instances, the number and/or frequency of composite180presentation may be limited on a per-event basis. Further, the type of betting (e.g., the actions subject to wager) may be restricted by the rules218in various ways, depending on the location. Thus, provisioning of content composites180may be further differentiated according to event type, with time, place, and/or manner restrictions/specifications contingent on event type. Restrictions on and/or specifications of the manner in which content composites180may be presented may include distinguishing types of devices (e.g., smart phone versus laptop computer, laptop computer versus television receiver, etc.) which will display the content composites180. 
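By way of a non-limiting illustration of the restrictions listed above, the following Python sketch gates presentation of a content composite 180 on time of day, lead time before the event, event type, per-event frequency, and device type. The rule field names and the use of a datetime.timedelta for the lead-time cap are assumptions for illustration only.

def presentation_allowed(rule, event, device_type, now, composites_shown_for_event):
    # Time-of-day window.
    start_hour, end_hour = rule["allowed_hours"]
    if not (start_hour <= now.hour < end_hour):
        return False
    # Maximum lead time ahead of the event (a datetime.timedelta, if specified).
    if rule.get("max_lead_time") is not None and event["start_time"] - now > rule["max_lead_time"]:
        return False
    # Restriction by event type.
    if rule.get("event_types") and event["type"] not in rule["event_types"]:
        return False
    # Per-event frequency cap.
    if rule.get("max_per_event") is not None and composites_shown_for_event >= rule["max_per_event"]:
        return False
    # Restriction by device type (e.g., smartphone versus television receiver).
    if rule.get("device_types") and device_type not in rule["device_types"]:
        return False
    return True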
To facilitate geo-discrimination to differentiate which sets of rules218apply to a given content provisioning instance with respect to an event, disclosed embodiments may provide for capturing and analyzing location data for the receiver110and/or device116to determine a current location of the receiver110and/or device116and which content objects176,177to select and present as a function of that current location. Location data may be captured to facilitate geo-sensitive adaptive content composite generation/control with respect to content202corresponding to a televised event as a function of a location detected for the receiver110and/or device116that receives the content and is to cause display of content composites180in conjunction with the content. In various embodiments, the matching engine238may include a location correlation engine that may correlate location data to a set of one or more zip codes (or other postal codes) and a corresponding rule set identifier for a set of one or more rules218mapped to the set of one or more zip codes (or other postal codes) via identifiers keyed with one or more tables and/or indexes. In various embodiments, location data may be determined by television receivers110and/or devices116, and such data may be sent to the system102. The television receivers110and/or devices116may, in some embodiments, have location detection capabilities based at least in part on location data provided by way of device GPS capabilities, Wi-Fi, cellular, other access points, subscriber/account information, and/or the like techniques for determining a current location of the respective receiver110and/or device116, and corresponding location data may be transmitted to the system102. In some embodiments, the system102may gather the location data. In some embodiments, where the location data does not explicitly indicate a geolocation, the system102may determine geo-locations by cross-referencing subscriber/account identifiers with stored geolocation data associated with subscribers/accounts. In some embodiments, the receiver110and/or device116may include at least one antenna for wireless data transfer to communicate through a cellular network, a wireless provider network, and/or a mobile operator network, such as GSM, for example without limitation, to send and receive Short Message Service (SMS) messages or Unstructured Supplementary Service Data (USSD) messages. The antenna may include a cellular antenna (e.g., for sending and receiving cellular voice and data communication through a network such as a 3G, 4G, or 5G network). In addition, the receiver110and/or device116may include one or more interfaces in addition to the antenna, e.g., a wireless interface coupled to an antenna. The receiver110and/or device116may include one or more communications interfaces that can provide a near field communication interface (e.g., contactless interface, Bluetooth, optical interface, etc.) and/or wireless communications interfaces capable of communicating through a cellular network, such as GSM, or through Wi-Fi, such as with a wireless local area network (WLAN). Accordingly, the receiver110and/or device116may be capable of transmitting and receiving information wirelessly through short-range radio frequency (RF), cellular, and Wi-Fi connections. 
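By way of a non-limiting illustration of the location correlation described above, the following Python sketch resolves device location data to a postal code and then to the rule set identifier for the rules 218 mapped to that code. The lookup callables, field names, and subscriber/account fallback are hypothetical assumptions for illustration only.

def correlate_location(location_data, zip_lookup, rule_set_index):
    # Resolve location data to (postal_code, rule_set_id); fall back to account data if needed.
    postal_code = location_data.get("zip_code")
    if postal_code is None and "lat" in location_data and "lon" in location_data:
        postal_code = zip_lookup(location_data["lat"], location_data["lon"])   # reverse geocode
    if postal_code is None:
        postal_code = location_data.get("account_zip_code")   # subscriber/account cross-reference
    return postal_code, rule_set_index.get(postal_code)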
Additionally, in some embodiments, the receiver110and/or device116may be capable of communicating with a Global Positioning System (GPS) in order to determine a location of the respective receiver110and/or device116. The antenna may be a GPS receiver or otherwise include a GPS receiver. In various embodiments, communication with the receiver110and/or device116may be conducted with a single antenna configured for multiple purposes (e.g., cellular, transactions, GPS, etc.), or with further interfaces (e.g., three, four, or more separate interfaces). In some embodiments, an application installed on the receiver110and/or device116may cooperate with the system102to facilitate tracking of locations of the receiver110and/or device116. For example, the receiver110and/or device116may transmit location data to any suitable backend system component. The location data may be a combination of data based on one or a combination of GPS, Wi-Fi, cellular, device sensor(s) such as a barometric sensor or accelerometer, RFID device signals, and/or other techniques for determining a current location of the receiver110and/or device116. The receiver110and/or device116may access the one or more networks120through one or more wireless links to one or more access points. The access points may be of any suitable type or types. For example, an access point may be a cellular base station, an access point for a wireless local area network (e.g., a Wi-Fi access point), an access point for a wireless personal area network (e.g., a Bluetooth access point), etc. The access point may connect the receiver110and/or device116to the one or more networks120, which may include the Internet, an intranet, a local area network, a public switched telephone network (PSTN), private communication networks, etc. In some embodiments, access point(s) may be used in obtaining location data for the receiver110and/or device116. FIG.3is a simplified illustration of a composite build engine240, in accordance with disclosed embodiments of the present disclosure. In various embodiments, the composite build engine240may be included in the subsystem111or may be separate from the subsystem111. The composite build engine240may, in some embodiments, be included in the content provider system102. Having processed an adaptable content object176, the subsystem111may create one or more content composites180that may include the adaptable content object176. To create the content composites180, disclosed embodiments may configure the content object176, at least in part, as a containerization object that containerizes at least part of the particularized content object177to facilitate various features disclosed herein. The composite build engine240may configure the content composites180to include a composite flag that may include one or more parameters and may indicate one or more composite specifications. For example, in various embodiments, the composite flag may include indicia of access and reference to one or more other composite specifications, access and reference to other metadata, and/or the like. Further, the composite build engine240may configure the composite180to include the composite specifications to facilitate performance of a set of one or more operations by the one or more endpoint media devices116or110with respect to the composite180consequent to the one or more endpoint devices116or110receiving the content composite180. 
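By way of a non-limiting illustration of the containerization described above, the following Python sketch assembles a content composite 180 in which the adaptable content object 176 acts as a container for at least part of the particularized content object 177, with a composite flag carrying references into the composite specifications. The dictionary layout is an assumption for illustration only and is not a defined packaging format.

def build_composite(adaptable_176, particularized_177, specifications):
    # The composite flag carries indicia of access and reference to specifications/metadata;
    # the payload may alternatively be omitted and fetched later per the specifications.
    composite_180 = {
        "container": adaptable_176,                  # containerization object (content object 176)
        "payload": particularized_177,               # particularized content object 177, if included
        "composite_flag": {
            "spec_ref": specifications.get("spec_ref"),
            "metadata_ref": specifications.get("metadata_ref"),
        },
        "composite_specifications": specifications,  # instructions, URLs, parameters, etc.
    }
    return composite_180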
In various embodiments, the composite specifications may include one or a combination of instructions, metadata, one or more URLs, instructions to configure an executable function in accordance with a set of parameters, and/or the like to specify and facilitate performance of the set of one or more operations by an endpoint media device116or110. In some embodiments, the composite specifications may include at least part of the particularized content object177. The composite build engine240may identify one or more portions of adaptable content object176to be augmented based at least in part on content object177, which may, in various embodiments, be identified by image analysis and/or analysis of tag data that defines one or more areas within frames that correspond to certain portions represented within the frames for augmentation. As disclosed herein, such tag data could define the area of interest in any suitable way in various embodiments which could be by way of any one or combination of mattes, masks, pixel identification (which could, for example, include identification of pixel coordinates and ranges thereof to define areas of interest), pixel color component characteristics such as color component values, overlays, and/or the like, allowing for correlation to the area to be augmented in any suitable way. In some embodiments, a processor (such as a main processor, a core processor, digital signal processor, and/or like) may take a definition of the augmentation area with respect to one or more reference frames and may perform auto-correlation of related images in a video stream to identify/define the augmentation areas in other frames of the video sequence that represent the portion to be augmented. Image characteristics (e.g., color, brightness, etc.) of the area of interest with respect to the reference frame(s) could be measured, quantified, and/or otherwise identified, and matched with measured image characteristics of the other frames to define the area to be augmented in multiple frames in an image-changing sequence. Accordingly, certain embodiments may allow for handling the complexity of multiple on-screen options by distinguishing particular areas in dynamic, image-changing content. Accordingly, the content composite180may include at least part of the content object177and the particularized content object177at the time the content composite180is transmitted to one or more endpoint media devices116, with the particularized content object177separate from or grafted into the content object176such that the content object176is an adapted content object176adapted with the particularized content object177. In some embodiments, the content composite180may not include a particularized content object177at the time the content composite180is transmitted to one or more endpoint media devices116or110. In such instances, the particularized content object177may be fetched per the composite specifications from the system102,200, another data source103, or from storage of the endpoint media device116or110, and may be used by an endpoint media device116or110to adapt the content object176. The composite specifications may include instructions and build specifications according to which the endpoint media device116or110may merge the pulled/retrieved particularized content object177with the content object176. 
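By way of a non-limiting illustration of propagating an augmentation area across frames, the following Python sketch measures a simple image characteristic (mean color of the tagged pixel region) in one or more reference frames and accepts the same region in another frame when the measured characteristics match within a tolerance. The frame representation, region format, and tolerance are hypothetical assumptions and are far simpler than the auto-correlation described above.

def region_signature(frame, region):
    # Mean color of the pixels inside a rectangular region (x0, y0, x1, y1);
    # a frame is assumed to be a list of rows of (r, g, b) tuples.
    x0, y0, x1, y1 = region
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    if not pixels:
        return (0.0, 0.0, 0.0)
    n = len(pixels)
    return tuple(sum(channel) / n for channel in zip(*pixels))

def find_augmentation_region(reference_frame, region, candidate_frame, tolerance=12.0):
    # Accept the region in the candidate frame if its signature matches the reference signature.
    ref_sig = region_signature(reference_frame, region)
    cand_sig = region_signature(candidate_frame, region)
    distance = sum(abs(a - b) for a, b in zip(ref_sig, cand_sig))
    return region if distance <= tolerance else None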
In some embodiments, the composite flag and/or composite specifications may prompt the endpoint media device116or110to execute the instructions to perform at least one operation of the set of one or more operations facilitated by the composite object180. In some embodiments, APIs may be used to instruct the endpoint media device116as to what to do with the composite flag and/or composite specifications. In some embodiments, the composite flag and/or composite specifications may allow for invoking, waking up, opening, and/or otherwise activating an application of the endpoint media device116responsive to the decryption of the composite flag and/or composite specifications, in some instances, when the application is offline and/or otherwise not online with respect to the system102,200, and/or another system103. The composite flag and/or composite specifications may prompt the endpoint media device116or110to cause display of an interface element that allows user selection to cause communication to the process-performing system103of an instruction to configure an executable function in accordance with a set of parameters. In some embodiments, the composite flag and/or composite specifications may include a report flag that triggers one or more return channel communications. The corresponding instructions may instruct the endpoint media device116or110to report to the system102,200, and/or another system103with one or more return channel communications indicating detection of one or more operations executed consequent to the presentation of a composite180, such as opening a mobile app on the endpoint media device116, utilizing the composite180to place a bet, bets placed, metrics of the operations (e.g., time of execution), and/or the like. The return channel communications may contribute to the observation data229and feedback loop features disclosed further herein. Additionally, in various embodiments, with selection of a content composite180to select an executable function for a system103, the devices110and/or116may automatically record the event corresponding to the content composite180. This feature may apply to any of the composite180selections disclosed herein such that, where the selections instruct the system103, the parallel instructions are issued to the devices110and/or116to record the corresponding event. When the selections are made via the receiver110, the composite instructions may directly instruct the receiver110to record the event. However, when the selections are made via a device116, the instruction may be communicated from the device116to a receiver110that is associated with the account identifier. Such instruction may be made directly to the receiver110when it is determined that the device116and the receiver110are communicatively couplable, such as in the subsystem300. The communication of the instruction may be made directly via Bluetooth, Wi-Fi, and/or the like when such connections are available. However, when such connections are not available directly to the receiver110, the instruction may be communicated indirectly via return channel communications as disclosed herein. However, when it is determined that no such receiver110is associated with the account identifier, the instruction may be communicated to the service provider system102via return channel communications, and the system102may record the events or, where such recording is already automatically provided by the system102, the system102may receive the instruction and later make the recordings accessible to the device116. 
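By way of a non-limiting illustration of a return channel communication triggered by a report flag, the following Python sketch posts a small report describing the operation an endpoint device performed, which may contribute to the observation data 229. The report URL, message fields, and backend behavior are hypothetical assumptions for illustration only.

import json
import time
import urllib.request

def send_return_channel_report(report_url, composite_id, operation, endpoint_id):
    # Build a minimal report of the operation executed consequent to presenting a composite 180.
    payload = {
        "composite_id": composite_id,
        "operation": operation,        # e.g., "app_opened", "function_configured"
        "endpoint_id": endpoint_id,
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        report_url,                    # hypothetical backend endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status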
Further, as disclosed further herein, the subsystem111may detect triggers corresponding to a viewer's previous selections of composites180, or the triggers may correspond to shifting odds due to developments occurring within the event (e.g., underdog is actually close or ahead, an upset is imminent, etc.), detecting a state change in the event (e.g., a score change, one team or competitor leading or falling behind, etc.), and/or the like. The subsystem111may compare such developments and state changes to the viewer's previous selections of composites180. This may correspond to comparing the evolution of the games and odds and bets already placed by the viewer. The subsystem111may identify where the bets have become increasingly unlikely to result in a win, or where the bets are close or highly likely to result in a win. Responsive to detecting one or more of such situations, the subsystem111may create additional composites180. Such composites180may indicate the detection of such situations, may notify the viewer of the watchability of the event and that the viewer may want to tune in or pay attention to the ongoing event, and/or may identify additional possible executable functions that could be selected in view of the developments detected for the events. Still further, the subsystem111may correlate such developments in the events to timing specifications for the recordings of the events and may highlight, mark, and/or otherwise record presentation times corresponding to the developments within the recordings of the events. In various embodiments, the subsystem111may then create selectable options for skipping to the corresponding segments within the recordings, and/or may create cuts of the recordings corresponding to the segments for highlights that can be made accessible and viewable to the viewer. Listings of the recorded content for the events and/or highlights may be made available with any suitable graphical indicia and descriptive content via additional composites180and/or the augmentation interface disclosed herein. To facilitate the content composite180, the composite build engine240may include a metadata handler208that may generate metadata (e.g., one or more tags) corresponding to identifiers, attributes, characteristics, and/or categories of programming content202, adaptable content objects176, and/or particularized content objects177. In various embodiments, the metadata210may be inserted into the output programming content202, output adaptable content objects176, and/or output particularized content objects177. In some embodiments, the one or more tags210may not be inserted into the programming content202, adaptable content objects176, and/or particularized content objects177but may be sent with the output programming content202, output adaptable content objects176, and/or output particularized content objects177. The composite build engine240may assign packet identifiers to identify data of the content that is to be transmitted as part of a data stream to a receiver110and/or device116and that is to be associated with one or more tags. Accordingly, the content splicing subsystem111may output one or a combination of metadata-augmented programming content202-1, metadata-augmented content objects176-1, and/or metadata-augmented content objects177-1. In some embodiments, one or a combination of metadata-augmented programming content202-1, metadata-augmented content objects176-1, and/or metadata-augmented content objects177-1may be stored at least temporarily in one or more repositories222.
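As a purely illustrative, non-limiting sketch of the trigger comparison described above, the following shows one way previously placed selections could be compared against live odds to decide whether follow-up composites should be created; the odds-drift thresholds and data fields are assumptions.

```python
# Hypothetical sketch: compare a viewer's placed bets against live odds updates to
# flag bets that have become close or unlikely, prompting follow-up composites 180.
# The drift thresholds and field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PlacedBet:
    event_id: str
    selection: str            # e.g., the competitor the viewer bet on
    odds_at_placement: float  # decimal odds when the bet was placed

def classify_bet(bet: PlacedBet, live_odds: dict) -> str:
    """Roughly classify a bet as 'likely', 'close', or 'unlikely' from live odds."""
    current = live_odds.get((bet.event_id, bet.selection))
    if current is None:
        return "unknown"
    drift = current - bet.odds_at_placement  # positive drift => outcome less likely
    if drift > 2.0:
        return "unlikely"
    if abs(drift) <= 0.5:
        return "close"
    return "likely"

def follow_up_composites(bets, live_odds):
    """Describe notification composites for bets that are now close or unlikely."""
    return [{"event_id": bet.event_id, "reason": status}
            for bet in bets
            if (status := classify_bet(bet, live_odds)) in ("close", "unlikely")]
```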
In some embodiments, tag data may be stored at least temporarily in one or more repositories222. The content matching engine238may identify a televised event in the programming content202and may identify one or more corresponding identifiers, attributes, characteristics, and/or categories of one or more adaptable content objects176and/or one or more particularized content objects177. Based at least in part on such identification, the composite build engine240may create metadata, which, in some embodiments, may correspond to tag data. Tag data may include an indication of a period of time (or other measure of time, e.g., a number of frames), a start frame, an end frame, and/or the like. Tag data may include or otherwise be associated with a tag identifier and may include event, attribute, characteristic, and/or category identifiers. For example, the metadata for the televised event may identify the particular event. The metadata may further identify one or more attributes of the particular event (e.g., any suitable identifier for the participating entities, the location of an event, and/or the like). In some embodiments, at least a portion of the metadata augmentation may be performed at the content provider system102such that one or more tagged composite components may be provided to an endpoint media device116. Subsequently, the endpoint media device116may identify composite components, for example, by processing the metadata. The metadata for adaptable content objects176may, for example, identify the adaptable content objects176as being adaptable with any suitable identifier, such as a flag, field value, etc. Additionally or alternatively, the metadata for the adaptable content objects176may identify that the adaptable content objects176are designated for a certain event or category of events with any suitable identifier. The metadata for the adaptable content objects176may further identify one or more attributes of the particular event (e.g., any suitable identifier for associated entities, location, a temporal attribute such as a time of an event, and/or the like). Additionally or alternatively, the metadata for the adaptable content objects176may identify transition points, hooks, frames, windows, other portions designated for overlays, and/or the like for merging with content objects177such that content from the content objects177is merged at the transition points, hooks, frames, windows, other portions designated for overlays, and/or the like. In some embodiments, metadata-augmented adaptable content objects176may be provided by the system102to the receivers110and/or devices116, after which the receivers110and/or devices116, each having at least a portion of the content composite subsystems111, may process and use the metadata to facilitate matching adaptable content objects176with corresponding televised events of the programming content202. Likewise, the receivers110and/or devices116may process and use the metadata to facilitate matching adaptable content objects176with corresponding content objects177and then creating content composites180therefrom. Thus, the metadata may facilitate the receivers110and/or devices116appropriately providing corresponding content composites180for display with appropriate placement with respect to televised events at commercial breaks and/or during event presentation of the televised events.
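The following is a minimal, illustrative representation of such tag data and of a metadata match between a televised event and an adaptable content object; the field names and matching rule are assumptions rather than a definition of the metadata format used by the embodiments.

```python
# Illustration only: a possible shape for tag data and a simple metadata match
# between a televised event and an adaptable content object 176; all field names
# and the matching rule are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TagData:
    tag_id: str
    start_frame: int
    end_frame: int
    event_id: Optional[str] = None
    category_ids: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)

def object_matches_event(object_tag: TagData, event_tag: TagData) -> bool:
    """Match when the object targets the specific event or shares an event category."""
    if object_tag.event_id and object_tag.event_id == event_tag.event_id:
        return True
    return bool(object_tag.category_ids & event_tag.category_ids)
```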
In a similar manner, metadata-augmented content objects177may be provided by the service provider system102to the receivers110and/or devices116. The metadata for content objects177may, for example, identify an identifier of the particular event (e.g., any suitable identifier for a game, match, competition, and/or the like). The metadata for the content objects177may further identify fields and content for one or more attributes of the particular event, such as any suitable identifier for the competitors, the location of the event, a temporal attribute such as a time of the event or a progression of the event, performance metrics of one or more competitors in the event (e.g., possession time, attempts, hits, strikes, takedowns, interceptions, completions, baskets, assists, fouls, etc.), a state change in the event (e.g., a score change, one team or competitor leading or falling behind, etc.), odds information with respect to a particular sporting event and a particular outcome of the sporting event and/or of certain potential results/actions that could occur within the event, URLs and hyperlinks to betting platforms/websites of systems103and/or sites for further information, and/or the like. In some embodiments, at least a portion of the metadata augmentation may be performed at the service provider system102and/or the service provider system103such that tagged content objects177are provided to the receivers110and/or devices116from the systems102and/or103. Subsequently, the receivers110and/or devices116may process and use the metadata to facilitate matching adaptable content objects176with corresponding content objects177and then creating content composites180therefrom. Thus, the metadata may facilitate the receivers110and/or devices116matching adaptable content objects176with corresponding content objects177and then creating content composites180. Alternatively, the receivers110and/or devices116, having at least a portion of the content splicing subsystem111, may process the content objects177in the form in which they are received (e.g., directly from a service provider103) and, based on such processing, may match the content objects177to a particular event and/or may identify other attributes of the content objects177without the content objects177being received as augmented with metadata. In any case, in some embodiments, the receivers110and/or devices116, each having at least a portion of the subsystems111, may create the content composites180. Yet, as another alternative, the service provider102, having at least a portion of the subsystem111, may create the content composites180and transmit the content composites180to the receivers110and/or devices116. The learning engine239may be an analysis engine that employs machine learning. The learning engine239may further employ deep learning. Accordingly, the learning engine239may facilitate machine learning or, more specifically, deep learning, to facilitate creation, development, and/or use of viewer pattern data216. As disclosed herein, the subsystem111may determine an event that the viewer actually is viewing, is about to view (e.g., the televised event is scheduled to play on the channel that the viewer is currently viewing), or is likely to view as determined with the learning engine239. The subsystem111may push information indicating the event to one or more service provider systems102and/or103.
In some embodiments, the service provider system102may select one or more adaptable content objects176matching the televised event for transfer to the subsystem111which, as disclosed herein, may be a part of the content provider system201and/or may be part of the receiver110and/or devices116. The subsystem111may select from the one or more adaptable content objects176those matching particular segments of the event and, utilizing a content composite splicing engine242in some embodiments, may output one or more corresponding content composites180for display after the particular segments and/or simultaneously with the particular segments. In various embodiments, one or more of the service provider systems102,103may select one or more particularized content objects177matching the event for transfer to the subsystem111. In some embodiments, one or more of the service provider systems102,103may select a set of one or more particularized content objects177(e.g., based on recency of information updates corresponding to the content objects177) for transfer to the subsystem111, and the subsystem111may determine which content objects177from the set match the event. As disclosed above, the content composite subsystem111may include a matching engine238that may include logic to implement and/or otherwise facilitate any taxonomy, classification, categorization, correlation, mapping, qualification, scoring, organization, and/or the like features disclosed herein. FIG.4illustrates certain aspects of the AI-based subsystem data flow400, in accordance with various embodiments of the present disclosure. The content processing subsystem111may be configured to gather observation data229, which may be specific to one or more particular identified users and/or may be generally related to particular receivers/devices110,116. The observation data229may be gathered from one or more receivers110and/or devices116, aggregated, consolidated, and transformed into viewer pattern profiles that include personalized pattern data216. In embodiments where the learning engine239is included in a receiver/device110,116, the receiver/device110,116may be a self-observer that may gather additional observation data229. In various embodiments, the data from the one or more receivers/devices110,116may be retrieved and/or received by the content processing subsystem111via one or more data acquisition interfaces, which may include interfaces of the content processing subsystem111, the one or more receivers/devices110,116, and/or the like—through network(s)120in various embodiments, through any suitable means for direct communication, and/or through any other suitable means of transferring data. According to various embodiments where the subsystem111is included in a service provider system102, observation data229may be actively gathered and/or pulled from the one or more receivers/devices110,116. As disclosed herein, in various embodiments, the one or more data acquisition interfaces may include one or more APIs that define protocols and routines for interfacing with the one or more receivers/devices110,116and which may specify API calls to/from one or more receivers/devices110,116. In various embodiments, the APIs may include a plug-in to integrate with an application of one or more receivers/devices110,116.
The API translation profiles may translate the protocols and routines of the data source component and/or system to integrate at least temporarily with the system and allow one-way communication to the system102and/or two-way communication with system102in various embodiments by way of API calls. Various embodiments of the subsystem111may aggregate observation data229to derive device identification data404, device operations406, temporal data408, and/or contextual data410. The device identification data404may include any suitable data for identifying and tracking particular receivers110and devices116; associated accounts, subscribers, and viewers; and/or the like disclosed herein. The device operations data406may include any suitable data for identifying and tracking device operations and interactions such as those disclosed herein. The contextual data410may include metrics and patterns of viewer interactions/responses pursuant to provisioning of content composites180and biasing181. For example, viewer responses to content composites180provisioning may include indications of whether the viewer selected user-selectable options provided with composites180, the types of such selections, and/or types of consequent interactions with service provider systems103. For example, the metrics and patterns may take into account whether the viewer opted out of content composites180, whether the viewer selected links of composites180to interact with the platforms and sites of one or more service provider systems103, whether the viewer selected options to redirect content composites180and/or notifications from service provider systems103to a secondary device116, which service provider systems103the viewer selected, whether the viewer placed bets and the types of the viewer's bets and other interactions with service provider systems103, which types of events and outcomes the viewer placed bets on, amounts wagered, and/or the like. The temporal data408may include metrics such as any information to facilitate detection, recognition, and differentiation of one or a combination of temporal factors correlated, or which the content processing subsystem111correlates, to other observation data229such as device identification data404, contextual data410, and/or the like. For example, the temporal data408may include time of day information, time of week information, time of year information, holiday information, etc. when the viewer made selections and bets; when during the progression of the events, sports seasons, postseasons, championships, and/or similar stages that the viewer made selections and placed bets; and/or the like. As disclosed herein, the subsystem111may be configured to receive, pull, process, buffer, organize, rank, and/or store data source input212, which may be included in the observation data229. This may include collecting data source input212from a plurality of devices110and/or116and/or from one or more data source systems112via the one or more data acquisition interfaces. As disclosed herein, data source input212may include, but is not limited to, updates (real-time and/or otherwise) and/or continuous data streams received from one or more data sources112, which may include real-time events related to bookmakers, bookies, sportsbooks, oddsmakers, sports event and/or other competition information, gambling/betting, Twitter® feeds, Instagram® posts, Facebook® updates, and/or the like.
Observation data229may be actively gathered and/or pulled from one or more data sources, for example, by accessing a repository and/or by "crawling" various repositories. The observation data229may include user indications of preference of entities and subject matter, such as positive ratings, indications of liking an entity, and sharing of entity-specific and/or subject-matter-specific information with others, which the user has made via webpages and/or social media. This information may be compiled in real-time or near real-time and may be used to trigger composite180generation and communication. The subsystem111may determine, based at least in part on the observation data collected from one or more of the data source systems (e.g., a social media system), strong preferences and/or aversions to particular sports teams/competitors. For example, an aversion may be inferred from negative indicia (e.g., dislikes of the particular team on a social media profile of the contact) and/or positive indicia (e.g., likes of another team that is a determined rival of the particular team), and, in some instances, recent games (i.e., wins/losses) of the two teams. The contextual data410may include viewership data. In some embodiments, the service provider system102may include a viewership data engine that is configured to facilitate identification, aggregation, consolidation, and qualification of viewership data pertinent to a plurality of viewers of devices116and110in various geolocations. The harvesting engine236may be configured with logic to process, analyze, retrieve, pull, cause communication of, derive, compile, aggregate, categorize, characterize, rank, handle, store, report, and/or present any suitable information/content pertaining to viewership—e.g., implicit content ratings derived from histories and patterns of viewing and recording, and explicit content ratings input by viewers. The harvesting engine236may be configured to cause viewership information to be transmitted from devices116and110to the service provider system102for identification, aggregation, consolidation, and qualification of viewership data pertinent to a plurality of viewers of devices116and110in various geolocations. In some embodiments, the viewership data engine may correspond to the harvesting engine236. In some embodiments, the viewership data and the determination of viewership characteristics may be based at least in part on real-time or near real-time back-channel information from viewing devices116and110indicating the channels and/or events being viewed and/or recorded, viewer profiles, viewer selections, viewer geolocations, viewer ratings of events, viewing history, explicit user preferences, user characteristics, and/or the like. Aggregated viewership data may be analyzed to identify a set of one or more viewership characteristics with respect to events. Based at least in part on the viewership data, the content matching engine238and/or the learning engine239may differentiate and qualify one or a combination of events being viewed and/or recorded, viewer profiles, viewer selections, viewer geolocations, viewer ratings of events, viewing history, explicit user preferences, viewer characteristics (e.g., demographics), and/or the like. Some embodiments may employ a decision tree, checklist, workflow, and/or the like to capture various aspects of viewership data and assess those aspects to infer event qualification. Some embodiments may qualify an event according to a gradated viewership scale.
Any suitable viewership scale may be used in various embodiments. In some embodiments, a viewership scale could entail a categorization scheme, with categories such as high viewership, medium viewership, and low viewership, or any suitable categories. In some embodiments, a viewership scale could entail an event scoring system. The event scoring system could be correlated to the category scheme in some embodiments, such that certain scores correspond to certain categories. Some embodiments may score an event with a numerical expression, for example, a viewership score. A viewership score may be an assessment of an event's current, past, and/or predicted viewership. Accordingly, a viewership score may indicate which events have had, currently have, and/or are forecasted to have greater viewership than other events. In addition to or as an alternative to employing real-time viewership qualification, certain embodiments may predict viewership of events based at least in part on historical viewership data of specific events and/or of one or more categories to which events are mapped. Such categories may include one or a combination of geolocation, event type, executable function type (e.g., bet type), parameters for the executable function (e.g., bet parameters, odds, etc.), demographics, social trending, particular competitors (e.g., rivals), tournament/playoffs versus regular season, and/or the like, which may be correlated to viewership scores to refine the viewership qualifications for correlation to particular viewers in order to trigger composite180generation and presentation for the particular viewer as a function of one or more of these categories. Accordingly, certain embodiments may accord viewership scores based at least in part on such corresponding historical viewership data, which may include previous viewership scores, and some embodiments may employ geo-discrimination to differentiate which events are likely to have greater viewership in certain geo-locations as compared to other events. For example, a viewership score for a football game may be assigned a greater number of points when the service area includes a hometown of one of the football teams playing in the game. Hence, the viewership score for the game may be higher when there is a correlation between the game and a particular geolocation than when there is not such a correlation. Likewise, the system102may recognize situations where a significant viewership exists in geolocations that are not hometowns of the football teams playing in the game. For example, a geo-tailored viewership score for the football game may be assigned a greater number of points when the service area includes a significant number of fans even though there is no explicit connection to the football game. Hence, the viewership score for the game may be higher when the system102accounts for a significant pocket of fans of the Kansas team that are located in Colorado, even though a team from Colorado is not playing in the game. Accordingly, certain embodiments may more accurately differentiate which events are likely to have greater viewership in various geo-locations. The learning engine239may map one or a combination of various extra-composite metrics of the observation data229to the metrics of the particular composites180provided to a particular viewer.
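The following is a simplified, non-limiting sketch of how a geo-tailored viewership score of the kind described above might be computed; the point values, field names, and fan-pocket table are illustrative assumptions only.

```python
# Illustrative sketch of a geo-tailored viewership score that adds points when an
# event correlates with a service area (hometown teams or known fan pockets).
# The weights and data below are assumptions, not values from the disclosure.
def viewership_score(base_score: float,
                     service_area: str,
                     hometowns: set,
                     fan_pockets: dict) -> float:
    score = base_score
    if service_area in hometowns:
        score += 20.0                                # hometown of a participating team
    score += fan_pockets.get(service_area, 0.0)      # significant fan base elsewhere
    return score

# Example: a game involving a Kansas team scored for a Colorado service area that
# has a known pocket of that team's fans.
score = viewership_score(base_score=55.0,
                         service_area="CO",
                         hometowns={"KS"},
                         fan_pockets={"CO": 10.0})
```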
Based at least in part on taking into account such observation data229as part of a feedback loop, the learning engine239may employ an ongoing learning mode to develop personalized pattern data216for particular viewers or content receivers/devices generally, and to confirm, correct, and/or refine determinations made for personalized pattern data216for particular viewers or content receivers/devices generally. The content processing subsystem111may be configured to employ machine learning to process the observation data229and the content objects180and to derive and develop the personalized pattern data216. The content processing subsystem111may be configured to employ deep learning to process the observation data229and the content objects180and to derive and develop the personalized pattern data216. The learning engine239may be configured to perform any one or combination of features directed to matching or otherwise correlating the observation data229—such as the device identification data404, the device operation identification data406, the temporal data408, the contextual data410, descriptive information of the content objects180, and/or the like—with intra-content metrics of the content objects180. The learning engine239may include logic to implement and/or otherwise facilitate any taxonomy, classification, categorization, correlation, mapping, qualification, scoring, organization, and/or the like features disclosed herein. In some embodiments, the learning engine239may include the matching engine238. The learning engine239may include a reasoning module to make logical inferences from a set of the detected and differentiated data to infer one or more patterns of activity for particular viewers and/or receivers/devices generally. A pattern-based reasoner could be employed to use various statistical techniques in analyzing the data in order to infer personalized pattern data216from the observation data229. A transitive reasoner may be employed to infer relationships from a set of relationships related to the observation data229. In various embodiments, the system may automatically establish and develop the personalized pattern data216; however, the personalized pattern data216may also be set up and/or tailored by users. The feedback could be used for training the system to heuristically adapt conclusions, profiles, correlations, attributes, triggers, patterns, and/or the like to learn particular viewers and adapt content composite180provisioning to particular viewers, which may include requesting, searching for, and/or selecting particular types of adaptable content objects176and/or content objects177(e.g., which may be based at least in part on the metadata features disclosed herein) for content composite180creation. For example, the learning engine239may learn that a particular viewer tends to interact with content composites180that are directed to only certain types of events. Such event type differentiation may be on the macro level, such as recognizing that a viewer tends to interact more with composites180directed to certain types of sports and not other types of sports. Accordingly, the subsystem111may bias content composites180provisioning toward the types of sports that tend to cause viewer interaction, and decrease composite180provisioning frequency or cease provisioning for other types.
Further, the learning engine239may learn that a particular viewer tends to interact with content composites180that are directed to only types of events within a particular category (e.g., more high-profile events such as post-season events and/or championship events as opposed to regular season and/or non-championship events). Accordingly, the subsystem111may likewise bias content composites180provisioning toward such types of events and decrease or cease provisioning with respect to other types. Further, viewer interaction differentiation may be on the micro level, such as recognizing that a viewer tends to interact more with composites180directed to certain types of outcomes and state changes with respect to particular events. For example, the learning engine239may detect a viewer pattern of interacting only with composites180directed to the potential final outcomes/results of an event (e.g., final score, winner, etc.) or may detect a viewer pattern of interacting with more micro-level potential outcomes that can occur within an event (e.g., scoring on particular drives, takedowns, fouls, per-competitor performance, etc.). Accordingly, the subsystem111may likewise bias content composites180provisioning toward such types of outcomes and decrease or cease provisioning with respect to other types. Thus, the subsystem111may adapt composite180provisioning to maximize viewer engagement, and, when the subsystem111detects state changes in televised events that are mapped to viewer patterns of composite180interaction corresponding to such state changes and events, the subsystem111may initiate the polling of one or more corresponding adaptable content objects176, the polling of one or more corresponding content objects177, and/or the creation of one or more corresponding composites180as a function of the detected state change and the detected viewer pattern in order to provide tailored composites180to a viewer in response to the detected state change. Further, in situations where a pattern of more micro-level interactions is detected for a particular viewer, the subsystem111biasing181of composites180may include serial provisioning of composites180in a serial drill-down manner such that the first composites180provisioned may be directed to a more macro-level outcome and one or more composites180subsequently provisioned may be directed to more micro-level outcomes in accordance with the detected pattern. Thus, disclosed embodiments may provide for serial matching of composites180with respect to one another in order to provision the composites180with a trend that matches the detected pattern. Accordingly, as part of such learning and adaptation processes, the subsystem111may bias181composite180provisioning (which may correspond to bet recommendations) toward a particular viewer based at least in part on what the subsystem111has learned about a number of factors.
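Purely for illustration, the following sketch shows one way the biasing and serial drill-down behavior described above could be expressed; the pattern fields, levels, and ordering rule are assumptions rather than the disclosed logic.

```python
# Hypothetical sketch of biasing composite provisioning: filter toward the sports a
# viewer interacts with and, when micro-level interest is detected, order composites
# macro-level first and micro-level after (serial drill-down). Names are assumptions.
def order_composites(composites: list, viewer_pattern: dict) -> list:
    """Each composite is a dict with assumed 'sport' and 'level' ('macro'/'micro') keys."""
    preferred_sports = set(viewer_pattern.get("sports", []))
    biased = [c for c in composites
              if not preferred_sports or c.get("sport") in preferred_sports]
    if viewer_pattern.get("micro_level_interest"):
        # Serial drill-down: macro-level composite first, micro-level ones after it.
        return sorted(biased, key=lambda c: 0 if c.get("level") == "macro" else 1)
    # Otherwise provision only macro-level composites for this viewer.
    return [c for c in biased if c.get("level") == "macro"]
```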
Composite180creation and provisioning may be a function of pattern data216specific to one or a combination of location, geo-specific viewership, learning about the viewer and correlated viewership metrics, devices used, personal viewership, demographic viewership, social viewership, learned betting behavior, types of bets, live viewership trends (who's setting recordings for events and/or tuning in), event types/categories, specific competitors, rivalries, time spent watching particular games, regular season versus post-season, upcoming events, real-time state changes based on what's actually happening in events, parameters such as odds information, other observations data229, other data source input212, and/or the like criteria disclosed herein. The learning engine239and/or the matching engine238may perform correlation based at least in part on correlation rules that govern correlation of the personalized pattern data216to content objects177and corresponding sources of the content objects177based at least in part on metrics and availabilities of the content objects177from the particular source systems103. In various embodiments, the correlation may be based at least in part on the profiles of the service provider systems103. By way of example, the correlation rules218may include correlation criteria that could include respective weightings assigned to the particular criteria. Hence, each type of the above criteria could be assigned a weight according to its significance. Specifications of the criteria and weightings could be implemented in any suitable manner, including lists, tables, matrices, and/or the like, and could be organized in a rank order and/or any hierarchical structure according to weight. Some embodiments may have specifications of the criteria and weightings organized according to decision tree, with contingencies so that only certain combinations of criteria may be considered. In some embodiments, the learning engine239and/or the matching engine238may employ a scoring system to quantify correlations with a numerical expression, for example, a match/watchability score, with higher scores being assigned to higher correlations. Higher scores may be assigned for greater extents of matching. Accordingly, the learning engine239and/or the matching engine238may learn a viewer's top-ranked interests in types of events, participating entities, geolocations, types of bets, odds, and/or the like observation data229. In accordance with the pattern data216, the subsystem111may detect a trigger for composite180creation that corresponds to a need for presentation of one or more composites180in view of an upcoming event, a continuation of an ongoing event, and/or during the presentation of content corresponding to the event. Detecting a trigger may include determining timing specifications of the event that matches the viewer pattern data216, odds information that matches the viewer pattern data216, a temporal progression in the event (e.g., the fourth quarter, the second round, etc.) and/or shifting odds due to developments occurring within the event (e.g., underdog is actually close or ahead, an upset is imminent, etc.), detecting a state change in the event (e.g., a score change, one team or competitor leading or falling behind, etc.), and/or the like. As one example case out of many possibilities, say a state change in an event occurs (e.g., a score change, one team or competitor leading or falling behind, a takedown, a foul, etc.) 
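As a non-limiting sketch of the weighted correlation described above, the following shows one way criteria weights could be combined into a match/watchability score; the criteria names and weights are illustrative assumptions, not the correlation rules themselves.

```python
# Illustrative sketch of a weighted match/watchability score over correlation
# criteria; the criteria and weights below are assumptions for demonstration only.
CRITERIA_WEIGHTS = {
    "event_type": 0.30,
    "competitor": 0.25,
    "geolocation": 0.20,
    "bet_type": 0.15,
    "odds_range": 0.10,
}

def match_score(candidate: dict, pattern_data: dict) -> float:
    """Sum the weights of criteria on which a candidate matches the viewer pattern."""
    return sum(weight
               for criterion, weight in CRITERIA_WEIGHTS.items()
               if candidate.get(criterion) == pattern_data.get(criterion))

# Example: higher scores indicate higher correlation to the viewer pattern data.
score = match_score({"event_type": "football", "bet_type": "moneyline"},
                    {"event_type": "football", "bet_type": "spread"})
```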
and/or a temporal progression benchmark (e.g., the fourth quarter, the second round, etc.) is reached. The subsystem111may detect the state change and, in response, initiate creation of composites180for presentation at an upcoming break or during the event. The composite180may be dynamically inserted in the content stream within a short time (e.g., substantially instantaneously, within seconds, within a minute, etc.) after the state change. As disclosed herein, in various embodiments, the subsystem111may detect the state change and other triggers by way of analyzing and consolidating data source input212(e.g., data feeds and/or event updates) received from various data sources112, particularized content objects177from systems103, and/or analysis of content202by keyword recognition of dialogue from an announcer (e.g., detecting words such as touchdown, goal, takedown, foul, minutes on the clock, etc.), sudden changes in crowd noise, and/or image recognition (e.g., detecting graphics displayed with a televised event such as a scoreboard, streaming tickers or overlays typically positioned and/or scrolling across a portion of the display area, etc.). When the subsystem111detects a trigger, the subsystem111may receive, pull, and/or select one or more content objects177from a process-performing system103, and may receive, pull, and/or select from one or more adaptable content objects176provided by the system102as matching particular segments of the event and the one or more content objects177to generate and output for display, utilizing the content composite splicing engine242, one or more composites180to display as a commercial during a programming break after a particular segment or as an intra-program overlay, frame, window, pop-up, and/or the like presented concurrently with the event. FIG.5is a simplified illustration of a content composite interaction subsystem500, in accordance with certain embodiments of the present disclosure. The interaction subsystem500may include the television receiver110-1. The interaction subsystem500may include a display device160-1communicatively coupled with the television receiver110-1. The television receiver110-1may be communicatively coupled with a media service back-end502, which may correspond to certain upstream elements ofFIG.1and may include the network120-1. The interaction subsystem500may include any one or combination of computing devices116a-d. Depicted are two examples of a computing device116cand a computing device116d. Though both examples are depicted, a user may only use one computing device116in certain use cases. A number of other examples are possible, but not shown. In some embodiments, the display160-1and/or the television receiver110-1may be controlled by the user using the computing device116to send wireless signals576to communicate with the television receiver device110-1and/or the display160-1. The computing device116may receive wireless signals577from the television receiver device110-1to effectuate bi-directional communication. The computing device116may further receive and send wireless signals586,587from the network120through various means such as a Wi-Fi router, modem, cellular access points, and/or the like.
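For illustration only, the following sketch shows a simple keyword-spotting approach to detecting a state change from announcer dialogue text (for example, closed-caption text); the keyword list and trigger handling are assumptions.

```python
# Illustrative keyword-spotting sketch for detecting a state change from announcer
# dialogue (e.g., caption text); the keyword list below is an assumption.
STATE_CHANGE_KEYWORDS = {"touchdown", "goal", "takedown", "foul",
                         "interception", "score", "upset"}

def detect_state_change(caption_text: str) -> set:
    """Return the state-change keywords found in a caption segment."""
    words = {word.strip(".,!?").lower() for word in caption_text.split()}
    return words & STATE_CHANGE_KEYWORDS

# Example: a detected keyword could initiate creation of a composite 180 for the
# upcoming break or for concurrent presentation with the event.
if detect_state_change("Touchdown with two minutes on the clock!"):
    print("trigger: initiate composite creation")
```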
For example, a content object177may include text, one or more images, links, URLs, buttons, other user interface elements, and/or the like which the content splicing engine242may aggregate, process, format, crop, rescale, and/or otherwise prepare and include in composites180for insertion into the content stream for output with the programming content202and/or during breaks of the programming content202. By way of example, a content composite180may include a combination of graphics, video, audio, and/or one or more links along with the message such as: “The Buffs are up 38-36 at halftime—check out the latest odds,” “KU is up 42-40, place a bet on the second half of the game,” or “Think Denver can come back? The current odds are 7:1.” Thus, the content composite splicing engine242may identify content portions for augmentation by processing the content object177, reading the object177or certain portions thereof, and determine portions for augmentation in video segments. In some embodiments, portions of images and/or frames of the adaptable content object176may be overwritten with captured content from the content object177. Referring again more particularly toFIG.2, the matching engine238may be configured to match adaptable content objects176to objects177and segments of programming content202based at least in part on metadata at a service provider system102side or at a television receiver110and/or device116, in accordance with various embodiments. For example, metadata may be extracted when or before a given segment of programming content202is to be output for display and before a transition point. In some embodiments, the matching engine238may read the metadata and perform a search of the repositories222for one or more adaptable content objects176that have metadata matching the extracted metadata with respect to one or more of event identification, event category identification, and/or temporal identification, with the highest preference given to the adaptable content object176that has metadata most closely matching the metadata of the previous segment. Alternatively, the matching engine238may read the metadata mapped to the segment and pull one or more adaptable content objects176from the service provider system102. In so doing, the subsystem111may transmit at least a portion of the metadata of the objects177and/or programming content202to the service provider system102in order to facilitate matching the extracted metadata with one or more adaptable content objects176with respect to one or more of event identification, event category identification, and/or temporal identification. Consequently, the service provider system102may transmit one or more matching adaptable content objects176to the subsystem111, which may be integrated with a receiver110and/or device116. Some embodiments may include the subsystem111configured to perform a search of the repositories222for one or more adaptable content objects176that have metadata matching the extracted metadata in addition to pulling one or more adaptable content objects176from the service provider system102. For example, the subsystem111may first perform a search of the repositories222for any matching adaptable content objects176and then only pull one or more adaptable content objects176from the service provider system102when no adaptable content objects176are found in the search of the repositories222that match the extracted metadata with a sufficient match score that indicates a level of correlation satisfying a correlation threshold. 
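The following is a minimal sketch, under assumed interfaces, of the search-then-pull behavior described above: local repositories are searched first, and an adaptable content object is requested from the provider system only when no local object satisfies a correlation threshold.

```python
# Sketch only: search local repositories 222 first, then fall back to pulling a
# matching adaptable content object 176 from the service provider system 102 when
# no local match satisfies the correlation threshold. All interfaces are assumed.
CORRELATION_THRESHOLD = 0.8

def select_adaptable_object(extracted_metadata, local_repository, provider_client):
    scored = [(obj, obj.correlation(extracted_metadata)) for obj in local_repository]
    qualifying = [(obj, s) for obj, s in scored if s >= CORRELATION_THRESHOLD]
    if qualifying:
        # Highest preference to the object whose metadata most closely matches.
        return max(qualifying, key=lambda pair: pair[1])[0]
    # Fallback: request a matching adaptable content object from the provider system.
    return provider_client.fetch_matching_object(extracted_metadata)
```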
Additionally or alternatively, the matching engine238may pull one or more content objects177from the service provider system102and/or one or more service provider systems103based at least in part on particulars detected with the trigger. In so doing, the subsystem111may transmit at least a portion of metadata of the trigger and/or the programming content202to the service provider system102and/or one or more service provider systems103in order to facilitate matching the extracted metadata with one or more content objects177with respect to one or more of event identification, event category identification, and/or temporal identification. Consequently, the service provider system102and/or one or more service provider systems103may transmit one or more matching content objects177to the subsystem111. Additionally or alternatively, in some embodiments, the subsystem111may first obtain one or more matching adaptable content objects176, then read metadata from the one or more matching adaptable content objects176, and transmit at least a portion of the metadata to the service provider system102and/or one or more service provider systems103in order to facilitate matching the metadata with one or more content objects177. In determining whether to initiate the creation and/or provisioning of content composites180, and/or the restrictions governing such creation and/or provisioning, the subsystem111may detect a location corresponding to the computing device116and the receiver110, and may determine from rules218whether the provisioning of content composites180of a certain type is prohibited for the location. In the case where the subsystem111determines that there is no prohibition for the location, the subsystem111may determine a set of the rules218that govern timing restrictions and specifications, event type restrictions and specifications, place and manner restrictions and specifications, and types of adaptations of the adaptable content objects176with the particularized content objects177to create the composites180. Various sets of rules218may provide for various types of adaptations of the adaptable content objects176with the particularized content objects177, and the subsystem111may determine which set of rules218apply to a given receiver110and/or device116as a function of the location of the receiver110and/or device116. The place and manner restrictions and specifications of the geo-specific rules218may govern how composites are provisioned with respect to programming content202(e.g., as a commercial, as a real-time pop-up, as a real-time overlay, as an inset frame, and/or the like), which may be a function of the type of event and/or the type of end-user device. For example, a set of rules218may specify that composites180may only be provided during commercial breaks of a televised event. Another set of rules218may specify that composites180may be provided as overlays, frames, and/or pop-ups during the televised event. In such cases, the rules218may require that the user opt in and request such overlays, frames, and/or pop-ups during presentation of the televised event. Accordingly, one or more user-selectable options may be presented to the user via the receiver110and/or device116to allow the user to request overlays, frames, and/or pop-ups during presentation of the televised event. Such user-selectable options may be provided with composites180that are presented during commercial breaks.
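Purely as an illustrative sketch of the location-dependent gating described above, the following shows one way geo-specific rules could be looked up to decide whether, and in what manner, composites may be provisioned; the jurisdiction names and rule fields are assumptions.

```python
# Hypothetical lookup of geo-specific rules 218 governing whether and how content
# composites 180 may be provisioned for a detected location; the jurisdictions,
# fields, and values below are assumptions for illustration only.
GEO_RULES = {
    "jurisdiction_a": {"allowed": True,  "placement": "commercial_break_only"},
    "jurisdiction_b": {"allowed": True,  "placement": "overlay_with_opt_in"},
    "jurisdiction_c": {"allowed": False, "placement": None},
}

def provisioning_rules(location: str):
    """Return the applicable rule set, or None when provisioning is prohibited."""
    rules = GEO_RULES.get(location)
    if rules is None or not rules["allowed"]:
        return None
    return rules

# Example: overlays/pop-ups are offered only where the applicable rules allow them
# and the viewer has opted in.
rules = provisioning_rules("jurisdiction_b")
```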
Thus, contingent upon user selection of the options, presentation of composites180may transition to overlays, frames, and/or pop-ups during presentation of the televised event. Likewise, one or more user-selectable options may be presented to the user via the receiver110and/or device116to prohibit content composite180provisioning. Upon user selection of such prohibition, the subsystem111may present alternative content objects in lieu of content composites180during commercial breaks. In like manner, one or more user-selectable options may be presented to the user via the receiver110and/or device116to allow content composite180provisioning to a secondary device116concurrently with the presentation of the televised event. Such provisioning to a secondary device116may be provided by way of one or a combination of an application installed on the secondary device116, communications from the receiver110, communications from the service provider system102, a near field communication interface (e.g., contactless interface, Bluetooth, optical interface, etc.), wireless communications interfaces capable of communicating through a cellular data network, or through Wi-Fi, such as with a wireless local area network (WLAN), and/or the network120. By way of example, a composite180may be presented via a television receiver110on a display160with one or more user-selectable options that allow redirection of composites180to a secondary device116consequent to user selection. The redirection of composites180to the secondary device116may include options to cast the televised event and/or the composites180from the receiver110to the secondary device116. To facilitate such a mode of operation, various embodiments may include the receiver110and/or the device116incorporating at least portions of the subsystem111to provide various features. According to one option, the secondary device116(e.g., device116cor116das illustrated inFIG.5) may receive the same content, including composites180as commercial segments and/or overlays, being displayed on the display device160with simulcasting to the secondary device116so that the secondary device116need only display the augmented content. According to another option, the television receiver110may provide the programming content to the display device160, and the secondary device116may receive the programming content and splice composites180into the content displayed with the device116. In various embodiments, the device116may receive the composites180for composite generation/control from the receiver110, may receive the composites180for composite generation/control from the service provider system102, and/or may receive adaptable content objects176and content objects177from the receiver110and/or the system102in order that the device116may create and provision composites180therefrom. In some modes of operation, the television receiver110may present alternative content objects with the display device160in lieu of content composites180, while the content composites may be shunted to the secondary device116. Thus, the secondary device116may receive composites180that would otherwise be displayed on the display device160. In some embodiments and options, the receiver110may not cast the televised event, but the device116may present composites180without the televised event.
For example, according to some options, an application for the device116may be downloaded, installed, and initiated to facilitate content provisioning on the device116and interaction with one or a combination of the receiver110, system102, and/or one or more systems103. Accordingly, various embodiments may provide various user-selectable options for transitioning from just viewing a televised event to displaying and interacting with composites180and service provider systems103via a secondary device116, while a televised event corresponding to the composites180is being displayed on another device, such as a display device160via the receiver110. The user-selectable options presented with the composite180may allow for taking actions, such as selecting one or more URLs and/or hyperlinks to one or more betting platforms, websites, and/or sites for further information and placing bets. As disclosed herein, the user-selectable options may include one or more options to transition provisioning of one or more composites180to a secondary device116and/or to request notifications from the one or more betting platforms, websites, and/or sites be sent to the secondary device116so that the secondary device116may be used to interact with the platforms and sites via an application installed on the secondary device116. In that way, a user may place bets and otherwise interact with the one or more platforms and sites via the secondary device116while viewing the televised event on a primary display associated with a receiver110and/or primary device116. As disclosed herein, various embodiments may provide for an interface (augmentation interface) that facilitates the adaptive content composite interaction that is jurisdiction-smart, geo-adaptable, and jurisdiction-adaptable. In combination with other features disclosed herein, various embodiments may differentiate what features are available for a location and jurisdiction and then provide only those features that are available/allowed. Further, the features and recommendations provided by various embodiments may be a function of access rights of particular viewers. The interfaces of various embodiments may orchestrate the services of different systems103, biasing content composites180and announcements to particular viewers, surfacing available content composites180and executable functions from the systems103, while providing comparisons of the various corresponding options and parameters. In various embodiments, the augmentation interface may be similar to or different from interface elements disclosed with the example ofFIG.5and in U.S. application Ser. No. 17/505,135, which application is filed concurrently herewith and is hereby incorporated by reference in its entirety for all purposes. FIG.8illustrates an example method800for adaptive content composite interaction with respect to digitally distributed content corresponding to an event, in accordance with embodiments of the present disclosure. One or a combination of the aspects of the method800may be performed in conjunction with one or more other aspects disclosed herein, and the method800is to be interpreted in view of other features disclosed herein and may be combined with one or more of such features in various embodiments. Thus, in various embodiments, one or more of the methods may additionally or alternatively include one or a combination of the following.
As indicated by block802, in various embodiments, one or more processing devices of one or more media devices may perform discovery with respect to a plurality of events at least in part by one or a combination of the following operations, which are disclosed further herein. As indicated by block804, a set of one or more electronic communications may be received via one or more interfaces and detected. As indicated by block806, one or more indicators of one or more events of a first set of one or more events for which corresponding content is specified for digital distribution may be detected from the one or more sets of one or more electronic communications. As indicated by block808, the one or more processing devices may detect a particular endpoint identifier mapped to a set of one or more media devices110and/or116. As indicated by block810, a set of access specifications mapped to the particular endpoint identifier may be determined. In some embodiments, the set of access specifications may be further determined to be mapped to the set of one or more media devices110and/or116. As indicated by block812, a second set of one or more events may be determined from the first set of one or more events as a function of the first set of access specifications. In some instances, the second set may be a subset of the first set; in some instances, the second set may include or be equivalent to the first set. As indicated by blocks816, the one or more processing devices may receive, from a remote system102or103via a network120, one or more content objects177corresponding to the second set of one or more events. As indicated by blocks816, the one or more processing devices may create mapping specifications of the one or more content objects177correlated to the second set of one or more events and the first set of access specifications. For example, a first set of one or more content objects177received by the one or more processing devices from the remote system102or103may be processed, and a second set of one or more content objects177from the first set of one or more content objects177may be identified as a function of the one or more rules mapped to the current geolocation. As indicated by blocks818,820, and822, the one or more processing devices may create a set of one or more content composites180corresponding to the mapping specifications at least in part by, for each content object177of the second set of one or more content objects177: selecting, based at least in part on the second set of one or more events, an adaptable content object176from a plurality of adaptable content objects176; and configuring the adaptable content object176with the content object177to form a content composite180configured to facilitate presentation of the adaptable content object176adapted with the content object177for at least part of a presentation time when the content composite180is presented. As indicated by block824, the set of one or more content composites180may be stored. As indicated by block826, the set of one or more content composites180may be used to facilitate an augmentation interface. The augmentation interface may correspond to a graphical layout of the set of one or more content composites180, where each content composite180causes display of an interface element that allows user selection to cause communication to the process-performing system103of an instruction to configure an executable function in accordance with a set of parameters. 
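As a non-limiting sketch of the overall flow described for the method above, the following shows one possible arrangement of those operations; every collaborator (event detector, access policy, remote system, selection and configuration helpers) is an assumed interface supplied by the caller rather than an element of the disclosed method.

```python
# End-to-end sketch of a method-800-like flow; all collaborators are assumed
# interfaces passed in by the caller, not APIs defined by the disclosure.
def build_composites(communications, endpoint_id, detector, access_policy,
                     remote_system, select_adaptable, configure, store):
    events = detector.detect_events(communications)                    # blocks 804-806
    access = access_policy.for_endpoint(endpoint_id)                    # blocks 808-810
    permitted = [event for event in events if access.permits(event)]    # block 812
    content_objects_177 = remote_system.fetch_objects(permitted)        # block 816
    composites_180 = []
    for obj_177 in content_objects_177:                                 # blocks 818-822
        obj_176 = select_adaptable(obj_177, permitted)
        composites_180.append(configure(obj_176, obj_177))
    store(composites_180)                                               # block 824
    return composites_180                       # used to facilitate the interface (826)
```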
Where the set of one or more content composites180corresponds to a plurality of content composites180, the graphical layout of the plurality of content composites180may hierarchically arrange the plurality of content composites180according to one or more parameters of the executable functions of the plurality of content composites180. For example, in various embodiments, the augmentation interface may organize betting options for various events from various systems103by best odds, over/under numbers, and/or the like. This could include bets placed, pending and completed, won and lost, current event developments for the corresponding events, as well as options for further selection. Further, the interface may indicate events which the viewer does not currently have access to but for which the viewer could upgrade access rights with a single click. In some embodiments, the creation of the content composite180and/or the adaptation of the one or more content objects176with one or more particularized content objects177may be a function of a current geolocation of the endpoint media device116or110, with a set of rules mapped to the current geolocation and specifying geo-specific criteria for creating content composites180, selecting content objects176and particularized content objects177, adapting the content objects176with particularized content objects177, and provisioning the content objects176and particularized content objects177. Thus, in various embodiments, the one or more processing devices may detect a current geolocation of a particular media device110and/or116of the set of one or more media devices110and/or116, as disclosed further herein. The one or more processing devices may retrieve one or more rules mapped to the current geolocation of the particular media device110and/or116. The current geolocation of the media device may be determined at a time when the content corresponding to the event is being output for display and/or prior to being output for display. With the content composite180created and, in some instances, the one or more content objects176adapted, the one or more content objects176corresponding to the content may be output for display (during a commercial break and/or during the event as a window, overlay, etc.), where the content objects176and particularized content objects177are selected based at least in part on location metadata mapped to the content objects176and particularized content objects177specifying location indicia for the content objects176and particularized content objects177. FIG.9illustrates a receiver900that makes use of, interacts with, and/or at least partially includes the content composite generation/control system200, in accordance with disclosed embodiments of the present disclosure. Certain embodiments of the receiver900may include set top boxes (STBs), television receivers, and over-the-top receivers. In some embodiments, the receiver900may correspond to the television receiver110. In various embodiments, in addition to being in the form of a STB, a receiver may be incorporated as part of another device, such as a television or other form of display device, such as a computer, smartphone, tablet, or other handheld portable electronic device. For example, a television may have an integrated receiver (which does not involve an external STB being coupled with the television).
One or a combination of the content harvesting engine236-1, the content matching engine238-1, learning engine239-1, composite build engine240-1, and/or content splicing engine242-1may be provided in conjunction with content harvesting module236-2, the content matching module238-2, composite build module240-2, and/or content composite splicing module242-2to implement various functionalities of the content composite subsystem111into the receiver900. The receiver900may represent receiver110ofFIG.1and may be in the form of a STB that communicates with a display device such as a television. The receiver900may be incorporated as part of a television, such as the display device160ofFIG.1or television900ofFIG.9, etc. The receiver900may include: processors910(which may include control processor910-1, tuning management processor910-2, and possibly additional processors), tuners915, network interface920, non-transitory computer-readable storage medium925, electronic programming guide (EPG) database930, networking information table (NIT)940, digital video recorder (DVR) database945, on-demand programming927, content store222-3, user interface950, decryption device960, decoder module933, interface935, and/or descrambling engine965. In other embodiments of receiver900, fewer or greater numbers of components may be present. It should be understood that the various components of receiver900may be implemented using hardware, firmware, software, and/or some combination thereof. Functionality of components may be combined; for example, functions of descrambling engine965may be performed by tuning management processor910-2. Further, functionality of components may be spread among additional components; for example, PID filters955may be handled by separate hardware from program map table957. The receiver900may be in data communication with service providers, such as by way of network interface920. The network interface920may be used to communicate via an alternate communication channel with a content provider, if such communication channel is available. The primary communication channel may be via satellite (which may be unidirectional to the receiver900) and the alternate communication channel (which may be bidirectional) may be via a network, such as the Internet. Referring back toFIG.1, receiver110may be able to communicate with content provider system102via a network120, such as the Internet. This communication may be bidirectional: data may be transmitted from the receiver110to the content provider system102and from the content provider system102to the receiver110. Referring back toFIG.9, the network interface920may be configured to communicate via one or more networks, such as the Internet, to communicate with content provider system102ofFIG.1. Other information may be transmitted and/or received via the network interface920such as adaptable content objects176, content objects177, composites180, metadata, and/or the like as disclosed herein. The storage medium925may represent one or more non-transitory computer-readable storage mediums. The storage medium925may include memory and/or a hard drive. The storage medium925may store information related to the EPG database930, augmentation module932and related preferences, other non-video/audio data931, DVR database945, the other modules, and/or the like. Recorded television programs may be stored using the storage medium925as part of the DVR database945. 
The EPG database930may store information related to television channels and the timing of programs appearing on such television channels. The EPG database930may be stored using the storage medium925, which may be a hard drive. Information from the EPG database930may be used to inform users of what television channels or programs are popular and/or provide recommendations to the user. Information from the EPG database930may provide the user with a visual interface displayed by a television that allows a user to browse and select television channels and/or television programs for viewing and/or recording. Information used to populate the EPG database930may be received via the network interface920and/or via satellites, such as the satellite106ofFIG.1via the tuners915. For instance, updates to the EPG database930may be received periodically via satellite. The EPG database930may serve as an interface for a user to control DVR functions of the receiver900, and/or to enable viewing and/or recording of multiple television channels simultaneously. Information from EPG database930may be output as a video stream to a display device. A particular user may issue commands indicating that an EPG interface be presented. A user issuing a command that an EPG be displayed may constitute a change command. In some embodiments, content composites180may be created and presented in conjunction with the EPG. For example, content composites180could pertain to televised events indicated in the EPG. Hence, content composite180features may extend to EPG views in some embodiments. The television interface935may serve to output a signal to a television (or another form of display device) in a proper format for display of video and playback of audio. As such, the television interface935may output one or more television channels, stored television programming from the storage medium925(e.g., television programs from the DVR database945, information from the EPG database930, etc.) to a television for presentation. User profiles may also be stored in the storage medium945and may include stored user preferences that may be inferred by the television receiver900based at least in part on viewing history. The television receiver900may communicate user profile information to the service system(s)102,103to request adaptable content objects176and content objects177tailored to the inferred user preferences to provision composites180in accordance with certain embodiments disclosed herein. The user profiles may include profiles for multiple users or may include a single profile for the television receiver in general. In some embodiments, the user profiles may include preferences for customized content presentation adjustments disclosed herein. The user profiles may further include user feedback, via user-selectable options, received from the user regarding customizations. The feedback data may be used to refine the customizations for particular viewers and types of content customizations. The user interface950may include a remote control (physically separate from the receiver900) and/or one or more buttons on the receiver900that allow a user to interact with the receiver900. The user interface950may be used to select a television channel for viewing, view information from the EPG database930, and/or program a timer stored to DVR database945, wherein the timer is used to control the DVR functionality of the control processor910-1. 
The user interface 950 may also be used to transmit commands to the receiver 900 and make user selections to customize user preferences. For simplicity, the receiver 900 of FIG. 9 has been reduced to a block diagram; commonly known parts, such as a power supply, have been omitted. Further, some routing between the various modules of the receiver 900 has been illustrated. Such illustrations are for exemplary purposes only. The state of two modules not being directly or indirectly connected does not indicate the modules cannot communicate. Rather, connections between modules of the receiver 900 are intended only to indicate possible common data routing. It should be understood that the modules of the receiver 900 may be combined into a smaller number of modules or divided into a greater number of modules. Further, the components of the television receiver 900 may be part of another device, such as built into a television. The television receiver 900 may include one or more instances of various computerized components, such as disclosed in relation to computer systems disclosed further herein. While the television receiver 900 has been illustrated as a satellite receiver, it is to be appreciated that the techniques below may be implemented in other types of television receiving devices, such as cable receivers, terrestrial receivers, IPTV receivers, or the like. In some embodiments, the television receiver 900 may be configured as a hybrid receiving device, capable of receiving content from disparate communication networks, such as satellite and terrestrial television broadcasts. In some embodiments, the tuners may be in the form of network interfaces capable of receiving content from designated network locations. FIG. 10 is a block diagram of a system 1000 including one non-limiting example of a computing device 116 configured to facilitate adaptive content composite generation/control, in accordance with disclosed embodiments of the present disclosure. The computing device 116 may be a portable device suitable for sending and receiving information to/from the receiver 110 and over a network to/from remote data sources (e.g., service providers 103 and online content sources 112) in accordance with embodiments described herein. For example, in various embodiments, the computing device 116 may correspond to one or more of computing devices 116a, 116b, 116c, 116d depicted in FIG. 1. In some embodiments, the computing device 116 may be provided with an application 1051, which may, in some embodiments, correspond to a mobile application configured to run on the computing device 116 to facilitate various embodiments of this disclosure. For example without limitation, the mobile application 1051 may transform the computing device 116 into an adaptive content composite generation/control device to facilitate features of various embodiments disclosed herein. In various embodiments, the mobile application 1051 may allow the device 116 to be configured to provide one or a combination of the content harvesting engine 236-1, the content matching engine 238-1, learning engine 239-1, content augmentation engine 240-1, and/or content composite splicing engine 242-1, in conjunction with the content harvesting module 236-2, the content matching module 238-2, content augmentation module 240-2, and/or content composite generation/control module 242-2, to implement various functionalities of the content composite subsystem 111 into the device 116.
In various embodiments, the application1051can be any suitable computer program that can be installed and run on the computing device116, and, in some embodiments, the application1051may not be a mobile app but may be another type of application, set of applications, and/or other executable code configured to facilitate embodiments disclosed herein. The application1051may be provided in any suitable way. For non-limiting example, the application1051may be made available from a website, an application store, the service provider102, etc. for download to the computing device116; alternatively, it may be pre-installed on the computing device116. In various embodiments, the computing device116configured with the application1051may provide one or more display screens that may each include one or more user interface elements. A user interface may include any text, image, and/or device that can be displayed on a display screen for providing information to a user and/or for receiving user input. A user interface may include one or more widgets, text, text boxes, text fields, tables, grids, charts, hyperlinks, buttons, lists, combo boxes, checkboxes, radio buttons, and/or the like. As shown inFIG.10, the computing device116includes a display1020and input elements1032to allow a user to input information into the computing device116. By way of example without limitation, the input elements1032may include one or more of a keypad, a trackball, a touchscreen, a touchpad, a pointing device, a microphone, a voice recognition device, or any other appropriate mechanism for the user to provide input. In various embodiments, the computing device116may pull content objects176, content objects177, and/or composites180from the receiver110and/or from systems102and/or103via the network120in order to provide the content composites180to a user of the computing device116through the application1051. The application1051can include a utility that communicates with the receiver110and/or from online data sources via the network120to control downloading, displaying, caching, and/or other operations concerning the handling of content objects176, content objects177, and/or composites180. The application1051and the computing device116may cooperate with the receiver110to facilitate tracking of (and customizations of user profiles and other features disclosed herein based at least in part on) user selections in response to content objects displayed through the one or more additional applications. The user selection of a user-selectable option corresponding to the application1051may involve any one or combination of various user inputs. The user selection may be in the form of a keyboard/keypad input, a touch pad input, a track ball input, a mouse input, a voice command, etc. For example, the content object may be selected by the user by pointing and clicking on a content object. As another example, a content object may be selected by an appropriate tap or movement applied to a touch screen or pad of the computing device116. The computing device116includes a memory1034communicatively coupled to a processor1036(e.g., a microprocessor) for processing the functions of the computing device116. 
The computing device116may include at least one antenna for wireless data transfer to communicate through a cellular network, a wireless provider network, and/or a mobile operator network, such as GSM, for example without limitation, to send and receive Short Message Service (SMS) messages or Unstructured Supplementary Service Data (USSD) messages. The computing device116may also include a microphone to allow a user to transmit voice communication through the computing device116, and a speaker to allow the user to hear voice communication. The antenna may include a cellular antenna (e.g., for sending and receiving cellular voice and data communication, such as through a network such as a 3G, 4G, or 5G network). In addition, the computing device116may include one or more interfaces in addition to the antenna, e.g., a wireless interface coupled to an antenna. The communications interfaces1044can provide a near field communication interface (e.g., contactless interface, Bluetooth, optical interface, infrared interface, etc.) and/or wireless communications interfaces capable of communicating through a cellular network, such as GSM, or through Wi-Fi, such as with a wireless local area network (WLAN). Accordingly, the computing device116may be capable of transmitting and receiving information wirelessly through both short range, radio frequency (RF), cellular, and Wi-Fi connections. The computing device116may access the network108through a wireless link to an access point. For example, a computing device116may access the network108through one or more access points1006. The access points1006may be of any suitable type or types. For example, an access point1006may be a cellular base station, an access point for wireless local area network (e.g., a Wi-Fi access point), an access point for wireless personal area network (e.g., a Bluetooth access point), etc. The access point1006may connect the computing device116to the network108, which may include the Internet, an intranet, a local area network, private communication networks, etc. In some embodiments, the communications interfaces may allow computing device116to receive programming content cast from the television receiver. For example, the programming content from the television receiver may be indirectly transmitted via a local network (e.g., via Wi-Fi) or directly transmitted to the computing device via a casting device integrated with the television receiver or coupled to the television receiver (e.g., via a dongle). As another example, the television receiver may cast programming content to the computing device via a wired connection (e.g., via one or more of HDMI, USB, lightning connector, etc.). Some embodiments may provide for simulcasting such that the same programming that is being displayed on the display device is being displayed on the computing device116simultaneously or substantially simultaneously. The computing device116can also include at least one computer-readable medium1046coupled to the processor1036, which stores application programs and other computer code instructions for operating the device, such as an operating system (OS)1048. In some embodiments, the application1051may be stored in the memory1034and/or computer-readable media1046. Again, the example of computing device116is non-limiting. Other devices, such as those disclosed herein, may be used. It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. 
For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed. The terms “machine-readable medium,” “computer-readable storage medium” and “computer-readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. These mediums may be non-transitory. In an embodiment implemented using the computer systems disclosed herein, various computer-readable media might be involved in providing instructions/code to processor(s) for execution and/or might be used to store and/or carry such instructions/code. In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take the form of a non-volatile media or volatile media. Non-volatile media include, for example, optical and/or magnetic disks, such as the non-transitory storage device(s). Volatile media include, without limitation, dynamic memory, such as the working memory. Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, any other physical medium with patterns of marks, a RAM, a PROM, EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read instructions and/or code. Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a communication medium to be received and/or executed by the computer system. The communications subsystems of computer systems disclosed herein (and/or components thereof) generally will receive signals, and the bus then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory, from which the processor(s) retrieves and executes the instructions. The instructions received by the working memory may optionally be stored on a non-transitory storage device either before or after execution by the processor(s). It should further be understood that the components of computer systems can be distributed across a network. For example, some processing may be performed in one location using a first processor while other processing may be performed by another processor remote from the first processor. Other components of computer systems may be similarly distributed. As such, the computer systems may be interpreted as a distributed computing system that performs processing in multiple locations. In some instances, computer systems may be interpreted as a single computing device, such as a distinct laptop, desktop computer, or the like, depending on the context. The methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. 
Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims. Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure. Also, configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a non-transitory computer-readable medium such as a storage medium. Processors may perform the described tasks. Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Furthermore, the example embodiments described herein may be implemented as logical operations in a computing device in a networked computing system environment. The logical operations may be implemented as: (i) a sequence of computer implemented instructions, steps, or program modules running on a computing device; and (ii) interconnected logic or hardware modules running within a computing device. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. Also, the terms in the claims have their plain, ordinary meaning unless otherwise explicitly and clearly defined by the patentee. 
The indefinite articles “a” or “an,” as used in the claims, are defined herein to mean one or more than one of the element that the particular article introduces; and subsequent use of the definite article “the” is not intended to negate that meaning. Furthermore, the use of ordinal number terms, such as “first,” “second,” etc., to clarify different elements in the claims is not intended to impart a particular position in a series, or any other sequential character or order, to the elements to which the ordinal number terms have been applied.
DETAILED DESCRIPTION
FIG. 1 is a functional diagram of a preferred media hub system supporting the automatic analysis and management of a media hub software update. As shown, system 100 comprises media hub 102. Media hub 102 includes processor 104 and memory 106. Memory 106 stores information that defines an operating system and one or more applications, as well as content data. Media hub 102 is linked to headend 108 by broadband network 110. Headend 108 is controlled by an MSO that provides media services and content to the users of media hub 102. Media hub 102 is also shown to be linked to television 112, upon which users can access MSO-provided media services. A flow diagram showing the steps involved in an MSO-mandated software update for media hub 102 is provided in FIG. 2. The MSO initiates the process by triggering or scheduling a software download event for media hub 102 (step 202). The event begins with a query of memory 106 by processor 104 to determine if the memory presently has enough free storage space to accommodate the impending software update download (step 204). Information indicative of the storage space required for the download would have been provided to media hub 102 by headend 108 at the initiation of the download event. If memory 106 has the requisite available storage space, the process continues with step 206 and the image of the updated software is downloaded to media hub 102 from headend 108. The process then terminates at step 208 with media hub 102 fully updated. However, if conditional step 204 results in a negative outcome, the process of software off-loading is initiated within media hub 102. In step 210 processor 104 calculates the value of a Relative Move Score ("RMS") for each application presently stored in memory 106. This RMS value is a function of the total memory footprint a given application occupies within memory 106. The more memory occupied, the higher the calculated RMS value. The calculated RMS value for an application will be reduced by processor 104 if the MSO has designated that particular application as high-priority. This type of designation might be applied by an MSO to applications central to the operation of video services, or those related to content management and customer privacy. Regardless of the motive for designating an application "high-priority", the associated RMS for that application will be reduced. This reduction can be a percentage of the actual calculated RMS, or simply fixing the value of the RMS to a predetermined value or limit. For extremely high-priority applications the RMS could be fixed at 0. Once the RMS values for all of the applications stored in memory 106 have been calculated and all reductions applied, processor 104 selects a subset that will be temporarily removed from memory 106 (step 212). This subset can be specified within the software off-loading process to be a predetermined number of applications (remove the three applications having the highest RMS values), or a predetermined percentage of the number of applications stored in memory 106 (remove stored applications according to RMS values until 30% of the memory space occupied by applications is free). The particular algorithm applied to determine the manner in which applications will be removed can be arbitrarily defined by the MSO, but it must prioritize the removal of applications having higher relative RMS values to ensure the off-loading process is performed efficiently. The process continues with step 214 wherein a processor creates an application archive for the storage of the applications that will be temporarily removed.
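By way of a non-limiting illustration, the following Python sketch approximates the RMS scoring and selection of steps 210-212 described above; the reduction percentage, thresholds, and field names are assumptions made for illustration only.

def relative_move_score(app, high_priority_reduction=0.5):
    """RMS grows with the application's memory footprint; MSO-designated
    high-priority applications have their score reduced (or pinned to 0)."""
    rms = app["footprint_bytes"]
    if app.get("priority") == "extremely-high":
        return 0
    if app.get("priority") == "high":
        rms *= (1.0 - high_priority_reduction)
    return rms

def select_apps_to_offload(apps, space_needed_bytes):
    """Remove highest-RMS applications first until enough space would be freed."""
    scored = sorted(apps, key=relative_move_score, reverse=True)
    selected, freed = [], 0
    for app in scored:
        if freed >= space_needed_bytes:
            break
        selected.append(app)
        freed += app["footprint_bytes"]
    return selected

# Example usage with made-up applications.
apps = [
    {"name": "game", "footprint_bytes": 400_000_000},
    {"name": "guide", "footprint_bytes": 900_000_000, "priority": "extremely-high"},
    {"name": "weather", "footprint_bytes": 250_000_000, "priority": "high"},
]
to_offload = select_apps_to_offload(apps, space_needed_bytes=300_000_000)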
The application archive created in step 214 can be a portion of an external hard drive (114) or cloud-based storage (116) linked to media hub 102 by a private or public network (118). The selected subset of applications is then off-loaded from media hub 102 to the archive (step 218). Next, processor 104 determines if the off-loading of the selected subset of applications has resulted in clearing enough free space within memory 106 to accommodate the downloading of the updated software (step 218). If it has, the process continues with step 220 and the new software image is downloaded and installed on media hub 102. The off-loaded applications are then restored onto media hub 102 from the archive (step 222) and the process terminates (step 208). However, if conditional step 218 has a negative outcome, the process reverts back to step 210 and RMS values are recalculated for the applications that remain within memory 106. Steps 212-218 are then repeated until the downloading of the updated software can be accommodated. A flow diagram for an alternate process of automatically analyzing and managing a media hub software update is depicted in FIG. 3. This alternate process relies upon the headend, as opposed to a media hub processor, to primarily direct and manage the process. As shown in FIG. 3, the process is started at step 302 with the headend initiating a download event and executing a Remote Procedure Call ("RPC") to download the updated software to media hub 102 (step 304). If the download was successful (step 306), any off-loaded applications are restored to memory 106 (step 308) and the process terminates (step 310). However, if conditional step 306 results in a negative outcome, the headend determines if an "out of memory" error message was generated by media hub 102 (step 312). If such an error message was not generated, headend 108 designates the download as requiring attention that is outside of the scope of the headend capabilities. An out-of-scope determination may result in the MSO dispatching a technician to the premises at which media hub 102 is installed, or initiating some other action to remediate the situation. If conditional step 312 results in an affirmative outcome, the process continues with step 316 and the headend accesses diagnostic information collected by and stored within media hub 102. Diagnostic information is routinely accumulated within media hubs in accordance with various technical specifications, such as those established by the Broadband Forum (see Broadband Forum Technical Reports 069 and 181). Utilizing this diagnostic information, the RMS value for each application presently stored in memory 106 is calculated at the headend (step 318). The calculated RMS value is then utilized to select a subset of applications to be temporarily removed from memory 106 (step 320). This subset can be determined in much the same manner as was described for the process flow of FIG. 2 and will not be discussed further here. The process continues with step 322 wherein the headend creates an application archive for the storage of the applications that will be temporarily removed. This application archive can be located within the headend, or within a portion of an external hard drive (114), or cloud-based storage (116) linked to media hub 102 or headend 108 by a private or public network (118). The selected subset of applications is then off-loaded from media hub 102 to the archive (step 324).
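The following Python sketch illustrates, under stated assumptions, the headend-driven flow of FIG. 3, including the free-space check and retry described in the next paragraph; the hub proxy below is a simplified stand-in rather than a real TR-069/TR-181 client, and select_apps_to_offload refers to the sketch given after the previous paragraph.

class HubProxy:
    """Minimal stand-in for the headend's view of media hub 102 (illustrative only)."""
    def __init__(self, free_bytes, apps):
        self.free_bytes, self.apps, self.archive = free_bytes, list(apps), []

    def rpc_download(self, image_bytes):                     # step 304
        if image_bytes <= self.free_bytes:
            return {"ok": True, "out_of_memory": False}
        return {"ok": False, "out_of_memory": True}

    def fetch_diagnostics(self):                             # step 316
        return {"installed_apps": self.apps}

    def offload(self, subset):                               # step 324
        for app in subset:
            self.apps.remove(app)
            self.archive.append(app)
            self.free_bytes += app["footprint_bytes"]

    def restore(self):                                       # step 308
        self.apps.extend(self.archive)
        self.archive.clear()

def headend_update(hub, image_bytes, max_attempts=3):
    for _ in range(max_attempts):
        result = hub.rpc_download(image_bytes)               # step 304
        if result["ok"]:                                     # step 306
            hub.restore()                                    # step 308
            return "updated"                                 # step 310
        if not result["out_of_memory"]:                      # step 312
            return "out-of-scope"                            # e.g., dispatch a technician
        apps = hub.fetch_diagnostics()["installed_apps"]     # step 316
        needed = image_bytes - hub.free_bytes
        hub.offload(select_apps_to_offload(apps, needed))    # steps 318-324
        # step 326: loop back and retry the RPC download
    return "failed"

hub = HubProxy(free_bytes=200_000_000, apps=[{"name": "game", "footprint_bytes": 400_000_000}])
print(headend_update(hub, image_bytes=500_000_000))          # -> "updated" after off-loading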
Next, headend 108 determines if the off-loading of the selected subset of applications has resulted in clearing enough free space within memory 106 to accommodate the downloading of the updated software (step 326). This determination is made based upon diagnostic data obtained from media hub 102. If it has, the process continues with step 304 and an RPC is initiated to download the new software image and install it on media hub 102. The process then continues as outlined above (step 306, etc.). Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. Variations and extensions could be implemented and practiced without departing from the spirit and scope of the present invention as defined by the appended claims.
DETAILED DESCRIPTION Methods and systems are provided herein for selectively masking business entity identifiers and/or brand identifiers (e.g., logos and other identifying information) visible in objectionable media content. The methods and/or any instructions for performing any of the embodiments discussed herein may be encoded on computer-readable media. Computer-readable media includes any media capable of storing data. The computer-readable media may be transitory, including, but not limited to, propagating electrical or electromagnetic signals, or may be non-transitory including, but not limited to, volatile and non-volatile computer memory or storage devices such as a hard disk, floppy disk, USB drive, DVD, CD, media cards, register memory, processor caches, random access memory (RAM), etc. FIG.1depicts masking scenario100in which media content comprising objectionable content is subjected to a masking action of visible entity identifiers, in accordance with some embodiments of the disclosure. Although masking scenario100depicts certain components and illustrates certain steps, it is understood that masking scenario100may be executed by any or all components depicted in and described in reference to masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, masking process500ofFIG.5, content item versions600A and600B ofFIG.6, masking process700ofFIG.7, masking protocols800A-C ofFIG.8, product displays900A-E ofFIG.9, and masking process1000ofFIG.10. Masking scenario100may also result from the execution of any or all of masking processes200,400,500,700, and1000ofFIGS.2,4,5,7, and10, respectively. Additionally, masking scenario100may result in the use or generation of content item versions600A and600B ofFIG.6, masking protocols800A-C ofFIG.8, and/or product displays900A-E ofFIG.9. Masking or blurring a part of an image that depicts a human face, a license plate, or other identifying information that should be hidden at a time a content item is presented (e.g., to protect the identities of those involved with an incident) is known in the art. Additionally, some video-sharing sites, such as YouTube, provide users with custom blur tools that enable users to place blurs (e.g., circular) over different objects in a video, including moving objects (e.g., a car's license plate). Existing machine learning algorithms can detect and mask or blur objects based on various predefined criteria. It is common to see blurred objects in map applications that offer a feature that enables a user to view real-world images of a particular location (e.g., to simulate what a view from a particular street corner may look like). These images are processed on servers to mask the license plates on cars and blur the faces of pedestrians that appear in the pictures utilized by the mentioned applications. Masking entities (e.g., name of a shop) in content scenes with objectionable content is useful for preventing undesirable associations by customers between entities and the objectionable content. Content can be user-generated content (hereinafter “UGC”) such as a video that was recorded by pedestrians witnessing a serious car accident or incident that caused casualties. Masking scenario100comprises media content102being generated for display on display device104. Media content102may comprise the mentioned UGC, may comprise a live video stream, and/or may comprise content stored in memory and retrieved for display via a media content distribution platform or service. 
Display device104is configured to respond to inputs from viewer106via input device108(e.g., a remote control). In some embodiments, viewer106may be able to modify the settings of display device104based on preferences in order to modify or override masking protocols for media content102as defined by a source of media content102(e.g., a content provider has masked aspects of the content or has decided to not mask certain aspects of the content, and viewer106prefers to modify the originally provided version of media content102). In some embodiments, viewer106may comprise a customer of a content provider. In other embodiments, viewer106may comprise a system administrator or content moderator who is provided protocols for masking particular content items or masking aspects of particular content items depending on a context associated with various aspects of media content102. Additionally, media content102may be processed for masking actions before or during display of media content102on display device104(e.g., upstream of a television or via a television-based modification protocol). Media content102comprises context identifier114, which comprises a string of characters corresponding to a description of the subject matter of media content102. In some embodiments, context identifier114may comprise visible and non-visible metadata associated with media content102. Context identifier114, as shown inFIG.1, corresponds to a description of a news story, an identifier of a news broadcast (e.g., a channel and a company name), as well as particular details of a location corresponding to the subject matter of media content102. In some embodiments, additional information may comprise context identifier114, including recommended ages for viewing or other media content-related descriptors that would influence the size or construction of an audience for media content102(e.g., an indication of whether media content102is available only in a localized area like a county or a state, as opposed to nationwide or internationally). Context identifier114is used at least in part to determine whether media content102comprises objectionable subject matter or a type of objectionable subject matter, and may also be used as part of an implementation of masking protocols (e.g., as represented by the exemplary protocols ofFIG.8) depending on which entities (e.g., business, logos, or products thereof) should be masked. Entity identifiers112A-C correspond to multiple entity identifiers in at least one frame of media content102. For example, entity identifier112A comprises a vehicle, which may have a vehicle manufacturer logo and/or name. In another example, entity identifier112B corresponds to advertisements visible in a store window, where the events of media content102occurred. Entity identifier112C corresponds to a sign visible in at least one frame of media content102that identifies a store where the events of media content102, as indicated by content identifier114, occurred. Each of entity identifiers112A-C may be assigned an entity identifier type (e.g., business name, advertisement corresponding to a business, a logo, and/or a well-known phrase). Masking protocols may be retrieved, based on entity identifiers112A-C, that consider whether any or all of the entities identified may be negatively associated with the objectionable content found in media content102, thereby yielding a decision to mask any or all of the identifiers. 
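As a non-limiting illustration of the kinds of data involved, the following Python sketch models context identifier 114, entity identifiers 112A-C, and a masking-protocol lookup; the field names and protocol table entries are assumptions introduced for illustration and are not taken from FIG. 8.

from dataclasses import dataclass

@dataclass
class EntityIdentifier:
    label: str             # e.g., "vehicle manufacturer logo", "store sign"
    identifier_type: str   # e.g., "logo", "business name", "advertisement"
    frames: range          # frames of media content 102 in which it is visible

@dataclass
class ContextIdentifier:
    description: str           # description/metadata of the subject matter
    objectionable_type: str    # e.g., "violence", "accident", or "" if none
    audience_scope: str        # e.g., "local", "national", "international"

# Hypothetical masking-protocol lookup keyed on (objectionable type, identifier type).
MASKING_PROTOCOLS = {
    ("violence", "logo"): "blur",
    ("violence", "business name"): "blackout",
    ("accident", "advertisement"): "replace",
}

def masking_action(context: ContextIdentifier, entity: EntityIdentifier):
    """Return the masking action, if any, mapped to this context/entity pairing."""
    return MASKING_PROTOCOLS.get((context.objectionable_type, entity.identifier_type))

# Example usage with made-up values.
sign = EntityIdentifier("store sign", "business name", range(120, 240))
ctx = ContextIdentifier("robbery at a downtown store", "violence", "local")
print(masking_action(ctx, sign))   # -> "blackout"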
Once the masking protocols are reviewed for rationale to perform a masking action on any or all of entity identifiers 112A-C (e.g., as shown in FIG. 1, all may be masked as the incident shown involves violence against the local police, which may broadly impact the views of any entity shown in media content 102), masked identifier icons 114A-C are generated to cover up each of entity identifiers 112A-C, respectively, in each frame in which entity identifiers 112A-C are displayed. For example, masked identifier icons 114A-C may correspond to a blurring of pixels, a blackout bar, a covering graphic for at least the known iconic aspects of the entity identifier (e.g., just the logo or part of an assembly particularly associated with a brand, such as a known car fascia assembly), or may comprise an alternative graphic to place instead (e.g., an alternative entity identifier if the masking protocols correspond to a replacement action for content identifiers flagged for masking due to improper licensing use). In one embodiment, a logo detection or entity identifier obscuring service is invoked at the beginning of video playback (e.g., when viewer 106 selects media content 102 for playing as a stream or playing back as a stored content item accessible to viewer 106). An active profile corresponding to the play or playback command may be identified, which may provide additional context for performing masking actions (e.g., a location of viewer 106 and/or display device 104, and, additionally or alternatively, known associations between viewer 106 and/or display device 104 with brands via past purchases or online orders, and demographic data including allowable maturity ratings for content to be viewed). For example, a sign for a local jewelry store in a city in the state of Washington might not need to be obscured for users watching the video in a different state or country, as the local jewelry store is not a widely known or recognized brand outside of a particular geographic location (e.g., a state, county, or a city). Alternatively, if the same jewelry store was advertising online to grow its business, then the advertising was likely targeted (i.e., the advertisements were/are likely served to couples in certain demographics and, most importantly, who live near the jewelry store or may have access to the jewelry store based on a threshold distance comparison, such as a three-hour transit distance). Accordingly, the advertising or target reach parameters associated with the advertisement campaign for the online services provided via the local jewelry store can serve as an indicator as to whether a specific logo or brand within the video should be obscured, and if so, which group of users should not see the logo (e.g., the advertisement campaign may modify parameters to expand the audience for which a masking action is performed for the entity identifiers of the jewelry store). An entity identifier may be a logo, which may comprise any form of identifying content, such as a string of text or other imagery, that is generally known to a target audience as being associated with a particular entity. Media content 102 and the aspects of media content 102 that could be subjected to such modification (i.e., obscuring a portion of the frames that depict a logo) can be identified in several ways.
For example, the title of the video can be checked for keywords such as "car accident," "kidnapping," "injury," or other terms stored in a database of objectionable content, in order to identify the videos that need to run through the entity identifier (e.g., logo or brand name) detection stage or process. Similarly, the comments section or even explicit metadata, such as the category of the video, can be used to determine whether the video is a candidate for entity identifier masking. Video and/or audio processing algorithms that are tunable may be utilized for specific incidents (e.g., accidents, protests, violence, etc.). In some embodiments, the entity identifier may comprise a song, phrase, or segment of sound associated with media content 102 that may be masked for the purposes of obscuring the detection of the entity identifier. Where a brand or entity identifier detection service is utilized while processing the subject matter of media content 102, the service can generate a list of brands that appear in a video, the time a brand appears, and/or the segment number (e.g., which frames). A severity factor can also be assigned. The severity factor is an indication of how disturbing the content in the video is. The severity factor may be based on whether the video is trending, share rate, profiles sharing (e.g., sharing of media content 102 by social media influencers may be assigned a higher severity factor than sharing of media content 102 by social media users without an influencer designation), and/or view rate. This information can be saved in a data structure and can be used to enable on-the-fly masking of a specific logo from among multiple entity identifiers. The detected entity identifiers are further augmented with metadata to enable efficient and quick masking to occur upon video playback. For example, if a logo of an international brand was detected (e.g., Pepsi), then such a logo would be assigned the value "International." Alternatively, a brand or logo for a doughnut shop that is only available in a certain country, such as the U.S., can be assigned the value "Domestic-U.S.," or "Domestic-Europe," if the brand is exclusive to Europe. The user's profile can be used to determine whether there is an association with the brand. In one embodiment, the length of time that the brand appears is also determined. For example, a brand name soda logo might appear in the background in just a handful of frames (e.g., seen in less than 1 second of the total content). Depending on the severity factor assigned to the video, the brand name soda logo might not need to be obscured. In one embodiment, a second copy of the video is created that has the logo/brand blurred, and the determination regarding which copy to serve to a user occurs when the user requests to watch the video. Similarly, because some apps/websites auto-play content on the "For You" page, the player fetches the portions to auto-play from either the original version or the modified version. In one embodiment, logos/brands are obscured in videos that do not depict violence at all. For example, it is common to see obscured logos or trademarks (e.g., on clothes worn by artists) in music videos. This is because the music label wants compensation for promoting such brands, especially if the artist is popular. A famous artist wearing a Supreme hoodie is generally considered a display of a product, and often results in free advertisement for Supreme, depending on who wears the product and at what events.
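By way of a non-limiting illustration, the following Python sketch combines a severity factor, appearance duration, and brand scope into a choice between the original and masked copies described above; the weights, thresholds, and field names are assumptions made for illustration only and are not values taken from the disclosure.

def severity_factor(trending, share_rate, influencer_shares, view_rate):
    """Illustrative 0..1 score based on the signals described above (assumed weights)."""
    score = 0.4 if trending else 0.0
    score += min(share_rate, 1.0) * 0.2
    score += min(influencer_shares, 1.0) * 0.3     # influencer sharing weighs more
    score += min(view_rate, 1.0) * 0.1
    return score

# Hypothetical record produced by a brand/entity-identifier detection service.
detection = {
    "brand": "ExampleSoda",
    "segments": [12],                  # segments/frames in which the logo appears
    "seconds_visible": 0.8,
    "scope": "Domestic-U.S.",          # or "International", "Domestic-Europe", ...
    "scope_regions": {"US"},           # assumed companion field for this sketch
}

def choose_version(detection, severity, viewer_region="US"):
    """Return 'masked' or 'original' for the copy served to this viewer."""
    brief = detection["seconds_visible"] < 1.0
    if severity < 0.5 and brief:
        return "original"              # fleeting appearance in lower-severity content
    if detection["scope"] != "International" and viewer_region not in detection["scope_regions"]:
        return "original"              # brand not recognized in the viewer's region
    return "masked"

version = choose_version(detection, severity_factor(True, 0.7, 0.9, 0.5))   # -> "masked"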
In such scenarios, presenting a version of the video with the unmasked brand can be used as an advertising medium. For example, an ad network might determine that a user watching the music video is a Supreme customer (e.g., has purchased Supreme products before), and based on its communication with the playback service, instructs the playback service to present an unmodified version of the video. The content owner can therefore be compensated for showing the unmodified version, since doing so is equivalent to serving an ad within the video or a display ad near the playback device. The playback service can make such information available to the ad network, i.e., which brand is obscured but can be unobscured for the right audiences or users, as in-content advertising. FIG. 2 depicts masking process 200, which comprises an illustrative process for determining whether an entity identifier is visible within a content item comprising objectionable content and whether the entity identifier should be subjected to a masking action, in accordance with some embodiments of the disclosure. It is understood that masking process 200 may be executed by any or all components depicted in and described in reference to masking scenario 100 of FIG. 1, system 300 of FIG. 3, masking process 400 of FIG. 4, masking process 500 of FIG. 5, content item versions 600A and 600B of FIG. 6, masking process 700 of FIG. 7, masking protocols 800A-C of FIG. 8, product displays 900A-E of FIG. 9, and masking process 1000 of FIG. 10. Masking process 200 may also incorporate, during execution, any or all of masking processes 400, 500, 700, and 1000 of FIGS. 4, 5, 7, and 10, respectively, in whole or in part. Additionally, masking process 200 may result in the use or generation of content item versions 600A and 600B of FIG. 6, masking protocols 800A-C of FIG. 8, and/or product displays 900A-E of FIG. 9. At 202, a request to display media content is received. For example, a viewer may provide an input via a remote device, a controller device, an audio receptor, or other means associated with a particular device configured to display media content. If, at 204, objectionable subject matter is not found in at least one frame of the media content (NO at 204), then the process ends. If objectionable subject matter is found in at least one frame of the media content (YES at 204), then the at least one frame of the media content is reviewed for entity identifiers. As discussed in reference to FIG. 1, objectionable subject matter may be predefined by any of a user, a content source, a content distributor, or by any metadata accessible via communicatively connected devices, such as social media content or metadata encoded to be associated with individual frames of media content. If, at 206, at least one entity identifier is not identified in the at least one frame of the media content (NO at 206), then frames of the media content immediately preceding or following the at least one frame with the objectionable content are reviewed at 208 for entity identifiers. For example, a news clip may not show a crime scene in a frame with an entity identifier. However, the news commentators may verbalize an entity identifier immediately prior to or immediately after describing the crime scene, which may lead to undesirable associations by viewers. If, at 208, there are no entity identifiers in the frames immediately prior to or immediately after the at least one frame comprising the objectionable content (NO at 208), then the process ends as no masking actions are required.
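As a non-limiting illustration of steps 202-208 of masking process 200 described above, the following Python sketch identifies frames whose entity identifiers warrant review; the keyword list and frame fields are assumptions made for illustration only.

OBJECTIONABLE_KEYWORDS = {"car accident", "kidnapping", "injury", "violence"}

def is_objectionable(frame_metadata):
    """Step 204: check frame/content metadata against an objectionable-content list."""
    text = frame_metadata.get("description", "").lower()
    return any(keyword in text for keyword in OBJECTIONABLE_KEYWORDS)

def frames_needing_review(frames):
    """Steps 204-208: return indices of frames whose entity identifiers should be
    reviewed, including frames adjacent to an objectionable frame."""
    to_review = set()
    for i, frame in enumerate(frames):
        if not is_objectionable(frame):
            continue
        if frame.get("entity_identifiers"):            # step 206
            to_review.add(i)
        # Step 208: also look at the immediately preceding/following frames.
        for j in (i - 1, i + 1):
            if 0 <= j < len(frames) and frames[j].get("entity_identifiers"):
                to_review.add(j)
    return sorted(to_review)

# Example usage with made-up frames.
frames = [
    {"description": "anchor introduces story", "entity_identifiers": []},
    {"description": "car accident at a storefront", "entity_identifiers": ["store sign"]},
    {"description": "reporter on scene", "entity_identifiers": ["vehicle logo"]},
]
print(frames_needing_review(frames))   # -> [1, 2]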
Continuing with masking process 200, if, at 206, at least one entity identifier is identified in the at least one frame of the media content (YES at 206), then a first context of the subject matter of the media content is determined at 210 (e.g., details of the objectionable content are identified based on metadata and/or video analysis and cross-referenced with a masking protocol to identify industries and entities potentially adversely affected by the objectionable content, as well as the reach of the content item in terms of geographic location), while a second context of the entity identifier is determined at 212 (e.g., an industry, a target audience for advertisements, a geographic reach of the entity identifier, or any other related variable is determined for cross-referencing with a masking protocol to determine whether the objectionable subject matter may adversely affect the entity identifier). At 214, the first context of the subject matter of the media content is compared to the second context of the entity identifier (e.g., a comparison is made between context data to verify whether, for example, industries are related and the first context may yield a negative view of the identifier associated with the second context, as defined by masking protocols like those shown in FIG. 8). At 216, if the first context and the second context are not similar enough to require a masking action (NO at 216), then the process ends, as no masking action is required to prevent a negative association between the subject matter of the media content and the entity identifier. If the first context and the second context are similar enough to require a masking action of the entity identifier (YES at 216), then a masking action is performed on the at least one frame to render the entity identifier imperceptible to a viewer of the media content at 218 (e.g., as shown in FIGS. 6 and 9). The comparison may comprise a listing of content elements, a threshold amount of which must match or must be significantly related. For example, the handheld device industry and the vehicle industry may both be adversely affected by a car accident where a handheld device was known to be used prior to the events of the accident. However, a brand of clothing may not be adversely affected by the imagery of the car accident. Each of the relationships may be based on exemplary masking protocols as shown in FIG. 8 or may be a result of a form of machine learning to train a masking algorithm based on newer or previously unknown negative associations between related entity identifiers and industries, as shown in FIG. 10. In some examples, the masking at 218 is performed by digitally manipulating at least a portion of a frame and/or related sections or locations (e.g., specific or related macroblocks) in subsequent frames as well (e.g., when the decoding of a frame depends on other frames). Additionally, obscuring an object or identifier in one part of the video, in some aspects, will cause it to be obscured in other parts of the video (e.g., if it appears again 5 minutes later). FIG. 3 is a block diagram of system 300, which is configured to mask entity identifiers in media content comprising objectionable content that is generated for display on display 310 of a computing device (e.g., computing device 302) in response to determining that masking protocols require masking of entity identifiers based on a contextual analysis, in accordance with some embodiments of the disclosure.
In some embodiments, one or more parts of or the entirety of system300may be configured as a system implementing various features, processes, and components ofFIGS.1,2, and4-10. AlthoughFIG.3shows a certain number of components, in various examples, system300may include fewer than the illustrated number of components and/or multiples of one or more of the illustrated number of components (e.g., multiple iterations of computing device302for each device in the system with a display and or multiple iterations of server304). The interactive system is shown to include computing device300, content server302, and a communication network306. It is understood that while a single instance of a component may be shown and described relative toFIG.3, additional instances of the component may be employed. For example, content server302may include, or may be incorporated in, more than one server. Similarly, communication network306may include, or may be incorporated in, more than one communication network. Content server302is shown communicatively coupled to computing device300through communication network306. While not shown inFIG.3, content server302may be directly communicatively coupled to computing device300, for example, in a system absent or bypassing communication network306. Communication network306may comprise one or more network systems, such as, without limitation, Internet, LAN, WIFI or other network systems suitable for audio processing applications. In some embodiments, the system ofFIG.3excludes content server302, and functionality that would otherwise be implemented by content server302is instead implemented by other components of the system depicted byFIG.3, such as one or more components of communication network306. In still other embodiments, content server302may work in conjunction with one or more components of communication network306to implement certain functionality described herein in a distributed or cooperative manner. Similarly, in some embodiments, the system depicted byFIG.3excludes computing device300, and functionality that would otherwise be implemented by computing device300is instead implemented by other components of the system depicted byFIG.3, such as one or more components of communication network306or content server302or a combination of the same. In other embodiments, computing device300works in conjunction with one or more components of communication network306or content server302to implement certain functionality described herein in a distributed or cooperative manner. Computing device300includes control circuitry308, display circuitry310and input/output circuitry312. Control circuitry308may be based on any suitable processing circuitry and comprises control circuits and memory circuits, which may be disposed on a single integrated circuit or may be discrete components. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores). 
In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). Some control circuits may be implemented in hardware, firmware, or software. Control circuitry308in turn includes transceiver circuitry314, storage316and processing circuitry318. In some embodiments, computing device300or control circuitry308may be configured as varying embodiments of audio/video user entertainment system100ofFIG.1. In addition to control circuitry308and320, computing device300, content server302, may each include storage (storage316and storage322, respectively). Each of storages316and322may be an electronic storage device. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVRs, sometimes called personal video recorders, or PVRs), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Each storage316and322may be used to store various types of content, metadata, and/or other types of data (e.g., they can be used to record audio questions asked by one or more participants connected to a conference). Non-volatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage may be used to supplement storages316and322or instead of storages316and322. In some embodiments, the user profile activity history, user profile preferences, and accessible media content may be stored in one or more of storages316and322. In some embodiments, control circuitry320and/or308executes instructions for an application stored in memory (e.g., storage322and/or storage316). Specifically, control circuitry320and/or308may be instructed by the application to perform the functions discussed herein. In some implementations, any action performed by control circuitry320and/or308may be based on instructions received from the application. For example, the application may be implemented as software or a set of executable instructions that may be stored in storage322and/or316and executed by control circuitry320and/or308. In some embodiments, the application may be a client/server application where only a client application resides on computing device300, and a server application resides on content server302. The application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly implemented on computing device300. In such an approach, instructions for the application are stored locally (e.g., in storage316), and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). Control circuitry308may retrieve instructions for the application from storage316and process the instructions to perform the functionality described herein. 
Based on the processed instructions, control circuitry308may determine to execute elements of the embodiments of this disclosure in response to input received from input/output circuitry312or from communication network306. For example, in response to a user providing inputs to activate entertainment system100, control circuitry308may perform the steps of any of the processes depicted inFIGS.1,2, and4-11B, or processes relative to various embodiments. In client/server-based embodiments, control circuitry308may include communication circuitry suitable for communicating with an application server (e.g., content server302) or other networks or servers. The instructions for carrying out the functionality described herein may be stored on the application server. Communication circuitry may include a cable modem, an Ethernet card, or a wireless modem for communication with other equipment, or any other suitable communication circuitry. Such communication may involve the Internet or any other suitable communication networks or paths (e.g., communication network306). In another example of a client/server-based application, control circuitry308runs a web browser that interprets web pages provided by a remote server (e.g., content server302). For example, the remote server may store the instructions for the application in a storage device. The remote server may process the stored instructions using circuitry (e.g., control circuitry308) and/or generate displays. Computing device300may receive the displays generated by the remote server and may display the content of the displays locally via display circuitry310. This way, the processing of the instructions is performed remotely (e.g., by content server302) while the resulting displays, such as the display windows described elsewhere herein, are provided locally on computing device300. Computing device300may receive inputs from the user via input/output circuitry312and transmit those inputs to the remote server for processing and generating the corresponding displays. Alternatively, computing device300may receive inputs from the user via input/output circuitry312and process and display the received inputs locally, by control circuitry308and display circuitry310, respectively. Content server302and computing device300may transmit and receive content and data such as media content via communication network306. For example, content server302may be a media content provider, and computing device300may be a smart television configured to download or stream media content, such as a live news broadcast, from content server302. Control circuitry320,308may send and receive commands, requests, and other suitable data through communication network306using transceiver circuitry342,314, respectively. Control circuitry320,308may communicate directly with each other using transceiver circuitry342,314, respectively, avoiding communication network306. It is understood that computing device300is not limited to the embodiments and methods shown and described herein. 
In nonlimiting examples, computing device300may be a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a handheld computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smartphone, or any other device, computing equipment, or wireless device, and/or combination of the same capable of suitably displaying and manipulating media content. Control circuitry320and/or308may be based on any suitable processing circuitry such as processing circuitry328and/or318, respectively. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores). In some embodiments, processing circuitry may be distributed across multiple separate processors, for example, multiple of the same type of processors (e.g., two Intel Core i9 processors) or multiple different processors (e.g., an Intel Core i7 processor and an Intel Core i9 processor). In some embodiments, control circuitry320and/or control circuitry308are configured to implement a media content operation system, such as systems, or parts thereof, that perform various processes described and shown in connection withFIGS.1,2, and4-11B, and/or systems carrying out the features described and shown relative toFIGS.1,2, and4-11B. Computing device300receives user input332at input/output circuitry312. For example, computing device300may receive a user input such as a user swipe or user touch, as previously discussed. In some embodiments, computing device300is a media device (or player) configured as entertainment system100, with the capability to access media content. It is understood that computing device300is not limited to the embodiments and methods shown and described herein. In nonlimiting examples, computing device300may be a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a handheld computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smartphone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. 
User input332may be received from a user selection-capturing interface that is separate from device300, such as a remote-control device, a trackpad, or any other suitable user-movement-sensitive or capture device, or as part of device300, such as a touchscreen of display circuitry310. Transmission of user input332to computing device300may be accomplished using a wired connection, such as an audio cable, USB cable, Ethernet cable or the like attached to a corresponding input port at a local device, or may be accomplished using a wireless connection, such as Bluetooth, WiFi, WiMAX, GSM, UMTS, CDMA, TDMA, 3G, 4G, 4G LTE, 5G, or any other suitable wireless transmission protocol. Input/output circuitry312may comprise a physical input port such as a 3.5 mm audio jack, RCA audio jack, USB port, Ethernet port, or any other suitable connection for receiving audio over a wired connection, or may comprise a wireless receiver configured to receive data via Bluetooth, WiFi, WiMAX, GSM, UMTS, CDMA, TDMA, 3G, 4G, 4G LTE, 5G, or other wireless transmission protocols. Processing circuitry318may receive user input332from input/output circuitry312using communication path334. Processing circuitry318may convert or translate the received user input332, which may be in the form of gestures or movement, to digital signals. In some embodiments, input/output circuitry312performs the translation to digital signals. In some embodiments, processing circuitry318(or processing circuitry328, as the case may be) carries out disclosed processes and methods. For example, processing circuitry318or processing circuitry328may perform processes ofFIGS.1,2, and4-11B, respectively. Processing circuitry318may provide requests to storage316by communication path336. Storage316may provide requested information to processing circuitry318by communication path338. Storage316may transfer, by communication path338, a request for information to transceiver circuitry314, which may translate or encode the request for information to a format receivable by communication network306before transferring the request for information by communication path340. Communication network306may forward the translated or encoded request for information to transceiver circuitry342by communication path346. At transceiver circuitry342, the translated or encoded request for information, received through communication path346, is translated or decoded for processing circuitry328, which will provide a response to the request for information (e.g., additional activities associated with an event) based on information available through control circuitry320or storage322, or a combination thereof. The response to the request for information is then provided back to communication network306by communication path350in an encoded or translated format such that communication network306can forward the encoded or translated response back to transceiver circuitry314by communication path352. At transceiver circuitry314, the encoded or translated response to the request for information may be provided directly back to processing circuitry318by communication path356, or may be provided to storage316, through communication path358, which then provides the information to processing circuitry318by communication path360.
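The request/response relay just described can be viewed as a chain of hand-offs between the client-side and server-side components. The following is a minimal sketch of that idea only; the class names, the "ENC:" prefix, and the dictionary lookup are illustrative assumptions and do not correspond to any actual implementation of control circuitry308or320.

```python
# Minimal sketch of the request/response relay described above.
# All names and the encoding scheme are illustrative assumptions.

class ServerSide:
    """Stands in for transceiver circuitry 342, control circuitry 320, and storage 322."""
    def __init__(self, records):
        self.records = records  # information available through storage 322

    def handle(self, encoded_request: str) -> str:
        request = encoded_request.removeprefix("ENC:")      # "translate or decode"
        response = self.records.get(request, "not found")   # processing circuitry 328 answers
        return "ENC:" + response                             # re-encode for the network

class ClientSide:
    """Stands in for processing circuitry 318, storage 316, and transceiver circuitry 314."""
    def __init__(self, server: ServerSide, local_cache=None):
        self.server = server
        self.cache = local_cache or {}

    def request_information(self, query: str) -> str:
        if query in self.cache:                 # storage 316 already holds the answer
            return self.cache[query]
        encoded = "ENC:" + query                # transceiver 314 encodes for the network
        encoded_response = self.server.handle(encoded)
        response = encoded_response.removeprefix("ENC:")
        self.cache[query] = response            # optionally retained in storage 316
        return response

if __name__ == "__main__":
    server = ServerSide({"event:concert": "additional activities: meet-and-greet"})
    client = ClientSide(server)
    print(client.request_information("event:concert"))
```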
Processing circuitry318may also provide a request for information directly to transceiver circuitry314through communication path362, enabling storage316to respond, by communication path360, to an information request provided through communication path336with an indication that storage316does not contain information pertaining to the request from processing circuitry318. Processing circuitry318may process the response to the request received through communication path356or360and may provide instructions to display circuitry310for a notification to be provided to the user through communication path364. Display circuitry310may incorporate a timer for providing the notification or may rely on inputs through input/output circuitry312from the user, which are forwarded by processing circuitry318through communication path364, to determine how long or in what format to provide the notification. When display circuitry310determines the display has been completed (e.g., media content has completed a playback time or a user has exited out of a recommendation), a notification may be provided to processing circuitry318through communication path366. The communication paths provided inFIG.3between computing device300, content server302, communication network306, and all subcomponents depicted are exemplary and may be modified to reduce processing time or enhance processing capabilities for each step in the processes disclosed herein by one skilled in the art. FIG.4depicts masking process400, which comprises an illustrative process for selectively masking entity identifiers based on a location, in accordance with some embodiments of the disclosure. It is understood that masking process400may be executed by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process500ofFIG.5, content item versions600A and600B ofFIG.6, masking process700ofFIG.7, masking protocols800A-C ofFIG.8, product displays900A-E ofFIG.9, and masking process1000ofFIG.10. Masking process400may also incorporate, during execution, any or all of masking processes200,500,700, and1000ofFIGS.2,5,7, and10, respectively, in whole or in part. Additionally, masking process400may result in the use or generation of content item versions600A and600B ofFIG.6, masking protocols800A-C ofFIG.8, and/or product displays900A-E ofFIG.9. At402, at least one entity identifier is determined to be present or visible to a viewer in objectionable media content. At404, entity advertisement protocols corresponding to the entity associated with the at least one entity identifier are retrieved based on an association matrix (e.g., a matrix comprising a list of business entities and entity identifiers for each business entity with corresponding masking protocols) and other data available in association with the at least one entity identifier. For example, the advertisement protocols may comprise a target audience and/or a geographic region of the intended reach of advertisements for the entity. Additional data may comprise a current list of content considered objectionable (e.g., based at least in part on social media trends) at the time masking process400commences. If, at406, the entity advertisement protocols identify a target audience based on a location (YES at406), then a list of entity identifiers for the location is determined from the advertisement protocols at408.
If, at410, the entity identifier is not on the list of entity identifiers for the location (NO at410), then the subject matter of the entity associated with the entity identifier (e.g., its industry and/or the location of its primary business) is compared to the subject matter of the content item (e.g., to determine if the subject matter of the content item is objectionable when viewed with the entity identifier) at412. The process then proceeds to process block214ofFIG.2to complete the masking process analysis. If, at410, the entity identifier is on the list of entity identifiers for the location (YES at410), then an association matrix for the entity corresponding to the entity identifier is utilized to determine whether the subject matter of the media content corresponds to an instruction to mask the entity identifier at414. Additionally, if, at406, the entity advertisement protocols do not identify a target audience based on the location (NO at406), then an association matrix (e.g., as shown inFIG.8) for the entity corresponding to the entity identifier is utilized to determine whether the subject matter of the media content corresponds to an instruction to mask the entity identifier at414. If, at416, the association matrix does not comprise an instruction to mask the entity identifier based on the subject matter of the content item (NO at416), then the subject matter of the entity associated with the entity identifier (e.g., its industry and/or the location of its primary business) is compared to the subject matter of the content item (e.g., to determine if the subject matter of the content item is objectionable when viewed with the entity identifier) at412. The process then proceeds to process block214ofFIG.2to complete the masking process analysis. If, at416, the association matrix does comprise an instruction to mask the entity identifier based on the subject matter of the content item (YES at416), then a masking action is performed on the at least one frame to render the entity identifier unperceivable by a viewer of the media content at418(e.g., as shown inFIGS.6and9). In some examples, the masking at418is performed by digitally manipulating at least a portion of a frame and/or related sections or locations (e.g., specific or related macroblocks) in subsequent frames as well (e.g., when the decoding of a frame depends on other frames). Additionally, obscuring an object or identifier in one part of the video, in some aspects, will cause it to be obscured in other parts of the video (e.g., if it appears again 5 minutes later). Additionally or alternatively, a related object or identifier (e.g., a different object associated with the same brand) may be unmasked in some aspects. FIG.5depicts masking process500, which comprises an illustrative process for selectively masking entity identifiers based on a licensing agreement, in accordance with some embodiments of the disclosure. It is understood that masking process500may be executed by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, content item versions600A and600B ofFIG.6, masking process700ofFIG.7, masking protocols800A-C ofFIG.8, product displays900A-E ofFIG.9, and masking process1000ofFIG.10. Masking process500may also incorporate, during execution, any or all of masking processes200,400,700, and1000ofFIGS.2,4,7, and10, respectively, in whole or in part. 
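The location-based decision flow of masking process400(steps 406-418) described above can be summarized as a short series of checks. The following is a minimal sketch under the assumption that the advertisement protocols and the association matrix are available as simple dictionaries; the function and field names are illustrative and are not part of this disclosure.

```python
# Minimal sketch of the location-based decision flow of masking process 400.
# Data structures and helper names are illustrative assumptions only.

def masking_decision_400(entity_id, protocols, association_matrix, content_subject, location):
    """Return 'mask', or 'compare_subject_matter' (i.e., continue at block 214)."""
    # 406: do the advertisement protocols identify a target audience for this location?
    location_ids = protocols.get("identifiers_by_location", {}).get(location)
    if location_ids is not None:
        # 408/410: is this identifier on the list for the location?
        if entity_id not in location_ids:
            # 412: fall through to comparing entity vs. content subject matter (block 214)
            return "compare_subject_matter"
    # 414/416: consult the association matrix for this entity identifier
    objectionable_subjects = association_matrix.get(entity_id, [])
    if content_subject in objectionable_subjects:
        return "mask"            # 418: perform the masking action on the frame(s)
    return "compare_subject_matter"

if __name__ == "__main__":
    protocols = {"identifiers_by_location": {"US": {"logo_A", "slogan_A"}}}
    matrix = {"logo_A": ["car accident", "riot"]}
    print(masking_decision_400("logo_A", protocols, matrix, "car accident", "US"))  # -> mask
    print(masking_decision_400("logo_A", protocols, matrix, "cooking show", "US"))  # -> compare_subject_matter
```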
Additionally, masking process500may result in the use or generation of content item versions600A and600B ofFIG.6, masking protocols800A-C ofFIG.8, and/or product displays900A-E ofFIG.9. At502, at least one entity identifier in at least one frame of media content is determined to be present and viewable by a viewer of the media content. At504, a business entity corresponding to the entity identifier is identified. At506, a source of the media content is identified based on the media content (e.g., based on metadata describing the origins of the media content prior to display). If, at508, the media content does not comprise commercial use of the entity identifier beyond fair use, such as merely describing a source of a product, (NO at508), then the process proceeds to process block214ofFIG.2to determine if there is a risk for an objectionable association between the entity identifier and the media content. If, at508, the media content does comprise commercial use of the entity identifier beyond fair use, such as convoluting or misrepresenting a source of a product or message associated with a product, (YES at508), then a query requesting licensing agreements between the business entity and the source of the media content is generated at510. For example, respective commercial agreement administrators for each of the business entity and the source of the media content may be queried for proof of bilaterally adopted commercial licensing agreements between the two parties. If, at512, it is determined there is a licensing agreement permitting the source of the media content to use the entity identifier (YES at512), then the subject matter of the content item is compared to the use parameters defined in the licensing agreement at514. If, at514, the use of the entity identifier and the subject matter of the content item are within the permitted parameters of the licensing agreement (YES at514), then the process ends, as no masking action is required. Alternatively, if, at512, there is no licensing agreement permitting the source of the media content to use the entity identifier (NO at512), or if, at514, the subject matter of the content item and the use of the entity identifier are outside the parameters of an executed licensing agreement (NO at514), then a masking action is performed on the at least one frame to render the entity identifier unperceivable by a viewer of the media content at516(e.g., as shown inFIGS.6and9). At518, a notification for the business entity indicating improper or unauthorized use of the entity identifier is generated. At520, the notification is transmitted to the business entity (e.g., the legal team of the business entity may be contacted to address the improper or unauthorized use of the entity identifier). FIG.6depicts content item versions600A and600B, wherein content item version600A comprises unmasked entity identifiers and content item version600B comprises a second version of the same content item with masked entity identifiers, in accordance with some embodiments of the disclosure. Each of content item versions600A and600B may be generated by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, masking process500ofFIG.5, masking process700ofFIG.7, masking protocols800A-C ofFIG.8, product displays900A-E ofFIG.9, and masking process1000ofFIG.10. 
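A minimal sketch of the licensing-based branches of masking process500(steps 508-520) follows. The LicensingAgreement structure, the helper names, and the returned action strings are illustrative assumptions; in particular, the notification of steps518and520is collapsed into a single returned value.

```python
# Minimal sketch of the licensing-based flow of masking process 500.
# The LicensingAgreement structure and helper names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class LicensingAgreement:
    licensee: str                        # source of the media content
    licensor: str                        # business entity that owns the identifier
    permitted_subjects: set = field(default_factory=set)

def masking_decision_500(entity, source, commercial_use, agreements, content_subject):
    """Return a short action string for the branch taken in masking process 500."""
    if not commercial_use:
        # 508 NO: mere fair use; proceed to block 214 for the objectionable-association check
        return "compare_subject_matter"
    # 510/512: look for an agreement between the entity and the content source
    agreement = next((a for a in agreements
                      if a.licensor == entity and a.licensee == source), None)
    if agreement and content_subject in agreement.permitted_subjects:
        return "no_mask"                 # 514 YES: use is within the agreed parameters
    # 512 NO or 514 NO: mask at 516 and notify the business entity at 518/520
    return "mask_and_notify"

if __name__ == "__main__":
    deals = [LicensingAgreement("StudioX", "BrandY", {"music video"})]
    print(masking_decision_500("BrandY", "StudioX", True, deals, "music video"))   # -> no_mask
    print(masking_decision_500("BrandY", "StudioX", True, deals, "action movie"))  # -> mask_and_notify
```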
Each of content item versions600A and600B may be generated as a result of the execution of any or all of masking processes200,500,700, and1000ofFIGS.2,5,7, and10, respectively, in whole or in part. Content item versions600A and600B may be generated by using masking protocols800A-C ofFIG.8and/or may be generated by incorporating aspects of product displays900A-E ofFIG.9. Each of content item versions600A and600B comprises display602, which is configured to generate for display media content604. Media content604may, for example, be a music video with readily identifiable individuals or entities associated with individuals. Media content604may be subjected to video, audio, and/or metadata analysis to identify a source of the music video (e.g., an artist, a studio, a production company, or a media distribution platform). Based on the analysis, the identified source of the video as well as at least one entity identifier may be used to generate a query to businesses associated with the identified source of the video and the at least one entity identifier for proof of licensing agreements (e.g., as described in reference to masking process700ofFIG.7). As shown inFIG.6, entity identifier606corresponds to a drink product visible in media content604, being held by an individual associated with media content604. Content item version600A corresponds to a version of media content604wherein entity identifier606does not need to be masked based on a determination that the use of entity identifier606corresponds to a form of fair use or, as described in reference toFIG.7, that a licensing agreement provides either that the use of entity identifier606is permitted or that the current location of the streaming of media content604does not conflict with an existing licensing agreement. Content item version600B corresponds to a version of media content604where entity identifier606is obscured, masked, or covered via masked icon608. For example, the use of entity identifier606may violate an existing licensing agreement, or an entity associated with media content604may lack a valid licensing agreement to use entity identifier606in the manner corresponding to media content604, resulting in the entity associated with entity identifier606requesting or requiring the generation of content item version600B, in parallel to the generation of content item version600A, where the use of entity identifier606is in question. FIG.7depicts masking process700, which comprises an illustrative process for selectively masking entity identifiers based on a profile associated with at least one content platform, in accordance with some embodiments of the disclosure. It is understood that masking process700may be executed by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, masking process500ofFIG.5, content item versions600A and600B ofFIG.6, masking protocols800A-C ofFIG.8, product displays900A-E ofFIG.9, and masking process1000ofFIG.10. Masking process700may also incorporate, during execution, any or all of masking processes200,400,500, and1000ofFIGS.2,4,5, and10, respectively, in whole or in part. Additionally, masking process700may result in the use or generation of content item versions600A and600B ofFIG.6, masking protocols800A-C ofFIG.8, and/or product displays900A-E ofFIG.9. At702, at least one entity identifier is determined to be present and visible by viewers of media content in at least one frame of the media content.
At704, a user profile is identified in association with a request to display the media content item. For example, the media content may be requested by a user of a subscription-based content provider, which tracks user viewing and activity histories, or an administrator of a content platform may be configured to address perception of entities visible in various content items available via the content platform (e.g., a masking service paid by entities to address potential risks for objectionable associations). At706, at least one user preference indication, wherein the at least one user preference indication corresponds to at least one of 1) a desire for a first subject matter to be displayed and 2) a desire for a second subject matter to be masked, is retrieved from the user profile. If, at708, the media content item is determined to correspond to subject matter the user profile indicates should be reviewed for masking of visible entity identifiers (YES at708), the process proceeds to process block214ofFIG.2to determine if there is a risk for an objectionable association between the entity identifier and the media content. If, at708, the media content item is not determined to correspond to subject matter the user profile indicates should be reviewed for masking of visible entity identifiers (NO at708), then the user profile is reviewed for data indicative of interest in the entity identifier at710. For example, the user profile may comprise a purchase history of the user, and masking process700may be used at least in part for product placement in modifiable media content to replace brands or entity identifiers not of interest to the user with entity identifiers of interest to the user (e.g., masking a visible soda brand of a first label with a second label of a different drink, such as a flavored water, or different soda brand the user purchases or has considered purchasing). If, at710, the entity identifier is indicated as being of interest to the user based on the user profile (YES at710), then the process ends as no masking action is required. If, at710, the entity identifier is indicated as not being of interest to the user based on the user profile (NO at710), then an alternative entity identifier that the user profile indicates is 1) related to the at least one entity identifier subject matter and 2) of interest to the user is identified at712. At714, a masking action on the content item, to render the at least one entity identifier unperceivable by a viewer of the media content by covering the at least one entity identifier with the alternative entity identifier, is performed (e.g., a first label or product is covered by a second or preferred label or product). FIG.8depicts association matrices800A-C, which are used to determine whether entity identifiers should be masked based on the subject matter of a content item in which the entity identifiers are present, in accordance with some embodiments of the disclosure. Each of association matrices800A-C may be generated by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, masking process500ofFIG.5, content item versions600A and600B ofFIG.6, masking process700ofFIG.7, product displays900A-E ofFIG.9, and masking process1000ofFIG.10. Each of association matrices800A-C may be generated as a result of the execution of any or all of masking processes200,400,500,700, and1000ofFIGS.2,4,5,7, and10, respectively, in whole or in part. 
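A minimal sketch of the profile-based branches of masking process700(steps 708-714) follows. The profile fields, the related-identifier mapping, and the final fallback when no alternative identifier is of interest are illustrative assumptions rather than limitations of the process.

```python
# Minimal sketch of the profile-based flow of masking process 700.
# Profile fields, mapping, and fallback behavior are illustrative assumptions.

def masking_decision_700(entity_id, content_subject, profile, related_identifiers):
    """Return the branch taken by masking process 700 for one entity identifier."""
    # 708: does the content correspond to subject matter the profile flags for review?
    if content_subject in profile.get("subjects_to_mask", set()):
        return "compare_subject_matter"           # continue at block 214
    # 710: is the identifier itself of interest to the user?
    if entity_id in profile.get("identifiers_of_interest", set()):
        return "no_mask"
    # 712/714: find a related identifier the user is interested in and cover the original
    for alternative in related_identifiers.get(entity_id, []):
        if alternative in profile.get("identifiers_of_interest", set()):
            return f"replace_with:{alternative}"
    return "no_mask"                               # assumed fallback when nothing suitable is found

if __name__ == "__main__":
    profile = {"subjects_to_mask": {"violence"},
               "identifiers_of_interest": {"flavored_water_brand"}}
    related = {"soda_brand": ["flavored_water_brand"]}
    print(masking_decision_700("soda_brand", "music video", profile, related))
    # -> replace_with:flavored_water_brand
```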
Association matrices800A-C may be utilized in determining which of product displays900A-E ofFIG.9to generate. Association matrices800A-C each correspond to different entities, lists of identifiers, and a list of objectionable content that would trigger a masking action for each of the listed identifiers detected in media content comprising at least one item on or related to (e.g., as determined by an intent analyzer or other algorithm to statistically determine a relationship between strings of characters via metadata or other available data) each list of objectionable content triggering masking actions. For example, as shown in association matrix800A, column802A lists exemplary technology and/or device entities (e.g., Apple, Samsung, and Google). Column804A lists entity identifiers for each of the entities that would yield an association between the identifier and the entity (e.g., a logo, a string of characters comprising a company name, and/or a device of a well-known make or model such as a smart mobile device). Column806A provides three scenarios that would trigger a masking action for each entity, including a car accident, riots that conflict with the entity agenda, and media content characters performing actions not suitable for a target audience for a product (e.g., an artist in an explicit music video using a device intended for customers below an age limit for viewing the explicit music video). As shown in association matrix800B, a list of automotive manufacturers populates column802B, with a list of identifiers in column804B similar to those in column804A with the addition of dealerships selling the vehicles (e.g., each association matrix for different industry types may comprise entity identifiers of different types). Column806B of association matrix800B also excludes the masking scenario of media content characters misusing a product for a particular age range since vehicles are typically made available for consumers of age to view explicit music videos. Association matrix800C comprises column802C listing Walmart, Burger King, and a local enterprise (e.g., a store or restaurant with a single location that may have broad-reaching advertisements, but only provides products and/or services out of a single location). The entity identifiers listed in column804C are different for each entity (e.g., Walmart includes delivery vehicles while the others do not) considering that the types of identifiers may be different for different industries and different businesses. Column806C also shows a scenario of a violent crime and a scenario of an incident deterring customers (e.g., a health protocol issue) as the entities listed in column802C may be subjected to different events that the entities in association matrices800A and800B are not concerned with as impacting their businesses. FIG.9depicts product displays900A-E, which collectively comprise examples of masking actions that are performed for entity identifiers present in a content item comprising objectionable content, in accordance with some embodiments of the disclosure. Product displays900A-E may be generated by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, masking process500ofFIG.5, content item versions600A and600B ofFIG.6, masking process700ofFIG.7, masking protocols800A-C ofFIG.8, and masking process1000ofFIG.10.
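An association matrix such as those ofFIG.8can be pictured as a mapping from each entity to its identifiers and to the content that triggers masking. The following is a minimal sketch with generic entity names; the entries and the lookup helper are illustrative assumptions only.

```python
# Minimal sketch of an association matrix in the style of FIG. 8.
# Concrete entries and the lookup helper are illustrative assumptions.

ASSOCIATION_MATRIX_SKETCH = {
    # entity          entity identifiers                       content triggering masking
    "DeviceMakerA": {"identifiers": {"logo_A", "DeviceMakerA", "phone_model_A"},
                     "mask_on":     {"car accident",
                                     "riot conflicting with entity agenda",
                                     "content unsuitable for target audience"}},
    "DeviceMakerB": {"identifiers": {"logo_B", "DeviceMakerB", "phone_model_B"},
                     "mask_on":     {"car accident",
                                     "riot conflicting with entity agenda",
                                     "content unsuitable for target audience"}},
}

def should_mask(entity_identifier, content_subject, matrix=ASSOCIATION_MATRIX_SKETCH):
    """True when any entity owning this identifier lists the content subject as a trigger."""
    for entry in matrix.values():
        if entity_identifier in entry["identifiers"] and content_subject in entry["mask_on"]:
            return True
    return False

if __name__ == "__main__":
    print(should_mask("phone_model_A", "car accident"))   # -> True
    print(should_mask("phone_model_A", "cooking show"))   # -> False
```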
Each of product displays900A-E may be generated as a result of the execution of any or all of masking processes200,400,500,700, and1000ofFIGS.2,4,5,7, and10, respectively, in whole or in part. Product displays900A-E may be generated by using masking protocols800A-C ofFIG.8and/or may be used in the generation of content item version600B ofFIG.6. Product display900A comprises an original, unmasked product with a visible product display (e.g., a can of Coca-Cola with the label presented). In response to determining a masking action is required for media content (e.g., as described in reference to the other figures) that displays product display900A, one of product displays900B-E is the result of a designated masking action. Product displays900B and900C each obscure only a displayed label or logo. Product display900B corresponds to a pixelation of the label or blurring of the label, while product display900C corresponds to using a censor bar or a blackout method obscuring the label. Product display900D corresponds to an example wherein all of product display900A is obscured from view, including the shape (e.g., useful where an entity may be identified by a particular product geometry, shape, or size). Product display900E corresponds to a product replacement image (e.g., a can of Sprite) where, as described in reference toFIG.7, a licensing agreement is invalid for one product and a viewer of the media content has profile data indicating interest in another product of the same type (e.g., the viewer prefers Sprite to Coca-Cola). FIG.10depicts masking process1000, which comprises an illustrative process for updating a machine learning model configured to provide recommendations to selectively mask new entity identifiers based on information indicating that the subject matter of a content item displaying the new entity identifier may yield a negative association between the entity identifier and the content item displaying the new entity identifier, in accordance with some embodiments of the disclosure. It is understood that masking process1000may be executed by any or all components depicted in and described in reference to masking scenario100ofFIG.1, masking process200ofFIG.2, system300ofFIG.3, masking process400ofFIG.4, masking process500ofFIG.5, content item versions600A and600B ofFIG.6, masking process700ofFIG.7, masking protocols800A-C ofFIG.8, and product displays900A-E ofFIG.9. Masking process1000may also incorporate, during execution, any or all of masking processes200,400,500, and700ofFIGS.2,4,5, and7, respectively, in whole or in part. Additionally, masking process1000may result in the use or generation of content item versions600A and600B ofFIG.6, masking protocols800A-C ofFIG.8, and/or product displays900A-E ofFIG.9. At1002, a request to display media content is received (e.g., as described in reference toFIG.1). If, at1004, the media content is determined not to comprise any frames with objectionable content, based on video/audio analysis as compared to a stored list of objectionable content or subject matter (NO at1004), then the process ends as no masking action is required. If, at1004, the media content is determined to comprise at least one frame with objectionable content (YES at1004), then a first context of the objectionable content is determined, wherein the first context comprises at least one of 1) an event type and 2) an adversely affected industry at1006.
If, at1008, an entity identifier is not found in the content item either by video, image, or audio analysis (NO at1008), then the process ends as no masking action is required. If, at1008, an entity identifier is determined to be visible to a user or audible to the user in the content item (YES at1008), then stored association matrices are reviewed for the entity identifier at1010(e.g., like those shown inFIG.8). If, at1010, the entity identifier is determined to be in at least one association matrix (YES at1010), then the association matrix is retrieved at1012, and the process proceeds to process block214ofFIG.2to complete the analysis on whether a masking action for the entity identifier as visible or audible in the media content is required. If, at1010, the entity identifier is not present in any of the retrieved association matrices (NO at1010), then an industry associated with the entity identifier is determined at1014. At1016, at least one association matrix corresponding to at least one of 1) the industry associated with the entity identifier and 2) the event type is retrieved. For example, if the entity identifier is for a motorcycle brand and the event type is a motorcycle accident, then an association matrix for a car brand with protocols for a car accident may be retrieved at1016. The similarity may be based on a threshold similarity rating based on associations between strings of characters or images extracted in association with the entity identifier. Additionally, an intent or context determination engine may be utilized for analyzing strings of characters extracted from data associated with the entity identifier. In some embodiments, a business entity representative may be notified of the analysis being performed such that the business entity may provide a prepopulated association matrix in line with the business entity protocols. At1018, a new association matrix is identified based on the at least one association matrix retrieved. At1020, the new association matrix is stored in memory for future masking analysis of the entity identifier. At1022, a notification for transmission to at least one of 1) an entity associated with the identifier and 2) a system administrator indicating the new association matrix for the entity identifier is generated. The process then proceeds to process block214ofFIG.2to complete the analysis on whether a masking action for the entity identifier as visible in the media content is required. The systems and processes discussed above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the actions of the processes discussed herein may be omitted, modified, combined, and/or rearranged, and any additional actions may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be exemplary and not limiting. Only the claims that follow are meant to set bounds as to what the present disclosure includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods.
While some portions of this disclosure may refer to “convention” or examples, any such reference is merely to provide context to the instant disclosure and does not form any admission as to what constitutes the state of the art.
65,176
11943508
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS A method and apparatus for real-time DVR programming is described. In the following description, for the purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without such details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. Structural Overview FIG.1Aillustrates a network with content and service providers for a DVR, according to a possible embodiment. The system comprises DVR102which is in communication with network105through any communication interface, such as an Ethernet interface or wireless communications port. The functionality of a DVR is typified in U.S. Pat. No. 6,233,389, which is owned by the Applicants and is hereby incorporated by reference. The system also includes service provider server (“service provider”)104, storage106for service provider104, content provider108, personal computer110, and portable device112. Personal computer110may be a personal computing device, such as a desktop computer or laptop computer, and is also in communication with network105through any communications interface, including wireless. Portable device112may be any handheld computing device, cellular phone, portable media player, or any other portable device capable of displaying or playing multimedia content and is also in communication with network105through any communications interface, including wireless. DVR102, personal computer110, and portable device112each communicate as clients with service provider server104through network105. In a possible embodiment, DVR102, personal computer110, and portable device112each communicate with content provider108through network105. Storage106may be internal to service provider104(not shown) or external to service provider104as shown. Network105may be implemented by any medium or mechanism that provides for the exchange of data between devices in the communication system. Examples of network105include, without limitation, a network such as a Local Area Network (LAN), Wide Area Network (WAN), the Internet, one or more terrestrial, satellite, or wireless links, etc. Alternatively or additionally, any number of devices connected to network105may also be directly connected to each other through a communications link. In a possible embodiment, content provider108provides broadcast program content to DVR102via cable, satellite, terrestrial communication, or other transmission method. Broadcast program content may include any multimedia content such as audio, image, or video content. In a possible embodiment, content provider108provides multimedia content, such as any downloadable content, through network105to DVR102, personal computer110, or portable device112. In a possible embodiment, DVR102communicates with service provider104and storage106, which provide program guide data, graphical resources (such as fonts, pictures, etc.), service information, software, advertisements, event identification data, and other forms of data that enable DVR102to operate independently of service provider104to satisfy user interests. In a possible embodiment, content provider108may provide, to service provider104, content data and/or any metadata, including promotional data, icons, web data, and other information.
Service provider104may then interpret the metadata and provide the content data and/or metadata to DVR102, personal computer110, or portable device112. Referring toFIG.1B, in a possible embodiment, DVR102generally comprises one or more components, signified by signal converter154, that may be used to digitize an analog television signal and convert it into a digital data stream or accept a digital data stream. An example of the internal structure and operation of a DVR is further described in U.S. Pat. No. 6,233,389. DVR102receives broadcast signals from an antenna, from a cable TV system, satellite receiver, etc., via input152A. Input152A may comprise one or more tuning modules that allow one or more signals to be received and recorded simultaneously. For example, a TV input stream received by input152A may take the form of a National Television Standards Committee (NTSC) compliant signal or a Phase Alternating Line (PAL) compliant broadcast signal. For another example, a TV input stream received by input152A may take a digital form such as a Digital Satellite System (DSS) compliant signal, a Digital Broadcast Services (DBS) compliant signal, or an Advanced Television Standards Committee (ATSC) compliant signal. DBS, DSS, and ATSC are based on standards called Moving Pictures Experts Group 2 (MPEG-2) and MPEG-2 Transport. MPEG-2 Transport is a standard for formatting the digital data stream from the TV source transmitter so that a TV receiver can disassemble the input stream to find programs in the multiplexed signal. An MPEG-2 transport multiplex supports multiple programs in the same broadcast channel with multiple video and audio feeds and private data. Input152A tunes to a particular program in a channel, extracts a specified MPEG stream from the channel, and feeds the MPEG stream to the rest of the system. Analog TV signals are encoded into a similar MPEG format using separate video and audio encoders, such that the remainder of the system is unaware of how the signal was obtained. Information may be modulated into the vertical blanking interval (VBI) of the analog TV signal in a number of standard ways; for example, the North American Broadcast Teletext Standard (NABTS) may be used to modulate information onto certain lines of an NTSC signal, while the FCC mandates the use of a certain other line for closed caption (CC) and extended data services (EDS). Such signals are decoded by input152A and passed to the other modules as if the signals had been delivered via an MPEG-2 private data channel. Recording module160records the incoming data stream by storing the digital data stream on at least one storage facility, signified by storage164A/164B, that is designed to retain segments of the digital data stream. Storage164A/164B may be one or more non-volatile storage devices (e.g., hard disk, solid state drive, USB external hard drive, USB external memory stick, USB external solid state drive, network accessible storage device, etc.) that are internal164A and/or external164B. A signal converter154retrieves segments of the data stream, converts the data stream into an analog signal, and then modulates the signal onto an RF carrier, via output152B, through which the signal is delivered to a standard TV set. Output152B may alternatively deliver a digital signal to a TV set or video monitor with signal converter154, converting the data stream into an appropriate digital signal.
For example, DVR102may utilize a High-Definition Multimedia Interface (HDMI) for sending digital signals to a TV via a HDMI cable. DVR102also includes a communication interface162, through which the DVR102communicates with network105via Ethernet, wireless network, modem, or other communications standard. Further, DVR102may be integrated into a TV system such that the components described above are housed in a TV set capable of performing the functions of each component of DVR102. In another embodiment, DVR102generally comprises one or more components necessary to receive, record, store, transfer and playback digital data signals from one or more sources, such as a PC, a DVR, a service provider, or content server. DVR102can transfer digital data signals to another DVR, PC, or any other suitably configured device, such as a handheld device or cell phone, etc. DVR102may encode or decode digital signals via encoder156A and decoder156B into one or more formats for playback, storage or transfer. According to a possible embodiment, encoder156A produces MPEG streams. According to another embodiment, encoder156A produces streams that are encoded using a different codec. Decoder156B decodes the streams encoded by encoder156A or streams that are stored in the format in which the streams were received using an appropriate decoder. DVR102can also encrypt or decrypt digital data signals using encryptor/decryptor158for storage, transfer or playback of the digital data signals. In a possible embodiment, DVR102communicates with service provider104, which provides program guide data, graphical resources such as brand icons and pictures, service information, software programs, advertisements, and other forms of data that enable DVR102to operate independently of the service provider104to perform autonomous recording functions. Communication between DVR102and service provider104may use a secure distribution architecture to transfer data between the DVR102and the service provider104such that both the service data and the user's privacy are protected. DVR Synchronization with Service Provider by Polling A possible embodiment of DVR synchronization with service provider104by polling may be described with respect toFIG.1AandFIG.1B. Storage164A/164B of DVR102comprises program guide data, season pass data, wishlist data, now playing data, to do data (e.g., what programs are scheduled), suggestions data, etc. A season pass is a type of recording that keeps track of episodes of a television series. For example, the service provided by TiVo (TiVo Inc., Alviso, CA) records episodes of the television series every week, even when the day or time of a showing of an episode changes. Via a season pass, a user may indicate how many episodes to store and whether or not to store reruns. A wishlist is a list of one or more content items that a user desires to record or schedule to record to the user's DVR when the content item becomes available. The wishlist is specified using any type of information such as an actor's name, director's name, movie title, etc. The DVR102records a show or movie that meets the wishlist specification whenever a show or movie is broadcast or is available for download via a network, such as the Internet, intranet, etc. Storage106of service provider104also comprises a copy of such data for DVR102. For example, storage106comprises one or more databases, which comprise tables that are associated with DVR102. 
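A wishlist as described above can be matched against program guide data by checking each specified field. The following is a minimal sketch under the assumption that guide entries expose simple title, director, and actor fields; the field and function names do not reflect an actual DVR data model.

```python
# Minimal sketch of wishlist matching against program guide data.
# Field names are assumptions and do not reflect an actual DVR data model.

def matches_wishlist(program: dict, wishlist_entry: dict) -> bool:
    """A program matches when every field specified in the wishlist entry is satisfied."""
    for key, wanted in wishlist_entry.items():            # e.g., actor, director, title
        if wanted.lower() not in program.get(key, "").lower():
            return False
    return True

def schedule_matches(guide: list, wishlist: list) -> list:
    """Return the guide entries that any wishlist entry would cause the DVR to record."""
    return [program for program in guide
            if any(matches_wishlist(program, entry) for entry in wishlist)]

if __name__ == "__main__":
    guide = [{"title": "The War", "director": "Ken Burns", "actor": ""},
             {"title": "Late Movie", "director": "", "actor": "Some Actor"}]
    wishlist = [{"director": "Ken Burns"}]
    print(schedule_matches(guide, wishlist))   # -> the entry for "The War"
```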
As well, storage106comprises copies of all other DVR clients (e.g., as data stored in tables associated with each of the other DVR clients), which service provider104supports and with which service provider104communicates (not shown.) DVR102periodically establishes a Secure Sockets Layer (SSL) connection to and contacts (“polls”) service provider104to initiate synchronization between data stored in storage164A/164B of DVR102and data stored in storage106of service provider104. Synchronization between data stored in storage164A/164B of DVR102and data stored in storage106of service provider104as used herein means causing at least a portion of data stored in storage164A/164B and at least a portion of data stored in storage106to represent the same content. For example, in a possible embodiment, DVR102contacts service provider104via network105to synchronize every fifteen minutes. In a possible embodiment, synchronization is achieved by DVR102contacting service provider104and sending a subset of local data in storage164A/164B, e.g., data that reflects updates to the local data stored in storage164A/164B, to service provider104that stores the data on storage106. In another example, a viewer, from the viewer's PC110, adds a new season pass for a series, such as The War, to the viewer's collection of season passes. In this example, the viewer, from the viewer's PC110, adds the new season pass for the series by causing PC110to send data related to adding the season pass to service provider104, which then stores the data in the appropriate table(s) associated with the viewer's DVR102in the database on storage106. When DVR102initiates synchronizing data with service provider104, data reflecting the newly added season pass contained in storage106is sent to DVR102. It should be appreciated that DVR/service provider synchronization is not limited by which element (e.g., DVR102or service provider104) initiates synchronization and sends updated data to the receiving element. For example, DVR102may initiate synchronization or service provider104may initiate synchronization. An example DVR/service provider synchronization process is as follows. A user is logged onto the Internet (e.g., network105) using personal computer110. For example, the user is navigating the TiVo Central™ Online web page via his browser and, from the TiVo Central™ Online remote scheduling facility, schedules a program to record on the user's DVR102. The message to record the program gets sent from the web page interface on personal computer110to service provider104. The program information is added to the database tables associated with the user's DVR102by service provider104, e.g., on storage106comprising data that represents the schedule of programs for user's DVR102. The next time that DVR102and service provider104synchronize data, data reflecting the schedule with the added program is sent by service provider104from storage106to DVR storage164A/164B. DVR102is thus configured to record the added program according to the user's request. Instant Message Protocol In a possible embodiment, DVR102, personal computer110, portable device112, or any other appropriately configured device, may communicate with service provider104on network105using a secure client-server instant message protocol to transfer data between DVR102, personal computer110, portable device112, or any other appropriately configured device and service provider104such that both the service data and the user's privacy are protected. 
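A minimal sketch of the polling synchronization described above follows, in which only records changed since the last poll are exchanged. The fifteen-minute interval comes from the example above; the record format, the change tracking, and the class names are illustrative assumptions, and the SSL connection itself is omitted.

```python
# Minimal sketch of DVR/service-provider synchronization by polling.
# Record format, change tracking, and class names are illustrative assumptions.

class ServiceProviderDB:
    """Stands in for the tables on storage 106 that are associated with one DVR."""
    def __init__(self):
        self.records = {}          # e.g., {"season_pass:The War": {...}}
        self.changed = set()       # keys updated since the DVR last polled

    def remote_update(self, key, value):
        self.records[key] = value
        self.changed.add(key)

    def pull_changes(self):
        delta = {key: self.records[key] for key in self.changed}
        self.changed.clear()
        return delta

class DVRClient:
    """Stands in for DVR 102 with local storage 164A/164B."""
    def __init__(self, service: ServiceProviderDB, poll_seconds=15 * 60):
        self.service = service
        self.local = {}
        self.poll_seconds = poll_seconds   # a timer would call poll_once() at this interval

    def poll_once(self):
        # In the real system this exchange happens over an SSL connection; omitted here.
        self.local.update(self.service.pull_changes())

if __name__ == "__main__":
    service = ServiceProviderDB()
    dvr = DVRClient(service)
    service.remote_update("season_pass:The War", {"keep": 5, "reruns": False})
    dvr.poll_once()                         # the next scheduled poll picks up the new season pass
    print(dvr.local)
```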
In a possible embodiment, data may be transferred using secure client-server instant message communications protocol over network105via wired, wireless, or any other communication interface. In a possible embodiment, DVR102receives and sends instant messages through communication interface162. As an example, on a cell phone, a user might select a program to be recorded and the request to record the program is sent as an instant message to service provider104. Instant message communication between DVR102, personal computer110, or portable device112and service provider104may be described with reference toFIG.2AandFIG.2B.FIG.2Ais a block diagram of service provider104comprising an Extensible Messaging and Presence Protocol (XMPP) server202internally. In a possible embodiment, XMPP server202is communicatively connected to network105and external to service provider104, as shown inFIG.2B. It should be appreciated that in a possible embodiment, any system configured for instant message communications protocol may be contemplated and that any embodiment described herein using XMPP is meant by way of example and is not meant to be limiting. For example AOL Instant Messenger (AIM®), Microsoft's Windows Live, ICQ®, or Short Messaging Services (SMS) are each a system that may be used for instant message communications protocol in accordance with one or more embodiments. In a possible embodiment, commands from any of DVR102, personal computer110, or portable device112are sent via network105to service provider104as instant messages. After receipt of such instant messages, service provider104updates appropriate database tables in storage106that are associated with the user associated with the command. As an example, in a possible embodiment, after receipt of one or more instant messages containing information relating to a particular update to a user's DVR, service provider104updates appropriate database objects in central site database100, as described in the commonly owned U.S. Pat. No. 6,728,713, titled, “Distributed Database Management System,” dated Apr. 27, 2004, which is incorporated herein in its entirety as if fully set forth herein. It should be appreciated that such configurations are by way of example only and are not meant to be limiting. In a possible embodiment, XMPP is an open source protocol for real-time extensible instant messaging (IM) over a network as well as presence information, such as used for buddy lists. XMPP is based on open standards, similar to email. Similar to a user in an open email environment, a user in an open XMPP environment with a domain name and a suitable Internet connection may run an XMPP server and communicate directly with users on other XMPP servers. An example client XMPP application is Google Talk. Google Talk is a Windows application for Voice over IP and instant messaging, offered by Google, Mountain View, CA. An example XMPP message delivery process from UserA to UserB is as follows. UserA sends a message intended for UserB to UserA's XMPP server. If UserB is blocked on UserA's server, then the message is dropped. Otherwise, UserA's XMPP server opens a connection to UserB's XMPP server. A possible embodiment of the opened connection may include obtaining authorization and obtaining an encrypted connection. After the connection is established, UserB's XMPP server checks if UserA is blocked on UserB's XMPP server. If UserA is blocked on UserB's XMPP server, the message is dropped. 
In a possible embodiment, if UserB is not presently connected to UserB's XMPP server, the message is stored for later delivery. It should be appreciated that other options apply, such as dropping the message. In a possible embodiment, if UserB is presently connected to UserB's XMPP server, the message is delivered to UserB. It should be appreciated that in a possible embodiment, UserA's server and UserB's server are the same server. For instance, UserA sends instant messages to UserB and receives instant messages from UserB by sending messages to and receiving messages from an XMPP server and UserB sends instant messages to UserA and receives messages from UserA by sending messages to and receiving messages from the XMPP server. Further details on example structure and functionality of XMPP may be found in The Internet Society's “Request For Comment” (RFC) documents RFC3920, “Extensible Messaging and Presence Protocol: Core” and RFC3921, “Extensible Messaging and Presence Protocol: Instant Messaging and Presence.” Instant Message Synchronization In a possible embodiment, DVR102is an instant messaging client and hosts an instant message client application. DVR102attempts to maintain an instant messaging connection with instant message XMPP server202at all times. Service provider104is also an instant messaging client and hosts an instant message client application. As well, service provider104attempts to maintain an instant messaging connection with instant message XMPP server202at all times. In a possible embodiment, DVR202, XMPP server202, and service provider104communicate according to open standard XMPP protocol, e.g., as described above. In a possible embodiment, service provider104comprises related software that enables service provider104to communicate with storage106. It should be appreciated that in certain contexts herein, references to service provider104are used in the collective sense and are meant to include reference to the related software that manages storage106. A possible embodiment of instant message synchronization may be described with reference toFIG.3A.FIG.3Ais a flow diagram showing an example DVR/service provider synchronization process flow. This example synchronization process flow begins with a user remotely requesting a programming event, e.g., to add a program, via service provider104(Step302.) For example, PC110may request to add a program to the user's schedule of recordings for DVR102. For example, through PC110the user may remotely add a program to record using TiVo Central™ Online through service provider104. Service provider104updates database tables on storage106that are associated with the user's DVR to include the program (Step304.) As well, service provider104sends an instant message to DVR102via XMPP server202(Step306.) It should be appreciated that, in a possible embodiment, DVR102attempts to maintain the connection to XMPP server202at all times, reconnecting automatically whenever the connection drops. Similarly, it should be appreciated that, in a possible embodiment, service provider104attempts to maintain the connection to XMPP server202at all times, reconnecting automatically whenever the connection drops. In either case, when the connection to XMPP server202is not up for any reason, the instant message is discarded. 
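The UserA-to-UserB delivery process described above reduces to a few checks performed on each XMPP server. The following is a minimal sketch of those checks only; the block lists, the offline store, and the method names are illustrative assumptions and are not drawn from the XMPP specification.

```python
# Minimal sketch of the UserA -> UserB instant message delivery described above.
# Block lists, the offline store, and all names are illustrative assumptions.

class XMPPServer:
    def __init__(self, name):
        self.name = name
        self.blocked = set()        # senders blocked on this server
        self.online = set()         # users currently connected
        self.offline_store = {}     # user -> queued messages

    def deliver(self, sender, recipient, message, recipient_server):
        if sender in self.blocked:
            return "dropped"                        # blocked on the sender's server
        # connection to the recipient's server (authorization/encryption omitted)
        return recipient_server.accept(sender, recipient, message)

    def accept(self, sender, recipient, message):
        if sender in self.blocked:
            return "dropped"                        # blocked on the recipient's server
        if recipient in self.online:
            return f"delivered to {recipient}: {message}"
        self.offline_store.setdefault(recipient, []).append(message)
        return "stored for later delivery"

if __name__ == "__main__":
    server_a, server_b = XMPPServer("a.example"), XMPPServer("b.example")
    server_b.online.add("UserB")
    print(server_a.deliver("UserA", "UserB", "record The War tonight", server_b))
```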
In the example, the instant message informs DVR102that a change has been made to the database tables that are associated with the user's DVR in storage106and requests that DVR102synchronize data in storage164A/164B with data in storage106. In a possible embodiment, the notification causes DVR102to open a new SSL connection with service provider104specifically for the synchronization process and to close the newly opened SSL connection when the synchronization of the relevant data in storage106with data in storage164A/164B is done (Step308.) It should be appreciated that certain details in the example are by way of illustration only and are not meant to be limiting. As an example, while a remote user requests a change, the request for change may be sent from any configurable device, such as portable device112. Another embodiment of DVR/service provider synchronization may be described with reference toFIG.3B.FIG.3Bis a flow diagram showing an example DVR/service provider synchronization process flow that is similar toFIG.3A, however with a different last step. As inFIG.3A, the example synchronization process flow ofFIG.3Bbegins with a user remotely requesting a programming event, e.g., to add a program, via service provider104(Step302.) For example, PC110requests the service to add a program to the user's schedule of recordings for DVR102. For example, through PC110the user may remotely add a program using TiVo Central™ Online through service provider104. Service provider104updates database tables on storage106that are associated with the user's DVR to include the program (Step304.) As well, service provider104sends an instant message to DVR102via XMPP server202(Step306.) It should be appreciated that, in a possible embodiment, DVR102attempts to maintain the connection to XMPP server202at all times, reconnecting automatically whenever the connection drops. Similarly, it should be appreciated that, in a possible embodiment, service provider104attempts to maintain the connection to XMPP server202at all times, reconnecting automatically whenever the connection drops. In either case, when the connection to XMPP server202is not up for any reason, the instant message is discarded. In the example, the instant message informs DVR102that a change has been made to the database tables that are associated with the user's DVR in storage106and requests that DVR102synchronize data in storage164A/164B with data in storage106. Responsive to the message, DVR102uses the already established connection with XMPP server202to pass and/or receive the synchronization data to synchronize data in storage164A/164B with data in storage106(Step310.) It should be appreciated that certain details in the example are by way of illustration only and are not meant to be limiting. For instance, while, in the example, a remote user requests a change from PC110, in another possible embodiment, the request for change may be sent from any configurable device, such as portable device112. It should be appreciated that this approach allows for fast updates of DVR102. A user may be able to request to record a multimedia content a minute or two before the multimedia content begins and DVR102may be updated and record the multimedia content without missing any of the content material. It should be appreciated that client-server instant message protocol in a DVR environment is not limited to synchronizing schedule-related and recording-related data. 
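A minimal sketch, under assumed names, of the two synchronization variants described above (FIG. 3A, Steps 302-308, and FIG. 3B, Steps 302-306 and 310) follows. The classes are illustrative stand-ins; only the control flow tracks the text, and the DVR-side synchronization is modeled as a direct call.

```python
# Illustrative stand-ins only; the control flow follows Steps 302-310 of FIG. 3A / FIG. 3B.

class _SslConnection:
    def close(self):
        pass


class Dvr:
    def __init__(self):
        self.recordings = []        # stands in for storage 164A/164B

    def open_ssl_connection(self):
        # Placeholder for opening a new SSL connection to the service provider.
        return _SslConnection()

    def sync_from(self, provider_tables):
        # Synchronize local data with the provider-side database tables.
        self.recordings = list(provider_tables["recordings"])


class ServiceProvider:
    def __init__(self, dvr, dvr_connected=True):
        self.tables = {"recordings": []}    # stands in for storage 106
        self.dvr = dvr
        self.dvr_connected = dvr_connected  # state of the DVR's XMPP connection

    def remote_request(self, program, reuse_xmpp_connection=False):
        self.tables["recordings"].append(program)        # Step 304: update tables
        if not self.dvr_connected:
            return                                       # instant message is discarded
        # Step 306: send the instant message asking the DVR to synchronize.
        if reuse_xmpp_connection:
            self.dvr.sync_from(self.tables)              # FIG. 3B, Step 310
        else:
            conn = self.dvr.open_ssl_connection()        # FIG. 3A, Step 308
            try:
                self.dvr.sync_from(self.tables)
            finally:
                conn.close()


dvr = Dvr()
ServiceProvider(dvr).remote_request("Some Show")
print(dvr.recordings)    # ['Some Show']
```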
Indeed, any type of data stored in storage106of service provider104may be synchronized with data stored in DVR storage164A/164B, such as software, electronic program guide data, advertisements, multimedia content, etc. As well, any type of data stored in DVR storage164A/164B may be synchronized with data stored in storage106of service provider104. As well, through an instant message connection, data reflecting any type of activity from any client may be sent to the service provider storage on a real-time basis. The type of and use of such gathered data is limitless. For example, the data may be aggregated and analyzed for marketing or towards providing better customer service. As another example, data gathered for a particular user may be used to initiate a customized or targeted process for that particular user, and so forth. Real-Time Direct Communication Between Device and DVR In a possible embodiment, a device, such as, for example, portable device112, is in direct communication, via an instant message server, with a DVR, such as DVR102. In a possible embodiment, the device (“direct device”) is configured with customized user interface DVR software (“DVR UI”) that allows the user to program the user's DVR from the direct device as though the user is operating the user's DVR at home, the office, or any place where the DVR is located. Further, the direct device hosts an instant message client application such that the direct device is an instant messaging client. This approach allows for fast updates to the user's DVR from a direct device, such as a cell phone. A possible embodiment can be described with reference toFIG.3C, a flow diagram showing an example of direct communication from a device to a DVR. In an example implementation, a user desires to request a programming event, such as to add a season pass for a particular television program, from the user's cell phone. In a possible embodiment, the user opens his cell phone and from the cell phone launches his copy of the DVR UI. While the DVR UI is running, the user navigates to the appropriate screen and enters a command to add the season pass (Step320.) After receiving the command, the DVR UI causes a corresponding instant message to be created that is to be delivered to the user's DVR (Step322.) In a possible embodiment, the DVR UI generates the instant message and provides it to the instant message client application or, in another possible embodiment, the instant message client application generates the instant message. The instant message contains data that causes the user's DVR to add the season pass, for example, as though the user was at home and operating his DVR. It should be appreciated that the particular configurations described above are by way of example only and are not meant to be limiting. In a possible embodiment, after the instant message is generated, the direct device uses an already established connection with an instant message server and sends the instant message to the instant message server to be delivered to the user's DVR (Step324.) In another possible embodiment, the direct device opens a new connection to the instant message server and sends the instant message to the instant message server to be delivered to the user's DVR (Step324.) In a possible embodiment, after the instant message is sent to the instant message server, the direct device closes the connection to the instant message server. 
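By way of illustration only, packaging a DVR UI command such as adding a season pass into an instant message on a direct device (Steps 320-324 above) might look like the following. The payload layout and the connection object are assumptions; the text only says the message contains data that causes the user's DVR to perform the command.

```python
# Hypothetical sketch of a direct device packaging a programming command as an
# instant message for the user's DVR.
import json


class _ImConnection:
    """Stand-in for a connection to the instant message server."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

    def close(self):
        pass


def build_command_message(dvr_address, command, **params):
    return {"to": dvr_address, "body": json.dumps({"command": command, **params})}


def send_from_direct_device(message, existing_connection=None, open_connection=_ImConnection):
    # Reuse an already established connection to the instant message server if one
    # exists; otherwise open a new one and close it after sending (both variants
    # are described above).
    opened_here = existing_connection is None
    conn = existing_connection or open_connection()
    conn.send(message)
    if opened_here:
        conn.close()
    return conn


msg = build_command_message("dvr-102@provider.example", "add_season_pass", program="Some Show")
conn = send_from_direct_device(msg)
print(conn.sent[0]["body"])
```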
In a possible embodiment, the user's DVR maintains a connection to the instant message server at all times and receives the instant message originally sent by the direct device via the already established connection to the instant message server. In a possible embodiment, the direct device or the instant message server retries sending the instant message to the user's DVR until the delivery is successful. In a possible embodiment, the direct device or the instant message server is notified when the instant message has been successfully delivered to the user's DVR. For example, the direct device or the instant message server may receive an instant message from the user's DVR indicating that the delivery of the instant message was successful. In another possible embodiment, the direct device or the instant message server stores a copy of the instant message or stores related data that may be used for generating a new instant message for delivery to the user's DVR. Then, when delivery of the instant message fails, the direct device or the instant message server may send another instant message by using the stored copy or the newly generated instant message. For example, when after a specified length of time the direct device or the instant message server does not receive notification that the delivery of the instant message was successful, the copy of the instant message or the newly generated instant message is subsequently sent for delivery to the user's DVR. In another possible embodiment, the direct device or the instant message server tries to resend the instant message or the newly generated instant message on a periodic basis until a notification of successful delivery is received or until a specified retry limit is reached. For example, the direct device or the instant message server may resend the instant message every second. As another example, the direct device or the instant message server may try to resend the instant message 20 times, after which the process that retries sending the instant message terminates. In another possible embodiment, when the retry limit is reached, the direct device receives a notification message indicating that the attempt to deliver the instant message failed. In a possible embodiment, when the retry limit is reached, the DVR UI displays on the display of the direct device, a message indicating that the delivery failed or that the user's request for the programming event failed. For example, the user's DVR UI may display a message indicating that the attempt to add the season pass failed. In another possible embodiment, when the instant message cannot be delivered to the user's DVR (e.g., the user's DVR connection to the instant message server has been dropped), the instant message is discarded. In a possible embodiment, after receiving the instant message containing programming event data, such as adding the season pass, the user's DVR performs the appropriate operation, such as adding the season pass, as if the user had entered the command at the user's DVR. Data reflecting the programming event is stored in storage164A/164B. Then, storage106of service provider104is updated with the information stored in164A/164B, including, for example, the added season pass from the example, via any of the synchronization processes described above. Scalability and Robustness In a possible embodiment, the DVR attempts to maintain an SSL connection with an XMPP server at all times, reconnecting whenever the connection is dropped. 
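A minimal sketch of the retry behavior described above (resend on a fixed interval until an acknowledgment arrives or a retry limit such as 20 attempts is reached) follows. The callables are hypothetical stand-ins for sending the message and checking for an acknowledgment from the DVR.

```python
# Sketch of retrying delivery until acknowledged or until a retry limit is reached.
import time


def send_with_retries(send, acknowledged, retry_limit=20, interval_seconds=1.0):
    for _ in range(retry_limit):
        send()
        time.sleep(interval_seconds)     # e.g., resend every second
        if acknowledged():
            return True
    # After the limit is reached, the DVR UI may report that the request failed.
    return False


acks = iter([False, False, True])        # delivery succeeds on the third attempt
ok = send_with_retries(send=lambda: None, acknowledged=lambda: next(acks), interval_seconds=0.0)
print("delivered" if ok else "attempt to deliver the instant message failed")
```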
Because the DVR maintains the SSL connection with the XMPP server, the DVR has the capability to use instant messaging at all times, except during those short intervals when the connection is temporarily dropped. For example, the DVR may employ an already established connection with the XMPP server to perform the synchronization with the service provider. Thus, the DVR using the established connection to perform synchronization provides scalability. In another embodiment, one or more XMPP servers are configured not to store messages that are sent to any of the one or more XMPP servers. For example, an XMPP server receives an XMPP message and passes the XMPP message on to a recipient, such as the DVR, without using additional XMPP server resources for storing the message. Because the one or more XMPP servers may not need to use additional resources to store XMPP messages, more XMPP server resources may be used at a given time for processing more messages, thus providing greater scalability. In a possible embodiment, DVR/service provider synchronization via instant messaging is robust because the DVR and service provider automatically reconnect after any connection failures during the synchronization process. In another embodiment, DVR/service provider synchronization is rendered robust by a configuration that uses a combination of DVR/service provider synchronization by polling and DVR/service provider synchronization by instant messaging. For example, an administrator may set DVR/service provider synchronization by polling to operate every twenty-four hours, while DVR/service provider synchronization by instant messaging is operable as well. The combination of synchronization by polling and synchronization by instant messaging renders a robust synchronization feature. For example, suppose that an XMPP server crashes at the time that the XMPP server is attempting to send a message to a DVR, e.g., a request to synchronize, and that the crash causes the sending of the message to fail. In a possible embodiment, the DVR may be updated from the synchronization by the polling process, possibly at a later time. Thus, synchronization is successful and robust even in a case, which may be rare, when an XMPP message is lost. Hardware Overview FIG.4is a block diagram that illustrates a computer system400upon which a possible embodiment of the invention may be implemented. Computer system400includes a bus402or other communication mechanism for communicating information, and a processor404coupled with bus402for processing information. Computer system400also includes a main memory406, such as a random access memory (“RAM”) or other dynamic storage device, coupled to bus402for storing information and instructions to be executed by processor404. Main memory406also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor404. Computer system400further includes a read only memory (“ROM”)408or other static storage device coupled to bus402for storing static information and instructions for processor404. A storage device410, such as a magnetic disk or optical disk, is provided and coupled to bus402for storing information and instructions. Computer system400may be coupled via bus402to a display412, such as a cathode ray tube (“CRT”), for displaying information to a computer user. An input device414, including alphanumeric and other keys, is coupled to bus402for communicating information and command selections to processor404. 
Another type of user input device is cursor control416, such as a mouse, trackball, stylus, or cursor direction keys for communicating direction information and command selections to processor404and for controlling cursor movement on display412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. The invention is related to the use of computer system400for selecting a frame of a multi-frame video program for display in accordance with a selected trick play mode of a DVR. According to a possible embodiment of the invention, selecting a frame of a multi-frame video program for display in accordance with a selected trick play mode of a DVR is provided by computer system400in response to processor404executing one or more sequences of one or more instructions contained in main memory406. Such instructions may be read into main memory406from another computer-readable medium, such as storage device410. Execution of the sequences of instructions contained in main memory406causes processor404to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software. The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor404for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device410. Volatile media includes dynamic memory, such as main memory406. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, or any other memory chip or cartridge. Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor404for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system400can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector can receive the data carried in the infrared signal and appropriate circuitry can place the data on bus402. Bus402carries the data to main memory406, from which processor404retrieves and executes the instructions. The instructions received by main memory406may optionally be stored on storage device410either before or after execution by processor404. Computer system400also includes a communication interface418coupled to bus402. 
Communication interface418provides a two-way data communication coupling to a network link420that is connected to a local network422. For example, communication interface418may be an integrated services digital network (“ISDN”) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface418may be a local area network (“LAN”) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface418sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link420typically provides data communication through one or more networks to other data devices. For example, network link420may provide a connection through local network422to a host computer424or to data equipment operated by an Internet Service Provider (“ISP”)426. ISP426in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”428. Local network422and Internet428both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link420and through communication interface418, which carry the digital data to and from computer system400, are exemplary forms of carrier waves transporting the information. Computer system400can send messages and receive data, including program code, through the network(s), network link420and communication interface418. In the Internet example, a server430might transmit a requested code for an application program through Internet428, ISP426, local network422and communication interface418. The received code may be executed by processor404as it is received, and/or stored in storage device410, or other non-volatile storage for later execution. In this manner, computer system400may obtain application code in the form of a carrier wave. In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
DETAILED DESCRIPTION
Systems and methods are provided herein to recommend content based on a detected orientation of objects that are connected over the Internet. For example, a specific arrangement of furniture in a room over a network-connected surface may indicate that a user wishes to watch television. In some examples, this specific arrangement may include changing the orientation of the chairs in a room to face a display device (e.g., the television). This arrangement may indicate that the user is interested in watching a media asset on the display device present in the room, and the media guidance application may recommend content to the user. In some examples, the number and the type of chairs present in the room may provide more information to a media guidance application regarding the kind of content that may be recommended to the user. In order for the media guidance application to determine an orientation of objects placed in a particular area, the objects are connected to each other over the Internet and placed on a network-connected surface that communicates with the media guidance application. In some examples, a network-connected surface may be a surface with a capability to communicate with other devices over a network like the Internet, for example. The media guidance application is able to keep track of attributes like location, orientation, type, and number of objects placed on the network-connected surface using the information from the network-connected surface. The media guidance application uses this information to provide content recommendations to the user in the vicinity of these objects. FIG. 1 shows an illustrative example of providing media asset recommendations based on orientation of Internet-connected objects, in accordance with some embodiments of the disclosure. FIG. 1 shows exemplary views 100(a), 100(b), and 100(c) of the same living room. Layouts 100(a), 100(b), and 100(c) each depict a different layout of furniture in the room that provides information to the media guidance application. Layout 100(a) depicts a display device 102, a couch 104, a coffee table 114, a dining table 106 with chairs 108 and 110, and a lamp 124, placed on a network-connected surface 126. Network-connected surface 126 may be a carpet-like surface that covers part or all of the floor of a room. Network-connected surface 126 may be able to detect objects placed on network-connected surface 126. Network-connected surface 126 may be an electronic surface connected to the media guidance application over the Internet and may provide the information about those objects to the media guidance application for interpretation. This may represent a baseline positioning of furniture in a room. The objects depicted in FIG. 1 present on the network-connected surface 126 are merely illustrative, and any object on network-connected surface 126 may replace any of the specific examples, like the coffee table, etc. The media guidance application may receive from network-connected surface 126 a plurality of object identifiers for a plurality of objects of different types detected on network-connected surface 126. As depicted in FIG. 1, network-connected surface 126 may cover part of a floor or the entire floor of a room within a house. The objects (102-114) placed on network-connected surface 126 may include furniture pieces like tables (106, 114), lamp (124), chairs (108 and 110), couch (104), etc. In such examples, the media guidance application may detect a variety of objects placed on network-connected surface 126.
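By way of illustration only, the object identifiers that the network-connected surface reports to the media guidance application (which, as described in the following paragraph, may convey each object's type, orientation, and location) might be represented as follows. All field names are assumptions made for illustration.

```python
# Hypothetical representation of object identifiers reported by a network-connected surface.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class ObjectIdentifier:
    object_id: str                   # digital identifier for the object
    object_type: str                 # e.g., "chair", "couch", "table", "lamp"
    location: Tuple[float, float]    # center of the object on the surface
    orientation_deg: float           # orientation relative to a baseline on the surface


def receive_identifiers(surface_report):
    """Turn a raw report from the surface into identifier records."""
    return [ObjectIdentifier(**entry) for entry in surface_report]


objects = receive_identifiers([
    {"object_id": "couch-104", "object_type": "couch",
     "location": (5, 1), "orientation_deg": 0},
    {"object_id": "chair-108", "object_type": "chair",
     "location": (8, 5), "orientation_deg": 0},
])
print(objects[0].object_type, objects[0].location)
```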
Each object may be independently capable of connecting to the network (e.g., Internet) and may have a digital identifier associated with it that identifies the object to other objects. In some examples, the various objects may have wireless Internet or Bluetooth capability that will allow them to be connected to the Internet, or to the network-connected surface126directly. In some embodiments, the objects may connect to network-connected surface126by virtue of being placed on the network-connected surface. In some embodiments, a mobile application associated with network-connected surface126may form a network of which all the objects placed on network-connected surface126are a part. The application may be used to connect the network-connected surface126so that the media guidance application may access the identifiers of each object. The application may also be used to provide a user access to the digital identifiers of each piece of furniture placed on network-connected surface126. From the connection, the media guidance application may receive the identifiers of each object placed on the network-connected surface126. In some embodiments, the network-connected surface may also have Bluetooth or wireless capabilities that may allow network-connected surface126to be connected to the media guidance application directly, or via the Internet. In some examples, network-connected surface126may be connected to the media guidance application using a wired connection. Each identifier received at the media guidance application may include a type of the object that may indicate whether the object is a chair, a table, or a lamp, etc. The identifiers may also indicate an orientation and location of the object on network-connected surface126. Using these identifiers of each object placed on the surface, the media guidance application may keep track of the objects present on the network-connected surface. Layout100(a) ofFIG.1depicts a first positioning of each piece of furniture in the room. The media guidance application determines a first positioning of each object of the plurality of inanimate objects from the plurality of object identifiers. For example, once a connection between network-connected surface126and the media guidance application is established, the media guidance application may access each object identifier. The media guidance application may be able to associate a location with each identified object identifier. The location may be stored with respect to the dimensions of network-connected surface126. The media guidance application may save a placement and orientation of each object placed on the network-connected surface. InFIG.1, in some embodiments, the position of each piece of furniture inFIG.1may be determined using a coordinate system with the origin placed at either corner of the room. In such examples, the position of the object may be the coordinates of the center of the object based on the coordinate system established. In such embodiments, the origin of the coordinate system may be placed at the bottom right, and the coordinates of the center of the couch may be determined as (5,1) which may be interpreted as the location of couch104. The position of each piece of furniture may be similarly calculated. In some embodiments, there may be other methods of location determination based on which a position of each piece of furniture placed on network-connected surface126may be determined. 
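By way of illustration only, the baseline (first) positioning described above, using a coordinate system with its origin at a corner of the room and the center of each object as its position (as in the couch-at-(5, 1) example), might be recorded as follows. The dictionary layout is an assumption.

```python
# Sketch of recording a baseline positioning with a corner-origin coordinate system.

def record_first_positioning(identifiers):
    """Map object_id -> (center coordinates, orientation in degrees)."""
    return {obj["object_id"]: (tuple(obj["location"]), obj["orientation_deg"])
            for obj in identifiers}


# Origin placed at one corner of the room; each position is the object's center.
baseline = record_first_positioning([
    {"object_id": "couch-104", "location": (5, 1), "orientation_deg": 0},
    {"object_id": "dining-table-106", "location": (8, 6), "orientation_deg": 0},
    {"object_id": "chair-108", "location": (8, 5), "orientation_deg": 0},
])
print(baseline["couch-104"])    # ((5, 1), 0)
```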
The media guidance application keeps a record of the location of the objects placed on network-connected surface126to determine if there is a change in the position of any of the objects. The media guidance application detects one or more changes in the plurality of object identifiers, where the one or more changes correspond to one or more changes in positioning from the first positioning. Layout100(b) and100(c) ofFIG.1shows an exemplary embodiment of a different layout of furniture in a room over network-connected surface126for the same room depicted in layout100(a). In layout100(b), the user may move a chair128to sit on it and have dinner. Layout100(b) also shows chairs108and110moved away from dining table106to face the display device102. Layout100(c) ofFIG.1shows another exemplary embodiment of some furniture being moved around in the room over network-connected surface126. In layout100(c), the user may move around a lot of furniture in the room to set up for a party. Using the identifiers associated with each object placed on the surface, the media guidance application may update the location of each object as it is moved around over network-connected surface126. The updated locations of the physically inanimate objects placed on network-connected surface126may be compared to the previously stored locations by the media guidance application to determine whether there was a change in the location of any of the objects placed on network-connected surface126. The media guidance application may determine a new pattern of the arrangement from the updated locations of the objects placed on network-connected surface126. Based on detecting the one or more changes, the media guidance application determines a second positioning of each object of the plurality of physically inanimate objects from the plurality of object identifiers. For example, the media guidance application may determine the updated positions of the furniture placed on the network-connected surface. The media guidance application may keep track of each piece of furniture and note the position of whichever of the pieces of furniture were moved. In some embodiments, the media guidance application determines a type of each physically inanimate object of the plurality of physically inanimate objects placed on the network-connected surface. The media guidance application determines each type of furniture placed on the network-connected surface. As shown inFIG.1, the different types of furniture shown in layouts100(a),100(b), and100(c) include a couch104, a coffee table114, a dining table106with chairs108and110, a lamp124, high chair112placed on a network-connected surface126. The different types of furniture shown inFIG.1are for illustrative purposes only. In some embodiments, many more different kinds of furniture may be placed on network-connected surface126. Based on the different types of furniture determined, the media guidance application groups the plurality of physically inanimate objects into different groups. In some embodiments, the media guidance application may be configured to group objects placed on network-connected surface126by particular criteria. In some embodiments, the user may ask the groupings to be performed based on functionality. For example, the media guidance application may group all objects users may sit on like chairs (108,110,118,116,120) of a first type (i.e., dining chair) and couch104in one group. Additionally, the coffee table114may be grouped together with dining table106. 
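A minimal sketch, under assumed names, of the change detection and grouping described above follows: updated poses are compared with the stored baseline, and objects are grouped by whether a user sits on them. The grouping rule and type names are illustrative assumptions.

```python
# Sketch of detecting positioning changes and grouping objects by function.

def detect_changes(baseline, updated):
    """Return the ids of objects whose location or orientation changed."""
    return [oid for oid, pose in updated.items() if baseline.get(oid) != pose]


def group_by_function(object_types, seating=("chair", "couch", "high chair")):
    groups = {"seating": [], "other": []}
    for oid, otype in object_types.items():
        groups["seating" if otype in seating else "other"].append(oid)
    return groups


baseline = {"chair-108": ((8, 5), 0), "coffee-table-114": ((4, 2), 0)}
updated = {"chair-108": ((6, 3), 180), "coffee-table-114": ((4, 2), 0)}
print(detect_changes(baseline, updated))   # ['chair-108']
print(group_by_function({"chair-108": "chair", "coffee-table-114": "table",
                         "couch-104": "couch"}))
```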
In some embodiments, the user may ask the furniture to be grouped separately by type of object, in which case couch104may be grouped separately from the chairs and the coffee table114may be grouped separately from dining table106. Other objects like lamp124and high chair112may be grouped individually. In some embodiments, the user may be able to select which pieces of furniture the user would like to be grouped together in a mobile application associated with the media guidance application. The media guidance application also keeps track of an orientation of a given object of the physically inanimate objects in addition to the location of the object. For example, the media guidance application, along with determining that a location of an object on the network-connected surface has been changed, may also determine whether the orientation of an object has been changed. In some embodiments, detecting an orientation of an object placed in a region is further disclosed in Geller et al., U.S. Pat. No. 9,864,440, granted on Jan. 9, 2018, the disclosure of which is hereby incorporated herein in its entirety. Geller describes using a plurality of transmitters attached on different parts of an object and determining a distance of each transmitter from a particular sensor to map the object and determine an orientation of the object. A sensor similar to the one in Geller may be installed along with network-connected surface126that may be used to detect transmitter signals from a variety of transmitters installed on each object placed on network-connected surface126. In some embodiments, different sensors may be used to differentiate between the front and back of objects, such that they may be facing in a particular direction when being used by a user. The media guidance application may use the information from the transmitters and receivers to determine what the furniture is facing. For example, in terms of a chair or a sofa, the media guidance application may determine whether the chair or sofa is now facing a particular direction. The direction may be marked in terms of degrees from a baseline on network-connected surface126. The media guidance application determines from the type of the given object, whether the change in orientation will affect a direction a person would face when using the given object. For example, if the orientation of coffee table114is changed, that does not affect direction a person would face, because the coffee table is not something a person may sit on. Similarly, the orientation of any of chairs108or110, if modified, would change the direction a user would likely face when the user would sit on it. In response to determining that the change in orientation will not affect a direction a person would face when using the given object, the media guidance application ignores the change in orientation. For example, a change in orientation of the coffee table, end table, lamp, or any other object that the user may not sit on, will be ignored by the media guidance application. In some embodiments, in response to determining that the change in orientation will affect a direction a person would face when using the given object, the media guidance application determines that given object faces a display device, where the given object did not face the display device before the detected change in orientation was detected. 
For example, the media guidance application may determine that a change in orientation of chair 108 in a room may affect the direction in which the user sitting on the chair will face. In this example, when the orientation of chair 108 is changed (from layout 100(a) to layout 100(b) of FIG. 1), the direction the user faces while sitting on the chair is modified. In this example, upon determining that the orientation of the chair has changed, the media guidance application determines whether the new orientation of the chair faces the display device 102 in the vicinity of network-connected surface 126. The media guidance application may also determine whether the previous orientation of chair 108 did not face display device 102. In some embodiments, the media guidance application may determine that chair 108 faces the display device 102 by determining whether the front of chair 108 is oriented towards the front of display device 102. In some embodiments, it may not be necessary that the front of chair 108 be parallel to the front of display device 102. The front face of chair 108 may be placed at an angle within a threshold angle of the front face of display device 102. For example, chair 108 may be placed at an angle of 45 degrees with respect to the front face of display device 102. In such embodiments, the threshold angle permissible may be 50 degrees. In cases where the angle between the front face of chair 108 and the front face of display device 102 is more than 50 degrees, the media guidance application may determine that chair 108 is not facing display device 102. In response to determining that the given object now faces the display device, the media guidance application generates for display the content recommendation on the display device. For example, the media guidance application may infer from the change in orientation of the chair toward the display device that the user is getting ready to watch a media asset on the display device, and may instruct the display device to display the recommended content. In some embodiments, further in response to determining that the given object faces a display device, where the given object did not face the display device before the detected change in orientation was detected, the media guidance application determines whether the given object faced a different display device prior to the detected change in orientation. For example, there may be a second display device (not shown) in addition to display device 102 in the vicinity of network-connected surface 126. In this example, a room may have two televisions on two different walls, or it may have a monitor connected to a computer and a television. The media guidance application may detect that chair 108, which was initially facing a first display device (i.e., the monitor), is now facing display device 102 as a result of the change in orientation. While chair 108 was facing the first display device, the media guidance application was displaying the recommendation of the content on the first display device. In response to determining that the given object faced the different display device prior to the detected change in orientation, the media guidance application commands the different display device to cease generating for display the recommendation.
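A minimal sketch of the two orientation checks described above follows: orientation changes are ignored for objects a person does not sit on, and a seat is treated as facing the display when the angle between the two front faces is within a threshold (50 degrees in the example). The type names and angle convention are assumptions.

```python
# Sketch of the orientation relevance filter and the facing-angle threshold test.

SEATING_TYPES = {"chair", "couch", "high chair"}


def orientation_change_relevant(object_type):
    # Turning a coffee table or a lamp does not change the direction a person faces.
    return object_type in SEATING_TYPES


def faces_display(seat_front_deg, display_front_deg, threshold_deg=50):
    diff = abs(seat_front_deg - display_front_deg) % 360
    diff = min(diff, 360 - diff)
    return diff <= threshold_deg


print(orientation_change_relevant("coffee table"))   # False: the change is ignored
print(faces_display(45, 0))                          # True: within the 50-degree threshold
print(faces_display(90, 0))                          # False: more than 50 degrees
```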
For example, now that the media guidance application has determined that the orientation of chair 108 has moved away from the first display device to face display device 102, the media guidance application instructs the first display device to stop displaying the recommendation for the content. Once a change in the positioning of each object on network-connected surface 126 is determined, the media guidance application may use the arrangement of furniture objects to determine media attributes associated with the environment created by the user. The arrangement of the furniture may be compared to a database that includes various templates of furniture arrangement. The media guidance application compares attributes of the second positioning of the plurality of physically inanimate objects to attributes of each template of a plurality of templates, where each template corresponds to a different possible positioning of the plurality of physically inanimate objects. For example, once the media guidance application has recorded the positioning of each piece of furniture placed on network-connected surface 126 in layout 100(b) or layout 100(c), the media guidance application compares the layout of the furniture in the room to each entry in a database comprising a plurality of furniture layouts. Each furniture layout template may correspond to a particular scenario. The arrangement of the furniture on the network-connected surface may be compared with the templates to determine which template is closest to the furniture arrangement on network-connected surface 126. Based on the comparison, the media guidance application may infer an environment being created by the user. The media guidance application determines, from the comparing, a first template from the plurality of templates to which the second positioning of the plurality of physically inanimate objects corresponds. For example, from the comparison between the layout of the furniture on network-connected surface 126 in layout 100(b) and layout 100(c) to the various templates, the media guidance application may determine that the furniture layout in layout 100(b) resembles a family evening, with chairs 108 and 110 facing display device 102 and high chair 112 facing display device 102. In some embodiments, the plurality of templates may be specified by the user at the media guidance application. The user may program, on the media guidance application, common scenarios that are created within the room with furniture placed on network-connected surface 126. In such cases, the media guidance application may compare layout 100(b) and layout 100(c) to the layouts specified by the user. Layout 100(c) is an exemplary layout for a party. Dining table 106 has been removed and chairs 116-122 are arranged in a horseshoe manner around the coffee table 114. In some examples, the user may program layout 100(c) to be similar to a party layout. In some embodiments, the media guidance application may access a profile associated with the user to determine supplementary information relating to what sort of party the user may be hosting. For example, the media guidance application may access the user's calendar associated with the profile and determine that this may be a watch party for the Super Bowl, or the Academy Awards, or ‘Game of Thrones’. In some embodiments, the media guidance application may also determine that it is a birthday party hosted by the user.
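By way of illustration only, the comparison of a detected positioning against stored layout templates might be sketched as follows. The similarity measure (counting objects whose location matches a template within a tolerance) is an assumption, since the text does not specify how closeness to a template is computed.

```python
# Hypothetical template-matching sketch: pick the stored layout template whose
# object positions best match the current positioning.

def similarity(positioning, template, tol=1.0):
    score = 0
    for oid, (loc, _) in positioning.items():
        t = template.get(oid)
        if t and abs(loc[0] - t[0][0]) <= tol and abs(loc[1] - t[0][1]) <= tol:
            score += 1
    return score


def best_template(positioning, templates):
    return max(templates, key=lambda name: similarity(positioning, templates[name]))


templates = {
    "family evening": {"chair-108": ((6, 3), 180), "chair-110": ((7, 3), 180)},
    "party":          {"chair-116": ((3, 2), 90),  "chair-118": ((3, 4), 90)},
}
current = {"chair-108": ((6.2, 3.1), 180), "chair-110": ((7.1, 2.9), 180)}
print(best_template(current, templates))   # family evening
```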
In some examples, the user may program different layouts for each party that the user is expected to host in the media guidance application, which will help the media guidance application discern which template is being accessed. The change in layout may include bringing in new furniture (not shown) in layouts100(b) or100(c). Based on the determined template layout of furniture, the media guidance application may determine media attributes associated with the layout. The media guidance application determines a set of media attributes corresponding to the first template by comparing the first template to entries of a database that each correspond a respective template of the plurality of templates to a respective set of media attributes. For example, in case the detected furniture layout resembles a Super Bowl party, the media guidance application may determine that the media attributes associated with the layout may be ‘sports’, ‘NFL’, ‘football’, and ‘lombardi trophy.’ In the example where the furniture layout resembles a birthday party, there may be no media attributes, to indicate that the user is not interested in any television program, or the attributes associated may be ‘birthday’, ‘celebration’, and ‘party.’ Based on the determined media attributes, the media guidance application generates a content recommendation based on the first set of media attributes. For example, in the case the media attributes are ‘sports’, ‘NFL’, ‘football’, and ‘lombardi trophy’, the media guidance application may recommend the Super Bowl pregame show, the Super Bowl, the halftime show etc. In the case that the media attributes are birthday’, ‘celebration’, and ‘party’, the media guidance application may recommend party music playlists and media assets related to birthday like ‘13 going on 30’, ‘Harry Potter’, or ‘Toy Story’, for example. In case that there are no media attributes associated with a template, the media guidance application may not recommend any media assets. The media assets recommended also take into account user preferences recorded in the profile associated with the user. The user profile may include media assets that the user may regularly watch at a certain time that may be recommended by the media asset. In some embodiments, the media guidance application detects whether a display device is present in a vicinity of the network-connected surface. For example, the media guidance application that is connected to the network-connected surface determines whether a display device is present in the same room as the network-connected surface. In some examples, the media guidance application may be connected to the network-connected surface and the display device over a network, like the Internet, for example. In response to detecting that the display device102is present in the vicinity of the network-connected surface126, the media guidance application transmits a command to display device102to display the content recommendation on display device102. So, in some examples, if the media guidance application determines the presence of a display device in the same room as the network-connected surface, the media guidance application may display the content recommendation of the Super Bowl to the user. In some embodiments, the media guidance application detects a placement of an additional physically inanimate object on the network-connected surface. 
In some examples, the additional physically inanimate object may be another object of one of the types of objects already present on the network-connected surface. In layout100(b), high chair112may be an example of an object that was not present in layout of furniture100(a) but may be an additional object placed in layout of furniture100(b). High chair112may be part of a user's child's furniture that the user may bring in and place it on network-connected surface126in layout100(b). The presence of a new object (i.e., high chair112) may now modify the layout100(b). The media guidance application compares attributes of the third positioning of the plurality of physically inanimate objects to attributes of each template of a plurality of templates, where each template corresponds to a different possible positioning of the new physically inanimate object. For example, the location of high chair112placed on network-connected surface126is added to the layout of the objects present on network-connected surface126in layout100(b). The placement of high chair112may modify the layout of the furniture placed on network-connected surface126in layout100(b). In such an example, the media guidance application now compares this modified template to the plurality of templates present in the database to determine, from the comparing, a second template from the plurality of templates to which the third positioning of the new physically inanimate object corresponds. Based on the positioning of high chair112on network-connected surface126, the media guidance application may compare the updated furniture layout to the plurality of templates of furniture layouts. The media guidance application may also keep track of the number of objects placed on the network-connected surface. In some embodiments, the media guidance application determines a first number of physically inanimate objects placed on the network-connected surface. For example, the media guidance application may determine a count of the number of objects placed on the network-connected surface. The media guidance application may further divide the count of the number of objects placed on the surface based on type of object. For example, the media guidance application may determine that there are 10 different objects placed on the network-connected surface. The media guidance application, based on a type associated with each physically inanimate object, may determine a second number of physically inanimate objects that may be used for sitting. For example, of the 10 objects placed on the network-connected surface, the media guidance application may determine that there is one dining table, 6 chairs, two sofas and a coffee table. Layout100(c) depicts an increase in the number of chairs (116-122) in the room compared to layout100(a) and layout100(b). The media guidance application may determine, from the number of objects of various types, a scenario that the user is creating on the network-connected surface and, based on the number of objects, may update the content recommendation. The media guidance application compares the second number of physically inanimate objects to a threshold, and based on the comparison, updates the first content recommendation to an updated content recommendation. For example, the media guidance application may determine that there are 16 chairs instead of 6 chairs placed on the network-connected surface. 
Based on the increase in the number of chairs above a particular threshold of 8 chairs, the media guidance application may determine that the user has scheduled a viewing party and may recommend content appropriate for viewing parties, like the Super Bowl, the Oscars, ‘Game of Thrones’, etc. In some embodiments, the media guidance application determines an additional set of media attributes associated with the second template by comparing the second template to entries of a database that each correlate a respective template of the plurality of templates to a respective set of media attributes. For example, the media guidance application may use the updated template to determine additional media attributes that are associated with the updated template. In such examples, the media guidance application may determine that the number of chairs placed on the network-connected surface has increased from 6 to 16. Based on the increase in the number of chairs placed on the network-connected surface, the media guidance application updates the template layout of furniture that the layout of the furniture on the network-connected surface corresponds to. This updated template may be associated with a different set of media attributes in the database. For example, because the number of chairs placed on the network-connected surface has increased to indicate a viewing party, the media attributes associated with any furniture layout may include attributes, like ‘NFL’, ‘football’, ‘Oscars’, etc. Before the network-connected surface detected the presence of the high chair, the media guidance application may have determined that the media attributes of the previous template may be ‘action’, ‘thriller’, or ‘romance’. The media guidance application determines whether there is a conflict between the set of media attributes and the additional set of media attributes, and in response to determining that there is no conflict between the set of media attributes and the additional set of media attributes, the media guidance application updates the set of media attributes corresponding to the first template to include the additional set of media attributes corresponding to the second template, and updates the content recommendation to an updated content recommendation based on the updated media attributes. For example, the media guidance application may determine that the additional media attributes like ‘NFL’, ‘football’, ‘Oscars’, etc., do not conflict with the previous media attributes of ‘action’ and ‘thriller’ and therefore, the media guidance application may update the set of media attributes to include the additional media attributes of ‘thriller’ and ‘action.’ Based on this update, the media guidance application may recommend the Super Bowl to the user instead of the previous recommendation of a movie. In some cases, the media attributes of the updated layout may not conflict with the media attributes of the previous layout. In some embodiments, in response to determining that there is a conflict between the set of media attributes and the additional set of media attributes, the media guidance application refrains from updating the content recommendation to the updated content recommendation. 
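A minimal sketch, under assumed names, of the seat-count threshold and the attribute-conflict check described above follows. The specific conflict rule (an explicit list of conflicting attribute pairs) is an assumption; the text only says the application determines whether the two sets of media attributes conflict.

```python
# Sketch of counting seats against a threshold and merging media attributes
# unless they conflict.

SEAT_TYPES = {"chair", "couch", "high chair"}
CONFLICTS = {("NFL", "romance"), ("romance", "NFL")}   # illustrative only


def count_seats(object_types):
    return sum(1 for otype in object_types.values() if otype in SEAT_TYPES)


def update_recommendation(current_attrs, new_attrs, seat_count, threshold=8):
    if seat_count > threshold:
        new_attrs = new_attrs | {"viewing party"}
    if any((a, b) in CONFLICTS for a in current_attrs for b in new_attrs):
        return current_attrs            # refrain from updating on conflict
    return current_attrs | new_attrs    # otherwise merge and re-recommend


objects = {f"chair-{i}": "chair" for i in range(16)}
attrs = update_recommendation({"action", "thriller"}, {"NFL", "football"},
                              count_seats(objects))
print(sorted(attrs))                                                     # merged attributes
print(sorted(update_recommendation({"romance"}, {"NFL"}, seat_count=4)))  # ['romance']: refrained
```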
For example, the media guidance application may determine that the attributes indicated by the updated layout conflict with the attributes of the previous layout, namely, the attribute of ‘NFL’ conflicts with the attribute of ‘romance’ and based on the fact the media attributes conflict, the media guidance application may not update the content recommendation and may just ignore the update to the furniture layout. In some embodiments, further in response to determining a conflict between the set of media attributes and the additional set of media attributes, the media guidance application generates a second content recommendation based on the additional set of media attributes on a secondary display device, for example, when the media guidance application determines that there is present in the room with the network-connected device a secondary device. Based on this determination, the media guidance application may generate the second content recommendation of the Super Bowl on the secondary device. The amount of content available to users in any given content delivery system can be substantial. Consequently, many users desire a form of media guidance through an interface that allows users to efficiently navigate content selections and easily identify content that they may desire. An application that provides such guidance is referred to herein as an interactive media guidance application or, sometimes, a media guidance application or a guidance application. Interactive media guidance applications may take various forms depending on the content for which they provide guidance. One typical type of media guidance application is an interactive television program guide. Interactive television program guides (sometimes referred to as electronic program guides) are well-known guidance applications that, among other things, allow users to navigate among and locate many types of content or media assets. Interactive media guidance applications may generate graphical user interface screens that enable a user to navigate among, locate and select content. As referred to herein, the terms “media asset” and “content” should be understood to mean an electronically consumable user asset, such as television programming, as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, Webcasts, etc.), video clips, audio, content information, pictures, rotating images, documents, playlists, websites, articles, books, electronic books, blogs, chat sessions, social media, applications, games, and/or any other media or multimedia and/or combination of the same. Guidance applications also allow users to navigate among and locate content. As referred to herein, the term “multimedia” should be understood to mean content that utilizes at least two different content forms described above, for example, text, audio, images, video, or interactivity content forms. Content may be recorded, played, displayed or accessed by user equipment devices, but can also be part of a live performance. The media guidance application and/or any instructions for performing any of the embodiments discussed herein may be encoded on computer readable media. Computer readable media includes any media capable of storing data. 
The computer readable media may be transitory, including, but not limited to, propagating electrical or electromagnetic signals, or may be non-transitory including, but not limited to, volatile and non-volatile computer memory or storage devices such as a hard disk, floppy disk, USB drive, DVD, CD, media cards, register memory, processor caches, Random Access Memory (“RAM”), etc. With the advent of the Internet, mobile computing, and high-speed wireless networks, users are accessing media on user equipment devices on which they traditionally did not. As referred to herein, the phrase “user equipment device,” “user equipment,” “user device,” “electronic device,” “electronic equipment,” “media equipment device,” or “media device” should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a hand-held computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smart phone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. In some embodiments, the user equipment device may have a front facing screen and a rear facing screen, multiple front screens, or multiple angled screens. In some embodiments, the user equipment device may have a front facing camera and/or a rear facing camera. On these user equipment devices, users may be able to navigate among and locate the same content available through a television. Consequently, media guidance may be available on these devices, as well. The guidance provided may be for content available only through a television, for content available only through one or more of other types of user equipment devices, or for content available both through a television and one or more of the other types of user equipment devices. The media guidance applications may be provided as on-line applications (i.e., provided on a web-site), or as stand-alone applications or clients on user equipment devices. Various devices and platforms that may implement media guidance applications are described in more detail below. One of the functions of the media guidance application is to provide media guidance data to users. As referred to herein, the phrase “media guidance data” or “guidance data” should be understood to mean any data related to content or data used in operating the guidance application. 
For example, the guidance data may include program information, guidance application settings, user preferences, user profile information, media listings, media-related information (e.g., broadcast times, broadcast channels, titles, descriptions, ratings information (e.g., parental control ratings, critic's ratings, etc.), genre or category information, actor information, logo data for broadcasters' or providers' logos, etc.), media format (e.g., standard definition, high definition, 3D, etc.), on-demand information, blogs, websites, and any other type of guidance data that is helpful for a user to navigate among and locate desired content selections. FIGS.2-3show illustrative display screens that may be used to provide media guidance data. The display screens shown inFIGS.2-3may be implemented on any suitable user equipment device or platform. While the displays ofFIGS.2-3are illustrated as full screen displays, they may also be fully or partially overlaid over content being displayed. A user may indicate a desire to access content information by selecting a selectable option provided in a display screen (e.g., a menu option, a listings option, an icon, a hyperlink, etc.) or pressing a dedicated button (e.g., a GUIDE button) on a remote control or other user input interface or device. In response to the user's indication, the media guidance application may provide a display screen with media guidance data organized in one of several ways, such as by time and channel in a grid, by time, by channel, by source, by content type, by category (e.g., movies, sports, news, children, or other categories of programming), or other predefined, user-defined, or other organization criteria. FIG.2shows illustrative grid of a program listings display200arranged by time and channel that also enables access to different types of content in a single display. Display200may include grid202with: (1) a column of channel/content type identifiers204, where each channel/content type identifier (which is a cell in the column) identifies a different channel or content type available; and (2) a row of time identifiers206, where each time identifier (which is a cell in the row) identifies a time block of programming. Grid202also includes cells of program listings, such as program listing208, where each listing provides the title of the program provided on the listing's associated channel and time. With a user input device, a user can select program listings by moving highlight region210. Information relating to the program listing selected by highlight region210may be provided in program information region212. Region212may include, for example, the program title, the program description, the time the program is provided (if applicable), the channel the program is on (if applicable), the program's rating, and other desired information. In addition to providing access to linear programming (e.g., content that is scheduled to be transmitted to a plurality of user equipment devices at a predetermined time and is provided according to a schedule), the media guidance application also provides access to non-linear programming (e.g., content accessible to a user equipment device at any time and is not provided according to a schedule). 
Non-linear programming may include content from different content sources including on-demand content (e.g., VOD), Internet content (e.g., streaming media, downloadable media, etc.), locally stored content (e.g., content stored on any user equipment device described above or other storage device), or other time-independent content. On-demand content may include movies or any other content provided by a particular content provider (e.g., HBO On Demand providing “The Sopranos” and “Curb Your Enthusiasm”). HBO ON DEMAND is a service mark owned by Time Warner Company L.P. et al. and THE SOPRANOS and CURB YOUR ENTHUSIASM are trademarks owned by the Home Box Office, Inc. Internet content may include web events, such as a chat session or Webcast, or content available on-demand as streaming content or downloadable content through an Internet web site or other Internet access (e.g. FTP). Grid202may provide media guidance data for non-linear programming including on-demand listing214, recorded content listing216, and Internet content listing218. A display combining media guidance data for content from different types of content sources is sometimes referred to as a “mixed-media” display. Various permutations of the types of media guidance data that may be displayed that are different than display200may be based on user selection or guidance application definition (e.g., a display of only recorded and broadcast listings, only on-demand and broadcast listings, etc.). As illustrated, listings214,216, and218are shown as spanning the entire time block displayed in grid202to indicate that selection of these listings may provide access to a display dedicated to on-demand listings, recorded listings, or Internet listings, respectively. In some embodiments, listings for these content types may be included directly in grid202. Additional media guidance data may be displayed in response to the user selecting one of the navigational icons220. (Pressing an arrow key on a user input device may affect the display in a similar manner as selecting navigational icons220.) Display200may also include video region222, and options region226. Video region222may allow the user to view and/or preview programs that are currently available, will be available, or were available to the user. The content of video region222may correspond to, or be independent from, one of the listings displayed in grid202. Grid displays including a video region are sometimes referred to as picture-in-guide (PIG) displays. PIG displays and their functionalities are described in greater detail in Satterfield et al. U.S. Pat. No. 6,564,378, issued May 13, 2003 and Yuen et al. U.S. Pat. No. 6,239,794, issued May 29, 2001, which are hereby incorporated by reference herein in their entireties. PIG displays may be included in other media guidance application display screens of the embodiments described herein. Options region226may allow the user to access different types of content, media guidance application displays, and/or media guidance application features. Options region226may be part of display200(and other display screens described herein), or may be invoked by a user by selecting an on-screen option or pressing a dedicated or assignable button on a user input device. The selectable options within options region226may concern features related to program listings in grid202or may include options available from a main menu display. 
Features related to program listings may include searching for other air times or ways of receiving a program, recording a program, enabling series recording of a program, setting program and/or channel as a favorite, purchasing a program, or other features. Options available from a main menu display may include search options, VOD options, parental control options, Internet options, cloud-based options, device synchronization options, second screen device options, options to access various types of media guidance data displays, options to subscribe to a premium service, options to edit a user's profile, options to access a browse overlay, or other options. The media guidance application may be personalized based on a user's preferences. A personalized media guidance application allows a user to customize displays and features to create a personalized “experience” with the media guidance application. This personalized experience may be created by allowing a user to input these customizations and/or by the media guidance application monitoring user activity to determine various user preferences. Users may access their personalized guidance application by logging in or otherwise identifying themselves to the guidance application. Customization of the media guidance application may be made in accordance with a user profile. The customizations may include varying presentation schemes (e.g., color scheme of displays, font size of text, etc.), aspects of content listings displayed (e.g., only HDTV or only 3D programming, user-specified broadcast channels based on favorite channel selections, re-ordering the display of channels, recommended content, etc.), desired recording features (e.g., recording or series recordings for particular users, recording quality, etc.), parental control settings, customized presentation of Internet content (e.g., presentation of social media content, e-mail, electronically delivered articles, etc.) and other desired customizations. The media guidance application may allow a user to provide user profile information or may automatically compile user profile information. The media guidance application may, for example, monitor the content the user accesses and/or other interactions the user may have with the guidance application. Additionally, the media guidance application may obtain all or part of other user profiles that are related to a particular user (e.g., from other web sites on the Internet the user accesses, such as www.Tivo.com, from other media guidance applications the user accesses, from other interactive applications the user accesses, from another user equipment device of the user, etc.), and/or obtain information about the user from other sources that the media guidance application may access. As a result, a user can be provided with a unified guidance application experience across the user's different user equipment devices. This type of user experience is described in greater detail below in connection withFIG.5. Additional personalized media guidance application features are described in greater detail in Ellis et al., U.S. Patent Application Publication No. 2005/0251827, filed Jul. 11, 2005, Boyer et al., U.S. Pat. No. 7,165,098, issued Jan. 16, 2007, and Ellis et al., U.S. Patent Application Publication No. 2002/0174430, filed Feb. 21, 2002, which are hereby incorporated by reference herein in their entireties. Another display arrangement for providing media guidance is shown inFIG.3. 
Video mosaic display300includes selectable options302for content information organized based on content type, genre, and/or other organization criteria. In display300, television listings option304is selected, thus providing listings306,308,310, and312as broadcast program listings. In display300the listings may provide graphical images including cover art, still images from the content, video clip previews, live video from the content, or other types of content that indicate to a user the content being described by the media guidance data in the listing. Each of the graphical listings may also be accompanied by text to provide further information about the content associated with the listing. For example, listing308may include more than one portion, including media portion314and text portion316. Media portion314and/or text portion316may be selectable to view content in full-screen or to view information related to the content displayed in media portion314(e.g., to view listings for the channel that the video is displayed on). The listings in display300are of different sizes (i.e., listing306is larger than listings308,310, and312), but if desired, all the listings may be the same size. Listings may be of different sizes or graphically accentuated to indicate degrees of interest to the user or to emphasize certain content, as desired by the content provider or based on user preferences. Various systems and methods for graphically accentuating content listings are discussed in, for example, Yates, U.S. Patent Application Publication No. 2010/0153885, filed Nov. 12, 2009, which is hereby incorporated by reference herein in its entirety. Users may access content and the media guidance application (and its display screens described above and below) from one or more of their user equipment devices.FIG.4shows a generalized embodiment of illustrative user equipment device400. More specific implementations of user equipment devices are discussed below in connection withFIG.5. User equipment device400may receive content and data via input/output (hereinafter “I/O”) path402. I/O path402may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry404, which includes processing circuitry406and storage408. Control circuitry404may be used to send and receive commands, requests, and other suitable data using I/O path402. I/O path402may connect control circuitry404(and specifically processing circuitry406) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path inFIG.4to avoid overcomplicating the drawing. Control circuitry404may be based on any suitable processing circuitry such as processing circuitry406. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. 
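As a rough illustration of how the pieces of user equipment device 400 fit together (an I/O path feeding control circuitry that combines processing circuitry and storage), consider the hedged sketch below. The class names and the simple dispatch structure are assumptions for illustration only and are not the circuitry described above.

```python
# Hypothetical stand-ins for the components of user equipment device 400.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Storage:
    """Stands in for storage: guidance data plus application instructions."""
    guidance_data: Dict[str, Any] = field(default_factory=dict)
    app_instructions: List[str] = field(default_factory=list)

class ProcessingCircuitry:
    """Stands in for processing circuitry: executes instruction callables."""
    def execute(self, instruction: Callable[[], Any]) -> Any:
        return instruction()

@dataclass
class ControlCircuitry:
    """Stands in for control circuitry: couples processing and storage."""
    processor: ProcessingCircuitry
    storage: Storage

    def handle_io(self, content: str, data: Dict[str, Any]) -> str:
        """Model of content and data arriving over an I/O path."""
        self.storage.guidance_data.update(data)
        # A guidance application would normally decide what display to build here.
        return self.processor.execute(lambda: f"display for: {content}")

# Usage: push one item of content and guidance data through the device model.
device = ControlCircuitry(ProcessingCircuitry(), Storage())
print(device.handle_io("broadcast programming", {"channel": "4 NBC"}))
```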
In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry404executes instructions for a media guidance application stored in memory (i.e., storage408). Specifically, control circuitry404may be instructed by the media guidance application to perform the functions discussed above and below. For example, the media guidance application may provide instructions to control circuitry404to generate the media guidance displays. In some implementations, any action performed by control circuitry404may be based on instructions received from the media guidance application. In client-server based embodiments, control circuitry404may include communications circuitry suitable for communicating with a guidance application server or other networks or servers. The instructions for carrying out the above mentioned functionality may be stored on the guidance application server. Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, Ethernet card, or a wireless modem for communications with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communications networks or paths (which is described in more detail in connection withFIG.5). In addition, communications circuitry may include circuitry that enables peer-to-peer communication of user equipment devices, or communication of user equipment devices in locations remote from each other (described in more detail below). Memory may be an electronic storage device provided as storage408that is part of control circuitry404. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage408may be used to store various types of content described herein as well as media guidance data described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage, described in relation toFIG.5, may be used to supplement storage408or instead of storage408. Control circuitry404may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry404may also include scaler circuitry for upconverting and downconverting content into the preferred output format of the user equipment400. 
Circuitry404may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by the user equipment device to receive and to display, to play, or to record content. The tuning and encoding circuitry may also be used to receive guidance data. The circuitry described herein, including for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.). If storage408is provided as a separate device from user equipment400, the tuning and encoding circuitry (including multiple tuners) may be associated with storage408. A user may send instructions to control circuitry404using user input interface410. User input interface410may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display412may be provided as a stand-alone device or integrated with other elements of user equipment device400. For example, display412may be a touchscreen or touch-sensitive display. In such circumstances, user input interface410may be integrated with or combined with display412. Display412may be one or more of a monitor, a television, a liquid crystal display (LCD) for a mobile device, amorphous silicon display, low temperature poly silicon display, electronic ink display, electrophoretic display, active matrix display, electro-wetting display, electrofluidic display, cathode ray tube display, light-emitting diode display, electroluminescent display, plasma display panel, high-performance addressing display, thin-film transistor display, organic light-emitting diode display, surface-conduction electron-emitter display (SED), laser television, carbon nanotubes, quantum dot display, interferometric modulator display, or any other suitable equipment for displaying visual images. In some embodiments, display412may be HDTV-capable. In some embodiments, display412may be a 3D display, and the interactive media guidance application and any suitable content may be displayed in 3D. A video card or graphics card may generate the output to the display412. The video card may offer various functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors. The video card may be any processing circuitry described above in relation to control circuitry404. The video card may be integrated with the control circuitry404. Speakers414may be provided as integrated with other elements of user equipment device400or may be stand-alone units. The audio component of videos and other content displayed on display412may be played through speakers414. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers414. The guidance application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly-implemented on user equipment device400. 
In such an approach, instructions of the application are stored locally (e.g., in storage408), and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). Control circuitry404may retrieve instructions of the application from storage408and process the instructions to generate any of the displays discussed herein. Based on the processed instructions, control circuitry404may determine what action to perform when input is received from input interface410. For example, movement of a cursor on a display up/down may be indicated by the processed instructions when input interface410indicates that an up/down button was selected. In some embodiments, the media guidance application is a client-server based application. Data for use by a thick or thin client implemented on user equipment device400is retrieved on-demand by issuing requests to a server remote to the user equipment device400. In one example of a client-server based guidance application, control circuitry404runs a web browser that interprets web pages provided by a remote server. For example, the remote server may store the instructions for the application in a storage device. The remote server may process the stored instructions using circuitry (e.g., control circuitry404) and generate the displays discussed above and below. The client device may receive the displays generated by the remote server and may display the content of the displays locally on equipment device400. This way, the processing of the instructions is performed remotely by the server while the resulting displays are provided locally on equipment device400. Equipment device400may receive inputs from the user via input interface410and transmit those inputs to the remote server for processing and generating the corresponding displays. For example, equipment device400may transmit a communication to the remote server indicating that an up/down button was selected via input interface410. The remote server may process instructions in accordance with that input and generate a display of the application corresponding to the input (e.g., a display that moves a cursor up/down). The generated display is then transmitted to equipment device400for presentation to the user. In some embodiments, the media guidance application is downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry404). In some embodiments, the guidance application may be encoded in the ETV Binary Interchange Format (EBIF), received by control circuitry404as part of a suitable feed, and interpreted by a user agent running on control circuitry404. For example, the guidance application may be an EBIF application. In some embodiments, the guidance application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry404. In some of such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the guidance application may be, for example, encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program. User equipment device400ofFIG.4can be implemented in system500ofFIG.5as user television equipment502, user computer equipment504, wireless user communications device506, or any other type of user equipment suitable for accessing content, such as a non-portable gaming machine. 
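The client-server approach described above, in which the equipment device forwards an input event and the remote server processes the stored instructions and returns a generated display, might look schematically like the sketch below. The function names, the event fields, and the in-process "round trip" are illustrative assumptions; no wire format is specified above.

```python
from typing import Dict

# Hypothetical server side: turn an input event into a generated display.
def server_generate_display(state: Dict[str, int], event: Dict[str, str]) -> Dict[str, object]:
    """Remote server processes the stored instructions for one input event."""
    if event.get("button") == "down":
        state["cursor_row"] += 1
    elif event.get("button") == "up":
        state["cursor_row"] = max(0, state["cursor_row"] - 1)
    # The "display" here is just a description the client can render locally.
    return {"cursor_row": state["cursor_row"], "screen": "program listings grid"}

# Hypothetical client side: transmit the event, render whatever comes back.
def client_press_button(state: Dict[str, int], button: str) -> str:
    event = {"button": button}                       # captured from the input interface
    display = server_generate_display(state, event)  # normally a network round trip
    return f"render {display['screen']} with cursor at row {display['cursor_row']}"

# Usage: simulate two remote-control presses.
server_state = {"cursor_row": 0}
print(client_press_button(server_state, "down"))
print(client_press_button(server_state, "down"))
```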
For simplicity, these devices may be referred to herein collectively as user equipment or user equipment devices, and may be substantially similar to user equipment devices described above. User equipment devices, on which a media guidance application may be implemented, may function as a standalone device or may be part of a network of devices. Various network configurations of devices may be implemented and are discussed in more detail below. A user equipment device utilizing at least some of the system features described above in connection withFIG.4may not be classified solely as user television equipment502, user computer equipment504, or a wireless user communications device506. For example, user television equipment502may, like some user computer equipment504, be Internet-enabled allowing for access to Internet content, while user computer equipment504may, like some television equipment502, include a tuner allowing for access to television programming. The media guidance application may have the same layout on various different types of user equipment or may be tailored to the display capabilities of the user equipment. For example, on user computer equipment504, the guidance application may be provided as a web site accessed by a web browser. In another example, the guidance application may be scaled down for wireless user communications devices506. In system500, there is typically more than one of each type of user equipment device but only one of each is shown inFIG.5to avoid overcomplicating the drawing. In addition, each user may utilize more than one type of user equipment device and also more than one of each type of user equipment device. In some embodiments, a user equipment device (e.g., user television equipment502, user computer equipment504, wireless user communications device506) may be referred to as a “second screen device.” For example, a second screen device may supplement content presented on a first user equipment device. The content presented on the second screen device may be any suitable content that supplements the content presented on the first device. In some embodiments, the second screen device provides an interface for adjusting settings and display preferences of the first device. In some embodiments, the second screen device is configured for interacting with other second screen devices or for interacting with a social network. The second screen device can be located in the same room as the first device, a different room from the first device but in the same house or building, or in a different building from the first device. The user may also set various settings to maintain consistent media guidance application settings across in-home devices and remote devices. Settings include those described herein, as well as channel and program favorites, programming preferences that the guidance application utilizes to make programming recommendations, display preferences, and other desirable guidance settings. For example, if a user sets a channel as a favorite on, for example, the web site www.Tivo.com on their personal computer at their office, the same channel would appear as a favorite on the user's in-home devices (e.g., user television equipment and user computer equipment) as well as the user's mobile devices, if desired. Therefore, changes made on one user equipment device can change the guidance experience on another user equipment device, regardless of whether they are the same or a different type of user equipment device. 
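One way to picture the cross-device consistency described above, such as a favorite channel set on an office computer appearing on in-home and mobile devices, is a profile store shared by all of a user's equipment. The sketch below is a simplified illustration under that assumption, not the guidance application's actual synchronization mechanism.

```python
from typing import Dict, Set

class GuidanceProfileStore:
    """Hypothetical shared store of per-user guidance settings."""
    def __init__(self) -> None:
        self._favorites: Dict[str, Set[str]] = {}

    def set_favorite(self, user: str, channel: str) -> None:
        self._favorites.setdefault(user, set()).add(channel)

    def favorites(self, user: str) -> Set[str]:
        return set(self._favorites.get(user, set()))

class UserEquipmentDevice:
    """Any device type (television, computer, wireless) reading shared settings."""
    def __init__(self, name: str, store: GuidanceProfileStore) -> None:
        self.name = name
        self.store = store

    def show_favorites(self, user: str) -> str:
        return f"{self.name}: favorites for {user} = {sorted(self.store.favorites(user))}"

# Usage: a favorite set from the office computer appears on the in-home television.
store = GuidanceProfileStore()
office_pc = UserEquipmentDevice("user computer equipment", store)
home_tv = UserEquipmentDevice("user television equipment", store)
store.set_favorite("alice", "4 NBC")          # set on one device...
print(home_tv.show_favorites("alice"))        # ...visible on another
```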
In addition, the changes made may be based on settings input by a user, as well as user activity monitored by the guidance application. The user equipment devices may be coupled to communications network514. Namely, user television equipment502, user computer equipment504, and wireless user communications device506are coupled to communications network514via communications paths508,510, and512, respectively. Communications network514may be one or more networks including the Internet, a mobile phone network, mobile voice or data network (e.g., a 4G or LTE network), cable network, public switched telephone network, or other types of communications network or combinations of communications networks. Paths508,510, and512may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. Path512is drawn with dotted lines to indicate that in the exemplary embodiment shown inFIG.5it is a wireless path, and paths508and510are drawn as solid lines to indicate they are wired paths (although these paths may be wireless paths, if desired). Communications with the user equipment devices may be provided by one or more of these communications paths, but are shown as a single path inFIG.5to avoid overcomplicating the drawing. Although communications paths are not drawn between user equipment devices, these devices may communicate directly with each other via communication paths, such as those described above in connection with paths508,510, and512, as well as other short-range point-to-point communication paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802.11x, etc.), or other short-range communication via wired or wireless paths. BLUETOOTH is a certification mark owned by Bluetooth SIG, INC. The user equipment devices may also communicate with each other through an indirect path via communications network514. System500includes content source516and media guidance data source518coupled to communications network514via communication paths520and522, respectively. Paths520and522may include any of the communication paths described above in connection with paths508,510, and512. Communications with the content source516and media guidance data source518may be exchanged over one or more communications paths, but are shown as a single path inFIG.5to avoid overcomplicating the drawing. In addition, there may be more than one of each of content source516and media guidance data source518, but only one of each is shown inFIG.5to avoid overcomplicating the drawing. (The different types of each of these sources are discussed below.) If desired, content source516and media guidance data source518may be integrated as one source device. Although communications between sources516and518and user equipment devices502,504, and506are shown as passing through communications network514, in some embodiments, sources516and518may communicate directly with user equipment devices502,504, and506via communication paths (not shown) such as those described above in connection with paths508,510, and512. System500may also include an advertisement source524coupled to communications network514via a communications path526.
Path526may include any of the communication paths described above in connection with paths508,510, and512. Advertisement source524may include advertisement logic to determine which advertisements to transmit to specific users and under which circumstances. For example, a cable operator may have the right to insert advertisements during specific time slots on specific channels. Thus, advertisement source524may transmit advertisements to users during those time slots. As another example, advertisement source may target advertisements based on the demographics of users known to view a particular show (e.g., teenagers viewing a reality show). As yet another example, advertisement source may provide different advertisements depending on the location of the user equipment viewing a media asset (e.g., east coast or west coast). In some embodiments, advertisement source524may be configured to maintain user information including advertisement-suitability scores associated with user in order to provide targeted advertising. Additionally or alternatively, a server associated with advertisement source524may be configured to store raw information that may be used to derive advertisement-suitability scores. In some embodiments, advertisement source524may transmit a request to another device for the raw information and calculate the advertisement-suitability scores. Advertisement source524may update advertisement-suitability scores for specific users (e.g., first subset, second subset, or third subset of users) and transmit an advertisement of the target product to appropriate users. Content source516may include one or more types of content distribution equipment including a television distribution facility, cable system headend, satellite distribution facility, programming sources (e.g., television broadcasters, such as NBC, ABC, HBO, etc.), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the American Broadcasting Company, Inc., and HBO is a trademark owned by the Home Box Office, Inc. Content source516may be the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of content (e.g., an on-demand content provider, an Internet provider of content of broadcast programs for downloading, etc.). Content source516may include cable sources, satellite providers, on-demand providers, Internet providers, over-the-top content providers, or other providers of content. Content source516may also include a remote media server used to store different types of content (including video content selected by a user), in a location remote from any of the user equipment devices. Systems and methods for remote storage of content, and providing remotely stored content to user equipment are discussed in greater detail in connection with Ellis et al., U.S. Pat. No. 7,761,892, issued Jul. 20, 2010, which is hereby incorporated by reference herein in its entirety. Media guidance data source518may provide media guidance data, such as the media guidance data described above. Media guidance data may be provided to the user equipment devices using any suitable approach. In some embodiments, the guidance application may be a stand-alone interactive television program guide that receives program guide data via a data feed (e.g., a continuous feed or trickle feed). 
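The advertisement-suitability scores maintained in connection with advertisement source 524, as described earlier, could be modeled as a simple weighted match between a user's known attributes and an advertisement's targeting criteria. The weighting scheme, trait names, and threshold below are purely illustrative assumptions, not the scoring actually used by the advertisement source.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Advertisement:
    product: str
    target_traits: Dict[str, float]   # trait name -> importance weight (assumed)

def suitability_score(user_traits: Dict[str, float], ad: Advertisement) -> float:
    """Weighted overlap between user traits and the ad's targeting traits."""
    return sum(weight * user_traits.get(trait, 0.0)
               for trait, weight in ad.target_traits.items())

def select_targets(users: Dict[str, Dict[str, float]],
                   ad: Advertisement,
                   threshold: float = 0.5) -> List[str]:
    """Users whose score clears an (assumed) suitability threshold."""
    return [name for name, traits in users.items()
            if suitability_score(traits, ad) >= threshold]

# Usage: target a reality-show ad at users with matching viewing traits.
users = {
    "user_a": {"watches_reality_tv": 0.9, "east_coast": 1.0},
    "user_b": {"watches_reality_tv": 0.1, "east_coast": 0.0},
}
ad = Advertisement("reality show promo", {"watches_reality_tv": 0.8, "east_coast": 0.2})
print(select_targets(users, ad))   # ['user_a']
```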
Program schedule data and other guidance data may be provided to the user equipment on a television channel sideband, using an in-band digital signal, using an out-of-band digital signal, or by any other suitable data transmission technique. Program schedule data and other media guidance data may be provided to user equipment on multiple analog or digital television channels. In some embodiments, guidance data from media guidance data source518may be provided to users' equipment using a client-server approach. For example, a user equipment device may pull media guidance data from a server, or a server may push media guidance data to a user equipment device. In some embodiments, a guidance application client residing on the user's equipment may initiate sessions with source518to obtain guidance data when needed, e.g., when the guidance data is out of date or when the user equipment device receives a request from the user to receive data. Media guidance may be provided to the user equipment with any suitable frequency (e.g., continuously, daily, a user-specified period of time, a system-specified period of time, in response to a request from user equipment, etc.). Media guidance data source518may provide user equipment devices502,504, and506the media guidance application itself or software updates for the media guidance application. In some embodiments, the media guidance data may include viewer data. For example, the viewer data may include current and/or historical user activity information (e.g., what content the user typically watches, what times of day the user watches content, whether the user interacts with a social network, at what times the user interacts with a social network to post information, what types of content the user typically watches (e.g., pay TV or free TV), mood, brain activity information, etc.). The media guidance data may also include subscription data. For example, the subscription data may identify to which sources or services a given user subscribes and/or to which sources or services the given user has previously subscribed but later terminated access (e.g., whether the user subscribes to premium channels, whether the user has added a premium level of services, whether the user has increased Internet speed). In some embodiments, the viewer data and/or the subscription data may identify patterns of a given user for a period of more than one year. The media guidance data may include a model (e.g., a survivor model) used for generating a score that indicates a likelihood a given user will terminate access to a service/source. For example, the media guidance application may process the viewer data with the subscription data using the model to generate a value or score that indicates a likelihood of whether the given user will terminate access to a particular service or source. In particular, a higher score may indicate a higher level of confidence that the user will terminate access to a particular service or source. Based on the score, the media guidance application may generate promotions that entice the user to keep the particular service or source indicated by the score as one to which the user will likely terminate access. Media guidance applications may be, for example, stand-alone applications implemented on user equipment devices. For example, the media guidance application may be implemented as software or a set of executable instructions which may be stored in storage408, and executed by control circuitry404of a user equipment device400. 
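The likelihood-of-termination score described above, which combines viewer data and subscription data through a model (e.g., a survivor model), could be sketched as a simple logistic scoring function. The features, weights, and promotion threshold here are illustrative assumptions only and do not reflect the actual model.

```python
import math
from typing import Dict

# Assumed feature weights: positive weights push the termination score up.
WEIGHTS: Dict[str, float] = {
    "days_since_last_view": 0.05,
    "weekly_viewing_hours": -0.30,
    "has_premium_tier": -0.80,
    "recent_downgrades": 0.90,
}
BIAS = -1.0
PROMOTION_THRESHOLD = 0.6   # assumed cutoff for offering a retention promotion

def termination_score(features: Dict[str, float]) -> float:
    """Higher score = higher confidence the user will terminate the service."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing to [0, 1]

def maybe_generate_promotion(service: str, features: Dict[str, float]) -> str:
    score = termination_score(features)
    if score >= PROMOTION_THRESHOLD:
        return f"Offer promotion to keep {service} (score={score:.2f})"
    return f"No promotion needed for {service} (score={score:.2f})"

# Usage: a lapsing viewer of a premium source triggers a retention promotion.
print(maybe_generate_promotion(
    "premium movie channel",
    {"days_since_last_view": 40, "weekly_viewing_hours": 0.5,
     "has_premium_tier": 1.0, "recent_downgrades": 1.0},
))
```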
In some embodiments, media guidance applications may be client-server applications where only a client application resides on the user equipment device, and server application resides on a remote server. For example, media guidance applications may be implemented partially as a client application on control circuitry404of user equipment device400and partially on a remote server as a server application (e.g., media guidance data source518) running on control circuitry of the remote server. When executed by control circuitry of the remote server (such as media guidance data source518), the media guidance application may instruct the control circuitry to generate the guidance application displays and transmit the generated displays to the user equipment devices. The server application may instruct the control circuitry of the media guidance data source518to transmit data for storage on the user equipment. The client application may instruct control circuitry of the receiving user equipment to generate the guidance application displays. Content and/or media guidance data delivered to user equipment devices502,504, and506may be over-the-top (OTT) content. OTT content delivery allows Internet-enabled user devices, including any user equipment device described above, to receive content that is transferred over the Internet, including any content described above, in addition to content received over cable or satellite connections. OTT content is delivered via an Internet connection provided by an Internet service provider (ISP), but a third party distributes the content. The ISP may not be responsible for the viewing abilities, copyrights, or redistribution of the content, and may only transfer IP packets provided by the OTT content provider. Examples of OTT content providers include YOUTUBE, NETFLIX, and HULU, which provide audio and video via IP packets. Youtube is a trademark owned by Google Inc., Netflix is a trademark owned by Netflix Inc., and Hulu is a trademark owned by Hulu, LLC. OTT content providers may additionally or alternatively provide media guidance data described above. In addition to content and/or media guidance data, providers of OTT content can distribute media guidance applications (e.g., web-based applications or cloud-based applications), or the content can be displayed by media guidance applications stored on the user equipment device. Media guidance system500is intended to illustrate a number of approaches, or network configurations, by which user equipment devices and sources of content and guidance data may communicate with each other for the purpose of accessing content and providing media guidance. The embodiments described herein may be applied in any one or a subset of these approaches, or in a system employing other approaches for delivering content and providing media guidance. The following four approaches provide specific illustrations of the generalized example ofFIG.5. In one approach, user equipment devices may communicate with each other within a home network. User equipment devices can communicate with each other directly via short-range point-to-point communication schemes described above, via indirect paths through a hub or other similar device provided on a home network, or via communications network514. Each of the multiple individuals in a single home may operate different user equipment devices on the home network. 
As a result, it may be desirable for various media guidance information or settings to be communicated between the different user equipment devices. For example, it may be desirable for users to maintain consistent media guidance application settings on different user equipment devices within a home network, as described in greater detail in Ellis et al., U.S. Patent Publication No. 2005/0251827, filed Jul. 11, 2005. Different types of user equipment devices in a home network may also communicate with each other to transmit content. For example, a user may transmit content from user computer equipment to a portable video player or portable music player. In a second approach, users may have multiple types of user equipment by which they access content and obtain media guidance. For example, some users may have home networks that are accessed by in-home and mobile devices. Users may control in-home devices via a media guidance application implemented on a remote device. For example, users may access an online media guidance application on a website via a personal computer at their office, or via a mobile device such as a PDA or web-enabled mobile telephone. The user may set various settings (e.g., recordings, reminders, or other settings) on the online guidance application to control the user's in-home equipment. The online guide may control the user's equipment directly, or by communicating with a media guidance application on the user's in-home equipment. Various systems and methods for communication between user equipment devices in locations remote from each other are discussed in, for example, Ellis et al., U.S. Pat. No. 8,046,801, issued Oct. 25, 2011, which is hereby incorporated by reference herein in its entirety. In a third approach, users of user equipment devices inside and outside a home can use their media guidance application to communicate directly with content source516to access content. Specifically, within a home, users of user television equipment502and user computer equipment504may access the media guidance application to navigate among and locate desirable content. Users may also access the media guidance application outside of the home using wireless user communications devices506to navigate among and locate desirable content. In a fourth approach, user equipment devices may operate in a cloud computing environment to access cloud services. In a cloud computing environment, various types of computing services for content sharing, storage or distribution (e.g., video sharing sites or social networking sites) are provided by a collection of network-accessible computing and storage resources, referred to as “the cloud.” For example, the cloud can include a collection of server computing devices, which may be located centrally or at distributed locations, that provide cloud-based services to various types of users and devices connected via a network such as the Internet, for example via communications network514. These cloud resources may include one or more content sources516and one or more media guidance data sources518. In addition or in the alternative, the remote computing sites may include other user equipment devices, such as user television equipment502, user computer equipment504, and wireless user communications device506. For example, the other user equipment devices may provide access to a stored copy of a video or a streamed video.
In such embodiments, user equipment devices may operate in a peer-to-peer manner without communicating with a central server. The cloud provides access to services, such as content storage, content sharing, or social networking services, among other examples, as well as access to any content described above, for user equipment devices. Services can be provided in the cloud through cloud computing service providers, or through other providers of online services. For example, the cloud-based services can include a content storage service, a content sharing site, a social networking site, or other services via which user-sourced content is distributed for viewing by others on connected devices. These cloud-based services may allow a user equipment device to store content to the cloud and to receive content from the cloud rather than storing content locally and accessing locally-stored content. A user may use various content capture devices, such as camcorders, digital cameras with video mode, audio recorders, mobile phones, and handheld computing devices, to record content. The user can upload content to a content storage service on the cloud either directly, for example, from user computer equipment504or wireless user communications device506having content capture feature. Alternatively, the user can first transfer the content to a user equipment device, such as user computer equipment504. The user equipment device storing the content uploads the content to the cloud using a data transmission service on communications network514. In some embodiments, the user equipment device itself is a cloud resource, and other user equipment devices can access the content directly from the user equipment device on which the user stored the content. Cloud resources may be accessed by a user equipment device using, for example, a web browser, a media guidance application, a desktop application, a mobile application, and/or any combination of access applications of the same. The user equipment device may be a cloud client that relies on cloud computing for application delivery, or the user equipment device may have some functionality without access to cloud resources. For example, some applications running on the user equipment device may be cloud applications, i.e., applications delivered as a service over the Internet, while other applications may be stored and run on the user equipment device. In some embodiments, a user device may receive content from multiple cloud resources simultaneously. For example, a user device can stream audio from one cloud resource while downloading content from a second cloud resource. Or a user device can download content from multiple cloud resources for more efficient downloading. In some embodiments, user equipment devices can use cloud resources for processing operations such as the processing operations performed by processing circuitry described in relation toFIG.4. As referred herein, the term “in response to” refers to initiated as a result of. For example, a first action being performed in response to a second action may include interstitial steps between the first action and the second action. As referred herein, the term “directly in response to” refers to caused by. For example, a first action being performed directly in response to a second action may not include interstitial steps between the first action and the second action. 
FIG.6is a flowchart of a detailed illustrative process for prompting the second user to select between options to transmit the access rights or block the first device, in accordance with some embodiments of the disclosure. It should be noted that process600or any step thereof could be performed on, or provided by, any of the devices shown inFIGS.4-5. For example, process600may be executed by control circuitry404(FIG.4) as instructed by a media guidance application implemented on user equipment102(which may have the functionality of any or all of user equipment502,504, and/or506(FIG.5)). In addition, one or more steps of process600may be incorporated into or combined with one or more steps of any other process or embodiment (e.g., as described in relation toFIGS.7-10). Many elements of process600have been described above with respect toFIG.1, and those descriptions have full force and effect with respect to the below description of process600, and thus details on previously described elements are omitted for the sake of brevity. Process600begins at602where control circuitry404receives, from the network-connected surface, a plurality of object identifiers from media data source514for a plurality of physically inanimate objects of different types detected on the network-connected surface, where the object identifiers indicate a positioning of each object. At604, control circuitry404determines, based on the object identifiers, a first arrangement of the plurality of physically inanimate objects. At606, control circuitry404detects one or more changes in the plurality of object identifiers, where the one or more changes correspond to one or more changes in positioning from the first arrangement. At608, control circuitry404determines a second arrangement of the plurality of physically inanimate objects. At610, control circuitry404generates a content recommendation based on the second arrangement as an output on display504. FIG.7is a flowchart of a detailed illustrative process for prompting the second user to select between options to transmit the access rights or block the first device, in accordance with some embodiments of the disclosure. It should be noted that process700or any step thereof could be performed on, or provided by, any of the devices shown inFIGS.4-5. For example, process700may be executed by control circuitry404(FIG.4) as instructed by a media guidance application implemented on user equipment102(which may have the functionality of any or all of user equipment502,504, and/or506(FIG.5)). In addition, one or more steps of process700may be incorporated into or combined with one or more steps of any other process or embodiment (e.g., as described in relation toFIGS.6and8-10). Many elements of process700have been described above with respect toFIG.1, and those descriptions have full force and effect with respect to the below description of process700, and thus details on previously described elements are omitted for the sake of brevity. Process700begins at702where control circuitry404receives, from the network-connected surface, a plurality of object identifiers for a plurality of physically inanimate objects of different types detected on the network-connected surface. At704, control circuitry404determines a first positioning of each object of the plurality of physically inanimate objects from the plurality of object identifiers.
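Before turning to the remaining steps of process 700, the sketch below illustrates the shape of process 600 described above: object identifiers reported by the network-connected surface yield a first arrangement, a change in the identifiers yields a second arrangement, and a recommendation is generated from that second arrangement. The identifier format, the recommendation rule, and all names are assumptions for illustration only.

```python
from typing import Dict, List, Tuple

# An object identifier is assumed here to be (object_id, object_type, x, y).
ObjectId = Tuple[str, str, float, float]
Arrangement = Dict[str, Tuple[str, float, float]]   # object_id -> (type, x, y)

def determine_arrangement(identifiers: List[ObjectId]) -> Arrangement:
    """Steps 602/604: derive an arrangement from reported identifiers."""
    return {oid: (otype, x, y) for oid, otype, x, y in identifiers}

def detect_changes(first: Arrangement, second: Arrangement) -> List[str]:
    """Step 606: object ids whose position (or presence) changed."""
    return [oid for oid in set(first) | set(second)
            if first.get(oid) != second.get(oid)]

def generate_recommendation(arrangement: Arrangement) -> str:
    """Step 610: a toy recommendation rule based on the number of chairs present."""
    chairs = sum(1 for otype, _, _ in arrangement.values() if otype == "chair")
    return "recommend a party playlist" if chairs >= 2 else "recommend a solo drama"

# Usage: a second chair appears; the arrangement change triggers a fresh recommendation.
first_ids = [("obj1", "chair", 0.0, 0.0), ("obj2", "table", 1.0, 1.0)]
second_ids = first_ids + [("obj3", "chair", 2.0, 0.0)]
first = determine_arrangement(first_ids)
second = determine_arrangement(second_ids)
if detect_changes(first, second):                 # steps 606/608
    print(generate_recommendation(second))        # step 610
```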
At decision block706, control circuitry404detects whether one or more identifiers in the plurality of object identifiers have changed, where the one or more changes correspond to one or more changes in positioning from the first positioning. In response to detecting a change in the identifier of one or more identifiers in the plurality of object identifiers, process700moves to708where, based on detecting one or more changes, control circuitry404determines a second positioning of each object of the plurality of physically inanimate objects from the plurality of object identifiers. In response to detecting no change in the identifier of one or more identifiers in the plurality of object identifiers, process700moves to718to end. At710, control circuitry404compares attributes of the second positioning of the plurality of physically inanimate objects in storage408to attributes of each template of a plurality of templates, where each template corresponds to a different possible positioning of the plurality of physically inanimate objects. At712, control circuitry404determines, from the comparing, a first template from the plurality of templates to which the second positioning of the plurality of physically inanimate objects corresponds. At714, control circuitry404determines a set of media attributes corresponding to the first template by comparing the first template to entries of a database that each correlates a respective template of the plurality of templates to a respective set of media attributes. At716, control circuitry404generates a content recommendation based on the set of media attributes. FIG.8is a flowchart of a detailed illustrative process for prompting the second user to select between options to transmit the access rights or block the first device, in accordance with some embodiments of the disclosure. It should be noted that process800or any step thereof could be performed on, or provided by, any of the devices shown inFIGS.4-5. For example, process800may be executed by control circuitry404(FIG.4) as instructed by a media guidance application implemented on user equipment102(which may have the functionality of any or all of user equipment502,504, and/or506(FIG.5)). In addition, one or more steps of process800may be incorporated into or combined with one or more steps of any other process or embodiment (e.g., as described in relation toFIGS.6-7and9-10). Many elements of process800have been described above with respect toFIG.1, and those descriptions have full force and effect with respect to the below description of process800, and thus details on previously described elements are omitted for the sake of brevity. Process800begins at802where control circuitry404determines a type of each physically inanimate object of the plurality of physically inanimate objects placed on the network-connected surface. At804, control circuitry404groups the plurality of physically inanimate objects into different groups based on the determined types. At806, control circuitry404determines a change in orientation of a given object of the physically inanimate objects. At decision block806, control circuitry404determines from the type of the given object, whether the change in orientation will affect a direction a person would face when using the given object. In response to determining that the change in orientation will not affect a direction a person would face when using the given object, process800proceeds to808where control circuitry404ignores the change in orientation. 
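Before continuing with process 800, the template comparison at steps 710-716 of process 700 can be sketched as follows: the second positioning is scored against each stored template, the best match is looked up in a table correlating templates with media attributes, and those attributes drive the recommendation. The distance metric, the template contents, and the attribute table are illustrative assumptions.

```python
import math
from typing import Dict, List, Tuple

Positioning = Dict[str, Tuple[float, float]]   # object id -> (x, y), assumed format

def positioning_distance(a: Positioning, b: Positioning) -> float:
    """Smaller is a closer match; unmatched objects incur a fixed penalty."""
    penalty = 10.0
    score = 0.0
    for oid in set(a) | set(b):
        if oid in a and oid in b:
            (ax, ay), (bx, by) = a[oid], b[oid]
            score += math.hypot(ax - bx, ay - by)
        else:
            score += penalty
    return score

def best_template(positioning: Positioning,
                  templates: Dict[str, Positioning]) -> str:
    """Steps 710/712: the template the positioning corresponds to most closely."""
    return min(templates, key=lambda name: positioning_distance(positioning, templates[name]))

# Step 714: database-like table correlating each template with media attributes (contents assumed).
TEMPLATE_ATTRIBUTES: Dict[str, List[str]] = {
    "dinner_party": ["group viewing", "light comedy"],
    "movie_night": ["feature film", "surround audio"],
}

# Usage: classify a positioning, then generate a recommendation from its attributes (step 716).
templates = {
    "dinner_party": {"chair1": (0.0, 0.0), "chair2": (1.0, 0.0), "table": (0.5, 0.5)},
    "movie_night": {"sofa": (0.0, 0.0), "table": (0.0, 1.0)},
}
second_positioning = {"chair1": (0.1, 0.0), "chair2": (0.9, 0.1), "table": (0.5, 0.6)}
template = best_template(second_positioning, templates)
print(f"recommend content with attributes: {TEMPLATE_ATTRIBUTES[template]}")
```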
In response to determining that the change in orientation will affect a direction a person would face when using the given object, process800proceeds to810where control circuitry404detects whether a display device is present in the vicinity of the network-connected surface. In response to not detecting a display device in the vicinity of the network-connected surface, process800proceeds to820to end. In response to detecting a display device in the vicinity of the network-connected surface, process800proceeds to812where control circuitry404determines whether the given object faces a display device, where the given object did not face the display device before the detected change in orientation was detected. In response to determining that the given object faces the display device, process800proceeds to814, where control circuitry generates for display the content recommendation on the display device. In response to determining that the given object does not face the display device, process800proceeds to820to end. At816, control circuitry404determines whether the given object faced a different display device prior to the detected change in orientation. In response to determining that the given object faced a different display device prior to the detected change in orientation, process800proceeds to818where control circuitry404commands the different display device to cease generating for display the recommendation. In response to determining that the given object did not face a different display device prior to the detected change in orientation, process800proceeds to820to end. FIG.9is a flowchart of a detailed illustrative process for prompting the second user to select between options to transmit the access rights or block the first device, in accordance with some embodiments of the disclosure. It should be noted that process900or any step thereof could be performed on, or provided by, any of the devices shown inFIGS.4-5. For example, process900may be executed by control circuitry404(FIG.4) as instructed by a media guidance application implemented on user equipment102(which may have the functionality of any or all of user equipment502,504, and/or506(FIG.5)). In addition, one or more steps of process900may be incorporated into or combined with one or more steps of any other process or embodiment (e.g., as described in relation toFIGS.6-8and10). Many elements of process900have been described above with respect toFIG.1, and those descriptions have full force and effect with respect to the below description of process900, and thus details on previously described elements are omitted for the sake of brevity. Process900begins at902where control circuitry404determines a third positioning of the additional physically inanimate object. At904, control circuitry404compares attributes of the third positioning of the plurality of physically inanimate objects to attributes of each template of a plurality of templates, where each template corresponds to a different possible positioning of the new physically inanimate object. At906, control circuitry404determines, from the comparing, a second template from the plurality of templates to which the third positioning of the new physically inanimate objects corresponds. At908, control circuitry404determines an additional set of media attributes associated with the second template by comparing the second template to entries of a database that each correlate a respective template of the plurality of templates to a respective set of media attributes. 
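Before completing process 900, the orientation handling of process 800 described above can be pictured with the toy routine below: orientation changes are ignored for objects whose use does not fix a facing direction, and otherwise the recommendation is routed to whichever nearby display the object now faces and cleared from the display it previously faced. The object types, angle test, and display layout are assumptions for illustration.

```python
import math
from typing import Dict, List, Tuple

# Object types whose orientation determines where a seated person would face (assumed).
DIRECTIONAL_TYPES = {"chair", "sofa", "recliner"}

def faces_display(obj_pos: Tuple[float, float], obj_heading_deg: float,
                  display_pos: Tuple[float, float], tolerance_deg: float = 30.0) -> bool:
    """True if the object's heading points at the display within a tolerance."""
    dx, dy = display_pos[0] - obj_pos[0], display_pos[1] - obj_pos[1]
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    diff = abs((obj_heading_deg - bearing + 180.0) % 360.0 - 180.0)
    return diff <= tolerance_deg

def route_recommendation(obj_type: str, obj_pos: Tuple[float, float],
                         old_heading: float, new_heading: float,
                         displays: Dict[str, Tuple[float, float]]) -> List[str]:
    """Return the commands process 800 might issue after an orientation change."""
    if obj_type not in DIRECTIONAL_TYPES:
        return ["ignore orientation change"]            # step 808
    commands: List[str] = []
    for name, pos in displays.items():
        if faces_display(obj_pos, new_heading, pos) and not faces_display(obj_pos, old_heading, pos):
            commands.append(f"show recommendation on {name}")     # step 814
        if faces_display(obj_pos, old_heading, pos) and not faces_display(obj_pos, new_heading, pos):
            commands.append(f"cease recommendation on {name}")    # step 818
    return commands or ["end"]                                    # step 820

# Usage: a chair rotates away from the kitchen display toward the living-room display.
displays = {"living_room_tv": (5.0, 0.0), "kitchen_tv": (-5.0, 0.0)}
print(route_recommendation("chair", (0.0, 0.0), old_heading=180.0,
                           new_heading=0.0, displays=displays))
```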
At decision block910, control circuitry404determines whether there is a conflict between the first set of media attributes and the second set of media attributes. In response to determining that there is a conflict between the set of media attributes and the additional set of media attributes, process900proceeds to916where control circuitry404refrains from updating the first content recommendation to the second content recommendation. At918, control circuitry404generates the updated content recommendation based on the additional set of media attributes on a secondary device associated with the new type of physically inanimate object. In response to determining that there is no conflict between the first set of media attributes and the second set of media attributes, process900proceeds to912where control circuitry404updates the set of media attributes corresponding to the first template to include the additional set of media attributes corresponding to the second template. At914, control circuitry404updates the content recommendation to an updated content recommendation based on the updated media attributes. FIG.10is a flowchart of a detailed illustrative process for prompting the second user to select between options to transmit the access rights or block the first device, in accordance with some embodiments of the disclosure. It should be noted that process1000or any step thereof could be performed on, or provided by, any of the devices shown inFIGS.4-5. For example, process1000may be executed by control circuitry404(FIG.4) as instructed by a media guidance application implemented on user equipment102(which may have the functionality of any or all of user equipment502,504, and/or506(FIG.5)). In addition, one or more steps of process1000may be incorporated into or combined with one or more steps of any other process or embodiment (e.g., as described in relation toFIGS.6-9). Many elements of process1000have been described above with respect toFIG.1, and those descriptions have full force and effect with respect to the below description of process1000, and thus details on previously described elements are omitted for the sake of brevity. Process1000starts at1002where control circuitry404determines a first number of physically inanimate objects placed on the network-connected surface. At1004, based on a type associated with each physically inanimate object, control circuitry404determines a second number of physically inanimate objects that may be used for sitting. At decision block1006, control circuitry404determines whether the second number of physically inanimate objects is greater than a threshold. In response to determining that the second number of physically inanimate objects is greater than the threshold, process1000proceeds to1008to update the first content recommendation to a second content recommendation. In response to determining that the second number of physically inanimate objects is less than the threshold, process1000proceeds to1010to end. It should be noted that processes600-1000or any step thereof could be performed on, or provided by, any of the devices shown inFIGS.1and4-5. For example, any of processes600-1000may be executed by control circuitry404(FIG.4) as instructed by control circuitry implemented on user equipment502,504,506(FIG.5), and/or a user equipment for selecting a recommendation. In addition, one or more steps of processes600-1000may be incorporated into or combined with one or more steps of any other process or embodiment.
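The two update rules just described, checking the additional attribute set for conflicts before merging it (process 900) and switching to a group-oriented recommendation when the count of seat-type objects exceeds a threshold (process 1000), are sketched together below. The conflict test, threshold value, and attribute vocabulary are assumptions for illustration.

```python
from typing import Dict, List, Set

# Assumed vocabulary of mutually exclusive attribute pairs.
CONFLICTS: Set[frozenset] = {frozenset({"kids content", "mature content"}),
                             frozenset({"solo viewing", "group viewing"})}

def attributes_conflict(first: List[str], second: List[str]) -> bool:
    """Decision block 910: any pair of attributes that cannot co-exist."""
    return any(frozenset({a, b}) in CONFLICTS for a in first for b in second)

def merge_or_redirect(first: List[str], additional: List[str]) -> Dict[str, List[str]]:
    """Steps 912-918: merge compatible attributes, otherwise use a secondary device."""
    if attributes_conflict(first, additional):
        return {"primary": first, "secondary": additional}      # steps 916/918
    return {"primary": first + [a for a in additional if a not in first]}  # steps 912/914

SEATING_TYPES = {"chair", "sofa", "stool"}
SEATING_THRESHOLD = 3   # assumed threshold for decision block 1006

def update_for_seating(object_types: List[str], current: str, group: str) -> str:
    """Process 1000: switch to a group recommendation when enough seats appear."""
    seats = sum(1 for t in object_types if t in SEATING_TYPES)   # steps 1002/1004
    return group if seats > SEATING_THRESHOLD else current       # steps 1006-1010

# Usage.
print(merge_or_redirect(["group viewing", "comedy"], ["solo viewing", "drama"]))
print(update_for_seating(["chair", "chair", "sofa", "stool", "table"],
                         current="solo drama", group="party playlist"))
```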
It is contemplated that the steps or descriptions of each of FIGS. 6-10 may be used with any other embodiment of this disclosure. In addition, the steps and descriptions described in relation to FIGS. 6-10 may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these steps may be performed in any order or in parallel or substantially simultaneously to reduce lag or increase the speed of the system or method. Furthermore, it should be noted that any of the devices or equipment discussed in relation to FIGS. 1 and 4-5 could be used to perform one or more of the steps in FIGS. 6-10. It will be apparent to those of ordinary skill in the art that methods involved in the present invention may be embodied in a computer program product that includes a computer-usable and/or readable medium. For example, such a computer-usable medium may consist of a read-only memory device, such as a CD-ROM disk or conventional ROM device, or a random-access memory, such as a hard drive device or a computer diskette, having a computer-readable program code stored thereon. It should also be understood that methods, techniques, and processes involved in the present disclosure may be executed using processing circuitry. The processes discussed above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the steps of the processes discussed herein may be omitted, modified, combined, and/or rearranged, and any additional steps may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be exemplary and not limiting. Only the claims that follow are meant to set bounds as to what the present invention includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted, the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. While some portions of this disclosure may make reference to “convention,” any such reference is merely for the purpose of providing context to the invention(s) of the instant disclosure, and does not form any admission as to what constitutes the state of the art.
11943510
DESCRIPTION OF EMBODIMENTS To make the objectives, technical solutions, and advantages of the present disclosure clearer, the following further describes the present disclosure in detail with reference to the accompanying drawings. The described embodiments are not to be construed as a limitation to the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present disclosure. In the following descriptions, related “some embodiments” describe a subset of all embodiments. However, it may be understood that the “some embodiments” may be the same subset or different subsets of all the embodiments, and may be combined with each other without conflict. In some embodiments, P candidate starting points may be flexibly pre-configured for second multimedia data according to actual requirements, so that in response to a media switching operation during playing of first multimedia data, a target starting point may be flexibly selected from the P candidate starting points, and switching from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data is performed based on the target starting point in the second multimedia data, so that the flexibility of multimedia playback can be effectively improved. In addition, in response to a target triggering operation during playing of the second multimedia data, switching from a first multimedia frame to another multimedia frame that meets the actual requirements (that is, a multimedia frame corresponding to a new starting point) may be automatically performed by positioning and switching the playback point, thereby reducing the playback of useless multimedia frames (multimedia frames that do not meet the actual requirements). This not only can further improve the flexibility of multimedia playback, but also can effectively save processing resources and effectively improve the playback effectiveness and playback efficiency of the second multimedia data. In some embodiments, based on the media switching operation during playing of the first multimedia data, switching from playing the first multimedia data to playing the multimedia frame corresponding to the target starting point in the second multimedia data may be performed based on the target starting point of the second multimedia data. Since the target starting point is determined according to at least one of the user preference profile of the target user and the hot spot information of the second multimedia data, the multimedia frame corresponding to the target starting point can satisfy interest preferences of the target user or arouse the interest of the target user to a greater extent. Therefore, playing the second multimedia data starting from the target starting point can greatly improve the attraction of the second multimedia data to the target user, making the target user more willing to continue playing the second multimedia data, thereby improving the user stickiness of the second multimedia data and the playback conversion rate of the second multimedia data. 
Since the whole process does not require the target user to manually find and play multimedia frames that can arouse the interest of the target user by repeatedly dragging the progress bar, not only the convenience and the playback efficiency of multimedia data can be effectively improved, but also the playback of useless multimedia frames that are not of interest to the target user can be reduced, thereby saving processing resources and effectively improving the playback effectiveness of the second multimedia data. In some embodiments, multimedia data refers to a data sequence formed by the sequential arrangement of multimedia frames corresponding to a plurality of time points. The multimedia frames may be audio frames or video frames, which is not limited herein. That is to say, the multimedia data mentioned in some embodiments may be music or video, which is not limited herein. The so-called music may be understood as an audio sequence composed of a plurality of audio frames arranged in sequence, and may be divided into pop music, classical music, opera music, and so on. The so-called video may be understood as an image sequence composed of a plurality of image frames arranged in sequence, and can be divided into film and television videos, variety show videos, we-media videos, game videos, etc. The film and television video refers to: a video produced by recording a performance process of a human and/or animal and a surrounding environment according to a pre-made script in a specified photographing scene, followed by addition of audio and special effects, and so on. The variety video refers to: an entertaining video that combines multiple art forms. The we-media video refers to: a video produced by an ordinary person by photographing a scene using a camera device and published through a network or other channels, e.g., video blog (vlog). The game video refers to: a video produced by recording a game screen displayed on a terminal screen of any player user or a game screen displayed on a terminal screen of a user watching a game process of the player user in a process where one or more player users play a game. Any video has a preset playback duration. When the preset playback duration of a video is less than a duration threshold, the video may be called a short video. When the preset playback duration of a video is greater than the duration threshold, the video may be called a long video. To better implement multimedia playback, some embodiments provide a multimedia playback solution. The general principle of the multimedia playback solution is as follows: In a process of playing certain multimedia data (e.g., multimedia data X) for a target user, when a media switching operation is detected, another multimedia data (e.g., multimedia data Y) may be switched to, and according to one or more of a user preference profile of the target user and hot spot information of the multimedia data Y, a starting playback point of the multimedia data Y may be adjusted to a playback point that the target user is interested in, so as to play the multimedia data Y. This can greatly improve the attraction of the multimedia data Y to the target user, making the target user more willing to continue playing the multimedia data Y, thereby improving the user stickiness and the playback conversion rate of the multimedia data Y. The playback conversion rate is a ratio of the number of valid playbacks to the total number of playbacks. 
When a piece of multimedia data is played for a duration longer than a set threshold, it may be considered that the multimedia data has been validly played. When a target triggering operation is detected during playing of the multimedia data Y, switching from playing a multimedia frame played at a moment at which the target triggering operation is detected to playing a multimedia frame that the target user is interested in and that corresponds to another playback point in the multimedia data Y may be performed. This can further improve the attraction of the multimedia data Y to the target user, and further improve the user stickiness of the multimedia data Y. Since such a multimedia playback method does not require the target user to manually find and play multimedia frames that can arouse the interest of the target user, not only the playback efficiency of multimedia data can be effectively improved, but also the playback of useless multimedia frames that are not of interest to the target user can be reduced, thereby saving processing resources and effectively improving the playback effectiveness of the second multimedia data. The user preference profile of the target user is data used to describe a preference (interest) of the target user. The hot spot information of the multimedia data Y may be used to indicate a multimedia frame with a large popularity value in the multimedia data Y. The popularity value is used to reflect the popularity of the multimedia frame. Generally, a larger popularity value indicates that more people have played the multimedia frame, and indicates a higher popularity of the multimedia frame, i.e., a higher probability that the multimedia frame can arouse the interest of a user. The multimedia data Y may be multimedia data specified by the target user as needed, or multimedia data selected from a database according to the user preference profile of the target user, or other multimedia data belonging to the same data set as the multimedia data X, which is not limited herein. For example, if the target user selects multimedia data 1 during playing of the multimedia data X, the multimedia data 1 may be determined as the multimedia data Y. For another example, if it is predicted, according to the user preference profile of the target user, that the target user may be interested in multimedia data 2, the multimedia data 2 may be determined as the multimedia data Y. For another example, if the data set to which the multimedia data X belongs also includes multimedia data 3, multimedia data 4, etc., the multimedia data 3 or the multimedia data 4 may be determined as the multimedia data Y. In an implementation, the multimedia playback solution may be executed by a computer device. For example, the computer device may be a target terminal. The target terminal mentioned herein may include but is not limited to: a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart television, or the like. Various applications (APPs), such as video playback applications, music playback applications, social networking applications, browser applications, information flow applications, and educational applications, may be run in the target terminal. In another implementation, for example, the computer device may include a target terminal and a server, i.e., the multimedia playback solution may also be jointly executed by the target terminal and the server. In this case, the target terminal and the server may be connected through a network. 
The server may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides a basic cloud computing service such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, an artificial intelligence platform, etc., which is not limited herein. When the multimedia playback solution is jointly executed by the target terminal and the server, the target terminal may be responsible for executing operations of playing the multimedia data X and the multimedia data Y, and the server may be responsible for executing an operation of determining a playback point in which the target user is interested in the multimedia data Y based on one or more of the user preference profile of the target user and the hot spot information of the multimedia data Y, and executing an operation of delivering indication information indicating the determined playback point to the target terminal, so that the target terminal may play the multimedia data Y according to the playback point indicated in the indication information, as shown in FIG. 1A. For example, the multimedia data X is a video X and the multimedia data Y is a video Y. If the server determines according to the user preference profile that the target user is interested in an image frame at the 5th second in the video Y and an image frame at the 58th second in the video Y, the server may determine that starting points 11 that the target user is interested in are the 5th second and the 58th second. Therefore, the server may generate indication information “Video Y-5th second” and indication information “Video Y-58th second”, and send the indication information to the target terminal, so that when the target terminal is playing an image frame 12 at the 1st minute and 50th second in the video X while detecting a media switching operation, the target terminal may switch from displaying the image frame 12 to displaying the image frame 13 at the 5th second in the video Y, and continue to display image frames in the video Y that are after the 5th second, as shown in FIG. 1B. When the target terminal is playing an image frame at the 30th second in the video Y while detecting a target triggering operation, the target terminal may switch from displaying the image frame at the 30th second to displaying the image frame at the 58th second in the video Y, and continue to display image frames in the video Y that are after the 58th second. It is to be understood that some embodiments are described only using the example where the video Y includes two starting points that the target user is interested in. In practical applications, the number of starting points that the target user is interested in as determined according to the user preference profile of the target user is not limited to 2, but may also be 3, 4, or even more. Based on the relevant descriptions of the above multimedia playback solution, some embodiments further provide a multimedia playback method. The multimedia playback method may be executed by the target terminal mentioned above, or executed jointly by the target terminal and the server. For ease of description, some embodiments are mainly described using an example where the multimedia playback method is executed by the target terminal. 
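Before turning to the operations of FIG. 2, the server-to-terminal flow just described for FIG. 1A and FIG. 1B can be sketched as follows. This is only an illustrative reading: the message format "Video Y-5th second" follows the example above, while the Terminal class, the player interface, and the choice of always using the first delivered starting point are assumptions made for the sketch.

    # Illustrative sketch of the indication-information flow of FIG. 1A/1B (assumptions noted above).
    class PrintPlayer:
        def play(self, media_id, position_s):
            print(f"playing {media_id} from the {position_s}th second")

    def build_indication_info(media_id, starting_points_s):
        # e.g. build_indication_info("Video Y", [5, 58]) -> ["Video Y-5th second", "Video Y-58th second"]
        return [f"{media_id}-{t}th second" for t in starting_points_s]

    class Terminal:
        def __init__(self, player):
            self.player = player
            self.starting_points = {}   # media_id -> sorted starting points, in seconds

        def receive_indication(self, media_id, starting_points_s):
            self.starting_points[media_id] = sorted(starting_points_s)

        def on_media_switch(self, target_media_id):
            # Switch from the frame currently being played to the target starting
            # point of the new media, then keep playing from there.
            points = self.starting_points.get(target_media_id, [0])
            self.player.play(target_media_id, position_s=points[0])

    terminal = Terminal(PrintPlayer())
    print(build_indication_info("Video Y", [5, 58]))
    terminal.receive_indication("Video Y", [5, 58])
    terminal.on_media_switch("Video Y")   # -> playing Video Y from the 5th second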
As shown inFIG.2, the multimedia playback method may include the following operations S201-S203. S201. Play first multimedia data. The first multimedia data may be any multimedia data in a first data set, and multimedia data in the first data set is arranged in sequence and played in sequence. The first data set may be a video set (such as a film and television drama including a plurality of film and television videos, a video collection including a plurality of game videos of a game, a video collection including a plurality of short videos, etc.). The first data set may also be a music set (such as a music album of a singer, a music collection including a plurality of popular music, a music collection including a plurality of music of the same music style (such as lyrical style, rock style, etc.), an opera collection including a plurality of operas, etc.). Correspondingly, the first multimedia data may be any video in the video set (such as a film and television video, a game video, a short video, etc.), or any music in the music set (such as popular music, opera, etc.), which is not limited herein. For a short video scenario, a data set may be composed of short videos under one video channel, i.e., one data set corresponds to one video channel. The video channel herein may include, but is not limited to: a video recommendation channel, a video follow channel, a same-city channel, etc. In some embodiments, a dataset may be composed of short videos under the same theme (or topic). The topic herein may include: a travel theme, a food theme, a funny theme, etc. In some embodiments, a dataset may be composed of short videos published by the same user. The first multimedia data may include multimedia frames corresponding to a plurality of time points. The term “plurality of” mentioned herein means at least two. When playing the first multimedia data for a target user (that is, a user using the target terminal), the target terminal may play the first multimedia data starting from a multimedia frame corresponding to the first time point in the first multimedia data (that is, a head multimedia frame), or may play the first multimedia data starting from a multimedia frame corresponding to a first starting point of the first multimedia data, which is not limited herein. The first starting point is a time point in which the target user may be interested, which is determined among the plurality of time points of the first multimedia data according to at least one of a user preference profile of the target user and hot spot information of the first multimedia data. By playing the first multimedia data starting from the first starting point, the attention of the target user can be quickly attracted to a great extent, making the target user more willing to continue playing the first multimedia data, thereby effectively improving the user stickiness of the first multimedia data. Moreover, by reducing the playback of useless multimedia frames (that is, multimedia frames that the target user is not interested in), processing resources can also be effectively saved, and the playback efficiency of the first multimedia data can be improved. During the playing of the first multimedia data, the target user may input different media switching operations to trigger the target terminal to select the second multimedia data from the corresponding data set and switch from playing the first multimedia data to playing the second multimedia data. 
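As one purely illustrative reading of how the first starting point mentioned above might be chosen from the user preference profile and the hot spot information, the following sketch scores each time point by a weighted sum of its popularity value and its overlap with the user's preference tags. The weighting scheme, the tag representation, and the function name are assumptions introduced for the example; the disclosure does not prescribe a particular scoring rule.

    # Illustrative scoring of candidate time points for S201 (assumed scheme, see note above).
    def pick_first_starting_point(time_points, user_tags, w_pop=0.5, w_pref=0.5):
        """time_points: list of (time_point_s, popularity_value, frame_tags)."""
        def score(entry):
            _, popularity, tags = entry
            preference_match = len(set(tags) & set(user_tags))
            return w_pop * popularity + w_pref * preference_match
        best_time, _, _ = max(time_points, key=score)
        return best_time   # the time point to start playing the first multimedia data from

    candidates = [(0, 0.1, ["intro"]), (35, 0.9, ["goal", "replay"]), (80, 0.4, ["interview"])]
    print(pick_first_starting_point(candidates, user_tags=["goal"]))   # -> 35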
Correspondingly, if detecting the media switching operation, the target terminal may execute S202. In an implementation, the media switching operation may include a first interaction operation for instructing to perform multimedia switching in a same data set, a second interaction operation for instructing to perform multimedia switching in different data sets, etc. Any of the first interaction operation and the second interaction operation may be inputted in any of following manners: a gesture, voice, triggering a switching element on a terminal screen, and triggering a terminal physical key (such as a volume key, a power key, etc.). The switching element may include a switching component or a blank area in the terminal screen, where the switching component is displayed on the terminal screen during the playing of the first multimedia data. When any of the first interaction operation and the second interaction operation is inputted through a gesture, the first interaction operation may include an operation of inputting a first gesture, and the second interaction operation may include an operation of inputting a second gesture. The first gesture and the second gesture mentioned herein may be set in advance according to an empirical value or a service requirement, or may be set by the target user through the target terminal as needed, which is not limited herein. For example, the first gesture may be a gesture of sliding left and right, and the second gesture may be a gesture of sliding up and down. For another example, the first gesture may be a gesture for inputting “M”, and the second gesture may be a gesture for inputting “N”. The target user may input the first gesture or the second gesture by touching the terminal screen, or may input the first gesture or the second gesture by mid-air sensing, which is not limited herein. In a case where the target user inputs the first gesture or the second gesture through mid-air sensing, the target user may not touch the terminal screen, but makes the first gesture or the second gesture in front of a camera component (such as a front-facing camera) of the target terminal, so that the target terminal can acquire the first gesture or the second gesture through the camera component. When any of the first interaction operation and the second interaction operation is inputted through voice, the first interaction operation may include an operation of inputting a first voice, and the second interaction operation may include an operation of inputting a second voice. The first voice and the second voice mentioned herein may be set in advance according to an empirical value or a service requirement, or may be set by the target user through the target terminal as needed, which is not limited herein. For example, the first voice may be “please perform media switching in the same data set”, and the second voice may be “please perform media switching in different data sets”. For another example, the first voice may be “switch to the next multimedia data in the same data set”, and the second voice may be “switch to the previous data set”. When any of the first interaction operation and the second interaction operation is inputted by triggering the switching element on the terminal screen, the first interaction operation and the second interaction operation may be inputted by triggering different switching elements. In this case, a first switching element and a second switching element may be displayed on the terminal screen of the target terminal. 
The first interaction operation may include a triggering operation for the first switching element (i.e., a first blank area or the first switching component) on the terminal screen. The second interaction operation may include a triggering operation for the second switching element (i.e., a second blank area or the second switching component) on the terminal screen. The number of the first blank area, the number of the second blank area, the number of the first switching component and the number of the second switching component may be one or more, which is not limited herein. In some embodiments, the first interaction operation or the second interaction operation may also be inputted by triggering the same switching element with different triggering operations. In this case, a target switching element (i.e., a target blank area or a target switching component) may be displayed on the terminal screen of the target terminal. The first interaction operation may include a first triggering operation (such as a double-click operation or a single-click operation) for the target switching element. The second interaction operation may include a second triggering operation (such as a touch and hold operation) for the target switching element. When any of the first interaction operation and the second interaction operation is inputted by triggering the terminal physical key, the first interaction operation and the second interaction operation may be inputted by triggering different terminal physical keys. In this case, a first terminal physical key and a second terminal physical key may be arranged on the target terminal. The first interaction operation may include a triggering operation for the first terminal physical key (such as a volume key). The second interaction operation may include a triggering operation for the second terminal physical key (such as a power key). The number of the first terminal physical key and the number of the second terminal physical key may be one or more, which is not limited herein. In some embodiments, the first interaction operation or the second interaction operation may also be inputted by triggering the same terminal physical key with different triggering operations. In this case, a target terminal physical key may be arranged on the target terminal. The first interaction operation may include a first triggering operation (such as a double-click operation) for the target terminal physical key. The second interaction operation may include a second triggering operation (such as a touch and hold operation) for the target terminal physical key. S202. Switch, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation. The second multimedia data may include multimedia frames corresponding to a plurality of time points, and one time point corresponds to one multimedia frame. The plurality of time points in the second multimedia data may include pre-configured P candidate starting points, P being a positive integer greater than 1. The target starting point belongs to the P candidate starting points, and may specifically be the first candidate starting point, the second candidate starting point, or the last candidate starting point among the P candidate starting points, which is not limited herein. 
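Returning to the input manners enumerated above (gesture, voice, switching element, terminal physical key), one possible dispatch of an input event to the first interaction operation (switching in the same data set) or the second interaction operation (switching in different data sets) is sketched below. The event structure and the concrete gesture, element, and key names are assumptions made for the example; the gestures and voice contents echo the examples given in the description.

    # Illustrative dispatch of an input event to the first/second interaction operation.
    def classify_interaction(event):
        """event: dict with "kind" in {"gesture", "voice", "element", "key"} and a "value".
        Returns "same_set", "different_set", or None."""
        if event["kind"] == "gesture":
            if event["value"] in ("swipe_left", "swipe_right"):
                return "same_set"         # first interaction operation (e.g., sliding left and right)
            if event["value"] in ("swipe_up", "swipe_down"):
                return "different_set"    # second interaction operation (e.g., sliding up and down)
        elif event["kind"] == "voice":
            if "same data set" in event["value"]:
                return "same_set"
            if "data set" in event["value"]:
                return "different_set"
        elif event["kind"] == "element":
            return "same_set" if event["value"] == "first_switching_element" else "different_set"
        elif event["kind"] == "key":
            return "same_set" if event["value"] == "volume_key" else "different_set"
        return None

    print(classify_interaction({"kind": "gesture", "value": "swipe_down"}))   # -> different_set
    print(classify_interaction({"kind": "voice", "value": "please perform media switching in the same data set"}))   # -> same_set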
In an implementation, the P candidate starting points may be pre-configured according to one or more of the user preference profile of the target user and the hot spot information of the second multimedia data. That is to say, the target starting point mentioned in S202may be determined according to the user preference profile of the target user, may be determined according to the hot spot information of the second multimedia data, or may be determined according to both the user preference profile of the target user and the hot spot information of the second multimedia data, which is not limited herein. It can be seen that the target starting point is essentially one of the plurality of time points of the second multimedia data. Based on this, a specific implementation of S202may be: playing a target multimedia frame corresponding to the target starting point in response to the media switching operation, the target multimedia frame being a multimedia frame corresponding to the target starting point in the second multimedia data; and after the playing of the target multimedia frame is finished, continuing to play a remaining multimedia frame located after the target multimedia frame in the second multimedia data. The phrase “continuing to play” mentioned herein may mean: continuing to play frames in sequence, continuing to play frames in a frame skipping manner, etc. If the target multimedia frame is the fifth frame of the second multimedia data, continuing to play frames in sequence means: continuing to play the sixth frame, the seventh frame, the eighth frame, and so on; and continuing to play frames in a frame skipping manner means: continuing to play the seventh frame, the eighth frame, the tenth frame, and so on. For example, it is assumed that the first multimedia data is music A, the second multimedia data is music B, and the 30th second in the music B is the target starting point. In this case, if a media switching operation is detected when the playback reaches an audio frame at the 3rd minute and 15th second of the music A, switch from the music A to the music B, start to play an audio frame at the 30th second of the music B, and continue to play music frames in the music B that are after the 30th second. For another example, it is assumed that the first multimedia data is a video A, the second multimedia data is a video B, and the 5th second in the video B is the target starting point. In this case, if a media switching operation is detected when the playback reaches an image frame at the 30th minute and 15th second of the video A, switch from the video A to the video B, start to display an image frame at the 5th second of the video B, and continue to play image frames in the video B that are after the 5th second. It may be seen from the above that the media switching operation may include the first interaction operation or the second interaction operation, the first interaction operation is used for instructing to perform media switching in the same data set, and the second interaction operation is used for instructing to perform media switching in different data sets. Therefore, it may be seen that the manner in which the second multimedia data involved in S202is determined varies with different media switching operations. 
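The switching behaviour just described for S202, namely playing the target multimedia frame and then continuing either in sequence or in a frame-skipping manner, might be sketched as follows. The regular skip step used here is an assumption; the description only requires that some later frames be skipped.

    # Illustrative frame schedule for S202: play the target frame, then continue.
    def frames_to_play(total_frames, target_index, skip=False, step=2):
        """target_index is the frame corresponding to the target starting point."""
        if skip:
            # Continue playing frames in a frame-skipping manner (assumed regular step).
            return [target_index] + list(range(target_index + step, total_frames, step))
        # Continue playing the remaining frames in sequence.
        return list(range(target_index, total_frames))

    print(frames_to_play(10, 4))              # -> [4, 5, 6, 7, 8, 9]
    print(frames_to_play(10, 4, skip=True))   # -> [4, 6, 8]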
In an implementation, the second multimedia data is determined in the following manner: (1) When the media switching operation includes the first interaction operation, the target terminal may determine the second multimedia data from the first data set to which the first multimedia data belongs. That is, in this case, the second multimedia data is multimedia data in the first data set other than the first multimedia data. Multimedia data in the first data set is arranged in sequence and played in sequence, and the second multimedia data may be located before or after the first multimedia data. It may be understood that, when the second multimedia data is located before the first multimedia data, the second multimedia data is multimedia data that has been played; when the second multimedia data is located after the first multimedia data, the second multimedia data is multimedia data that has not been played. In an implementation, after detecting the first interaction operation, the target terminal may, by default, select the second multimedia data from the multimedia data located after the first multimedia data in the first data set. In some embodiments, after detecting the first interaction operation, the target terminal may determine operation information of the first interaction operation; and if the operation information of the first interaction operation is first information, may select the second multimedia data from the multimedia data located after the first multimedia data in the first data set; or if the operation information of the first interaction operation is second information, may select the second multimedia data from the multimedia data located before the first multimedia data in the first data set. For example: {circle around (1)} When the first interaction operation includes the operation of inputting the first gesture, the operation information of the first interaction operation may include a gesture trajectory of the first gesture. The first information may be a first trajectory, and the second information may be a second trajectory. For example, assuming that the first gesture is a gesture of sliding left and right, the first trajectory may be a trajectory of sliding from left to right, and the second trajectory may be a trajectory of sliding from right to left. Assuming that the first gesture is the gesture of inputting “M”, the first trajectory may be a trajectory of drawing “M” from left to right, and the second trajectory may be a trajectory of drawing “M” from right to left. {circle around (2)} When the first interaction operation includes the operation of inputting the first voice, the operation information of the first interaction operation may include a voice content of the first voice. The first information may be a first content (e.g., “switch to playing the next multimedia data in the same data set”), and the second information may be a second content (e.g., “switch to playing previous multimedia data in the same data set”). 3. When the first interaction operation includes the triggering operation for the first switching element, and if the number of the first switching elements is 2 or more, the operation information of the first interaction operation may include an element identifier of the triggered first switching element. The first information may be an element identifier of the 1st first switching element, and the second information may be an element identifier of the 2nd first switching element. 
If the number of the first switching elements is one, the operation information of the first interaction operation may include a triggering manner for the first switching element, the first information may be a first manner (e.g., double-click), and the second information may be a second manner (e.g., touch and hold). {circle around (4)} When the first interaction operation is inputted by triggering the terminal physical key, the operation information involved is similar to the operation information mentioned in the case {circle around (3)} of the first interaction operation, and the details will not be repeated here. (2) When the media switching operation includes the second interaction operation, the target terminal may select multimedia data that comes first in the second data set as the second multimedia data by default; or may determine the second multimedia data from the second data set according to the user preference profile of the target user, or according to the hot spot information of the second multimedia data, or according to the user preference profile and the hot spot information of the second multimedia data. That is, in this case, the second multimedia data is multimedia data determined in the second data set according to one or more of the user preference profile of the target user and the hot spot information of the second multimedia data. The second data set is selected from data sets in a multimedia playlist other than the first data set. Data sets in the multimedia playlist are arranged in sequence and played in sequence. The first data set may be a data set that comes first in the multimedia playlist, or may be data sets at other arrangement positions, which is not limited herein. The data sets located after the first data set may be: candidate data sets determined in real time or periodically according to one or more of the user preference profile of the target user and the hot spot information of the second multimedia data. Each candidate data set may include at least one multimedia data determined according to one or more of the user preference profile and the hot spot information of the second multimedia data. The second data set mentioned in some embodiments may be located before or after the first data set. It may be understood that, when the second data set is located before the first data set, although the second data set is a data set that has been played, the second multimedia data may be multimedia data that has been played or may be multimedia data that has not been played, which is not limited herein. For example, it is assumed that the multimedia playlist includes a dataset A, a dataset B, . . . , and a dataset G; the first data set is the data set B, the second data set is the data set A, and the second multimedia data is multimedia data that comes second in the second data set. In this case, if the target user switches to playing the data set B after playing the multimedia data that comes first in the data set A, the second multimedia data has not been played. When the second data set is located after the first data set, the second data set is a data set that has not been played, and the second multimedia data is multimedia data that has not been played. 
The target terminal may select the second data set from the multimedia playlist in any one of the following manners: In an implementation, after detecting the second interaction operation, the target terminal may select the second data set by default from data sets in a multimedia recommendation list that are located after the first data set. In some embodiments, after detecting the second interaction operation, the target terminal may determine operation information of the second interaction operation; and if the operation information of the second interaction operation is third information, may select the second data set from data sets in the multimedia playlist that are located after the first data set; or if the operation information of the second interaction operation is fourth information, may select the second data set from data sets in the multimedia playlist that are located before the first data set. For example: {circle around (1)} When the second interaction operation includes the operation of inputting the second gesture, the operation information of the second interaction operation may include a gesture trajectory of the second gesture. The third information may be a third trajectory, and the fourth information may be a fourth trajectory. For example, assuming that the second gesture is the gesture of sliding up and down, the third trajectory may be a trajectory of sliding from up to down, and the fourth trajectory may be a trajectory of sliding from down to up. Assuming that the second gesture is the gesture of inputting “N”, the third trajectory may be a trajectory of drawing “N” from left to right, and the fourth trajectory may be a trajectory of drawing “N” from right to left. {circle around (2)} When the second interaction operation includes the operation of inputting the second voice, the operation information of the second interaction operation may include a voice content of the second voice. The third information may be a third content (e.g., “switch to playing the previous data set”), and the fourth information may be a fourth content (e.g., “switch to playing the next data set”). {circle around (3)} When the second interaction operation includes the triggering operation for the second switching element, and if the number of the second switching elements is 2 or more, the operation information of the second interaction operation may include an element identifier of the triggered second switching element. The third information may be an element identifier of the 1st second switching element, and the fourth information may be an element identifier of the 2nd second switching element. If the number of the second switching elements is one, the operation information of the second interaction operation may include a triggering manner for the second switching element, the third information may be a third manner (e.g., single-click or double-click), and the fourth information may be a fourth manner (e.g., press or touch and hold). {circle around (4)} When the second interaction operation is inputted by triggering the terminal physical key, the operation information involved is similar to the operation information mentioned in the case {circle around (3)} of the second interaction operation, and the details will not be repeated here. 
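One way to read the selection logic described above, where the operation information decides whether the next or the previous item is taken either within the first data set or across data sets in the multimedia playlist, is sketched below. The boolean "forward" flag standing in for the first/second (or third/fourth) information, and the default of taking the item that comes first in the other data set, are assumptions made for the example.

    # Illustrative selection of the second multimedia data (assumptions noted above).
    def pick_in_same_set(data_set, current_index, forward=True):
        # First interaction operation: next (first information) or previous (second information)
        # multimedia data in the first data set.
        idx = current_index + 1 if forward else current_index - 1
        return data_set[idx] if 0 <= idx < len(data_set) else None

    def pick_in_other_set(playlist, current_set_index, forward=True):
        # Second interaction operation: a data set after (third information) or before
        # (fourth information) the first data set; take the data that comes first in it by default.
        idx = current_set_index + 1 if forward else current_set_index - 1
        if 0 <= idx < len(playlist) and playlist[idx]:
            return playlist[idx][0]
        return None

    playlist = [["video d"], ["video A", "video B", "video C"], ["video f"]]
    print(pick_in_same_set(playlist[1], 1, forward=True))     # -> video C
    print(pick_in_same_set(playlist[1], 1, forward=False))    # -> video A
    print(pick_in_other_set(playlist, 1, forward=False))      # -> video d
    print(pick_in_other_set(playlist, 1, forward=True))       # -> video f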
Based on the relevant descriptions of the above operations S201-S202, specific scenarios where the target user inputs different interaction operations (that is, media switching operations) to trigger the terminal to switch to playing different second multimedia data are described exemplarily with reference to several specific examples where the first multimedia data is a video B in a film and television drama 1. In Example 1, it is assumed that any of the first interaction operation and the second interaction operation is inputted by a gesture. If the target user slides a finger from left to right on the terminal screen to input the first gesture (that is, the first interaction operation) when an image frame 30 in the video B is played, the target terminal may select a video C as the second multimedia data from videos in the film and television drama 1 that are after the video B, and determine the 5th second in the video C as the target starting point. Then, the target terminal may switch from displaying the image frame 30 in the video B to displaying an image frame 31 at the 5th second in the video C, and continue to display subsequent image frames, as shown in FIG. 3A. If the target user inputs the first gesture by sliding from right to left on the terminal screen, the target terminal may select a video A as the second multimedia data from videos in the film and television drama 1 that are before the video B, and determine the 15th second in the video A as the target starting point. Then, the target terminal may switch from displaying the image frame 30 in the video B to displaying an image frame 32 at the 15th second in the video A, and continue to display subsequent image frames, as shown in FIG. 3B. If the target user inputs the second gesture (that is, the second interaction operation) by sliding a finger from up to down on the terminal screen, the target terminal may select a film and television drama 0 as the second data set from film and television dramas in the multimedia playlist that are before the film and television drama 1, determine a video d in the film and television drama 0 as the second multimedia data, and determine the 16th second in the video d as the target starting point. Then, the target terminal may switch from displaying the image frame 30 in the video B to displaying an image frame 33 at the 16th second in the video d, and continue to display subsequent image frames, as shown in FIG. 3C. If the target user inputs the second gesture by sliding a finger from down to up on the terminal screen, the target terminal may select a film and television drama 2 as the second data set from film and television dramas in the multimedia playlist that are after the film and television drama 1, determine a video f in the film and television drama 2 as the second multimedia data, and determine the 36th second in the video f as the target starting point. Then, the target terminal may switch from displaying the image frame 30 in the video B to displaying an image frame 34 at the 36th second in the video f, and continue to display subsequent image frames, as shown in FIG. 3D. In Example 2, where it is assumed that any of the first interaction operation and the second interaction operation is inputted by triggering a switching component, the target terminal may display a first switching component 35 and a second switching component 36 on the terminal screen, as shown in a left part of FIG. 3E. 
During playing of the image frame 30 in the video B, if the target user performs a single-click operation on the first switching component 35 (i.e., inputs the first interaction operation in the first manner), the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 31 at the 5th second in the video C. If the target user performs a double-click operation on the first switching component 35 (i.e., inputs the first interaction operation in the second manner), the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 32 at the 15th second in the video A. If the target user performs a single-click operation on the second switching component 36 (i.e., inputs the second interaction operation in the third manner), the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 33 at the 16th second in the video d. If the target user performs a double-click operation on the second switching component 36 (that is, inputs the second interaction operation in the fourth manner), the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 34 at the 36th second in the video f. In some embodiments, the number of the first switching components and the number of the second switching components are both 2. For example, the target terminal may display a first switching component 351, a first switching component 352, a second switching component 361, and a second switching component 362 on the terminal screen, as shown in a right part of FIG. 3E. During playing of the image frame 30 in the video B, if the target user performs a triggering operation on the first switching component 351, the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 31 at the 5th second in the video C. If the target user performs a triggering operation on the first switching component 352, the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 32 at the 15th second in the video A. If the target user performs a triggering operation on the second switching component 361, the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 33 at the 16th second in the video d. If the target user performs a triggering operation on the second switching component 362, the target terminal may switch from displaying the image frame 30 in the video B to displaying the image frame 34 at the 36th second in the video f. It is to be understood that FIG. 3A-FIG. 3E only exemplarily show schematic interface diagrams when the target terminal is playing a video, and are not limited thereto. For example, when the terminal screen includes a switching component, the product form of the switching component is not limited to that shown in FIG. 3E. The switching component may also be a “Select episode” component shown in FIG. 3A-FIG. 3E, etc. It may be known from the relevant descriptions of the above operations S201-S202 that the first multimedia data and the second multimedia data may both be played starting from a starting point. Similarly, the target terminal may also play multimedia data other than the first multimedia data and the second multimedia data starting from a certain starting point. 
That is to say, for any multimedia data, in response to not detecting a media switching operation, the target terminal may also determine a starting point of the any multimedia data according to one or more of the user preference profile of the target user and hot spot information of the any multimedia data, and play the any multimedia data starting from the determined starting point. S203. Switch, during the playing of the second multimedia data, from playing a first multimedia frame in the second multimedia data to playing a multimedia frame corresponding to a new starting point, in response to a target triggering operation. The target starting point is one of the P candidate starting points, and each candidate starting point can attract the attention of the target user to a certain extent. Based on this, in the process of playing the second multimedia data based on the target starting point, the target terminal may also support the target user in inputting a target triggering operation to trigger the target terminal to switch from playing the first multimedia frame (which is a multimedia frame played in response to the target triggering operation) to playing the multimedia frame corresponding to the new starting point (which is a candidate starting point among the P candidate starting points other than the target starting point), so as to quickly switch to playing the new multimedia frame that the target user is interested in (that is, the multimedia frames corresponding to the new starting point), thereby improving the user stickiness and playback efficiency. Correspondingly, in response to detecting a target triggering operation during the playing of the second multimedia data based on the target starting point, switching from playing the first multimedia frame to playing the multimedia frame corresponding to the new starting point may be performed; and after the playing of the multimedia frame corresponding to the new starting point is finished, multimedia frames corresponding to time points after the new starting point in the second multimedia data continue to be played. The target triggering operation may be set according to an empirical value or a service requirement, or may be set by the target user through the target terminal as needed, as long as the target triggering operation does not conflict with the media switching operation mentioned above. In an implementation, the target user may input a specified target triggering operation to trigger the target terminal to randomly select a candidate starting point from the P candidate starting points other than the target starting point as the new starting point. In another implementation, the target user may input different target triggering operations to trigger the target terminal to select a new starting point from the P candidate starting points by using different starting point selection logics. Specifically, the target triggering operation may include: a first progress adjustment operation (or called “fast-forward operation”) used for instructing to select a new starting point from candidate starting points later than a first time point corresponding to the first multimedia frame, or a second progress adjustment operation (or referred to as a “backward operation”) used for instructing to select a new starting point from candidate starting points earlier than the first time point, or a custom selection operation for the P candidate starting points, etc. 
In a case that the target triggering operation includes the custom selection operation for the P candidate starting points, the new starting point may be a candidate starting point determined among the P candidate starting points according to the custom selection operation. In an implementation, an identification option of each candidate starting point may be displayed on the terminal screen. In this implementation, the custom selection operation may include: an operation of selecting any one of the identification options; and the new starting point is the candidate starting point corresponding to the selected identification option. In some embodiments, a starting point component may be displayed on the terminal screen. In this implementation, the custom selection operation may include: a custom triggering operation for the starting point component. The custom triggering operation may be a click operation of clicking or continuously clicking the starting point component within a preset time period. In this case, the new starting point may be a candidate starting point determined according to the number of clicks involved in the click operation. For example, when the number of clicks is 1, the new starting point may be the earliest candidate starting point among the P candidate starting points other than the target time point. When the number of clicks is 2, the new starting point may be the second earliest candidate starting point among the P candidate starting points other than the target time point. In some embodiments, the custom triggering operation may be a touch and hold operation of touching and holding on the starting point component. In this case, the new starting point may be a candidate starting point determined according to a holding duration involved in the touch and hold operation. For example, when the holding duration is 1 second, the new starting point may be the earliest candidate starting point among the P candidate starting points other than the target time point. When the holding duration is 2 seconds, the new starting point may be the second earliest candidate starting point among the P candidate starting points other than the target time point. In a case that the target triggering operation includes the first progress adjustment operation, the new starting point may be any candidate starting point later than the first time point corresponding to the first multimedia frame among the P candidate starting points. For example, the new starting point may be a candidate starting point that is later than the first time point and is closest to the first time point among the P candidate starting points, or a candidate starting point that is later than the first time point and is the second closest to the first time point, or a latest candidate starting point later than the first time point, etc. In a case that the target triggering operation includes the second progress adjustment operation, the new starting point may be any candidate starting point earlier than the target starting point among the P candidate starting points. For example, the new starting point may be a candidate starting point that is earlier than the first time point and is closest to the first time point among the P candidate starting points, or a candidate starting point that is earlier than the first time point and is the second closest to the first time point, or an earliest candidate starting point earlier than the first time point, etc. 
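As a sketch of the starting point selection for S203, the following function picks the closest candidate starting point later than the first time point for a fast-forward operation, and the closest earlier candidate for a backward operation; using the closest candidate is only one of the options listed above. The candidate values anticipate the FIG. 3F example described below.

    # Illustrative selection of the new starting point for S203 (closest-candidate option).
    def new_starting_point(candidate_points_s, first_time_point_s, fast_forward=True):
        if fast_forward:
            later = [t for t in candidate_points_s if t > first_time_point_s]
            return min(later) if later else None      # closest candidate later than the first time point
        earlier = [t for t in candidate_points_s if t < first_time_point_s]
        return max(earlier) if earlier else None      # closest candidate earlier than the first time point

    # P candidate starting points of 00:36, 05:36 and 30:14, expressed in seconds.
    candidates = [36, 5 * 60 + 36, 30 * 60 + 14]
    print(new_starting_point(candidates, first_time_point_s=59))                      # -> 336 (05:36)
    print(new_starting_point(candidates, first_time_point_s=59, fast_forward=False))  # -> 36 (00:36)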
For another example, the new starting point may be a candidate starting point that is earlier than a target time point and is closest to the target time point among the P candidate starting points, or a candidate starting point that is earlier than the target time point and is the second closest to the target time point, etc. It is to be understood that, similar to the first interaction operation and the second interaction operation mentioned above, any one of the first progress adjustment operation and the second progress adjustment operation may be inputted in any one of the following manners: a gesture, voice, triggering a progress adjustment element on the terminal screen, triggering a terminal physical key (such as a volume key, a power key, etc.), etc. That is to say, the first progress adjustment operation may be an operation of inputting a third gesture or a third voice, or an operation for a first progress adjustment element or a third terminal physical key, etc. The second progress adjustment operation may be an operation of inputting a fourth gesture or a fourth voice, or an operation for a second progress adjustment element or a fourth terminal physical key, etc. The third gesture and the fourth gesture mentioned herein may be determined according to a service requirement or a user requirement (or operating habit) of the target user, as long as the third gesture and the fourth gesture can be distinguished from the first gesture and the second gesture mentioned above. For example, if the first gesture and the second gesture mentioned above are gestures for inputting “M” and “N”, the third gesture and the fourth gesture may respectively be a gesture of sliding from left to right and a gesture of sliding from right to left. For another example, if the first gesture and the second gesture mentioned above are respectively a gesture of sliding from left to right and a gesture of sliding from right to left, the third gesture and the fourth gesture may respectively be gestures of inputting “M” and “N”. When the first progress adjustment operation and the second progress adjustment operation are respectively a gesture of sliding from left to right and a gesture of sliding from right to left, the sliding magnitudes of the first progress adjustment operation and the second progress adjustment operation may be related or unrelated to the result of selecting the new starting point. That the sliding magnitudes of the first progress adjustment operation and the second progress adjustment operation are unrelated to the result of selecting the new starting point means that when the target user inputs the first progress adjustment operation or the second progress adjustment operation, no matter how far the finger or other component (such as a stylus pen) slides on the terminal screen, the result of selecting the new starting point is not affected. For example, if the first progress adjustment operation specifically instructs to select the candidate time point later than the first time point and closest to the first time point as the new starting point, the target terminal always selects the candidate time point later than the first time point and closest to the first time point as the new starting point regardless of whether the target user slides the finger or other component (such as the stylus pen) for 1 cm, 5 cm, or even 10 cm on the terminal screen when inputting the first progress adjustment operation. 
Correspondingly, that the sliding magnitudes of the first progress adjustment operation and the second progress adjustment operation are related to the result of selecting the new starting point means that the result of selecting the new starting point is affected by the sliding magnitudes of the first progress adjustment operation and the second progress adjustment operation. For example, when the first progress adjustment operation instructs to select a candidate time point later than the first time point as the new starting point, the target terminal may select a candidate time point that is later than the first time point and is closest to the first time point as the new starting point if the target user slides the finger or other component (such as the stylus pen) for 1 cm on the terminal screen when inputting the first progress adjustment operation. The target terminal may select a candidate time point that is later than the first time point and is the second closest to the first time point as the new starting point if the target user slides the finger or other component (such as the stylus pen) for 5 cm on the terminal screen when inputting the first progress adjustment operation. Based on the related description of S203, how to switch to playing the multimedia frame of the new starting point is exemplarily described with reference to FIG. 3F by using an example where the target triggering operation includes the first progress adjustment operation. Specifically, for example, it is assumed that the second multimedia data is the video f as shown in FIG. 3D, the P candidate starting points of the second multimedia data include 00:36 (the 36th second), 05:36 (the 5th minute and 36th second), and 30:14 (the 30th minute and 14th second), and the target starting point is the 36th second. In this case, after playing a target image frame at the 36th second, the target terminal may continue to play remaining image frames after the 36th second. If detecting the first progress adjustment operation at the 59th second, the target terminal may determine the 5th minute and 36th second as the new starting point, and switch from playing the image frame at the 59th second (i.e., the first multimedia frame) to playing an image frame at the 5th minute and 36th second. It is to be understood that FIG. 3F is described using an example where the second multimedia data is a target video; an implementation when the second multimedia data is music is similar to the implementation shown in FIG. 3F, and the details will not be repeated herein. In some embodiments, if the target starting point is a time point among the plurality of time points other than the first time point, but the target user may intend to play starting from the head multimedia frame (that is, the multimedia frame corresponding to the first time point), the target terminal may also support the target user in inputting a head frame playback triggering operation in the process of playing the second multimedia data based on the target starting point, to trigger the target terminal to quickly jump from a second multimedia frame being currently played to the head multimedia frame, so as to play the second multimedia data starting from the head multimedia frame, thereby improving the user stickiness. 
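By way of illustration only, the selection rules described above (the number of clicks, the holding duration, and the sliding magnitude) may be sketched as follows; the function names, the centimeter step, and the representation of candidate starting points as second offsets are hypothetical and are not fixed by this description.

    # Minimal sketch (hypothetical) of selecting a new starting point from the P
    # candidate starting points. Candidates are time offsets in seconds.

    def select_by_clicks(candidates, target_start, clicks):
        # 1 click -> earliest candidate other than the target starting point,
        # 2 clicks -> second earliest, and so on.
        remaining = sorted(t for t in candidates if t != target_start)
        if not remaining:
            return None
        return remaining[min(clicks, len(remaining)) - 1]

    def select_by_hold(candidates, target_start, hold_seconds):
        # 1 second of holding -> earliest remaining candidate, 2 seconds -> second earliest, ...
        return select_by_clicks(candidates, target_start, max(1, int(hold_seconds)))

    def select_by_slide(candidates, first_time, slide_cm, step_cm=4.0):
        # Sliding magnitude related to the result: a longer slide skips to a
        # later-ranked candidate among those after the first time point.
        later = sorted(t for t in candidates if t > first_time)
        if not later:
            return None
        return later[min(int(slide_cm // step_cm), len(later) - 1)]

    # Example with the candidate starting points 00:36, 05:36, and 30:14 (in seconds):
    candidates = [36, 336, 1814]
    print(select_by_clicks(candidates, 36, 1))      # 336
    print(select_by_slide(candidates, 59, 5.0))     # 1814 (second closest later candidate)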
The head frame playback triggering operation may be set according to an empirical value or a service requirement, or may be set by the target user through the target terminal as needed, as long as the head frame playback triggering operation does not conflict with the media switching operation and the target triggering operation (e.g., the first progress adjustment operation and the second progress adjustment operation) mentioned above. For example, the head frame playback triggering operation may be an operation of inputting a fifth gesture (e.g., a gesture of drawing a small circle, a gesture of inputting “L”, etc.) or a fifth voice, or an operation for a head frame playback triggering component or a fifth terminal physical key, etc. That is to say, during the playing of the second multimedia data, i.e., during the playing of the target multimedia frame or the remaining multimedia frames, switching from playing the second multimedia frame to playing the multimedia frame corresponding to the first time point is performed in response to the head frame playback triggering operation, and after the playing of the multimedia frame corresponding to the first time point is finished, multimedia frames corresponding to time points after the first time point in the second multimedia data continue to be played. The second multimedia frame is a multimedia frame played at a moment at which the head frame playback triggering operation is performed. For example, if the target terminal detects the head frame playback triggering operation while playing the multimedia frame at the 35th second, the second multimedia frame is the multimedia frame at the 35th second. In an example where the second multimedia data is the target video, a schematic diagram of switching, by the target terminal, from playing a second image frame (the second multimedia frame) to playing a head image frame (that is, the multimedia frame at the first time point) in response to the head frame playback triggering operation may be as shown in FIG. 3G. It is to be understood that an implementation when the second multimedia data is music is similar to the implementation shown in FIG. 3G, and the details will not be repeated herein. In some embodiments, P candidate starting points may be flexibly pre-configured for second multimedia data according to actual requirements, so that in response to a media switching operation during playing of first multimedia data, a target starting point may be flexibly selected from the P candidate starting points, and the second multimedia data starts to be played based on the target starting point in the second multimedia data, so that the flexibility of multimedia playback can be effectively improved. In addition, in response to a target triggering operation during playing of the second multimedia data, switching from a first multimedia frame to another multimedia frame that meets the actual requirements (that is, a multimedia frame corresponding to a new starting point) may be automatically performed by positioning and switching the playback point, thereby reducing the playback of useless multimedia frames (multimedia frames that do not meet the actual requirements). This not only can further improve the flexibility of multimedia playback, but also can effectively save processing resources and effectively improve the playback effectiveness and playback efficiency of the second multimedia data. Based on the above descriptions, some embodiments further provide a multimedia playback method. 
The multimedia playback method may be executed by the target terminal mentioned above, or executed jointly by the target terminal and the server. For ease of description, some embodiments are mainly described using an example where the multimedia playback method is executed by the target terminal. As shown in FIG. 4, the multimedia playback method may include the following operations S401-S405. S401. Play first multimedia data. In a process of playing the first multimedia data, the target terminal may obtain a popularity value of each data set in a database, and add the data set with a large popularity value as a candidate data set into a multimedia recommendation list, so that in response to a second interaction operation, a second data set is selected from the multimedia recommendation list. In some embodiments, in the process of playing the first multimedia data, the target terminal may obtain a user preference profile of a target user. The user preference profile may be an initial user preference profile constructed according to user information of the target user (e.g., social information, surrounding environment information, etc.), or may be a profile obtained by updating the initial user preference profile according to multimedia history playback information of the target user, which is not limited herein. The user preference profile may include a first preference tag for multimedia matching and a second preference tag for starting point matching. The number of the first preference tag and the number of the second preference tag may each be one or more, which is not limited herein. The first preference tag and the second preference tag may be the same, different or partially the same. For example, the first preference tag may include “suspense drama”, “funny”, “emotional”, and “star X”, and the second preference tag may include “funny”, “star X”, “eating show”, etc. After obtaining the user preference profile, the target terminal may search the database for matching multimedia data according to the first preference tag in the user preference profile during the playing of the first multimedia data. The matching multimedia data is multimedia data corresponding to a data tag matching with the first preference tag, the matching multimedia data including one or more starting points. In an implementation, the target terminal may calculate a similarity between a feature vector of the first preference tag and a feature vector of a data tag of each multimedia data in the database, and determine the multimedia data corresponding to the data tag corresponding to a similarity greater than a similarity threshold as the matching multimedia data. In some embodiments, a tag mapping table may be preset. The tag mapping table includes a mapping relationship between multiple data tags and multiple preset preference tags. The target terminal may first determine whether the first preference tag exists in the multiple preset preference tags in the tag mapping table, and if yes, determine the multimedia data corresponding to the data tag mapped to the first preference tag as the matching multimedia data. 
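By way of illustration, the similarity-based matching just described may be sketched as follows, assuming cosine similarity over precomputed tag feature vectors (the description does not fix a particular similarity measure) and hypothetical names such as find_matching_multimedia.

    # Minimal sketch (hypothetical): selecting matching multimedia data by comparing
    # the feature vector of the first preference tag with the feature vector of the
    # data tag of each multimedia data in the database.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def find_matching_multimedia(first_tag_vector, database, similarity_threshold=0.8):
        # database: list of dicts like
        # {"id": ..., "data_tag_vector": [...], "starting_points": [...]}
        matches = []
        for item in database:
            if cosine_similarity(first_tag_vector, item["data_tag_vector"]) > similarity_threshold:
                matches.append(item)
        return matches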
Then, the target terminal may obtain a tag information set of the matching multimedia data, the tag information set of the matching multimedia data including tag information of the one or more starting points in the matching multimedia data; search the tag information set for matching tag information matching with the second preference tag; and determine the matching multimedia data to which the starting point corresponding to the matching tag information belongs as candidate multimedia data, and add a data set to which the candidate multimedia data belongs as a candidate data set to a multimedia recommendation list, so that a second data set is selected from the multimedia recommendation list in response to the second interaction operation. If a large number of pieces of matching multimedia data are selected, a weight of each matching multimedia data may further be calculated according to a weighted value of the first preference tag and a weighted value of the second preference tag, and a preset number of pieces of matching multimedia data are selected as candidate multimedia data in descending order of the weights. The weight of any matching multimedia data is calculated in the following manner: obtaining a first matching degree between the data tag of any matching multimedia data and the first preference tag and a second matching degree between the tag information of each starting point in the matching multimedia data and the second preference tag, weighting the first matching degree using the weighted value of the first preference tag to obtain a first weighted value, weighting the second matching degree using the weighted value of the second preference tag to obtain a second weighted value, and calculating a sum of the first weighted value and the second weighted value to obtain the weight of the matching multimedia data. It may be known from the relevant description of S201 in the foregoing embodiments that the multimedia recommendation list may further include a first data set in addition to the candidate data sets, and an arrangement position of the first data set is located before an arrangement position of the candidate data sets. Considering that the target user may input the second interaction operation to trigger multimedia switching in different data sets, and in order to quickly play the second multimedia data based on the target starting point for the target user in response to detecting the second interaction operation, the target terminal may preload the data sets in the multimedia recommendation list, so that after detecting the second interaction operation, the target terminal may perform playback based on the preloaded data. Considering that the second multimedia data is generally selected from neighboring data sets of the first data set (a historical data set immediately before the first data set and a candidate data set immediately after the first data set) in response to the second interaction operation, the target terminal may preload only the neighboring data sets of the first data set to save processing resources. The neighboring data sets may specifically be preloaded in the following manner: preloading multimedia data in the neighboring data sets, and multimedia frames corresponding to starting points of each multimedia data. 
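The weight calculation described above may be sketched as follows, assuming, as one possible reading, that the second matching degrees of the individual starting points are aggregated by taking their maximum; the function names and the default weighted values are hypothetical.

    # Minimal sketch (hypothetical) of the weight calculation described above.
    # The per-starting-point second matching degrees are aggregated with max(),
    # which is an assumption; the description does not fix the aggregation.

    def media_weight(first_matching_degree, starting_point_matching_degrees,
                     first_tag_weight=0.6, second_tag_weight=0.4):
        first_weighted = first_tag_weight * first_matching_degree
        second_matching = max(starting_point_matching_degrees, default=0.0)
        second_weighted = second_tag_weight * second_matching
        return first_weighted + second_weighted

    def select_candidates(matching_items, preset_number):
        # matching_items: list of (media_id, first_matching_degree, [per-starting-point degrees])
        scored = [(media_weight(fm, spm), media_id) for media_id, fm, spm in matching_items]
        scored.sort(reverse=True)                      # descending order of the weights
        return [media_id for _, media_id in scored[:preset_number]]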
In some embodiments, the neighboring data sets may specifically be preloaded in the following manner: preloading candidate multimedia data in the neighboring data sets and multimedia frames corresponding to starting points that match a user preference in each candidate multimedia data, to save processing resources. The target terminal may further obtain a preference measurement value of the target user with respect to the first multimedia data after the multimedia recommendation list is determined, the preference measurement value being used for indicating a degree of preference of the target user for the first multimedia data. In an implementation, the target terminal may output inquiry information during the playing of the first multimedia data to inquire whether the target user likes the first multimedia data, and determine the preference measurement value according to response information inputted by the target user. In some embodiments, the target terminal may obtain the preference measurement value when the first multimedia data is switched. In this case, the target terminal may determine a duration for which the first multimedia data has been played, and use the duration as the preference measurement value; or the preference measurement value of the target user with respect to the first multimedia data may be determined according to a proportional relationship between the duration and the degree of preference. After obtaining the preference measurement value, the target terminal may update the user preference profile of the target user according to the preference measurement value and a data tag of the first data set to which the first multimedia data belongs. If the preference measurement value is greater than or equal to a measurement threshold, the target terminal may add the data tag of the first multimedia data as a new first preference tag to the user preference profile of the target user to update the user preference profile. If the preference measurement value is less than the measurement threshold, the target terminal may search in the user preference profile for the first preference tag matching the data tag of the first multimedia data, and delete the found first preference tag to update the user preference profile. Then, the target terminal may update all or part of the candidate data set in the multimedia recommendation list according to the updated user preference profile. In an implementation, invalid candidate data sets may be searched for among all or part of the candidate data sets in the multimedia recommendation list according to the updated user preference profile. The invalid candidate data set is a candidate data set including at least one multimedia data of which the data tag does not match the updated user preference profile. The invalid candidate data sets are deleted from the multimedia recommendation list to realize the updating of the multimedia recommendation list. S402. Switch, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation. The second multimedia data may be target music or a target video, and the target starting point is one of the P candidate starting points. The P candidate starting points may be pre-configured according to one or more of the user preference profile of the target user and the hot spot information of the second multimedia data. 
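The profile update and the pruning of invalid candidate data sets described above may be sketched as follows; the names and the measurement threshold are hypothetical, and only one simple tag representation is assumed.

    # Minimal sketch (hypothetical) of updating the user preference profile with the
    # preference measurement value and then pruning invalid candidate data sets.

    def update_profile(profile_tags, preference_value, data_tag, measurement_threshold=0.5):
        # profile_tags: set of first preference tags; data_tag: tag of the first multimedia data
        if preference_value >= measurement_threshold:
            profile_tags.add(data_tag)          # add as a new first preference tag
        else:
            profile_tags.discard(data_tag)      # delete the matching first preference tag, if any
        return profile_tags

    def prune_recommendation_list(recommendation_list, profile_tags):
        # A candidate data set is invalid if it contains at least one multimedia data
        # whose data tag does not match the updated user preference profile.
        return [data_set for data_set in recommendation_list
                if all(media["data_tag"] in profile_tags for media in data_set["media"])]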
In an implementation, the P candidate starting points may be configured using at least one of the following manners: Configuration manner 1: The second multimedia data includes a plurality of starting points, and each of the starting points has tag information. The target terminal may calculate a matching degree between the user preference profile and the tag information of each of the starting points to obtain a calculation result. Next, the target terminal may select P target matching degrees greater than a matching threshold from the calculation result, or sequentially select P target matching degrees from the calculation result in descending order of the matching degrees. Then, the target terminal may determine the starting point corresponding to each target matching degree among the P target matching degrees as a candidate starting point. In this configuration mode, the target starting point may be selected in any one of the following manners: In an implementation, an earliest candidate starting point may be selected from the P candidate starting points as the target starting point. That is, in this implementation, the target starting point is the first candidate starting point among the P candidate starting points. In some embodiments, the candidate starting point corresponding to a maximum matching degree may be selected from the P candidate starting points as the target starting point. That is, in this implementation, the target starting point is the candidate starting point having a maximum matching degree to the user preference profile among the P candidate starting points. In some embodiments, a popularity value of each candidate starting point among the P candidate starting points (where the popularity value is used to reflect the popularity of the multimedia frame corresponding to the candidate starting point) may be obtained, and the candidate starting point having a maximum popularity value is selected from the P candidate starting points as the target starting point. That is, in this implementation, the target starting point is the most popular candidate starting point among the P candidate starting points. The popularity value of any candidate starting point may be proportional to a number of historical playbacks or a historical playback frequency of a multimedia frame corresponding to the candidate starting point. The number of historical playbacks is a number of times the multimedia frame corresponding to the candidate starting point is played by users other than the target user. It is to be understood that the several specific implementations of selecting the target starting point from the P candidate starting points are described in some embodiments by way of example only, and are not exhaustive. For example, in some other embodiments, the target terminal may calculate a playback reasonableness measurement value of each candidate starting point by a weighted summation on the matching degree and the popularity value through preset weights, and select the target starting point from the P candidate starting points according to the playback reasonableness measurement value of each candidate starting point, and so on. Configuration manner 2: The hot spot information of the second multimedia data may be used to indicate a multimedia frame with a large popularity value in the second multimedia data. The target terminal may determine a time point at which each multimedia frame indicated by the hot spot information is located as the P candidate starting points. 
In this configuration mode, the target starting point may be selected in any one of the following manners: selecting an earliest candidate starting point from the P candidate starting points as the target starting point; or selecting the target starting point from the P candidate starting points based on the user preference profile. For example, the target terminal may calculate a matching degree between the user preference profile and the tag information of each of the starting points, and then select the candidate starting point corresponding to a maximum matching degree from the P candidate starting points as the target starting point. Configuration manner 3: A matching degree between the user preference profile and the tag information of each of the starting points is calculated to obtain a first recommendation score of the starting point. Each of the starting points is scored according to the hot spot information to obtain a second recommendation score of the starting point. A scoring rule may be: for any starting point, if the multimedia frame at the starting point belongs to the multimedia frames indicated in the hot spot information, the second recommendation score of the starting point is a valid score (e.g., “0.5”, “1”, etc.); otherwise, the second recommendation score of the starting point is an invalid score (e.g., “0”). Then, a weighted summation is performed on the first recommendation score and the second recommendation score of each starting point respectively, to obtain a target recommendation score of the starting point. Finally, P starting points corresponding to target recommendation scores greater than a score threshold may be selected as the P candidate starting points; or P target recommendation scores may be selected sequentially in descending order of the target recommendation scores, and the starting points corresponding to the selected target recommendation scores are determined as candidate starting points. In this configuration mode, the target starting point may be selected in any one of the following manners: selecting an earliest candidate starting point from the P candidate starting points as the target starting point; or selecting the candidate starting point corresponding to a maximum first recommendation score from the P candidate starting points as the target starting point, in which case the target starting point is the candidate starting point having a maximum matching degree to the user preference profile among the P candidate starting points; or selecting the candidate starting point corresponding to a maximum second recommendation score from the P candidate starting points as the target starting point, in which case the target starting point is the most popular candidate starting point among the P candidate starting points. In an implementation, when the second multimedia data is a target video, a multimedia frame corresponding to the target starting point in the target video is a target image frame. In this implementation, after detecting the media switching operation, the target terminal may play a switching animation first; after the playing of the switching animation is finished, play the second multimedia data starting from the target starting point of the second multimedia data. The switching animation may be an animation in which the target image frame and a current image frame of the first multimedia data (the image frame played at the moment at which the media switching operation is detected) are moved along a target direction. 
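Before the switching animation is described in more detail, the scoring in configuration manner 3 may be illustrated with a minimal sketch; the function names, the default weights, and the valid/invalid scores are hypothetical and only show one possible weighting of the two recommendation scores.

    # Minimal sketch (hypothetical) of configuration manner 3: combining the
    # preference-based first recommendation score with a hot-spot-based second
    # recommendation score into a target recommendation score for each starting point.

    def target_scores(starting_points, first_scores, hot_spot_points,
                      w_first=0.7, w_second=0.3, valid_score=1.0, invalid_score=0.0):
        scores = {}
        for point, first in zip(starting_points, first_scores):
            second = valid_score if point in hot_spot_points else invalid_score
            scores[point] = w_first * first + w_second * second
        return scores

    def top_p_candidates(scores, p):
        # Select P starting points in descending order of the target recommendation scores.
        return sorted(scores, key=scores.get, reverse=True)[:p]

    # Example: candidates are then the P highest-scoring starting points, and the
    # target starting point may be the earliest of them or the one with the
    # maximum first or second recommendation score.
    scores = target_scores([36, 336, 1814], [0.9, 0.4, 0.7], hot_spot_points={336})
    print(top_p_candidates(scores, 2))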
A displacement distance of the target image frame or the current image frame is defined as W, where W=screen width of the target terminal (when the target terminal is in landscape mode), or W=screen height of the target terminal (when the target terminal is in portrait mode). For a landscape image frame (that is, an image frame with more pixels in the horizontal direction than in the vertical direction), when the target terminal can display the landscape image frame in full screen, it may be considered that the target terminal is in landscape mode, and when the target terminal cannot display the landscape image frame in full screen, it may be considered that the target terminal is in portrait mode. With the switching animation, the target image frame can be gradually displayed on the terminal screen. After the target image frame is completely displayed on the terminal screen, the playing of the switching animation ends. The target direction may be determined according to a specific media switching operation. For example, if the media switching operation is an operation of inputting a first gesture by sliding from left to right, the target direction may be from left to right because the second multimedia data is located after the first multimedia data, so as to intuitively prompt the target user that the second multimedia data is located after the first multimedia data. For another example, if the media switching operation is an operation of inputting a second gesture by sliding from up to down, the target direction may be from up to down because the second data set to which the second multimedia data belongs is located before the first data set. As can be seen from the above, when the second multimedia data is the target video, the multimedia frame corresponding to the target starting point in the target video is the target image frame. If the target image frame is an image frame in the target video other than the head image frame, but the target user may intend to play the target video starting from the head image frame, the target terminal may simultaneously display the target image frame and the head image frame for the target user by various methods, thereby enriching multimedia playback logics and improving the user stickiness. Refer to the following description: In an implementation, the target image frame and the head image frame may be simultaneously displayed for the target user by split-screen displaying. The terminal screen may be split into a first screen region and a second screen region in response to the media switching operation. The target video is played in the first screen region starting from the target image frame. The target video is played in the second screen region starting from the head image frame. Assuming that the target video is the video f shown inFIG.3Dabove, a schematic diagram of playing the target video in a first screen region51based on the target image frame and playing the target video in a second screen region52based on the head image frame by the target terminal may be as shown inFIG.5A. It is to be understood that the left-right screen splitting inFIG.5Ais described by way of example only, and the specific screen splitting manner is not limited thereto. For example, in some other embodiments, the terminal screen may be split into a first screen region and a second screen region by vertical screen splitting. 
For another example, in FIG. 5A, the terminal screen is equally split into a first screen region and a second screen region by left-right screen splitting. However, in some other embodiments, the areas of the first screen region and the second screen region obtained by the left-right screen splitting may also be different. After the target video is played respectively based on the target image frame and the head image frame by split-screen displaying, the target terminal may support the target user in selecting one of the screen regions to realize screen merging, and continue to play the target video in the merged terminal screen according to the playback logic corresponding to the selected screen region. Taking the first screen region as an example, the first screen region and the second screen region are merged in response to a selection operation performed on the first screen region; and the target video continues to be played on the merged terminal screen starting from a reference image frame. The reference image frame is an image frame displayed in the first screen region at a moment at which the selection operation is performed. For example, when the selection operation is detected, an image frame at the 38th second is being displayed in the first screen region. In this case, the reference image frame is the image frame at the 38th second, and the target terminal may continue to play the image frames after the 38th second in the merged terminal screen, as shown in FIG. 5B. In some embodiments, the target image frame and the head image frame may be simultaneously displayed for the target user through a video sub-page. In response to the media switching operation, a video playback interface is outputted on the terminal screen and the target video is played in the video playback interface by using the head image frame as a playback starting point. A video sub-page 53 is outputted on the video playback interface, and the target image frame is displayed in the video sub-page 53, or the target video is displayed in the video sub-page by using the target image frame as a playback starting point. The video sub-page is an interface independent of the video playback interface, that is, the video sub-page may be understood as a sub-window (or called a floating window) suspended on the video playback interface. Still assuming that the target video is the video f shown in FIG. 3D above, a schematic diagram of playing the target video in the video playback interface based on the head image frame and displaying or playing the target image frame in the video sub-page by the target terminal may be as shown in FIG. 5C. After displaying or playing the target image frame in the video sub-page, the target terminal may further support the target user in performing a triggering operation (such as a click operation, a press operation, a mid-air gesture operation, etc.) on the video sub-page to trigger the target terminal to continue to play the target video in full screen according to the playback logic of the video sub-page. Processing resources can be saved by fixedly displaying the target image frame in the video sub-page. By starting to play the target video in the video sub-page based on the target image frame, more image frames can be displayed for the target user through the video sub-page, so as to output more video information to the target user, thereby attracting the target user to trigger the video sub-page. 
Correspondingly, in a case that the video sub-page is triggered, switching from a first current image frame to a second current image frame in the video playback interface is performed, and image frames located after the second current image frame in the target video continue to be displayed. The first current image frame is an image frame displayed in the video playback interface at a moment at which the video sub-page is triggered. The second current image frame is an image frame displayed in the video sub-page at the moment at which the video sub-page is triggered. It is to be understood that, if the target image frame is always fixedly displayed in the video sub-page before the video sub-page is triggered, the second current image frame is the target image frame. If the target video is played in the video sub-page starting from the target image frame before the video sub-page is triggered, i.e., the image frame in the video sub-page is changing, the second current image frame may be the target image frame (if the video sub-page is triggered when the target image frame is displayed), or an image frame after the target image frame (if the video sub-page is triggered after the target image frame is played). For example, in the example shown in FIG. 5C, the target video is the video f shown in FIG. 3D above, and the target starting point is the 36th second in the video f. For the case where the target image frame is always fixedly displayed in the video sub-page: after the video sub-page is outputted, the image frame at the 36th second may be always fixedly displayed in the video sub-page; if the image frame at the 15th second is being displayed in the video playback interface when the video sub-page is triggered, the first current image frame is the image frame at the 15th second, and the second current image frame is still the image frame at the 36th second. Therefore, the target terminal may switch from playing the image frame at the 15th second to playing the image frame at the 36th second in the video playback interface, and continue to play the image frames after the 36th second, as shown in FIG. 5D. In the case where the target video is played in the video sub-page starting from the target image frame, after the video sub-page is outputted, the target video may be played in the video sub-page starting from the 36th second. If the image frame at the 15th second is being displayed in the video playback interface when the video sub-page is triggered, the first current image frame is the image frame at the 15th second, and the video sub-page is displaying an image frame at the 51st second, that is, the second current image frame is the image frame at the 51st second. Therefore, the target terminal may switch from playing the image frame at the 15th second to playing the image frame at the 51st second in the video playback interface, and continue to play the image frames after the 51st second, as shown in FIG. 5E. In some other embodiments, because the second current image frame has already been displayed in the video sub-page, when the video sub-page is triggered, switching from displaying the first current image frame to displaying a next image frame of the second current image frame may be performed in the video playback interface, and the image frames in the target video after the next image frame continue to be displayed. 
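The handling of a trigger on the video sub-page described above may be reduced to a small sketch; the function name and the playback positions expressed in seconds are hypothetical.

    # Minimal sketch (hypothetical): determining which frame the video playback
    # interface should jump to when the video sub-page is triggered.

    def second_current_frame_position(subpage_fixed, target_start, subpage_position):
        # If the sub-page fixedly displays the target image frame, jump to the target
        # starting point; otherwise jump to the frame the sub-page is currently playing.
        return target_start if subpage_fixed else subpage_position

    # Example: target starting point at 36 s; sub-page playing and currently at 51 s,
    # main interface at 15 s -> the main interface switches to 51 s.
    assert second_current_frame_position(False, 36, 51) == 51
    assert second_current_frame_position(True, 36, 51) == 36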
That is to say, in some other embodiments, the target terminal may also support continuing to play the target video following the second current image frame after the video sub-page is triggered, rather than displaying the second current image frame in full screen. S403. Display a playback progress axis of the second multimedia data on a terminal screen, the playback progress axis including playback progress positions corresponding to time points. S404. Display a progress marker element at a target playback progress position on the playback progress axis. The target playback progress position is the playback progress position corresponding to the target starting point on the playback progress axis. In an implementation, the progress marker element may be displayed at the target playback progress position while displaying the playback progress axis. That is, the progress marker element is displayed at the target playback progress position while switching from the first multimedia data to the second multimedia data. In some embodiments, the progress marker element may be displayed at a default playback progress position on the playback progress axis, the default playback progress position being the playback progress position corresponding to a first time point on the playback progress axis; and then, the progress marker element may be controlled to move from the default playback progress position to the target playback progress position on the playback progress axis, to display the progress marker element at the target playback progress position. By displaying the progress marker element in this implementation, a jump animation of the progress marker element may be realized, making the process more interesting. The jump animation may last for 0.2 seconds, 0.1 seconds, etc. The specific form of the progress marker element may be set to, for example, a small dot, a triangle, a five-pointed star, etc., according to a service requirement. For example, the form of the progress marker element 54 is a small dot, and a schematic diagram of controlling the progress marker element 54 to move from the default playback progress position to the target playback progress position on the playback progress axis is as shown in FIG. 5F. In an implementation, the target terminal may read a preset playback duration of the second multimedia data, and after determining the preset playback duration, may associate the preset playback duration with the playback progress axis to determine a correspondence between displacement distances of the progress marker element on the playback progress axis and time points. After the target starting point is determined, the target playback progress position may be determined according to the correspondence and the target starting point, so that after the playback progress axis is displayed, the progress marker element is displayed at the target playback progress position. After the progress marker element is displayed at the target playback progress position, the progress marker element may automatically move with the playback progress of the second multimedia data until the playing of the second multimedia data is complete. {circle around (1)} The execution order of S403-S404 and S402 is not limited. For example, S403-S404 may be performed after S402, may be performed before S402, or may be performed concurrently with S402, which is not limited herein. 
{circle around (2)} If the first multimedia data and the second multimedia data are videos in a video playback application, the target terminal may play the first multimedia data and the second multimedia data in landscape mode to expand the display range of each image frame in the first multimedia data and the second multimedia data. Moreover, in addition to the first multimedia data and the second multimedia data, other videos in the video playback application may all be played in landscape mode. That is to say, any video triggered to be played in the video playback application is played in landscape mode. The playback principle of the video playback application in this case is roughly as follows: When the target user opens the video playback application, the video playback application is displayed in a horizontal full-screen video playback mode (that is, the landscape mode) by default without requiring manual switching. To play a certain video for the target user, a starting point of the video may be set according to the user preference profile of the target user, and the video is played based on the starting point. The target user may switch the video set (that is, the data set) through the second interaction operation of sliding in the vertical direction (that is, the up-down direction) on the terminal screen in the horizontal full-screen mode. If the user watching a video A in a video set is willing to continue watching, the user may continue watching the video A, or may input the first interaction operation to switch to other videos in the video set. Each time video switching is performed, the user preference profile of the target user may be updated. For example, if relevant preference tags such as “Actor A” and “Spy War” are obtained according to a multimedia playback history of the target user, a next video is found according to the preference tags and switched to, and according to the preference tag “Actor A” in the user preference profile, a starting point of the next video may be adjusted to a time point corresponding to an image frame containing an important plot of the actor A. S405. Switch, during the playing of the second multimedia data, from playing a first multimedia frame in the second multimedia data to playing a multimedia frame corresponding to a new starting point, in response to a target triggering operation. For the implementation of S405, reference may be made to the related description of S203 in the foregoing method embodiments, and the details will not be repeated herein. In some embodiments, in response to the media switching operation during playing of the first multimedia data, the second multimedia data may be played starting from the target starting point of the second multimedia data. Since the target starting point is determined according to at least one of the user preference profile of the target user and the hot spot information, the multimedia frame corresponding to the target starting point can satisfy interest preferences of the target user to a greater extent. Therefore, playing the second multimedia data starting from the target starting point can greatly improve the attraction of the second multimedia data to the target user, making the target user more willing to continue playing the second multimedia data, thereby improving the user stickiness of the second multimedia data. 
In addition, in response to a target triggering operation during playing of the second multimedia data, switching from a first multimedia frame to a multimedia frame that can satisfy an interest preference of the target user (that is, a multimedia frame corresponding to a new starting point) may be automatically performed by positioning and switching the playback point, thereby improving the attraction of the second multimedia data to the target user and improving the user stickiness. Moreover, because the target user does not need to repeatedly drag the progress bar to find the next multimedia frame that the target user is interested in, the convenience can be effectively improved. Since the whole process does not require the target user to manually find and play data that can arouse the interest of the target user, not only the playback efficiency of multimedia data can be effectively improved, but also the playback of useless multimedia frames that are not of interest to the target user can be reduced, thereby saving processing resources and effectively improving the playback effectiveness of the second multimedia data. Based on the related descriptions of the method embodiments shown in FIG. 2 and FIG. 4, some embodiments further provide a multimedia playback method shown in FIG. 6A. The multimedia playback method may be executed by the target terminal mentioned above, or executed jointly by the target terminal and the server. For ease of description, some embodiments are mainly described using an example where the multimedia playback method is executed by the target terminal. As shown in FIG. 6A, the multimedia playback method may include the following operations S601-S602. S601. Play first multimedia data. S602. Switch, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation. The target starting point is determined according to at least one of the following information: a user preference profile of a target user and hot spot information of the second multimedia data. In an implementation, the second multimedia data includes a plurality of starting points, and each of the starting points has tag information. The target terminal may calculate a matching degree between the user preference profile and the tag information of each of the starting points to obtain a calculation result. Then, the target terminal determines the starting point corresponding to a maximum matching degree in the calculation result as the target starting point. In some embodiments, the target terminal may select P candidate starting points according to the calculation result, and then select the target starting point from the P candidate starting points. For the specific selection manner, reference may be made to the related description of S402 in the foregoing method embodiments. That is to say, in this implementation, the target starting point is determined according to the user preference profile of the target user. In some embodiments, the hot spot information of the second multimedia data may be used to indicate a multimedia frame with a large popularity value in the second multimedia data. The target terminal may determine the time point at which the multimedia frame having a maximum popularity value is located as the target starting point according to the hot spot information. 
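By way of illustration, the two determinations just described (by maximum matching degree to the user preference profile, or by maximum popularity value from the hot spot information) may be sketched as follows; the function names and data representations are hypothetical.

    # Minimal sketch (hypothetical) of determining the target starting point.

    def target_start_from_hot_spots(hot_spot_info):
        # hot_spot_info: mapping from time point (seconds) to popularity value;
        # pick the time point of the most popular multimedia frame.
        return max(hot_spot_info, key=hot_spot_info.get)

    def target_start_from_profile(starting_points, matching_degrees):
        # Alternative: the starting point with the maximum matching degree
        # between its tag information and the user preference profile.
        return max(zip(starting_points, matching_degrees), key=lambda s: s[1])[0]

    print(target_start_from_hot_spots({36: 120, 336: 950, 1814: 430}))     # 336
    print(target_start_from_profile([36, 336, 1814], [0.9, 0.4, 0.7]))     # 36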
In some embodiments, the target terminal may determine a time point at which each multimedia frame indicated by the hot spot information is located as the P candidate starting points, and then select the target starting point from the P candidate starting points. For the specific selection manner, reference may be made to the related description of S402 in the foregoing method embodiments. That is to say, in this implementation, the target starting point is determined based on the hot spot information of the second multimedia data. In some embodiments, a matching degree between the user preference profile and the tag information of each of the starting points is calculated to obtain a first recommendation score of the starting point. Each of the starting points is scored according to the hot spot information to obtain a second recommendation score of the starting point. Then, a weighted summation is performed on the first recommendation score and the second recommendation score of each starting point respectively, to obtain a target recommendation score of the starting point. Finally, the starting point having a maximum target recommendation score may be selected as the target starting point. In some embodiments, P starting points corresponding to target recommendation scores greater than a score threshold are selected as the P candidate starting points, and then the target starting point is selected from the P candidate starting points. For the specific selection manner, reference may be made to the related description of S402 in the foregoing method embodiments. That is to say, in this implementation, the target starting point is determined according to both the hot spot information of the second multimedia data and the user preference profile of the target user. {circle around (1)} For the specific playback manner used for the first multimedia data and the second multimedia data in S601 and S602 in some embodiments, reference may be made to the related descriptions of S201-S202 and S401-S402 in the above method embodiments, and the details will not be repeated herein. {circle around (2)} It may be known from the related descriptions of the foregoing method embodiments that the first multimedia data mentioned in some embodiments may be played starting from the first starting point, and the second multimedia data may be played starting from the target starting point. Similarly, the target terminal may also play multimedia data other than the first multimedia data and the second multimedia data starting from a certain starting point. That is to say, for any multimedia data, in a case that no media switching operation is detected, the target terminal may also determine a starting point of the any multimedia data according to one or more of the user preference profile of the target user and hot spot information of the any multimedia data, and sequentially and continuously play the any multimedia data starting from the determined starting point. In some embodiments, in response to the media switching operation during playing of the first multimedia data, switching from playing the first multimedia data to playing the multimedia frame corresponding to the target starting point in the second multimedia data may be performed based on the target starting point of the second multimedia data. 
Since the target starting point is determined according to at least one of the user preference profile of the target user and the hot spot information of the second multimedia data, the multimedia frame corresponding to the target starting point can satisfy interest preferences of the target user or arouse the interest of the target user to a greater extent. Therefore, playing the second multimedia data starting from the target starting point can greatly improve the attraction of the second multimedia data to the target user, making the target user more willing to continue playing the second multimedia data, thereby improving the user stickiness of the second multimedia data and the playback conversion rate of the second multimedia data. Since the whole process does not require the target user to manually find and play multimedia frames that can arouse the interest of the target user by repeatedly dragging the progress bar, not only the convenience and the playback efficiency of multimedia data can be effectively improved, but also the playback of useless multimedia frames that are not of interest to the target user can be reduced, thereby saving processing resources and effectively improving the playback effectiveness of the second multimedia data. The multimedia playback methods shown in FIG. 2, FIG. 4, and FIG. 6A are mainly described using the target terminal as the execution entity. Based on the descriptions of the embodiments shown in FIG. 2, FIG. 4, and FIG. 6A, some embodiments further provide a multimedia playback method shown in FIG. 6B. Some embodiments are mainly described using an example where the multimedia playback method is executed jointly by a target terminal and a server. The multimedia playback method may include the following operations. First, the target terminal may obtain user information of a target user when the target user uses a target application (e.g., a video playback application or a music playback application) for the first time. The user information is, for example, social information obtained through a social account (e.g., OpenID), terminal information (such as a terminal manufacturer, model (device layer), etc.) of the target terminal obtained after active authorization by the target user, surrounding environment information (e.g., a current location, resident location, etc. of the target user), behavior trajectory information obtained after authorization by the target user, a user relationship obtained through a shared network (e.g., a network hotspot, Wireless Fidelity (WIFI)) and from a user address book, application information of installed applications determined through application installation records, and so on. Then, a preference of the target user may be predicted according to the user information, so as to obtain an initial user preference profile. Different user information may correspond to different prediction methods. For example, for the surrounding environment information, the target terminal may determine other users who belong to the same area as the target user through big data analysis according to the current location or resident location of the target user, and add preference tags of the other users to the initial user preference profile. For another example, for the user relationship obtained through the shared network, associated users who frequently use the same network as the target user may be determined according to the user relationship, and preference tags of the associated users are added to the initial user preference profile. 
For another example, for the application information, an installed application may be determined according to the application information, and a preference tag of the target user may be determined according to an application function of the installed application. If the installed application often pushes adventure videos, it may be determined that the preference tag of the target user is “adventurous”. If the installed application often pushes news about a star XX, it may be determined that the preference tag of the target user is “star XX”. Then, the target terminal may send the initial user preference profile of the target user to the server. The server determines, according to a mapping relationship between user preference profiles and data tags of the multimedia data, a data tag mapped to the initial user preference profile of the target user, and stores the data identifier (e.g., multimedia A) of the multimedia data indicated by the determined data tag and the initial user preference profile (e.g., user preference profile A) of the target user in Table 1 below:

TABLE 1

User preference profile A    User preference profile B    User preference profile C    User preference profile . . .
Preference tag x . . .       Preference tag w . . .       Preference tag r . . .       Preference tag p . . .
Multimedia A                 Multimedia B                 Multimedia C                 Multimedia . . .

When the target user has a demand for playing multimedia data, the target terminal may request the server to recommend multimedia data to be played through a recommendation engine based on the user preference profile. For example, the server may deliver the multimedia data corresponding to multimedia A in Table 1 to the target terminal as first multimedia data. During the playing of the first multimedia data, the recommendation engine of the server may further obtain a relevant preference tag (e.g., a first preference tag) from the user preference profile of the target user, search a database for relevant matching multimedia data according to the relevant preference tag, and after determining the matching multimedia data, search for matching starting points in each matching multimedia data according to a second preference tag in the user preference profile. If the recommendation engine finds data matching the user preference profile through the two search operations, the recommendation engine calculates weights of all contents having matching items, determines candidate multimedia data according to the weight calculation result, and sends candidate data sets to which the candidate multimedia data respectively belongs to the target terminal. After receiving the candidate data sets, the target terminal may add the candidate data sets to a multimedia playlist, and automatically preload candidate multimedia data in neighboring multimedia data sets and multimedia frames corresponding to the matching starting points according to the arrangement position of the first multimedia data in the multimedia playlist. During the playing of the first multimedia data, a preference measurement value of the target user with respect to the first multimedia data may be obtained, and the user preference profile may be updated according to the preference measurement value. In addition, it may further be detected whether the target user likes the first multimedia data. If the target user likes the first multimedia data, the target terminal continues playing the first multimedia data. 
If the target user does not like the first multimedia data, the target terminal may perform multimedia data switching and an operation of updating the multimedia recommendation list. In addition, the target user may also implement multimedia switching by inputting a media switching operation (e.g., an operation of inputting a first gesture or a second gesture). When the target user performs multimedia switching by inputting the first gesture or the second gesture, the target terminal may immediately obtain second multimedia data from the preloaded data set according to the specific interaction operation, and play a multimedia frame corresponding to a target starting point in the preloaded second multimedia data. If the target terminal has only one player and receives only one video player controller, the target terminal may bind a buffer address of the second multimedia data to be played to the video player controller, so as to switch to playing the second multimedia data. Similar to the first multimedia data, after the target user plays the second multimedia data, the target terminal may perform interest classification according to the degree of preference of the target user for the second multimedia data, and map interest data after the classification to a preset preference tag. Then, the target terminal may send the successfully mapped preference tag to the server, so that the server updates the user preference profile with the preference tag. It may be understood that, a larger number of pieces of multimedia data played by the target user indicates richer preference tags of the user preference profile of the target user and a more accurate recommendation result. It can be seen that the decision-making costs of the target user can be reduced by helping the target user to adjust the starting point according to the user preference profile while playing multimedia data, especially multimedia data of a long preset playback duration. Moreover, the costs of previewing multimedia data can be effectively reduced by simple gestures such as sliding up and down or sliding left and right and instant playback, thereby improving the user stickiness. Based on the descriptions of the related embodiments of the multimedia playback methods shown in FIG. 2 and FIG. 4, some embodiments further provide a multimedia playback apparatus. The multimedia playback apparatus may be a computer program (including a program code) running in a target terminal. The multimedia playback apparatus may execute the multimedia playback method shown in FIG. 2 or FIG. 4. 
Referring toFIG.7, the multimedia playback apparatus may run the following units: a first playing unit701, configured to play first multimedia data; and a second playing unit702, configured to switch, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation, the second multimedia data including multimedia frames corresponding to a plurality of time points, the plurality of time points including pre-configured P candidate starting points, the target starting point belonging to the P candidate starting points, and P being a positive integer greater than 1; and the second playing unit702being further configured to switch, during the playing of the second multimedia data, from playing a first multimedia frame in the second multimedia data to playing a multimedia frame corresponding to a new starting point, in response to a target triggering operation, the first multimedia frame being a multimedia frame played at a moment at which the target triggering operation is performed, and the new starting point being a candidate starting point other than the target starting point among the P candidate starting points. In an implementation, the first multimedia data belongs to a first data set; in a case that the media switching operation includes a first interaction operation for instructing to perform multimedia switching in a same data set, the second multimedia data is multimedia data in the first data set other than the first multimedia data; and in a case that the media switching operation includes a second interaction operation for instructing to perform multimedia switching in different data sets, the second multimedia data is multimedia data in a second data set, and the second multimedia data is determined according to one or more of a user preference profile of a target user and hot spot information of the second multimedia data. In some embodiments, any of the first interaction operation and the second interaction operation is inputted in any of following manners: a gesture, voice, triggering a switching element on a terminal screen, and triggering a terminal physical key, where the switching element includes a switching component or a blank area in the terminal screen; and the switching component is displayed on the terminal screen during the playing of the first multimedia data. In some embodiments, when starting to play the second multimedia data based on the target starting point of the second multimedia data in response to the media switching operation, the second playing unit702may be further configured to execute operations of: playing a target multimedia frame corresponding to the target starting point in response to the media switching operation, the target multimedia frame being a multimedia frame corresponding to the target starting point in the second multimedia data; and after the playing of the target multimedia frame is finished, continuing to play a remaining multimedia frame located after the target multimedia frame in the second multimedia data. 
In some embodiments, in a case that the target triggering operation includes a first progress adjustment operation, the new starting point is a candidate starting point later than a first time point corresponding to the first multimedia frame and closest to the first time point among the P candidate starting points; in a case that the target triggering operation includes a second progress adjustment operation, the new starting point is a candidate starting point earlier than the target starting point and closest to the target starting point among the P candidate starting points; and in a case that the target triggering operation includes a custom selection operation for the P candidate starting points, the new starting point is a candidate starting point determined among the P candidate starting points according to the custom selection operation. In some embodiments, the second playing unit702may be further configured to execute operations of: after the playing of the multimedia frame corresponding to the new starting point is finished, continuing to play multimedia frames corresponding to time points after the new starting point in the second multimedia data. In some embodiments, the target starting point is a time point other than a first time point among the plurality of time points; and correspondingly, the second playing unit702may be further configured to execute operations of: switching, during the playing of the second multimedia data, from playing a second multimedia frame to playing a multimedia frame corresponding to the first time point, in response to a head frame playback triggering operation, the second multimedia frame being a multimedia frame played at a moment at which the head frame playback triggering operation is performed; and after the playing of the multimedia frame corresponding to the first time point is finished, continuing to play multimedia frames corresponding to time points after the first time point in the second multimedia data. In some embodiments, the second playing unit702may be further configured to execute operations of: displaying a playback progress axis of the second multimedia data on a terminal screen, the playback progress axis including playback progress positions corresponding to time points; and displaying a progress marker element at a target playback progress position on the playback progress axis, the target playback progress position being the playback progress position corresponding to the target starting point on the playback progress axis. In some embodiments, when displaying the progress marker element at the target playback progress position on the playback progress axis, the second playing unit702may be further configured to execute operations of: displaying the progress marker element at a default playback progress position on the playback progress axis, the default playback progress position being the playback progress position corresponding to a first time point on the playback progress axis; and controlling the progress marker element to move from the default playback progress position to the target playback progress position on the playback progress axis, to display the progress marker element at the target playback progress position. 
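A minimal sketch, under assumed semantics, of how the new starting point might be chosen among the P candidate starting points for the three target triggering operations described above; the function name and arguments are illustrative only.

```python
# Sketch (assumed semantics) of selecting the new starting point among the P candidate
# starting points for the three target triggering operations described above.
import bisect

def new_starting_point(candidates, target_start, current_time, operation, custom_index=None):
    pts = sorted(candidates)
    if operation == "first_progress_adjustment":
        # Earliest candidate later than the time point of the currently played frame.
        i = bisect.bisect_right(pts, current_time)
        return pts[i] if i < len(pts) else None
    if operation == "second_progress_adjustment":
        # Latest candidate earlier than the target starting point.
        i = bisect.bisect_left(pts, target_start)
        return pts[i - 1] if i > 0 else None
    if operation == "custom_selection":
        # The user picks one of the P candidates directly.
        return pts[custom_index]
    raise ValueError("unknown target triggering operation")

candidates = [10.0, 45.0, 90.0, 150.0]
print(new_starting_point(candidates, target_start=45.0, current_time=60.0,
                         operation="first_progress_adjustment"))   # -> 90.0
print(new_starting_point(candidates, target_start=45.0, current_time=60.0,
                         operation="second_progress_adjustment"))  # -> 10.0
```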
In some embodiments, the second multimedia data is a target video, a multimedia frame corresponding to the target starting point in the target video is a target image frame, and the target image frame is an image frame in the target video other than a head image frame; and correspondingly, when starting to play the second multimedia data based on the target starting point of the second multimedia data in response to the media switching operation, the second playing unit702may be further configured to execute operations of: splitting a terminal screen into a first screen region and a second screen region in response to the media switching operation; playing the target video in the first screen region starting from the target image frame; and playing the target video in the second screen region starting from the head image frame. In some embodiments, the second playing unit702may be further configured to execute operations of: merging the first screen region and the second screen region in response to a selection operation performed on the first screen region; and continuing to play the target video on the merged terminal screen, starting from a reference image frame, the reference image frame being an image frame displayed in the first screen region at a moment at which the selection operation is performed. In some embodiments, the second multimedia data is a target video, a multimedia frame corresponding to the target starting point in the target video is a target image frame, and the target image frame is an image frame in the target video other than a head image frame; and correspondingly, when starting to play the second multimedia data based on the target starting point of the second multimedia data in response to the media switching operation, the second playing unit702may be further configured to execute operations of: outputting a video playback interface on a terminal screen and playing the target video in the video playback interface by using the head image frame as a playback starting point, in response to the media switching operation; outputting a video sub-page on the video playback interface, and displaying the target image frame in the video sub-page, or playing the target video in the video sub-page by using the target image frame as a playback starting point, the video sub-page being an interface independent of the video playback interface; and switching from a first current image frame to a second current image frame in the video playback interface and continuing to display image frames located after the second current image frame in the target video, in a case that the video sub-page is triggered, the first current image frame being an image frame displayed in the video playback interface at a moment at which the video sub-page is triggered, and the second current image frame being an image frame displayed in the video sub-page at the moment at which the video sub-page is triggered. In some embodiments, the first multimedia data and the second multimedia data are a video in a video playback application; and any video triggered to be played in the video playback application is played in a landscape mode. 
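The split-screen behavior described above may be sketched as a small state object in which the two screen regions advance independently until the first region is selected and the regions are merged; the class below is a hypothetical illustration rather than the apparatus's actual player logic.

```python
# Hypothetical sketch of the split-screen playback described above: the first region plays
# from the target image frame, the second from the head image frame, and selecting the
# first region merges the screen and continues from the reference image frame shown there.

class SplitScreenPlayback:
    def __init__(self, target_frame_index):
        self.regions = {"first": target_frame_index, "second": 0}  # current frame indices
        self.merged = False

    def tick(self):
        # Both regions advance independently while the screen is split.
        if not self.merged:
            for region in self.regions:
                self.regions[region] += 1

    def select_first_region(self):
        # Merge the regions and continue from the first region's reference frame.
        self.merged = True
        return self.regions["first"]

playback = SplitScreenPlayback(target_frame_index=500)
for _ in range(3):
    playback.tick()
print(playback.select_first_region())   # -> 503, the reference image frame
```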
In some embodiments, the user preference profile includes a first preference tag for multimedia matching and a second preference tag for starting point matching; and correspondingly, the second playing unit702may be further configured to execute operations of: searching a database for matching multimedia data according to the first preference tag in the user preference profile during the playing of the first multimedia data, the matching multimedia data being multimedia data corresponding to a data tag matching with the first preference tag, the matching multimedia data including one or more starting points; obtaining a tag information set of the matching multimedia data, the tag information set including tag information of the one or more starting points in the matching multimedia data; searching the tag information set for matching tag information matching with the second preference tag; determining the matching multimedia data to which the starting point corresponding to the matching tag information belongs as candidate multimedia data, and adding a data set to which the candidate multimedia data belongs as a candidate data set to a multimedia recommendation list, so that a second data set is selected from the multimedia recommendation list in response to the second interaction operation. In some embodiments, the second playing unit702may be further configured to execute operations of: obtaining a preference measurement value of the target user with respect to the first multimedia data after the multimedia recommendation list is determined, the preference measurement value being used for indicating a degree of preference of the target user for the first multimedia data; updating the user preference profile of the target user according to the preference measurement value and a data tag of the first data set to which the first multimedia data belongs; and updating all or part of the candidate data set in the multimedia recommendation list according to the updated user preference profile. In some embodiments, the P candidate starting points are pre-configured according to the user preference profile of the target user; and the second multimedia data includes a plurality of starting points, each of the starting points has tag information, and correspondingly, the second playing unit702may be further configured to execute operations of: calculating a matching degree between the user preference profile and the tag information of each of the starting points to obtain a calculation result; selecting P target matching degrees greater than a matching threshold from the calculation result, or sequentially selecting P target matching degrees from the calculation result in descending order of the matching degrees; and determining the starting point corresponding to each target matching degree among the P target matching degrees as a candidate starting point. 
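The pre-configuration of the P candidate starting points from matching degrees, described at the end of the preceding paragraph, might look roughly as follows. The Jaccard-style matching degree is an assumption, since no particular metric is fixed above.

```python
# Assumed sketch of pre-configuring the P candidate starting points from the matching
# degree between the user preference profile and each starting point's tag information.

def select_candidate_starting_points(tagged_points, profile_tags, p, matching_threshold=None):
    # tagged_points: {time_point: set of tags}; profile_tags: set of preference tags.
    scored = []
    for time_point, tags in tagged_points.items():
        # Simple Jaccard-style matching degree; the real metric is not specified.
        degree = len(tags & profile_tags) / max(len(tags | profile_tags), 1)
        scored.append((degree, time_point))
    scored.sort(reverse=True)   # descending order of matching degrees
    if matching_threshold is not None:
        scored = [item for item in scored if item[0] > matching_threshold]
    return [time_point for _, time_point in scored[:p]]

points = {0.0: {"intro"}, 35.0: {"car chase", "action"}, 80.0: {"action", "fight"}}
print(select_candidate_starting_points(points, {"action", "fight"}, p=2))   # -> [80.0, 35.0]
```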
According to some embodiments, units in the multimedia playback apparatus shown inFIG.7may be separately or wholly combined into one or several other units, or one (or more) of the units herein may further be divided into multiple units of smaller functions. In this way, same operations can be implemented, and implementation of the technical effects of some embodiments is not affected. The foregoing units are divided based on logical functions. In some embodiments, a function of one unit may also be implemented by a plurality of units, or functions of a plurality of units are implemented by one unit. In some embodiments, the multimedia playback apparatus may also include other units. In some embodiments, the functions may also be cooperatively implemented by other units and may be cooperatively implemented by a plurality of units. According to some embodiments, a computer program (including program code) that can perform the operations in the corresponding method shown inFIG.2orFIG.4may run on a general computing device, such as a computer, which include processing elements and storage elements such as a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM), to construct the multimedia playback apparatus shown inFIG.7, and implement the multimedia playback method in some embodiments. The computer program may be recorded in, for example, a computer readable recording medium, and may be loaded into the foregoing computing device by using the computer readable recording medium, and run in the computing device. In some embodiments, P candidate starting points may be flexibly pre-configured for second multimedia data according to actual requirements, so that in response to a media switching operation during playing of first multimedia data, a target starting point may be flexibly selected from the P candidate starting points, and switching from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data is performed based on the target starting point in the second multimedia data, so that the flexibility of multimedia playback can be effectively improved. In addition, in response to a target triggering operation during playing of the second multimedia data, switching from a first multimedia frame to another multimedia frame that meets the actual requirements (that is, a multimedia frame corresponding to a new starting point) may be automatically performed by positioning and switching the playback point, thereby reducing the playback of useless multimedia frames (multimedia frames that do not meet the actual requirements). This not only can further improve the flexibility of multimedia playback, but also can effectively save processing resources and effectively improve the playback effectiveness and playback efficiency of the second multimedia data. Based on the descriptions of the related embodiments of the multimedia playback methods shown inFIG.6A, some embodiments further provide a multimedia playback apparatus. The multimedia playback apparatus may be a computer program (including a program code) running in a target terminal. The multimedia playback apparatus may execute the multimedia playback method shown inFIG.6A. 
Referring toFIG.8, the multimedia playback apparatus may run the following units: a playing unit801, configured to play first multimedia data; and a switching unit802, configured to switch, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation, the target starting point being determined according to at least one of following information: a user preference profile of a target user and hot spot information of the second multimedia data. According to some embodiments, units in the multimedia playback apparatus shown inFIG.8may be separately or wholly combined into one or several other units, or one (or more) of the units herein may further be divided into multiple units of smaller functions. In this way, same operations can be implemented, and implementation of the technical effects of some embodiments is not affected. The foregoing units are divided based on logical functions. In some embodiments, a function of one unit may also be implemented by a plurality of units, or functions of a plurality of units are implemented by one unit. In some embodiments, the multimedia playback apparatus may also include other units. In some embodiments, the functions may also be cooperatively implemented by other units and may be cooperatively implemented by a plurality of units. According to some embodiments, a computer program (including program code) that can perform the operations in the corresponding method shown inFIG.6Amay run on a general computing device, such as a computer, which include processing elements and storage elements such as a central processing unit (CPU), a random access memory (RAM), and a read-only memory (ROM), to construct the multimedia playback apparatus shown inFIG.8, and implement the multimedia playback method in some embodiments. The computer program may be recorded in, for example, a computer readable recording medium, and may be loaded into the foregoing computing device by using the computer readable recording medium, and run in the computing device. In some embodiments, in response to the media switching operation during playing of the first multimedia data, switching from playing the first multimedia data to playing the multimedia frame corresponding to the target starting point in the second multimedia data may be performed based on the target starting point of the second multimedia data. Since the target starting point is determined according to at least one of the user preference profile of the target user and the hot spot information of the second multimedia data, the multimedia frame corresponding to the target starting point can satisfy interest preferences of the target user or arouse the interest of the target user to a greater extent. Therefore, playing the second multimedia data starting from the target starting point can greatly improve the attraction of the second multimedia data to the target user, making the target user more willing to continue playing the second multimedia data, thereby improving the user stickiness of the second multimedia data and the playback conversion rate of the second multimedia data. 
Since the whole process does not require the target user to manually find and play multimedia frames that can arouse the interest of the target user by repeatedly dragging the progress bar, not only the convenience and the playback efficiency of multimedia data can be effectively improved, but also the playback of useless multimedia frames that are not of interest to the target user can be reduced, thereby saving processing resources and effectively improving the playback effectiveness of the second multimedia data. Based on the descriptions of the foregoing method embodiments and apparatus embodiments, some embodiments further provide a target terminal (terminal for short). Referring toFIG.9, the terminal at least includes a processor901, an input interface902, an output interface903, and a computer storage medium904. The processor901, the input interface902, the output interface903, and the computer storage medium904in the terminal may be connected by a bus or in another manner. The computer storage medium904may be stored in a memory of the terminal. The computer storage medium904is configured to store a computer program. The computer program includes program instructions. The processor901is configured to execute the program instructions stored in the computer storage medium904. The processor901(or referred to as a central processing unit, CPU) is a computing core and control core of the terminal, which is adapted to implement one or more instructions, and specifically, adapted to load and execute one or more instructions to implement corresponding method processes or corresponding functions. In another embodiment, the processor901described in some embodiments may be configured to perform a series of multimedia playback processing, including: playing first multimedia data; switching, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation, the second multimedia data including multimedia frames corresponding to a plurality of time points, the plurality of time points including pre-configured P candidate starting points, the target starting point belonging to the P candidate starting points, and P being a positive integer greater than 1; and switching, during the playing of the second multimedia data, from playing a first multimedia frame in the second multimedia data to playing a multimedia frame corresponding to a new starting point, in response to a target triggering operation, the first multimedia frame being a multimedia frame played at a moment at which the target triggering operation is performed, and the new starting point being a candidate starting point other than the target starting point among the P candidate starting points. In another embodiment, the processor901described in some embodiments may be configured to perform a series of multimedia playback processing, including: playing first multimedia data; switching, based on a target starting point of second multimedia data, from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data, in response to a media switching operation, the target starting point being determined according to at least one of following information: a user preference profile of a target user and hot spot information of the second multimedia data. 
Some embodiments may further provide a non-transitory computer storage medium. The computer storage medium is a memory device in a terminal and is configured to store programs and data. As can be understood, the computer storage medium herein may include an internal storage medium of the terminal and may further include an extended storage medium supported by the terminal. The computer storage medium provides storage space, and the storage space stores an operating system of the terminal. In addition, the storage space further stores one or more program instructions suitable for being loaded and executed by the processor901. The instructions may be one or more computer programs (including program code). The computer storage medium herein may be a high-speed RAM memory, or may be a non-volatile memory, such as at least one magnetic disk storage. In some embodiments, the computer storage medium may be at least one computer storage medium far away from the foregoing processor. In an embodiment, the processor901may load and execute one or more instructions stored in the computer storage medium, to implement the method operations in the embodiments of the multimedia playback method shown inFIG.2,FIG.4, orFIG.6A. In some embodiments, P candidate starting points may be flexibly pre-configured for second multimedia data according to actual requirements, so that in response to a media switching operation during playing of first multimedia data, a target starting point may be flexibly selected from the P candidate starting points, and switching from playing the first multimedia data to playing a multimedia frame corresponding to the target starting point in the second multimedia data is performed based on the target starting point in the second multimedia data, so that the flexibility of multimedia playback can be effectively improved. In addition, in response to a target triggering operation during playing of the second multimedia data, switching from a first multimedia frame to another multimedia frame that meets the actual requirements (that is, a multimedia frame corresponding to a new starting point) may be automatically performed by positioning and switching the playback point, thereby reducing the playback of useless multimedia frames (multimedia frames that do not meet the actual requirements). This not only can further improve the flexibility of multimedia playback, but also can effectively save processing resources and effectively improve the playback effectiveness and playback efficiency of the second multimedia data. According to some embodiments, a computer program product or a computer program may be provided, the computer program product or the computer program including computer instructions, the computer instructions being stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device performs the methods provided in various embodiments of the foregoing multimedia playback method shown inFIG.2,FIG.4, orFIG.6A. The foregoing embodiments are used for describing, instead of limiting, the technical solutions of the disclosure. 
A person of ordinary skill in the art shall understand that although the disclosure has been described in detail with reference to the foregoing embodiments, modifications can be made to the technical solutions described in the foregoing embodiments, or equivalent replacements can be made to some technical features in the technical solutions, provided that such modifications or replacements do not cause the essence of corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the disclosure.
11943511
DETAILED DESCRIPTION As referred to herein, the term “media asset” should be understood to refer to an electronically consumable user asset, e.g., television programming, as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, webcasts, etc.), video clips, audio, playlists, websites, articles, electronic books, blogs, social media, applications, games, and/or any other media or multimedia, and/or combination of the above. FIG.1Ashows a block diagram of an illustrative system for providing supplemental content relevant to metadata of a particular scene of a media asset, in accordance with some embodiments of this disclosure. A media application (e.g., executed at least in part on user equipment104) may generate for display media asset102on user equipment104, e.g., in response to receiving a user request to view media asset102. Media asset102may be generated for display from a broadcast or stream received at user equipment104, or from a recording stored in a memory of user equipment104and/or a remote server (e.g., from media content source602or server604ofFIG.6). User equipment104may be, e.g., a television and/or may include an integrated display, e.g., on a smartphone or tablet, or may by connected to an external display device, e.g., a television. When generating for presentation scene101of media asset102, the media application may provide progress bar106overlaid on scene101(e.g., associated with a timestamp range of 0:00-8:00 as shown at134ofFIG.1B) and indicating a current play position time108(e.g., 8:00 minutes) within total time duration110(e.g., 30 minutes) of media asset102. Scene101of media asset102may comprise one or more frames and depict a plurality of objects, such as, for example, first actor112walking near a building114(e.g., a bank). The object may be a person, an item, a product, a location, a landmark, or any other suitable object. The media application may identify objects in scene101using any suitable technique. For example, the media application may receive (e.g., from media content source602or server604ofFIG.6) metadata (e.g., metadata of category128for time period134of data structure150shown inFIG.1B) including detailed information about the objects associated with particular timestamps associated with frames or segments of media asset102, and such metadata may be stored in data structure150. In some embodiments, detecting an object may comprise utilizing one or more techniques for object recognition such as, for example, image processing, edge detection, color pattern recognition, partial linear filtering, regression algorithms, and neural network pattern recognition. The media guidance application may perform image analysis on each object that is detected to determine the identity of each object, and may be configured to search a database of videos and associated objects for each of the plurality of candidate objects. The media application may determine that scene101is associated with a portion of media asset102that corresponds to an introduction or exposition116of media asset102. The media application may employ any suitable technique to identify a plot structure category or other suitable scene identifier that a particular segment or scene of media asset102corresponds to. 
For example, the media application may utilize one or more of metadata items associated with a particular scene (e.g., manually labeled by human curators based on a review of media asset102), analysis of audiovisual attributes of the particular scene (e.g., genre-based heuristics), and/or viewership information related to consumption of the media asset by a plurality of users (e.g., online plot reviews, viewership curves, social media activity), etc. In some embodiments, the media guidance application may search through metadata associated with each scene of the media asset, and/or utilize any other suitable technique, to extract or generate a set of tags that identify themes in the respective scene, e.g., “exposition,” “fight,” “car chase,” “plot twist,” “inciting incident,” “climax,” “resolution,” “denouement,” etc. The media application may identify a timestamp within media asset102of a significant occurrence in media asset102, e.g., the climax or a fight. For example, the media application may reference the timestamp of a current scene and associate such timestamp with the identified plot structure category or other scene identifier of a particular scene or segment. Scene103(e.g., associated with a timestamp range of 8:01-19:59 as shown at136ofFIG.1B) of media asset102may depict first actor112(e.g., a police officer, a superhero, or Good Samaritan) running towards and chasing second actor118, upon first actor112realizing that second actor118has robbed bank114and is running away with money stolen from the bank. The media application may identify these objects and determine that current play position time108(e.g., 10 minutes) of scene103is associated with a plot structure category of “Rising Action/Conflict”121using any of the aforementioned suitable techniques. For example, the media application may reference metadata labeling scene103as “Rising Action/Conflict”121, and/or the media application may reference metadata of media asset102indicating a genre of “Action” for media asset102. Based on this genre, the media application may perform a heuristic-based analysis to determine the plot structure category or other scene identifier. To determine whether scene103is a particular scene of interest, the media application may analyze scene103to determine the occurrence of fast-paced and/or loud audio above a certain threshold, actors112,118rapidly moving, important actors or characters being depicted together in a scene, the scene being towards the end of the movie, the occurrence of violence or a car chase, etc. For example. scene109(e.g., associated with the timestamp of 20:00-26:59 of media asset102) may be determined to correspond to (e.g., constitute or otherwise form part of) a particular scene (e.g., a climax of media asset102) due to the media application detecting audio of a certain level (e.g., 80 dB) which exceeds a predefined threshold (e.g., 75 dB) in conjunction with other factors. For example, such audio level above the predefined threshold may be determined by the media application to begin to occur at the timestamp of 20:00 of scene109for a predefined period of time (e.g., 5 seconds), which may signify the beginning of the particular scene of interest, such as the climax or other segment of media asset102. The end of the particular scene (e.g., the climax) may be identified based on the media application determining that the audio level has dropped below the threshold (e.g., for a predetermined period of time, such as, for example, 10 seconds). 
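The audio-level heuristic just described may be sketched as follows; the 75 dB threshold and the 5-second and 10-second hold durations come from the example above, while the per-second sampling and the function name are assumptions.

```python
# Illustrative sketch of the audio-level heuristic: the scene of interest begins once the
# level stays above the threshold for a minimum duration and ends once it stays below the
# threshold for another minimum duration.

def detect_scene_of_interest(levels_db, threshold_db=75.0, start_hold=5, end_hold=10):
    # levels_db: audio level per second; returns (start_second, end_second) or None.
    start = end = None
    above = below = 0
    for t, level in enumerate(levels_db):
        if start is None:
            above = above + 1 if level > threshold_db else 0
            if above >= start_hold:
                start = t - start_hold + 1
        else:
            below = below + 1 if level < threshold_db else 0
            if below >= end_hold:
                end = t - end_hold + 1
                break
    return (start, end) if start is not None else None

levels = [60] * 1200 + [80] * 400 + [60] * 200   # loud segment from second 1200 to 1599
print(detect_scene_of_interest(levels))           # -> (1200, 1600)
```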
Such audio-level determinations may be coupled with the determination by the media application that scene109(e.g., the climax) is occurring towards the end of media asset102(e.g., in the second half of the playing time) and thus may be more likely to reflect a scene of interest to the user. As an example, scene109(e.g., associated with the timestamp of 20:00-26:59 of media asset102) may be determined to correspond to (e.g., constitute or otherwise form part of) a particular scene (e.g., a climax of media asset102) due to one or more of being 75% toward the end of the movie, having audio above a predefined threshold and/or depicting an amount of violence above a predefined threshold, the remaining scenes of the media asset comprising dialogue and minimal movement until the end of the media asset, and/or any other suitable factors. However, given the relatively early timing of scene103within duration110(e.g., 10 minutes into a 30-minute runtime, roughly at the ⅓ mark of media asset102), the media application may make a heuristic-based determination that it is unlikely that scene103corresponds to a particular scene of interest (e.g., a climax) of media asset102. Additionally or alternatively, the media application may reference, retrieve or generate a viewership curve associated with media asset102, e.g., based on whether particular scenes were skipped, a number of users that consumed a certain scene, a number of (and/or contents of) social media comments associated with a particular scene, etc. Such viewership information may be received, e.g., from media content source602or server604ofFIG.6. In some embodiments, any suitable scene of media asset102may be identified by the media application as a particular scene of interest for which associated supplemental content may be generated. For example, any scene determined by the media application as likely to capture the attention of the user consuming media asset102and/or any scene associated with a product placement likely to interest the user may be identified by the media application as the particular scene, regardless of whether the particular scene corresponds to the climax of media asset102. For example, a fight scene or car chase occurring relatively early in the playing of a media asset may not correspond to the climax of the media asset but may nonetheless feature a product (e.g., a sports car) and/or other features likely to interest the user. Scene105(e.g., associated with a timestamp range of 8:01-19:59 as shown at136ofFIG.1B) of media asset102may depict first actor112continuing to chase second actor118, and may introduce cars120,122that first actor112and second actor118are respectively running towards. In some embodiments, the media application may compute certain scores (e.g., computed scores of category130ofFIG.1Bincluding one or more of a product placement strength score and/or a product price score) for each identified object, e.g., cars120,122. For example, for car120, a product placement strength score may be weighted depending on a variety of factors, e.g., whether actor112touching or riding in or otherwise associated with car120is a popular actor; whether actor112is discussing car120in a positive way or at all; how many pixels of the screen car120is associated with in a current scene; social media activity discussing car120; whether a user profile of a user consuming media asset102specifies preferences or viewing histories indicative of a likely interest in car120or actor112, etc. 
Based on the techniques described above, the media application may determine whether a scene corresponds to a particular scene of interest (e.g., a climax, or another scene which is determined to be associated with scores above a certain threshold) for which supplemental content may be generated or retrieved. In some embodiments, computed scores of category130may include a product price score, e.g., the media application may query a search engine or invoke an API call to a website or an application associated with car120or otherwise offering car120for sale to determine whether a current price of car120reflects car120being on sale relative to a typical price of car120(e.g., represented as a percent offer over a regular sticker price). In some embodiments, the product price score may be weighted based on the inference that the lower the price, the higher the likelihood of user interest in car120. Such a product price score may be taken into account in determining whether a scene corresponds to a particular scene of interest. Scene107(e.g., associated with a timestamp range of 8:01-19:59 as shown at136ofFIG.1B) of media asset102may depict a car chase in which car120(being driven by first actor112) is chasing car122, being driven by second actor118. Computed scores of category130may be computed for scene107based on the techniques discussed above and below, and the media application may determine that scene107continues to correspond to a plot structure category of “Rising Action/Conflict”121. In some embodiments, the computed scores may be used for various purposes, e.g., to determine whether (and which) supplemental content should be generated for display and/or retrieved in connection with a particular scene. The computed scores may be employed in determining whether a scene corresponds to a particular scene of interest, e.g., whether the scene comprises a product placement for which the media application determines supplemental content should be generated for display. Scene109(e.g., associated with a timestamp range of 20:00-26:59 as shown at138ofFIG.1B) of media asset102may depict a conclusion of the car chase in which car120(having been driven by first actor112) has caused car122(having been driven by second actor118) to flip, leading to a fistfight between actor112and actor118, each having exited their respective vehicles. Scene109may be determined by the media application to correspond to a time of the particular scene (e.g., climax124) of media asset102based on one or more of a variety of factors. For example, metadata of category128corresponding to timestamp138(within which scene109falls) may indicate that scene109corresponds to the climax. As another example, the media application may determine that a current play position108(e.g., 20 minutes) within duration110(e.g., 30 minutes) may weigh in favor of a determination that a current scene corresponds to the climax, e.g., based on the inference that the scene is towards the end of media asset102where a climax typically occurs; but based on the determined remaining time (e.g., 10 minutes), enough time still remains to resolve any outstanding conflicts after the climax prior to the conclusion of media asset102. 
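The product price score mentioned above, expressed as a percent discount of the best current offer over the regular sticker price, can be sketched in a few lines; the helper name and the rounding are assumptions.

```python
# Minimal sketch, under assumed semantics, of a product price score computed as the
# percent discount of the best current offer relative to the regular sticker price.

def product_price_score(sticker_price, best_offer_price):
    discount = max(0.0, sticker_price - best_offer_price) / sticker_price
    return round(discount * 100)   # e.g., 25 for a 25% discount

print(product_price_score(40000, 30000))   # -> 25
```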
Additionally or alternatively, the media application may analyze audiovisual attributes of scene109and may determine that the significant amount of action having taken and taking place (e.g., a flipped car, a fistfight, violence involving main actor112), taken together with the genre of media asset102(e.g., “Action”) may indicate that current scene109is a time of a particular scene of interest (e.g., a climax or otherwise constitutes a scene of interest in connection with identifying supplemental content associated therewith). In some embodiments, different heuristics may be employed by the media application based on the genre of the media asset. For example, the media application may identify portions of a media asset as being of the genre of a romantic comedy, and identify portions of the media asset in which the main characters are together for a relatively long period of time and/or kissing or a proposal is occurring or is likely to occur, and determine that such an identified scene weighs in favor of a determination of a time of a particular scene of interest (e.g., a climax of media asset102). As another example, if the genre corresponds to a sports movie, the media application may identify certain scenes (e.g., a last-second shot in a championship basketball game) as weighing in favor of a determination of a time of a particular scene of interest (e.g., a climax or critical moment likely to capture the attention of the user). Additionally or alternatively, the media application may reference or otherwise generate information related to viewership information (e.g., online plot reviews, social media activity, viewership curves associated with the particular scene), which may weigh in favor of a determination of a time of the particular scene of interest. In some embodiments, computed scores of category130may inform the determination of whether a current scene corresponds to the particular scene of interest (e.g., the climax), e.g., time138may be determined to correspond to the particular scene of interest (e.g., a climax) based at least in part on the computed scores of category130for time138exceeding computed scores of other scenes of media asset102. The media application, e.g., in response to determining that scene109corresponds to a particular scene of interest (e.g., climax124of media asset102and/or a fight scene and car chase of media asset102), may identify supplemental content relevant to metadata of the media asset. In some embodiments, supplemental content may be text, graphics, video, or any other visual (and/or audio) depiction of information related to metadata of the media asset, e.g., an advertisement, a website, auxiliary information or auxiliary videos related to a particular product or service, such as, for example, a product or service shown, discussed, or otherwise referenced in the media asset. For example, supplemental content may be related to a promotion, sale, coupon, discount, newly available product, wish list of the user, or any other information the advertiser wishes to inform the user about to entice him or her to purchase goods or a service. As shown inFIG.1B, category132of data structure132specifies supplemental content132for particular timestamps. The identified particular scene of interest (e.g., climax124) may be associated with a URL related to supplemental content142associated with a video related to car120as well as a URL at which the video related to car122may be accessed. 
For example, a product placement strength score for scene109may be determined based on combining (e.g., computing an average, adding together, etc.) numerical scores of a variety of factors, e.g., a timing in the plotline of the product placement; a character and/or actor associated with the product placement; a prominence of the product placement; whether user preferences of the user consuming media asset102indicated an interest in the product placement. Such scores may be output by a trained machine learning model, which may be trained with labeled examples indicating scores associated with each training example. In some embodiments, the product placement strength score for car120(“Car A”) ofFIG.1Ain scene109corresponding to timestamp138may be computed in the following illustrative manner. A score for timing in the plotline may be assigned based on whether the particular scene is likely to be of interest to the user (e.g., proximity to the climax or proximity to a fight scene), e.g., a score of 100 may be assigned since scene109corresponds to a particular scene of interest (e.g., the climax). A score for the character or actor may be 90, e.g., the main character may be assigned a score of 100, and actor112may be included in a predefined rank of popular actors assigned a score of 80, which may average to a score of 90 for actor112in scene109. A score for a prominence of the product placement may be, e.g., 86, based on the percentage of time that car120is prominently visible during the scene, e.g., 86% of the scene. A score of, e.g., 88 may be assigned based on the user preferences, e.g., based on how often and/or how recently a user searched for content similar to car120. The product placement strength score may be calculated as, e.g., an average of each of these four scores, which results in a product placement strength score of 91 ((100+90+86+88)/4). The product price score of car120may be based on the comparison of the best current offer to the sticker price, e.g., a score of 90 may be assigned if a product is available at a 90% discount; in this instance, since car120is available at a 25% discount, a score of 25 may be assigned. It should be appreciated that the computations described above are illustrative, and any suitable factors and methodologies may be used to compute the product placement strength score and product price score. The media application may determine whether a particular scene is a scene of interest, and/or whether to present supplemental content based at least in part on the computed scores of category130. For example, the media application may compute an overall score for a particular item (e.g., depicted in the particular scene of interest, and used to identify the supplemental content) based on combining the product placement strength score and the product price score for the particular item, and may compare such score to a threshold. In some embodiments, the threshold may be a standard value, or may differ for different users based on viewing habits and purchasing habits of the user, and if the media application determines the combined score exceeds the threshold score, the media application may determine that a likelihood of user interaction with supplemental content related to the particular item is sufficiently high to justify generating for presentation an alert message or notification122associated with the supplemental content. 
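A short sketch reproducing the illustrative computation above. Averaging the four factor scores follows the example in the text, while the equal-weight combination of the two scores and the threshold value of 55 are assumptions.

```python
# Sketch of the illustrative scoring above: the product placement strength score is the
# average of the four factor scores, and a combined score is compared against a threshold
# to decide whether to generate the alert message or notification.

def placement_strength_score(plot_timing, character, prominence, user_preference):
    return (plot_timing + character + prominence + user_preference) / 4

def should_notify(strength, price_score, threshold):
    combined = (strength + price_score) / 2   # one of several possible combinations
    return combined, combined > threshold

strength = placement_strength_score(100, 90, 86, 88)          # -> 91.0, as in the example
print(strength)
print(should_notify(strength, price_score=25, threshold=55))   # -> (58.0, True)
```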
In some embodiments, the combined score may correspond to a ratio between the product placement strength score and the product price score, an average score as between the product placement strength score and the product price score, and/or one or more of the scores may be weighted more highly than another in the computation, or a higher of the two scores may be used for comparison to the threshold. In some embodiments, if each of the combined scores for car120and122exceeds the threshold value, a notification or alert message (e.g., an overlay or pop-up) may be generated for display for supplemental content associated with each of the cars120and122, or supplemental content may be provided only for the object having the higher of the scores. In some embodiments, the notification or alert may be provided to a second screen device (e.g., a mobile device of the user) while media asset102is being generated for display at another device (e.g., a smart television in a vicinity of the second screen device). Notification122may be provided in any suitable format (e.g., displayed to the user, audio alert, haptic alert, or any combination thereof). The media application, upon receiving selection of the message122, may cause the device (e.g., device104or another device in a vicinity of device104) to be re-directed to a website or application associated with the supplemental content related to car120. In some embodiments, the media application may delay or time shift the presentation of notification122until after the identified particular scene (e.g., climax124) concludes, e.g., to avoid interrupting the user during the particular scene. As shown inFIG.1B, supplemental content142may be inserted at scene111(e.g., associated with a timestamp range of 27:00-30:00 as shown at140ofFIG.1B) corresponding to falling action and resolution portion125of the plot structure, once the particular scene (e.g., the climax) concludes. For example, the media application may generate for presentation alert122(associated with supplemental content related to car120) at scene111corresponding to falling action and resolution portion125of the plot structure, once the particular scene (e.g., the climax) concludes. In some embodiments, generating for presentation alert122or other supplemental content may correspond to retrieving stored supplemental content (e.g., from media content source602or server604), or otherwise processing the supplemental content for presentation to the user, at any suitable time (e.g., prior to, during or after the particular scene of interest). Falling action and resolution portion125depicts actor112having returned to bank114in car120to return the money stolen by second actor118. In some embodiments, even if car120associated with supplemental content is not depicted in falling action and resolution portion125, a thumbnail depicting car120that is related to the supplemental content may be presented to the user as a reminder of the object. In some embodiments, the supplemental content alert may be presented during the particular scene of interest (e.g., the climax), or right before the particular scene of interest (e.g., at scene107). In some embodiments, any portion of asset102determined to be likely to capture the attention (and/or cause excitement) of a user consuming media asset102may be identified as a portion at which to present supplemental content, e.g., even if such portion of the media asset does not correspond to the particular scene of interest. 
In such an instance, the media application may present the supplemental content overlaid at the identified portion, or may identify the next portion of media asset102unlikely to interest the user and present the supplemental content at the identified next portion. In some embodiments, other plot structure portions (e.g., falling action and resolution125) may be leveraged as a portion at which to provide supplemental content, e.g., in a case that a price of a particular product is detected at a significant discount, e.g., as compared to the climax scene, such as if a new sale was recently released in the interim. In some embodiments, machine learning techniques may be employed to determine a time of a particular scene of interest (e.g., a climax, a fight scene, a car chase, etc.) of media asset102and/or when the particular scene of interest has concluded and/or whether supplemental content should be provided to the user. For example, a machine learning model (e.g., a neural network, a naive Bayes model, logistic regression, etc.) may be trained to recognize a beginning and an end of the particular scene of interest using training data of various audiovisual frames of media assets manually labeled to indicate whether certain portions correspond to a beginning or end of the particular scene of interest. The trained machine learning model may learn, e.g., genre-specific patterns of which features of content are indicative of a beginning and an end of the particular scene of interest. In addition, a machine learning model (e.g., a neural network, naive Bayes model, logistic regression, etc.) may be trained on information to determine a suitable threshold that computed product placement and/or product price scores may be compared to, and/or whether to schedule placement of supplemental content. For example, the machine learning model may be trained using data indicating a time when prior users interacted with supplemental content or submitted a request to purchase a product associated with the supplemental content. Based on such training data, the machine learning model can learn patterns of past users, and may output a prediction of when a current user consuming a media asset, and having certain interests and a certain viewing history, is more likely to consume or interact with supplemental content. In some embodiments, training image data may be preprocessed and represented as feature vectors. In some embodiments, determining whether to present supplemental content may take into account a varying popularity of a particular character over the course of an episodic series (e.g., the actor may be more popular earlier in a season of episodes but become less relevant as the series progresses). In some embodiments, determining whether to present supplemental content may take into account whether a user is likely to stop watching media asset102at a current time. For example, the media application may communicate with a calendar application to determine that a user has a scheduled appointment, or user viewing history may be analyzed to determine that a user typically changes a channel at a particular time of day, and the supplemental content may be presented prior to the identified time of the appointment or likely channel change (e.g., even if during the particular scene of interest, such as, for example, the climax). 
FIG.2shows an illustrative technique for generating a viewership score, in accordance with some embodiments of this disclosure.FIG.2shows a device204of a user at which the media application is providing a stream or broadcast of media asset102. In some embodiments, the media application may be providing a website or application (e.g., a live streaming platform) providing users with the ability to collectively view a synchronized presentation of a media asset and interact with one another. For example, the media application may receive comments202from users during presentation of media asset102and generate for display the comments in any suitable form (e.g., text, voice, images, emojis, etc.). The media application may correlate each comment with a particular timestamp within duration110of media asset102. For example, the comments shown at202may be associated with current play position time108(and/or the entire particular scene (e.g., climax124) time period). The media application may analyze the number of user interactions at comment section202and/or the content of the interactions occurring during presentation of media asset102at particular scenes of media asset102. For example, natural language processing circuitry or other linguistic analysis circuitry may apply linguistic, sentiment, and grammar rules to tokenize words from a text string of a comment; identify parts of speech (i.e., noun, verb, pronoun, preposition, adverb, conjunction, participle, article); perform named entity recognition; and identify phrases, sentences, proper nouns, or other linguistic features of the text string. In some embodiments, statistical natural language processing techniques may be employed. Extracted keywords may be compared to keywords stored in a database to perform semantic and/or sentiment analysis in order to determine whether a particular comment or image is indicative of user interest in a particular scene of media asset102. Based on this analysis, the media application may generate viewership curve206indicative of user interaction and/or user interest in particular portions during duration110of media asset102. Various metrics may be considered in generating viewership curve206, e.g., whether users skipped over certain portions of media asset102when consuming media asset102across various platforms, and/or social media activity across various platforms indicative of interest in a portion of media asset102. In some embodiments, viewership curve206may be analyzed to determine the occurrence of a particular scene of interest (e.g., climax124), determine the insertion point and type of supplemental content during the presentation of media asset102, and/or may be used to inform computation of the scores associated with category130of data structure150. FIG.3shows an illustrative block diagram300of a system for determining when to provide supplemental content during presentation of a media asset, in accordance with some embodiments of this disclosure. The media asset may be associated with an end time310(e.g., 0:52 minutes) and a current play position308(e.g., 0:45 minutes). The media application may determine (e.g., based on metadata received from media content source602, and/or any other suitable technique) a total time302of a particular scene of interest (e.g., a climax, which may be determined to be 4 minutes in duration, from 0:44-0:48 minutes) and a remaining time304(e.g., three minutes) of the particular scene of interest (e.g., the climax). 
The media application may determine that metadata of the media asset (e.g., specified in data structure150ofFIG.1B) indicates an upcoming product placement (e.g., car120ofFIG.1A) at the play time indicated at314(e.g., 0:46 minutes). At316, the media application may determine whether the upcoming product placement determined at312is scheduled to occur within a predefined threshold time of an end time (e.g., 0:48 minutes) of the time of a particular scene (e.g., a climax) of the media asset. For example, if the threshold period of time is three minutes, and the media application determines based on the product placement time314of 0:46 minutes that the product placement is scheduled to occur within two minutes of the end time (e.g., 0:48 minutes) of the particular scene, the media application may determine (at318) that the presentation of supplemental content related to the product placement should be time-shifted to occur after the time of the particular scene concludes (e.g., at an appropriate time after the 00:48 minute mark). In this way, intrusion into the user's consumption of the media asset during a particular scene (e.g., a pivotal climax or other scene of interest of the media asset) may be avoided, while at the same time supplemental content likely to be of interest to the user may still be provided during a less critical portion of the media asset. On the other hand, if the product placement is scheduled to occur at a time that exceeds a threshold period of time (e.g., if the threshold is 1 minute), such as at position314ofFIG.3which is two minutes from the conclusion of the time of the particular scene, processing may proceed to320, where the supplemental content may be presented at the scheduled time in connection with the product placement indicated in the metadata. This may be desirable because in some circumstances performing time-shifting of the supplemental content when a time gap between the product placement and supplemental content is considered to be too long may risk the user forgetting, or otherwise losing interest in, the supplemental content. In some embodiments, processing at320may still involve a time shift of the supplemental content, e.g., to a less interesting portion of the particular scene (e.g., climax). In some embodiments, the supplemental content may be provided to a second screen device (e.g., a mobile device of the user) simultaneously with the presentation on the first device of the product placement associated with the supplemental content, which may allow the user to view his or her mobile device at his or her own leisure, such as after the conclusion of the particular scene. Alternatively, the supplemental content may be provided to the second screen device at the conclusion of the particular scene, to avoid interrupting the viewing session by prompting the user to check his or her mobile device. In some embodiments, content may be provided to the second screen upon detecting that a user has started using a second screen, that the supplemental content is deemed to be too large or distracting to be shown at a device providing the media asset, and/or that the supplemental content is the same or a similar color to a background portion of the scene and thus may not be sufficiently noticeable to the user. In some embodiments, the threshold may be adjusted over time based on monitoring viewing habits and interaction with supplemental content for a specific user.
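A hedged sketch of the decision at316,318and320might look as follows; the three-minute default threshold mirrors the example above, while the function name and the use of seconds are assumptions made for illustration:

```python
# If the product placement falls within `threshold_s` of the end of the scene of
# interest, time-shift the supplemental content to just after the scene (318);
# otherwise present it at the scheduled placement time (320). Times are in seconds.

def schedule_supplemental(placement_t: float, scene_end_t: float, threshold_s: float = 180.0) -> float:
    """Return the play position at which to present the supplemental content."""
    time_to_scene_end = scene_end_t - placement_t
    if 0 <= time_to_scene_end <= threshold_s:
        return scene_end_t          # present after the particular scene concludes (318)
    return placement_t              # present with the product placement itself (320)

# Example from FIG. 3: placement at 0:46, climax ends at 0:48, threshold of three minutes.
assert schedule_supplemental(46 * 60, 48 * 60) == 48 * 60
```

The same comparison could be run against a per-user, per-asset threshold that is adjusted over time, as noted above.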
In some embodiments, the threshold may vary based on a length of time of a media asset (e.g., the threshold may be less for a 30-minute program than for a 2-hour program). FIG.4is an illustrative technique for determining a likelihood of user interaction with supplemental content based on a product placement strength score and a product sale price score, in accordance with some embodiments of this disclosure. In some embodiments, the below formula (1) may be employed to determine a likelihood of user interaction (e.g., a product purchase or other interaction) with supplemental content:

L = S × O   (1)

where L may correspond to a value of a likelihood of user interaction, S may correspond to a product placement strength score, and O may correspond to a discount (e.g., a percent discount) of an identified offer over a sticker price. In some embodiments, if the value of L surpasses a certain threshold LT (e.g., represented by area402), the media application may determine that the likelihood of the viewer purchasing a product is higher than the threshold, and thus may proceed to cause the generation for display of a notification or alert to the viewer, which may be selectable to cause the user device (e.g., a browser of the user device) to be re-directed to a product purchase landing resource (e.g., a URL or application associated with the product). In some embodiments, LT may typically be a relatively high value for a specific viewer to avoid excessively disturbing the user viewing session. The media application may monitor, e.g., viewing and purchasing habits of the user, and may adjust the likelihood threshold for each user based on the monitored user characteristics. For example, as shown at404, the threshold may be reduced for user A over time, and as shown at406, the threshold may be increased for user B over time. In some embodiments, the techniques ofFIG.4may be utilized in determining whether a scene corresponds to a particular scene of interest (e.g., the climax of a media asset) and thus whether supplemental content related to the scene should be presented at all, alternatively or in addition to selecting the supplemental content to be generated for display. In adjusting the thresholds for user A, the media application may determine that user A frequently interacts with (and/or purchases products or services based on) the supplemental content, even where user preferences inferred based on a user profile or viewing history of user A are tangentially related to features of a product placed in a media asset and associated with the supplemental content, and even where a particular product price is not a particularly significant discount. On the other hand, the media application may determine that user B rarely interacts with (or purchases products based on) the supplemental content, unless user preferences of user B directly align with features of the product associated with the supplemental content, and/or a particular product price is a particularly significant discount. The media application may determine that presentation of supplemental content during presentation of the media asset causes the user to skip to another portion of the media asset or cease access of the media asset. The media application may log each of these occurrences in the user profile of the user, and may adjust the threshold for user A or user B to reduce and increase the likelihood threshold, respectively, for each logged occurrence.
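A minimal sketch of formula (1) together with the per-user threshold adjustment of FIG.4 follows; the score scales, the fixed adjustment step, and the example numbers are assumptions introduced for illustration and are not taken from this disclosure:

```python
# L = S * O per formula (1), compared against a per-user threshold that adapts over time.

def likelihood(strength_score: float, offer_discount: float) -> float:
    """L = S * O, with S in [0, 1] and O the fractional discount over the sticker price."""
    return strength_score * offer_discount

def adjust_threshold(threshold: float, interacted: bool, step: float = 0.02) -> float:
    """Lower the threshold when the user engages with supplemental content;
    raise it when the user skips or ignores it."""
    return max(0.0, threshold - step) if interacted else min(1.0, threshold + step)

L = likelihood(strength_score=0.8, offer_discount=0.35)
threshold = 0.25
if L > threshold:
    print("present selectable alert linking to the product purchase resource")
threshold = adjust_threshold(threshold, interacted=True)
```

Lowering the threshold over time corresponds to the trend shown at404 for user A; raising it corresponds to406 for user B.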
FIGS.5-6describe illustrative devices, systems, servers, and related hardware for providing supplemental content relevant to the metadata of a particular scene of a media asset, in accordance with some embodiments of the present disclosure.FIG.5shows generalized embodiments of illustrative user equipment devices500and501, which may correspond to user equipment device104,204ofFIGS.1and3, respectively, and/or a second screen device. For example, user equipment device500may be a smartphone device. In another example, user equipment device501may be a user television equipment system. User television equipment device501may include set-top box516. Set-top box516may be communicatively connected to microphone518, speaker514, and display512. In some embodiments, microphone518may receive voice commands for the media application. In some embodiments, display512may be a television display or a computer display. In some embodiments, set-top box516may be communicatively connected to user input interface510. In some embodiments, user input interface510may be a remote control device. Set-top box516may include one or more circuit boards. In some embodiments, the circuit boards may include processing circuitry, control circuitry, and storage (e.g., RAM, ROM, hard disk, removable disk, etc.). In some embodiments, the circuit boards may include an input/output path. More specific implementations of user equipment devices are discussed below in connection withFIG.5. Each one of user equipment device500and user equipment device501may receive content and data via input/output (I/O) path502. I/O path502may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry504, which includes processing circuitry506and storage508. Control circuitry504may be used to send and receive commands, requests, and other suitable data using I/O path502, which may comprise I/O circuitry. I/O path502may connect control circuitry504(and specifically processing circuitry506) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path inFIG.5to avoid overcomplicating the drawing. Control circuitry504may be based on any suitable processing circuitry such as processing circuitry506. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, control circuitry504executes instructions for the media application stored in memory (e.g., storage508). Specifically, control circuitry504may be instructed by the media application to perform the functions discussed above and below. In some implementations, any action performed by control circuitry504may be based on instructions received from the media application. 
In client/server-based embodiments, control circuitry504may include communications circuitry suitable for communicating with a media application server or other networks or servers. The instructions for carrying out the above-mentioned functionality may be stored on a server (which is described in more detail in connection withFIG.6). Communications circuitry may include a cable modem, an integrated services digital network (ISDN) modem, a digital subscriber line (DSL) modem, a telephone modem, Ethernet card, or a wireless modem for communications with other equipment, or any other suitable communications circuitry. Such communications may involve the Internet or any other suitable communication networks or paths (which are described in more detail in connection withFIG.6). In addition, communications circuitry may include circuitry that enables peer-to-peer communication of user equipment devices, or communication of user equipment devices in locations remote from each other (described in more detail below). Memory may be an electronic storage device provided as storage508that is part of control circuitry504. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage508may be used to store various types of content described herein as well as media application data described above. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). Cloud-based storage, described in relation toFIG.6, may be used to supplement storage508or instead of storage508. Control circuitry504may include video generating circuitry and tuning circuitry, such as one or more analog tuners, one or more MPEG-2 decoders or other digital decoding circuitry, high-definition tuners, or any other suitable tuning or video circuits or combinations of such circuits. Encoding circuitry (e.g., for converting over-the-air, analog, or digital signals to MPEG signals for storage) may also be provided. Control circuitry504may also include scaler circuitry for upconverting and downconverting content into the preferred output format of user equipment500. Control circuitry504may also include digital-to-analog converter circuitry and analog-to-digital converter circuitry for converting between digital and analog signals. The tuning and encoding circuitry may be used by user equipment device500,501to receive and to display, to play, or to record content. The tuning and encoding circuitry may also be used to receive guidance data. The circuitry described herein, including for example, the tuning, video generating, encoding, decoding, encrypting, decrypting, scaler, and analog/digital circuitry, may be implemented using software running on one or more general purpose or specialized processors. Multiple tuners may be provided to handle simultaneous tuning functions (e.g., watch and record functions, picture-in-picture (PIP) functions, multiple-tuner recording, etc.).
If storage508is provided as a separate device from user equipment device500, the tuning and encoding circuitry (including multiple tuners) may be associated with storage508. Control circuitry504may receive instruction from a user by way of user input interface510. User input interface510may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display512may be provided as a stand-alone device or integrated with other elements of each one of user equipment device500and user equipment device501. For example, display512may be a touchscreen or touch-sensitive display. In such circumstances, user input interface510may be integrated with or combined with display512. Display512may be one or more of a monitor, a television, a display for a mobile device, or any other type of display. A video card or graphics card may generate the output to display512. The video card may be any processing circuitry described above in relation to control circuitry504. The video card may be integrated with the control circuitry504. Speakers514may be provided as integrated with other elements of each one of user equipment device500and user equipment system501or may be stand-alone units. The audio component of videos and other content displayed on display512may be played through the speakers514. In some embodiments, the audio may be distributed to a receiver (not shown), which processes and outputs the audio via speakers514. The media application may be implemented using any suitable architecture. For example, it may be a stand-alone application wholly-implemented on each one of user equipment device500and user equipment device501. In such an approach, instructions of the application are stored locally (e.g., in storage508), and data for use by the application is downloaded on a periodic basis (e.g., from an out-of-band feed, from an Internet resource, or using another suitable approach). Control circuitry504may retrieve instructions of the application from storage508and process the instructions to provide supplemental content as discussed. Based on the processed instructions, control circuitry504may determine what action to perform when input is received from user input interface510. For example, movement of a cursor on a display up/down may be indicated by the processed instructions when user input interface510indicates that an up/down button was selected. In some embodiments, the media application is a client/server-based application. Data for use by a thick or thin client implemented on each one of user equipment device500and user equipment device501is retrieved on-demand by issuing requests to a server remote to each one of user equipment device500and user equipment device501. In one example of a client/server-based guidance application, control circuitry504runs a web browser that interprets web pages provided by a remote server. For example, the remote server may store the instructions for the application in a storage device. The remote server may process the stored instructions using circuitry (e.g., control circuitry504) to perform the operations discussed in connection withFIGS.1-3. In some embodiments, the media application may be downloaded and interpreted or otherwise run by an interpreter or virtual machine (run by control circuitry504). 
In some embodiments, the media application may be encoded in the ETV Binary Interchange Format (EBIF), received by the control circuitry504as part of a suitable feed, and interpreted by a user agent running on control circuitry504. For example, the media application may be an EBIF application. In some embodiments, the media application may be defined by a series of JAVA-based files that are received and run by a local virtual machine or other suitable middleware executed by control circuitry504. In some such embodiments (e.g., those employing MPEG-2 or other digital media encoding schemes), the media application may be, for example, encoded and transmitted in an MPEG-2 object carousel with the MPEG audio and video packets of a program. FIG.6is a diagram of an illustrative streaming system, in accordance with some embodiments of this disclosure. User equipment devices608,609,610(e.g., user equipment device104ofFIG.1, user equipment device204ofFIG.2) may be coupled to communication network606. Communication network606may be one or more networks including the Internet, a mobile phone network, mobile voice or data network (e.g., a 5G, 4G, or LTE network), cable network, public switched telephone network, or other types of communication network or combinations of communication networks. Paths (e.g., depicted as arrows connecting the respective devices to the communication network606) may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. Communications with the client devices may be provided by one or more of these communications paths but are shown as a single path inFIG.6to avoid overcomplicating the drawing. Although communications paths are not drawn between user equipment devices, these devices may communicate directly with each other via communications paths as well as other short-range, point-to-point communications paths, such as USB cables, IEEE 1394 cables, wireless paths (e.g., Bluetooth, infrared, IEEE 802.11x, etc.), or other short-range communication via wired or wireless paths. The user equipment devices may also communicate with each other through an indirect path via communication network606. System600includes a media content source602and a server604, which may comprise or be associated with database605. Communications with media content source602and server604may be exchanged over one or more communications paths but are shown as a single path inFIG.6to avoid overcomplicating the drawing. In addition, there may be more than one of each of media content source602and server604, but only one of each is shown inFIG.6to avoid overcomplicating the drawing. If desired, media content source602and server604may be integrated as one source device. In some embodiments, server604may include control circuitry611and a storage614(e.g., RAM, ROM, Hard Disk, Removable Disk, etc.). Storage614may store one or more databases. Server604may also include an input/output path612. I/O path612may provide device information, or other data, over a local area network (LAN) or wide area network (WAN), and/or other content and data to the control circuitry611, which includes processing circuitry, and storage614.
Control circuitry611may be used to send and receive commands, requests, and other suitable data using I/O path612, which may comprise I/O circuitry. I/O path612may connect control circuitry611(and specifically processing circuitry) to one or more communications paths. Control circuitry611may be based on any suitable processing circuitry such as one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. In some embodiments, control circuitry611may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). In some embodiments, the control circuitry611executes instructions for an emulation system application stored in memory (e.g., the storage614). Memory may be an electronic storage device provided as storage614that is part of control circuitry611. Server604may retrieve guidance data from media content source602, process the data as will be described in detail below, and forward the data to user equipment devices608,609,610. Media content source602may include one or more types of content distribution equipment including a television distribution facility, cable system headend, satellite distribution facility, programming sources (e.g., television broadcasters, such as NBC, ABC, HBO, etc.), intermediate distribution facilities and/or servers, Internet providers, on-demand media servers, and other content providers. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the American Broadcasting Company, Inc., and HBO is a trademark owned by the Home Box Office, Inc. Media content source602may be the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may not be the originator of content (e.g., an on-demand content provider, an Internet provider of content of broadcast programs for downloading, etc.). Media content source602may include cable sources, satellite providers, on-demand providers, Internet providers, over-the-top content providers, or other providers of content. Media content source602may also include a remote media server used to store different types of content (including video content selected by a user), in a location remote from any of the client devices. Media content source602may also provide supplemental content relevant to the metadata of a particular scene of a media asset as described above. Client devices may operate in a cloud computing environment to access cloud services. In a cloud computing environment, various types of computing services for content sharing, storage or distribution (e.g., video sharing sites or social networking sites) are provided by a collection of network-accessible computing and storage resources, referred to as “the cloud.” For example, the cloud can include a collection of server computing devices (such as, e.g., server604), which may be located centrally or at distributed locations, that provide cloud-based services to various types of users and devices connected via a network such as the Internet via communication network606.
In such embodiments, user equipment devices may operate in a peer-to-peer manner without communicating with a central server. FIG.7is a flowchart of a detailed illustrative process for providing supplemental content relevant to the metadata of a particular scene (e.g., the climax) of a media asset, in accordance with some embodiments of this disclosure. In various embodiments, the individual steps of process700may be implemented by one or more components of the devices and systems ofFIGS.1-6. Although the present disclosure may describe certain steps of process700(and of other processes described herein) as being implemented by certain components of the devices and systems ofFIGS.1-6, this is for purposes of illustration only, and it should be understood that other components of the devices and systems ofFIGS.1-6may implement those steps instead. For example, the steps of process700may be executed at device610and/or server604ofFIG.6. At702, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may receive a request to play a media asset (e.g., media asset102ofFIG.1A). The media asset may be a live broadcast, a recorded program, streaming content, etc. The media asset may be requested from media content source602. At704, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may begin playing the requested media asset. For example, a remote server (e.g., control circuitry611of server604and/or media content source602ofFIG.6) may be configured to provide segments of a media asset to user equipment (e.g., user equipment devices608,609,610) over a network (e.g., communication network606). At706, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may analyze metadata (e.g., metadata specified in category128of data structure150ofFIG.1B) associated with media asset102. Such metadata may be received from a remote server (e.g., control circuitry611of server604and/or media content source602ofFIG.6). At708, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine whether the metadata indicates a time of a particular scene of interest (e.g., a climax). For example, the control circuitry may determine that metadata associated with a particular timestamp of the media asset (e.g., timestamp138specified in data structure150ofFIG.1B) indicates that the particular scene of the media asset occurs during that time period. If such metadata is present, processing may proceed to714. Otherwise, processing may proceed to710. At710, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may identify viewership information related to consumption of the media asset (e.g., media asset102ofFIG.1A). For example, the control circuitry may be configured to generate or retrieve a viewership curve (e.g., viewership curve206) on the basis of mining one or more sources of information, e.g., social media activity across any relevant platform such as, for example, comments202on a stream of media asset102, online plot reviews, consumption information indicating a number of viewers of a particular scene and/or whether particular scenes were skipped, etc. Based on the viewership score, the control circuitry may determine that a particular scene or scenes of the media asset corresponds to a particular scene of interest (e.g., a climax). Otherwise, processing may proceed to712.
At712, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may analyze audiovisual attributes of a currently playing portion of the media asset (e.g., scene109of media asset102ofFIG.1A). The control circuitry may use any suitable image recognition technique to analyze frames of the media asset and may use any suitable audio analysis technique (e.g., speech-to-text transcription and natural language analysis of dialogue of a scene to discern semantics and sentiments of the scene) to identify audiovisual attributes. In some embodiments, the audiovisual attributes may be indicative of a time of the particular scene (e.g., a climax or other portion of the media asset likely to capture the attention of the user consuming the media asset and/or featuring a product the user consuming the media asset is likely to be interested in) based on a genre of the media asset. For example, if the control circuitry determines that a particular scene has a lot of action and violence and features the main actor in a media asset having a genre of "Action" (and is occurring towards an end of the media asset), such determinations may weigh in favor of a finding that the particular scene is the time of the particular scene of interest. At714, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine the time of the particular scene based on one or more of the metadata, viewership information, and analyzed audiovisual characteristics. For example, the control circuitry may determine that scene109ofFIG.1Acorresponds to the time of the particular scene (e.g., climax124). At716, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may identify metadata (e.g., related to product placement) of the media asset corresponding to the time of the particular scene (e.g., climax). For example, metadata of category128of data structure150may be used by the control circuitry to identify different objects (e.g., products such as car120ofFIG.1A) in particular scenes. Additionally or alternatively, the control circuitry may utilize object detection techniques to identify objects in a currently playing scene (e.g., by utilizing machine learning techniques and/or comparing extracted features of an object in a current scene to features of objects in a database, such as database605ofFIG.6). At718, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may identify supplemental content relevant to the metadata of the particular scene of interest. For example, the control circuitry may reference a database (e.g., database605ofFIG.6) storing associations between certain products and network locations of supplemental content, and/or may crawl the web to identify suitable supplemental content related to metadata (e.g., car120ofFIG.1A) of the time of the particular scene of interest (e.g., time period138ofFIG.1B). At720, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may compute a product placement strength score and a product price score for one or more objects of a particular scene (e.g., as shown at category130of data structure150ofFIG.1A). Such aspects are discussed in more detail in connection withFIG.8. At722, the control circuitry may determine whether the computed scores (e.g., a ratio of the computed scores) exceed a predefined threshold. For example, a likelihood threshold may be determined based on the technique discussed in connection withFIG.4.
In some embodiments, the predefined threshold may be adjustable over time based on monitoring user interactions and viewing patterns. At724, if the control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) determines the computed scores do not exceed the threshold, the control circuitry may refrain from generating for display supplemental content. This may avoid providing supplemental content to a user when he or she is determined as unlikely to be sufficiently interested in the supplemental content and/or where the price of a product associated with the supplemental content is not optimal, which would reduce the likelihood of user interaction with the supplemental content. At726, if the control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) determines the computed scores do exceed the threshold, the control circuitry may determine that the supplemental content (e.g., alert122associated with a video accessible by way of a URL related to car120) should be presented. At728, the control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine whether a current play time (e.g., play time108) of the media asset (e.g., media asset102ofFIG.1A) matches the determined time of the particular scene of interest (e.g., climax124ofFIG.1A). The control circuitry may perform this step by comparing the current time to the determined time of the particular scene. At730, the control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine whether the product placement associated with the supplemental content is to occur or is occurring within a predefined threshold of the end of the particular scene (e.g., climax124ofFIG.1A). For example, the control circuitry may determine that the time of a product placement in the particular scene (e.g., car120is depicted at the time point of 20:00 at current play position108of scene109corresponding to climax124of media asset102) is occurring a certain period of time (e.g., about 7 minutes) from the end of the particular scene (e.g., climax124). If the threshold time is, e.g., 5 minutes, processing may proceed to732based on determining that the product placement is not within the predefined threshold of the end of the particular scene. On the other hand, if the control circuitry determines that the product will continue to be displayed during the particular scene (e.g., to at least a time of the particular scene that matches or exceeds the threshold time), processing may proceed to736. As another example, if the control circuitry determines that the product placement in the particular scene is scheduled to first occur at a time in the particular scene at which the remaining time in the particular scene is less than the predefined threshold, processing may proceed to736. At732, the control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine to generate for display the supplemental content during the time of the particular scene, e.g., to avoid the possibility that the user may forget about or lose interest in the product related to the supplemental content by the time the particular scene concludes. At734, the control circuitry may generate for display the supplemental content (e.g., at user equipment104and/or a second screen device of the user in a vicinity of user equipment104ofFIG.1A).
At736, the control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine that the supplemental content should be presented after the particular scene. For example, the control circuitry may determine that the product placement is sufficiently close to the end of the time of the particular scene (e.g., time of climax124ofFIG.1B) that the user is likely to be engaged with the supplemental content even if presentation of the supplemental content is time-shifted to, e.g., immediately after the conclusion of the particular scene, and thus distraction of the user during the particular scene (e.g., a pivotal climax portion) of the media asset (e.g., media asset102ofFIG.1A) may be avoided. In some embodiments, the control circuitry may determine to present the supplemental content prior to the beginning of the climax, or after conclusion of the media asset (e.g., at the credits). At738, in response to determining that the time of the particular scene (e.g., at the 27:00 minute mark of climax124of media asset102ofFIGS.1A-1B) has concluded, processing may proceed to740. Otherwise, the control circuitry may wait for the end of the particular scene. Any suitable technique may be utilized to determine the end of the particular scene. As an example, metadata (e.g., specified in data structure150ofFIG.1A) may indicate a timestamp associated with the conclusion of the particular scene. Additionally or alternatively, viewership information may be referenced in a similar manner as discussed in connection with710, e.g., a time period in which social activity or viewership decreases may be correlated to an end of the particular scene. As another example, audiovisual attributes of the media asset may be analyzed by, e.g., a heuristic-based technique and/or machine learning techniques. For example, in a media asset of the genre “Action,” the control circuitry may determine that violence and/or fast movements have concluded, and/or a main character has exited the scene, which may suggest an ending of the particular scene. In some embodiments, during a live event (e.g., a sports game), the control circuitry may determine to present the supplemental content after the conclusion of the game (e.g., during a trophy presentation, to avoid interrupting the final moments of a close game). For example, the control circuitry may determine whether a score of the game is within a threshold amount (e.g., varying depending on the sport, such as, for example, 14 points in football, or 10 points in basketball, which may be stored in a database and/or dynamically determined based on analysis of sporting event data), and if so, time-shift presentation of supplemental content until the conclusion of the sporting event. At740, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may generate for display the supplemental content after the identified end of the particular scene (e.g., climax124ofFIG.1).
In some embodiments, the control circuitry may generate for display an alert (e.g., message122selectable to display auxiliary content associated with car120on the current screen or re-direct the user device to a URL associated with the video) or may present content automatically on the current screen (e.g., an auxiliary video related to car120within scene111).FIG.8is a flowchart of a detailed illustrative process for determining whether to provide supplemental content relevant to the metadata of a particular scene (e.g., a climax) of a media asset, in accordance with some embodiments of this disclosure. In various embodiments, the individual steps of process800may be implemented by one or more components of the devices and systems ofFIGS.1-6. Although the present disclosure may describe certain steps of process800(and of other processes described herein) as being implemented by certain components of the devices and systems ofFIGS.1-6, this is for purposes of illustration only, and it should be understood that other components of the devices and systems ofFIGS.1-6may implement those steps instead. For example, the steps of process800may be executed at device610and/or server604ofFIG.6. At802, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may identify metadata (e.g., related to product placement) of the media asset.802may be performed in a similar manner as discussed in connection with716ofFIG.7. For example, the control circuitry may identify car120ofFIG.1Aon the basis of the identified metadata of media asset102. At804, the control circuitry may determine a timing of product placement in the media asset. For example, using the techniques discussed in connection with708,710,712,714ofFIG.7, the control circuitry may identify a plot structure category for each portion of the media asset (e.g., media asset102ofFIG.1A), e.g., “exposition,” “fight,” “plot twist,” “inciting incident,” “climax,” “resolution,” “denouement,” etc. The media application may identify a timestamp within media asset102of a particular scene, such as, for example, a significant occurrence in media asset102, e.g., the climax or a fight. For example, a product placement associated with the particular scene (e.g., the climax or portion of the media asset likely to be more interesting to a user) may be assigned a higher weight than a product placement at a more obscure point of the media asset. At806, the control circuitry may determine a character associated with the product placement, e.g., based on one or more of metadata associated with a media asset (e.g., media asset102ofFIG.1A), object recognition techniques to identify certain actors (e.g., by extracting features of an on-screen actor and comparing the features to those stored in a database or images of popular actors accessible via a search engine), and/or viewership information (e.g., online comments regarding the performance of a particular actor in the media asset). For example, detected object features may be compared (e.g., pixel by pixel) to objects and associated features stored in a database (e.g., database605ofFIG.6) to determine whether the detected features match an object in the database.
In some embodiments, if a product placement (e.g., a particular brand of beer) is associated with a particular character (e.g., James Bond is depicted drinking the particular brand of beer) and a famous actor (e.g., Daniel Craig), this factor may be assigned a higher weight than a product placement with a less well-known actor playing a less significant role in the media asset. In some embodiments, the importance of an actor or character may be determined based on the total amount of time the actor has been depicted on-screen up to the current point of the media asset and/or by referencing a table indicating the most popular actors. At808, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may determine a prominence of a placement of the product in the media asset. For example, the control circuitry may determine a number of pixels (or percentage of a current display) associated with each of one or more products relative to a total number of displayed pixels. Such determination may be based on metadata associated with the media asset (e.g., retrieving coordinates of objects in a scene), and/or based on edge detection techniques to determine boundaries (e.g., edges, shape outline, border) of objects in a scene and/or analyzing pixel values of the area surrounding objects. For example, if the media application detects that brightness of adjacent pixels abruptly changes, the media application may determine that this is indicative of an edge of an object, and may calculate the number of pixels of the object based on the determined edges being the perimeter of the object. In some embodiments, the prominence of the product placement may be determined based at least in part on whether the product placement is in the center of the screen rather than off to a side. In some embodiments, the prominence of the product placement may be determined at least in part based on whether a main character is holding, using, or otherwise interacting with or discussing the product. The prominence of the product placement may depend on how central the product is to a scene, e.g., car120ofFIG.1Abeing used in a pivotal car chase of the media asset may be assigned a higher weight, whereas a car parked on the street and not playing a key role in the scene may be assigned a lower weight. At810, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may compare user preferences, e.g., associated with a user profile, to the product. For example, the control circuitry may access, from a profile associated with the user, user interest data, which may include the user's social media activity, online search history, online purchase history, and other personal data indicative of the user's interests. Metadata of the product may be compared to the user interest data to determine whether there is a match. For example, a product (e.g., car120) may be assigned certain tags (e.g., sports car, Porsche), and if the user has recently been searching for sports cars or German cars, or has searched for sports cars or German cars more than a threshold number of times, the media application may determine a match, and assign a higher weight to the product. At812, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may compute a product placement strength score based on one or more of the factors determined at804,806,808,810, and any other suitable factors.
In some embodiments, the control circuitry may combine individual scores of the timing of the product placement, the character associated with the product placement, the prominence of the product placement, and user preferences relative to the type of product, to generate the product placement strength score (e.g., on a scale of 0-100). A higher product placement strength score may indicate a higher likelihood of user interaction with supplemental content related to the product. In some embodiments, a machine learning model may be employed to take as input the individual scores or weights assigned at804,806,808,810and output a combined score reflecting the likelihood of user interaction. At814, the control circuitry may determine a usual price or sticker price for the product (e.g., car120ofFIG.1A). To make this determination, the control circuitry may access a database or website indicating the historical or typical pricing at popular sites (e.g., the company website associated with the product, a website that aggregates prices from across the web, a most common website from which products of this type are purchased, and/or a website associated with a company within a threshold distance from the user, etc.). At816, the control circuitry may determine whether the product associated with the supplemental content is available at a better price than the identified usual price. For example, if the product corresponds to a soda, the control circuitry may determine that while the sticker price of a bottle of Coca Cola is $1.00, currently the lowest price available is $0.65, and thus a 35% discount is available (e.g., 65% of the sticker price). In some embodiments, a lower price may be assumed to enhance the likelihood of user interaction with the supplemental content. At818, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may compute a product price score (e.g., from 0-100) on the basis of the comparison of the sticker price determined at814to the current sale price determined at816. At820, control circuitry (e.g., control circuitry504ofFIG.5and/or control circuitry611ofFIG.6) may compute a combined score on the basis of the product placement strength score and the product price score, e.g., the control circuitry may determine a ratio of one of the scores to the other of the scores, multiply the scores, add the scores, use the higher of the two scores, determine if each score is above a particular threshold, etc. The control circuitry may compare the computed combined score to a threshold to determine whether supplemental content should be presented to a user during presentation of the requested media asset (e.g., media asset102).822,824, and826ofFIG.8may be performed in a similar manner to722,724, and726, respectively, ofFIG.7. The processes discussed above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the steps of the processes discussed herein may be omitted, modified, combined and/or rearranged, and any additional steps may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be illustrative and not limiting. Only the claims that follow are meant to set bounds as to what the present invention includes.
Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods.
DETAILED DESCRIPTION
The post-production for digital footage of movies, TV programs and other multimedia contents can be a complex process involving many components and operations. As advancement in network technologies facilitates streaming of audio and video contents to users in their homes and other places, distribution of the multimedia content also requires complex processing operations to ensure satisfactory viewing experiences for the viewers.FIG.1illustrates example operations of three example stages102,104and106that can be performed to digitally process and package the content for distribution of movies, TV shows, or other multimedia contents after the post-production of such contents completes. In some implementations, for example, these operations can be performed on the multimedia contents after they are delivered to the content providers such as video streaming service companies from the producers. The Operation102is content editing via digital editing operations, either automatically by machine or with manual operations of software by human operators. The storyline of a movie, a TV show, or other multimedia content often includes multiple scenes, each having multiple shots. Multiple cameras and cast members are involved in producing one shot of the scene. In some embodiments, production of the multimedia content can be performed according to a set of machine-readable scripts generated based on the storyline as disclosed in International Patent Application No. PCT/CN2019/090722, entitled “Product-As-a-Service Systems for Making Movies, TV Shows and Multimedia Contents,” filed on Jun. 11, 2020, and U.S. Pat. No. 10,721,377 issued on Jul. 21, 2020, which are incorporated by reference in their entirety. After production is completed, the produced raw video/audio data for each scene and different scenes can be transferred to one or more subsequent processing modules or subsystems to perform subsequent editing operations. In some embodiments, the raw footage can be edited automatically by editing software, without manually controlled editing by human operators, by digitally processing the machine-readable scripts as disclosed in International Patent Application No. PCT/US2020/032217, entitled “Fully Automated Post-Production Editing for Movies, TV Shows, and Multimedia Contents,” filed on May 8, 2020, which is incorporated by reference in its entirety. Alternatively, or in addition, manual editing with one or more human operators can be performed to incorporate artistic features desired by the editors or directors. The next Operation104in the process inFIG.1is Multi-Compression Level Transcoding. The edited version of the multimedia content can be encoded into different formats, such as Flash (.f4v), Moving Picture Experts Group (MPEG) 4 (.mp4), or QuickTime (.mov). The encoded video may have a large size (e.g., multiple Gigabytes), and thus the speed of transmission of such large encoded video data online may be limited by the bandwidth available for the transmission, and such limitations may cause undesired delays that adversely affect the viewing experience. To allow viewers to view the content seamlessly on various streaming platforms, the Operation104includes a video compression process to reduce the amount of video data to be transmitted to ensure timely transmission of encoded video data for a satisfactory user viewing experience.
One example of such a compression process is adaptive streaming, which compresses multimedia content at different compression levels according to the network conditions and streams such compressed multimedia content data via communication networks to reduce delays in receiving the video data at the user devices.FIG.2illustrates an example adaptive streaming process200as part of the Operation104inFIG.1to generate contents with different compression levels and/or bitrates for different viewers. The input stream202of the adaptive streaming process200is the edited version of the multimedia content that typically has a relatively high bitrate. The input stream202goes through an encoder204that is configured to process the input stream202using different compression levels and generate multiple output streams having different bitrates. For example, the output stream206ahas a high bitrate corresponding to a low compression level, the output stream206bhas a medium bitrate corresponding to a medium compression level, and the output stream206chas a low bitrate corresponding to a high compression level. Based on the network conditions and/or device capabilities for the viewers, the control server208can provide the appropriate output stream to different viewers. Referring back toFIG.1, the Operation106is Adaptive Streaming Transmuxing, which processes the output from the Operation104. The transmuxing process packages the compression-encoded media stream into a container for online streaming. Metadata, which may be in the XML format in some implementations, is created in the Operation106to provide information on the encoded data such as the encoding information, the bit rate, a playlist of chunks or segments of the content and other information that the client-side player needs before the media stream starts to be transmitted by the server and received by the client. To provide a smooth viewing experience, the adaptive transmuxing process enables viewers to start viewing part of the content before the entire content becomes available at the client side. To achieve this, the transmuxing operation is designed to divide the content into smaller segments such that some segments become viewable to the viewers while the remaining segments are being transferred over the network. Adaptive streaming transmuxing divides each encoded file (e.g., with an individual bitrate) into multiple equal transport units (also referred to as chunks). The length of a chunk can be configured based on the bitrate and/or compression level to adapt to network condition changes. In some embodiments, all chunks in the multimedia content are packaged in an adaptive streaming container in a particular format, such as Hypertext Transfer Protocol (HTTP) Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH), etc. The client-side player requests contents having different bitrates and/or compression levels based on the network conditions, and the corresponding chunks can be transferred over the network.
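As a rough illustration of how a control server such as208might choose among the pre-encoded output streams206a-206c, the following is a hedged sketch; the bitrate ladder, the headroom margin, and the function name are assumptions and do not come from this disclosure:

```python
# Pick the highest-bitrate (lowest-compression) rendition that fits within the
# viewer's measured bandwidth, with some headroom to tolerate fluctuations.

RENDITIONS_KBPS = [8000, 4000, 1500]   # low, medium, and high compression levels

def select_rendition(measured_bandwidth_kbps: float, headroom: float = 0.8) -> int:
    """Return the bitrate of the rendition to serve for this viewer."""
    usable = measured_bandwidth_kbps * headroom
    for bitrate in RENDITIONS_KBPS:            # sorted from highest to lowest bitrate
        if bitrate <= usable:
            return bitrate
    return RENDITIONS_KBPS[-1]                 # fall back to the most compressed stream

print(select_rendition(6000))   # -> 4000 kbps (medium compression level)
```

In deployed HLS/DASH systems this selection is typically made chunk by chunk on the client side based on throughput the player measures while downloading, which is consistent with the chunked transmuxing described above.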
In some existing implementations, encoding and distribution of the multimedia contents are disassociated from the production and post-production stages. Under such a design, the transcoding and transmuxing processes of the distribution system are not correlated with how the content was produced (e.g., whether the whole content is simply a continuous/non-stop video capture or it is a complex structured media product such as a movie or TV episode with many scenes as defined in the storyline, how many shots in each scene, etc.). Correspondingly, such transcoding and transmuxing operations when used in the process illustrated inFIG.1are performed to account for changes in network conditions and/or device capacity only. However, changes in scenes and/or shots of the multimedia contents can have a significant impact on the transcoding and transmuxing operations of the content. For example, the compression efficiency is heavily dependent on motion detection in or between the scenes. Compression efficiency can be much higher if the encoder is aware of scene changes and/or shot changes. If the compression system has information on what kind of scene it is (e.g., as defined in the movie script), this information can be processed to determine what kind of compression it should use to achieve the highest efficiency. Furthermore, the chunk size determination in various implementations of the process inFIG.1is completely agnostic about the scene/shot structure of the multimedia content. When the network condition changes, a bit rate change can happen in the middle of a shot, resulting in a suboptimal viewing experience. In some cases, content providers may need to insert commercials between chunks in the multimedia contents. A commercial may be inserted in the middle of a shot, causing interruptions that can negatively affect viewers' viewing experiences. This patent document discloses techniques that can be implemented in various embodiments to provide effective packaging and distribution of the multimedia contents based on production stage information. Production stage information, such as the structure of the scenes/shots in the storyline or equipment/staff involved for each scene or shot, can be determined during or after the production stage and be included as metadata in the adaptive streaming container, thereby allowing adaptive transcoding and transmuxing to adapt to scene/shot changes in the multimedia contents. Encoded streaming video data with such production stage information can be streamed to viewer devices to enhance the viewing experience by allowing viewers to select the ways in which the multimedia contents are to be viewed based on selection of certain parameters provided based on the production stage information including, for example, different storylines within a movie (e.g., different endings) or different shots of a scene in a movie. Metadata about the production stage information can be generated during or after the production stage (e.g., in Operation102shown inFIG.1). The metadata can be represented in a structured data format such as the Extensible Markup Language (XML) format.FIG.3illustrates an example structured data format300in accordance with the present technology. The structured data format300describes the hierarchical structure of the multimedia content. The root element301, for example, can include production stage information such as the title, the genre of the content, the producer, and overall cast information. Intermediate elements311,313can include scene/shot-level information for corresponding scenes/shots.
For example, information can be included in the intermediate elements 311, 313 to indicate the level of movement or the amount of action in the corresponding scenes/shots. Each scene/shot-level element corresponds to multiple leaf elements 322, 324, 326, 328, 330 (each corresponding to one camera capture) that include the information of the respective camera. For example, each of the leaf elements 322, 324, 326, 328, 330 can include an identifier for the corresponding camera capture, information about the equipment for the shot (e.g., camera angle, zoom, etc.), information about the cast involved for the shot, and other characteristics of the video capture (e.g., whether the capture is mostly static or full of actions). The leaf element can also include a link or a location indicator indicating the location of the video capture in the multimedia content so that the video clip can be readily located in the content. In some embodiments, machine-readable scripts are used in the production stage and/or post-production editing, as disclosed in International Patent Application No. PCT/CN2019/09072, U.S. Pat. No. 10,721,377 and International Patent Application No. PCT/US2020/032217. Information included in the machine-readable scripts can be exported to the structured data format as metadata to be used for transcoding and transmuxing. For example, the machine-readable scripts can provide information such as the type of scene/shot, actor/actress info, location, time, and objects used in each of the shots in a scene. In some embodiments, the metadata to be used for transcoding and transmuxing can be generated based on operations performed in the content editing operation. For example, as the director goes through the raw video data of captured scenes and shots, information about the individual scenes/shots and the corresponding hierarchical structure can be labeled. The labeled metadata is then organized into the structured data format such as an XML file. The metadata can be implemented using the industry-standard MPEG-7 format with certain extensions. The metadata can also be implemented using proprietary format(s). Table 1 shows an example of the proprietary structured data format in accordance with the present technology.
TABLE 1. Example Metadata in Structured Data Format

<movie>
  <title>Forrest Gump</title>
  <genre>Romance</genre>
  <scene id=5>
    <transition>fade in</transition>
    <movietime>00:30:25</movietime>
    <duration unit=minute>15</duration>
    <location>
      <city>San Francisco</city>
      <latitude>120000</latitude>
      <longitude>120000</longitude>
      <indoor_outdoor>outdoor</indoor_outdoor>
      <address>...</address>
    </location>
    <staff>
      <director>John Doe</director>
      <photographers>...</photographers>
      ...
    </staff>
    <casts>
      <actor>Forrest</actor>
      <actress>Jenny</actress>
      ...
    </casts>
    <commercials>
      <commercial id=1>
        <type>billboard</type>
        <shape>rectangle</shape>
        <pattern>black-white-grid</pattern>
      </commercial>
      ...
    </commercials>
    <cameras>...</cameras>
    <vehicles>...</vehicles>
    <shot id=1>
      <camera id=1>
        <shot_type>close-up shot</shot_type>
        <direction>Forrest/right</direction>
        <angle>horizontal</angle>
        <URL>http://example.com/movies/forrestgump/sce5-shot1-camera1.mp4</URL>
      </camera>
      <camera id=2>
        <shot_type>close-up shot</shot_type>
        <direction>Jenny/left</direction>
        <angle>horizontal</angle>
        <URL>http://example.com/movies/forrestgump/sce5-shot1-camera2.mp4</URL>
      </camera>
      <camera id=3>
        <shot_type>medium shot</shot_type>
        <direction>Forrest/Jenny/front</direction>
        <angle>horizontal</angle>
        <URL>http://example.com/movies/forrestgump/sce5-shot1-camera3.mp4</URL>
      </camera>
      <cut type='jump'>camera 3</cut>
      <action id=1 lapse=5s>Forrest and Jenny walking forward</action>
      <cut type='jump'>camera 2</cut>
      <line speaker='Forrest'>Which college are you going</line>
      <music>
        <id>12</id>
        <type>romantic</type>
        <action>play</action>
      </music>
      <cut type='match'>camera 3</cut>
      <action id=2>Jenny turned to Forrest</action>
      <line speaker='Jenny'>I am going to DC to protest</line>
      <action id=3 lapse=40s>Forrest and Jenny walk in silence</action>
      <cut type='jump'>camera 1</cut>
      <music>
        <action>stop</action>
      </music>
    </shot>
    ...
    <shot id=2></shot>
    ...
    <shot id=3></shot>
    <transition>dissolve</transition>
  </scene>
  ...
</movie>

In some embodiments, the raw multimedia video and/or audio data can be organized according to the hierarchical structure indicated by the structured data format. For example, as shown in Table 1, multiple cameras are used for a particular shot in a scene. Raw video/audio clips captured by different devices for the shot (e.g., from different angles) can be saved into separate file containers. The final edited video for the shot/scene can be saved into an additional file container. The separate file containers of different shots and/or scenes are organized into the hierarchical structure corresponding to the metadata. The availability of raw video/audio data from different cameras enables custom editing of the multimedia content. Additional and/or alternative storylines can be created based on adding or changing the metadata of the multimedia content. Given the hierarchical structure of the multimedia content and the production stage information in the metadata, the transcoding operation can be performed adaptively at a shot level or a scene level according to the characteristics of the shot and/or scene. For example, for static shots or scenes without much motion, the compression level can be increased to create copies of the video data having different bitrates. On the other hand, if the scene or the shot includes lots of motions or actions, the compression level can be adjusted to account for the complexity in video compression.
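As a non-limiting illustration of how such metadata can drive shot-level transcoding decisions, the following Python sketch parses a normalized, well-formed variant of the Table 1 metadata and derives a rough compression hint for each shot. The reduced XML fragment, the attribute quoting, and the keyword-based heuristic are illustrative assumptions rather than a definitive implementation.

import xml.etree.ElementTree as ET

# A normalized, well-formed fragment in the spirit of Table 1 (attribute
# quoting and the reduced set of elements are assumptions for illustration).
METADATA = """
<movie>
  <scene id="5">
    <shot id="1">
      <camera id="1">
        <shot_type>close-up shot</shot_type>
        <URL>http://example.com/movies/forrestgump/sce5-shot1-camera1.mp4</URL>
      </camera>
      <action id="1" lapse="5s">Forrest and Jenny walking forward</action>
    </shot>
    <shot id="2">
      <action id="1" lapse="3s">fight breaks out on the street</action>
    </shot>
  </scene>
</movie>
"""

def compression_hint(shot):
    # Rough heuristic: shots described mostly by dialogue or slow movement
    # tolerate a higher compression level than action-heavy shots.
    text = " ".join((a.text or "") for a in shot.findall("action")).lower()
    action_words = ("fight", "chase", "run", "explosion")
    return "high compression" if not any(w in text for w in action_words) else "low compression"

root = ET.fromstring(METADATA)
for scene in root.findall("scene"):
    for shot in scene.findall("shot"):
        cameras = [c.findtext("URL") for c in shot.findall("camera")]
        print("scene", scene.get("id"), "shot", shot.get("id"),
              compression_hint(shot), cameras)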
That is, instead of having uniform levels of bitrates for the entire multimedia content, file containers for different scenes/shots can have different bitrate levels corresponding to the contents of the scenes/shots. In some embodiments, the transmuxing operation can be performed at a shot level so that chunks are generated according to the boundaries of the shots. FIG. 4 illustrates an example of a segmented shot in accordance with the present technology. The shot 401 has a length of Ti in the time domain. The shot 401 is segmented into five chunks 411-415 in time. The chunk 415 has a shorter length so that it does not extend across two shots. The next shot 402 is then segmented into additional chunks, including chunk 416. In some embodiments, the chunk size can be adjusted according to the characteristics of the shot. For example, for static shots, larger chunk sizes can be used for efficient video compression. For shots that include lots of actions, smaller chunk sizes can be used to account for compression complexity. FIG. 5 is a flowchart representation of an example of a method 500 for processing a multimedia content in accordance with the present technology. The method 500 includes, at operation 510, receiving one or more media files and metadata information of a multimedia content. Each of the one or more media files comprises video or audio data (e.g., video/audio clips) captured at a production stage for producing the multimedia content. The metadata information indicates production stage information of the multimedia content. The metadata information can be determined during the production stage for producing the multimedia content (e.g., represented as the machine-readable scripts as disclosed in International Patent Application No. PCT/CN2019/09072, U.S. Pat. No. 10,721,377 and International Patent Application No. PCT/US2020/032217). The metadata information can also be generated after the production stage (e.g., based on operations performed in the content editing operation). The production stage information comprises at least a genre of the multimedia content, information about the devices and cast for a shot, or content of video or audio data corresponding to a shot. The method 500 includes, at operation 520, determining a hierarchical structure of the multimedia content based on the production stage information. The hierarchical structure indicates that the multimedia content includes multiple scenes and each of the multiple scenes includes multiple shots produced with corresponding devices and cast. In some embodiments, the hierarchical structure can be the same as the machine-readable scripts or a simplified version of the machine-readable scripts. In some embodiments, the one or more media files are organized according to the hierarchical structure, and there is information identifying a location of a media file in the multimedia content. For example, video clips captured from different angles by different devices can be organized as leaf elements of a corresponding shot in the hierarchical structure. The method 500 includes, at operation 530, identifying, for individual scenes in the hierarchical structure of the multimedia content, characteristics associated with the individual scenes based on the production stage information. As discussed above, the production stage information can include a genre of the multimedia content, information about the devices and cast for a shot, or content of video or audio data corresponding to a shot.
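As a non-limiting illustration of the shot-aligned segmentation of FIG. 4 described above, the following Python sketch splits each shot into chunks of a target length and shortens the last chunk so that no chunk extends across a shot boundary. The shot timings and the target chunk length are illustrative assumptions.

from typing import List, Tuple

def chunk_shot(shot_start: float, shot_end: float, target_len: float) -> List[Tuple[float, float]]:
    # Split one shot into chunks of roughly target_len seconds; the last
    # chunk is shortened so that no chunk extends across a shot boundary.
    chunks = []
    t = shot_start
    while t < shot_end:
        chunks.append((t, min(t + target_len, shot_end)))
        t += target_len
    return chunks

# Shot 401 followed by shot 402 (timings are illustrative assumptions).
shots = [(0.0, 23.0), (23.0, 41.0)]
for start, end in shots:
    # A smaller target length could be chosen for action-heavy shots.
    print(chunk_shot(start, end, target_len=5.0))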
In some embodiments, the characteristics associated with the individual scenes indicate an amount of motion in the individual scenes. For example, the information provided in the hierarchical structure (e.g., the XML file) can indicate whether a scene or a shot comprises lots of actions or is mostly static. As shown in Table 1, the shot type (e.g., close-up shot) and the action identifier (e.g., Forrest and Jenny walking forward) can be used to determine that the corresponding shot is mostly a static shot with conversations. As another example, an action identifier identifying a fight between the characters can be used to determine that the corresponding shot includes lots of motions and changes. The characteristics associated with the individual scenes can be used for subsequent transcoding and transmuxing. The method 500 includes, at operation 540, generating multiple copies of the multimedia content at different compression levels. The different compression levels are adaptively adjusted for the individual scenes based on the characteristics associated with the individual scenes. For example, to achieve the same bitrate, a higher compression level can be applied for scene(s)/shot(s) that are mostly static as compared to scene(s)/shot(s) that have lots of motions and changes. The method 500 also includes, at operation 550, dividing each of the multiple copies of the multimedia content into segments based on the hierarchical structure, where a length of a segment is adaptively adjusted based on the characteristics associated with the individual scenes. That is, instead of using a uniform chunk size, the chunk size can vary adaptively according to boundaries of shot(s)/scene(s) to ensure a seamless viewing experience for the viewers. The disclosed techniques can be implemented in ways to provide various unique services with useful features such as post-production customized editing, customized viewing, fast video searching, etc. For example, the disclosed techniques can be implemented to enable producers, directors, or the content providers/distributors to make different custom versions of a movie/TV program suitable for different viewer groups. In some implementations, the disclosed techniques can be used to allow producers, directors, or the content providers/distributors to generate and to store, for one or more individual scenes in the multimedia content, differently edited media files based on video or audio data captured during shooting of the scene. Multiple differently edited media files are produced for each shot in the scene. Based on the hierarchical structure of the multimedia content, the one or more edited media files can be stored separately from the video or audio data captured for the individual scene. In some embodiments, the disclosed techniques can be used to generate, based on the same video or audio data captured at the production stage, multiple versions of the multimedia content corresponding to different storylines for the same movie or TV program. The option for selecting one of the different versions of the multimedia content is provided via a user interface with a navigation list or menu that contains the different versions and represents the hierarchical structure of the multimedia content of each version.
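As a non-limiting illustration of how different versions can be exposed to a player, the following Python sketch models a navigation list that maps each version or storyline to an ordered sequence of file containers. The class name, field names, and clip file names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NavigationList:
    # Ordered playback sequences keyed by version or storyline name.
    sequences: Dict[str, List[str]] = field(default_factory=dict)

    def add_version(self, name: str, clip_urls: List[str]) -> None:
        self.sequences[name] = list(clip_urls)

    def playlist(self, name: str) -> List[str]:
        return self.sequences[name]

nav = NavigationList()
# Default storyline: the edited clip of each shot.
nav.add_version("default", ["sce5-shot1-edited.mp4", "sce5-shot2-edited.mp4"])
# Alternative storyline assembled from raw camera 1 captures.
nav.add_version("camera1-cut", ["sce5-shot1-camera1.mp4", "sce5-shot2-camera1.mp4"])
print(nav.playlist("camera1-cut"))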
During the custom editing process, the disclosed techniques enable commercials and other digital material to be inserted into the content based on the boundaries of the shots/scenes and/or the content of the shots/scenes so as to minimize the level or extent of viewing interruption caused by an inserted commercial or advertisement and to provide a seamless viewing experience to the viewers. For example, some implementations of the commercial insertion allow inserting a commercial media file between two adjacent segments of the multimedia content based on the content of the commercial media file and the contents of the two adjacent segments. The navigation list comprises information about a transition type between the commercial media file and the two adjacent segments. In addition to providing editing and modifying options for producers, directors, or the content providers/distributors, the disclosed techniques can also be implemented to provide viewer options in connection with the different versions of a movie or TV program generated by producers, directors, or the content providers/distributors. For example, the disclosed techniques can be implemented to provide a user interface in a media player for viewers to select and view different existing versions of the content and/or to create custom content on the fly at viewing time. Some implementations can include a user interface for displaying, in one or more navigation lists shown via a user interface of a media player, multiple versions of the multimedia content corresponding to different storylines generated based on the same video or audio data captured at the production stage. Specifically, when the different storylines include at least a first storyline and a second, different storyline, the disclosed techniques can be used to provide a viewer user interface in a viewer media player for receiving a user input indicating a switch between the first storyline and the second storyline (e.g., switching from the first storyline to the second when the viewer previously selected the first storyline) and displaying a version of the multimedia content corresponding to the second storyline to the user. In some embodiments, the viewer user interface can be structured for displaying a first media file of the multimedia content to a user via a network, displaying, in a navigation list shown via a user interface of a media player, information about the hierarchical structure of the multimedia content to a user, and receiving a user input via the user interface of the media player. The user input indicates a viewing change from the first media file to a second media file, where the first and second media files are associated with a same device, a same shot, or a same scene in the multimedia content. The method also includes switching to display the second media file to the user. In some embodiments, the first media file and the second media file are captured at different angles by the same device or different devices. In some embodiments, the second media file comprises video or audio data captured for a shot or a scene, and the first media file comprises an edited version of the captured video or audio data. In some embodiments, the method includes simultaneously displaying, in a view area that comprises multiple sub-regions, multiple media files captured at different angles for one or more shots. One of the media files is displayed in a first sub-region that has a larger dimension than other sub-regions.
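As a non-limiting illustration of the viewing change described above, the following Python sketch resolves a switch request only when the currently displayed media file and the requested media file belong to the same shot in the hierarchical structure. The index layout and file names are illustrative assumptions.

from typing import Dict, List, Optional

# Hypothetical index: shot identifier -> media files associated with that
# shot (the edited version followed by raw captures from individual cameras).
SHOT_FILES: Dict[str, List[str]] = {
    "scene5/shot1": [
        "sce5-shot1-edited.mp4",
        "sce5-shot1-camera1.mp4",
        "sce5-shot1-camera2.mp4",
    ],
}

def switch_media(shot_id: str, current: str, requested: str) -> Optional[str]:
    # Honor the viewing change only if both media files belong to the same
    # shot in the hierarchical structure; otherwise keep the current file.
    siblings = SHOT_FILES.get(shot_id, [])
    if current in siblings and requested in siblings:
        return requested
    return None

print(switch_media("scene5/shot1", "sce5-shot1-edited.mp4", "sce5-shot1-camera2.mp4"))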
In some embodiments, each segment of a copy of the multimedia content is within boundaries of a shot. As yet another example, the disclosed techniques can be used to perform fast video search on a shot/scene level. In some embodiments, the fast search can include a viewer search process that receives a search keyword via a user interface of a media player from a user, determines, based on the characteristics associated with the individual scenes, a subset of media files corresponding to the search keyword, and provides, via the user interface of the media player, a list of the subset of media files. In some embodiments, a restriction may be imposed on the list of the subset of media files based on a user input (e.g., for parental control, view group control, or other control purposes). Some examples of the disclosed techniques are further described in the following example embodiments. Embodiment 1: Custom Editing Service Typically, the director(s)/editor(s) may produce a single version of the final edited content. That is, regardless of how many shots and/or scenes were captured during the production stage, a limited number of edited storylines is generated in the end (often times, only one or two storylines are available). The techniques disclosed herein allow not only the director(s)/producer(s) to produce multiple versions of contents but also enable the content provider(s)/distributor(s) to perform desired editing on the contents. The custom editing service, provided either to the director(s)/producer(s) or to the content provider(s)/distributor(s), takes advantage of the availability of raw video/audio data corresponding to the hierarchical structure of the storyline such that multiple custom versions of the edited content can be created for different viewers or viewer groups. A new navigation list is created for each new version of the edited content. When the viewer chooses to view a particular version of the content, the corresponding navigation list can be transmitted to the viewer to enable the viewing experience. As discussed above, raw video/audio clips captured by different devices for the shot can be saved into separate file containers along with the metadata. For example, at the shot level, multiple versions of the edited shot can be created and stored as additional file containers. Similarly, at the scene level, multiple versions of the edited scene can be saved separately from the raw video/audio content. Metadata stored in the structured data format (e.g., the XML file) can be updated to indicate different or alternative hierarchical structures of the edited file containers so that different final versions of the multimedia content can be provided to the viewers. In one example aspect, the custom editing service can be provided by a system for editing a multimedia content that includes one or more processors and one or more memories including processor executable code. The processor executable code upon execution by the one or more processors configures the one or more processors to receive one or more media files comprising video or audio data captured at a production stage of producing the multimedia content and determine a hierarchical structure of the multimedia content based on production stage information of the multimedia content. The hierarchical structure indicates that the multimedia content comprises multiple scenes, each of which comprises multiple shots produced with corresponding devices and cast. 
The one or more processors are configured to generate, for an individual scene of the multimedia content, one or more edited media files based on video or audio data captured for the scene and store the one or more edited media files separately from the video or audio data captured for the individual scene according to the hierarchical structure of the multimedia content. In some embodiments, the one or more processors are configured to generate, based on the same video or audio data captured at the production stage, multiple versions of the multimedia content corresponding to different storylines. In some embodiments, the one or more processors can be further configured to insert a commercial media file between two segments of the multimedia content. The custom editing service can be used by editor(s) or director(s) to minimize the amount of work needed to create different versions of the content (e.g., to create different storylines, to meet different rating requirements, etc.). The custom editing service can also be used as a real-time service by content providers to insert different commercials at the boundaries of the shots/scenes. Streaming service providers can perform real-time custom editing based on the content of the scene/shot as well as the bidding prices of commercials to optimize the placement of the commercials within the multimedia contents. In some embodiments, knowing the scene/shot boundaries allows the custom editing service to define video transition effects (e.g., fade, dissolve, wipe, etc.). In particular, in the case of inserting a commercial between scenes, it is desirable to use a video transition type that suits both the multimedia content and commercial content involved in the transition to enable a smooth viewing experience. The transition type can also be defined in the navigation list. Embodiment 2: Custom Viewing Service In some embodiments, the techniques disclosed here can be implemented to provide a custom viewing service. Conventionally, viewers are provided a single version of the edited content. However, because multiple versions of the edited content can be composed much more easily using the disclosed techniques, viewers can have the freedom to select the desired storyline(s) based on their interests and/or tastes. For example, given the availability of multiple versions of edited content (e.g., as discussed in Embodiment 1), viewers can pick and choose which storyline they would like to watch. A viewer can start viewing a multimedia content according to the default storyline and pause the content to make a selection during the viewing time. In some embodiments, a media player can be implemented to show a navigation list, via a user interface, to illustrate different sequences of scenes/shots that correspond to different storylines. The navigation list can be a simplified version of the hierarchical structure of the content generated based on the structured data format as shown in Table 1.FIG.6illustrates an example scenario600of using a navigation list to view different sequences of shots in accordance with one or more embodiments of the present technology. In this example, the navigation list650includes different sequences641,643of playing the content. The default sequence641indicates that the edited clips626,630of the two shots are to be played at viewing time. 
Alternatively, the viewer can select the custom sequence 643, which plays the raw video clip from camera 1 (622) for shot 1 first, followed by the raw video clip from camera 1 (628) for shot 2. In some embodiments, the video clips in a custom sequence are associated with each other at a device level (e.g., as in custom sequence 643), at a shot level (e.g., a sequence can include video clips 622, 624 that are captured for the same shot by the same or different devices), or at a scene level (e.g., a sequence can include video clips in the same scene, captured by the same/different devices for the same/different shots). The custom sequences can be created by the editor, the director, or the content provider using the custom editing service as discussed in Embodiment 1. Alternatively, the viewer can create the custom sequence 643 based on the hierarchical structure of the multimedia content. For example, the media player can display auxiliary information to help the user identify which storyline or sequence is suitable for the viewer's taste. The user interface allows the viewer to select which sequence/scene/shot/camera of the storyline to continue the viewing experience. Additional video transition effects (e.g., fade, dissolve, wipe, etc.) can be added automatically or based on the viewer's selection should the viewer decide to switch the playing sequence. In some embodiments, given the availability of the raw video clips from different cameras and/or angles, the viewer interface of a media player can be designed to allow viewers to simultaneously watch multiple video captures from different cameras and/or camera angles for the same shot. When a viewer chooses to view clips from multiple cameras, the streaming server can send multiple video clips (e.g., with different resolutions and/or compression levels) to the media player. The media player can split the viewing screen into multiple rectangular sub-screens, each displaying a video clip from a corresponding camera. For example, the viewing screen can be divided into a main sub-region with a larger dimension and several small sub-regions. The main sub-region displays the producer's edited version, and smaller sub-regions display video clips from cameras with different shooting angles. In some embodiments, given the navigation list, viewers can fast-forward or rewind the media content more precisely according to the scenes and/or shots. For example, instead of fast-forwarding or rewinding the multimedia content based on equally-sized time units (e.g., 1 second as 1× speed, 5 seconds as 2× speed, etc.), the media player can fast-forward or rewind the content to the time-domain boundary (e.g., the beginning or the end) of a different shot or a different scene. Accordingly, when viewers try to move to a target scene or shot, there is no need for them to go through unrelated scenes or shots. Also, the viewers would not miss the target scene or shot due to the time unit size being too large (e.g., the fast-forwarding or rewinding speed being too fast). In one example aspect, the custom viewing service can be provided by a system for viewing a multimedia content that includes one or more processors and one or more memories including processor executable code.
The processor executable code upon execution by the one or more processors configures the one or more processors to display, by a media player, a first media file of a copy of the multimedia content to a user over a network and receive, via a user interface of the media player, a first user input that triggers a display of information about a hierarchical structure of the multimedia content. The hierarchical structure indicates that the multimedia content comprises multiple scenes, each of which comprises multiple shots produced with corresponding devices and cast. The one or more processors are configured to receive, via the user interface, a second user input that indicates a viewing change from the first media file to a second media file. The first and second media files are associated with a same shot or a same scene in the multimedia content. The one or more processors are further configured to display, by the media player, the second media file to the user. In some embodiments, the first media file and the second media file are generated based on same video or audio data captured at the production stage corresponding to different storylines. In some embodiments, the first media file and the second media file are captured by different cameras at different angles for the same shot. In some embodiments, the second media file comprises video or audio data captured for a shot or a scene, and the first media file comprises an edited version of the captured video or audio data (that is, the user chooses to view the raw captured video/audio data). Using the disclosed techniques, the viewing experience now becomes much more interactive, and viewers are given the freedom to explore different possible endings of the content. Embodiment 3: Video Searching Service Because raw audio/video data as well as the edited content are organized according to the metadata which has information for each scene/shot/camera capture, video searching efficiency can be vastly improved with the assistance of the metadata information. Furthermore, instead of locating the entire multimedia content based on the search keywords, the disclosed techniques enable the viewers to locate smaller snippets of the content (e.g., a few shots, or a scene, and even a shot from a specific camera angle) in a vast database of multimedia contents. In some embodiments, the hierarchical structured metadata can be converted to a flat structure format and stored in database for search and analytical purposes. Certain key words of the scenes/shots can be indexed to allow viewers to quickly search through the available multimedia contents and locate desired shots. For example, a viewer can query all the kissing shots by entering the keyword “kiss” via the user interface of the media player. The viewer can add additional filtering options to limit the amount of returned results. In some embodiments, the viewer can impose a restriction on the search results so that the relevant video snippets are restricted or not viewable (e.g., for parental control purposes). In one example aspect, the video searching service can be provided by a system for searching one or more multimedia contents that includes one or more processors and one or more memories including processor executable code. 
The processor executable code upon execution by the one or more processors configures the one or more processors to receive, via a user interface of a media player, a search keyword from a user and select one or more media files from multiple multimedia contents according to the search keyword. Each of the multiple multimedia contents comprises a hierarchical structure having multiple scenes, each of which comprises multiple shots produced with corresponding devices and cast. The one or more media files are selected based on characteristics associated with individual scenes of each of the multiple multimedia contents, which are determined according to production stage information of the multimedia content. The one or more processors are also configured to provide, via the user interface of the media player, a list of the one or more media files to the user. In some embodiments, the one or more processors are also configured to receive, via a user interface of a media player, a user input from the user and impose a restriction on the list of the one or more media files based on the user input (e.g., preventing kids from viewing violent video snippets). FIG. 7 is a block diagram illustrating an example of the architecture for a computer system or other control device 700 that can be utilized to implement various portions of the presently disclosed technology (e.g., processor(s) to perform transcoding or transmuxing). The computer system 700 includes one or more processors 705 and memory 710 connected via an interconnect 725. The interconnect 725 may represent any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers. The interconnect 725, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus, sometimes referred to as “FireWire.” The processor(s) 705 may include central processing units (CPUs) to control the overall operation of, for example, the host computer. In certain embodiments, the processor(s) 705 accomplish this by executing software or firmware stored in memory 710. The processor(s) 705 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. The memory 710 can be or include the main memory of the computer system. The memory 710 represents any suitable form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. In use, the memory 710 may contain, among other things, a set of machine instructions which, when executed by processor 705, causes the processor 705 to perform operations to implement embodiments of the presently disclosed technology. Also connected to the processor(s) 705 through the interconnect 725 is an (optional) network adapter 715. The network adapter 715 provides the computer system 700 with the ability to communicate with remote devices, such as the storage clients, and/or other storage servers, and may be, for example, an Ethernet adapter or Fiber Channel adapter.
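Returning to the video searching service described above, the following Python sketch gives a non-limiting illustration of keyword search over a flattened index derived from the hierarchical metadata, with an optional restriction such as parental control. The index layout, keywords, and function signature are illustrative assumptions.

from typing import Dict, List, Sequence

# Hypothetical flattened index built from the hierarchical metadata: each
# record describes one shot-level media file with free-text keywords.
INDEX: List[Dict[str, str]] = [
    {"url": "sce5-shot1-camera1.mp4", "keywords": "close-up Forrest Jenny conversation"},
    {"url": "sce7-shot3-camera2.mp4", "keywords": "kiss Forrest Jenny sunset"},
]

def search(keyword: str, restricted_terms: Sequence[str] = ()) -> List[str]:
    # Return media files whose keywords contain the search term, dropping
    # entries that match a restriction (e.g., for parental control).
    hits = []
    for record in INDEX:
        kw = record["keywords"].lower()
        if keyword.lower() in kw and not any(r.lower() in kw for r in restricted_terms):
            hits.append(record["url"])
    return hits

print(search("kiss"))                             # ['sce7-shot3-camera2.mp4']
print(search("kiss", restricted_terms=["kiss"]))  # []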
With the assistance from the production stage information, the techniques as disclosed herein allow viewers to have completely different viewing experiences of movies, TV shows, or videos. Using the disclosed techniques, not only can the directors/producers produce different versions of the content based on the same raw data captured at the production stage, but content providers also enjoy the flexibility of creating custom versions of the movies, TV shows, or other contents suitable for various viewer groups (e.g., based on viewer subscription plans). Furthermore, content providers can have better control of commercial placement in the movies, TV shows, or other contents to provide a seamless viewing experience to the viewers. Real-time streaming of different versions of the same content, such as drama or movies, becomes possible. Moreover, viewers can have the flexibility of creating custom content on the fly at the viewing time. Viewers also have the option of viewing the same shot/scene from different angles based on the raw data captured at the production stage. Given the rich production stage information embedded in the metadata, the disclosed techniques can be used to enable the viewers to locate contents of interest from a vast amount of available contents on the network. The above examples demonstrate that the techniques and systems disclosed in this patent document for packaging and distribution of movies, TV shows and other multimedia can be used to enhance the existing technologies adopted by movie and TV production companies to provide flexibility and features that are not available in various conventional movie or TV programs. In addition, the disclosed techniques make real-time content distribution and viewing much more user friendly. Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, machine-readable script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments. Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
11943513
DESCRIPTION OF EMBODIMENT A detailed description of an embodiment of the present invention will be given below with reference to drawings. FIG.1is a configurational block diagram illustrating an overall outline of an information processing system1including client apparatuses10that are information processing apparatuses according to the embodiment of the present invention. As illustrated inFIG.1, the information processing system1includes the plurality of client apparatuses10and a server apparatus20. Each of the client apparatuses10is, for example, an information processing apparatus such as a home gaming console or a personal computer. As illustrated inFIG.1, the client apparatus10includes a control section11, a storage section12, and an interface section13. Also, the client apparatus10is connected to an operating device14and a display apparatus15. The control section11includes at least one processor such as a CPU (Central Processing Unit) and performs a variety of information processing tasks by executing a program stored in the storage section12. It should be noted that a specific example of a processing task performed by the control section11in the present embodiment will be described later. The storage section12includes at least one memory device such as a RAM (Random-Access Memory) and stores the program executed by the control section11and data processed by the program. The interface section13is an interface for data communication with the operating device14and the display apparatus15. The client apparatus10is connected to each of the operating device14and the display apparatus15in a wired or wireless manner via the interface section13. Specifically, the interface section13includes a multimedia interface such as an HDMI (registered trademark) (High-Definition Multimedia Interface) to send video image data supplied by the client apparatus10, to the display apparatus15. Also, the interface section13includes a data communication interface such as a USB (Universal Serial Bus) to receive an operation signal indicating details of a user operation received by the operating device14. Further, in the present embodiment, the interface section13includes a communication interface such as a LAN (Local Area Network) to send and receive data to and from other information processing apparatuses via a communication network such as the Internet. The client apparatus10is connected to the other client apparatuses10and the server apparatus20for communication via this interface section13. The operating device14is, for example, a dedicated controller of a home gaming console or the like. The operating device14receives an operation instruction from the user and sends an operation signal indicating details of the operation instruction to the client apparatus10. The display apparatus15displays a video image corresponding to a video image signal sent from the client apparatus10, to let the user view the video image. The display apparatus15may be a stationary display apparatus such as a home television receiver. Alternatively, the display apparatus15may be a head-mounted display or the like worn by the user on the head. The server apparatus20is a server computer that manages a running status of each client apparatus10and that provides the user of each client apparatus10with a service realized by the information processing system according to the present embodiment. 
In the description given below, one of the plurality of client apparatuses 10 will be denoted as a client apparatus 10n, and a user using the client apparatus 10n will be denoted as a featured user Un for reasons of convenience. Then, the user using each of the other client apparatuses 10 will be denoted as another user. Also, users who use the service provided by the information processing system 1, including the featured user Un and the other users, will be denoted as service users. A friend relationship may be set between service users. Such setting of a friend relationship is managed by the server apparatus 20 according to registered information of each service user. That is, the server apparatus 20 retains friend relationship setting information indicating with whom each service user is a friend. In the description given below, another user who has a friend relationship with the featured user Un will be denoted as a friend user. A description of functions realized by the client apparatus 10 will next be given with reference to FIG. 2 by taking the client apparatus 10n as an example. As illustrated in FIG. 2, the client apparatus 10n functionally includes a content selection screen presentation section 51, a preview screen presentation section 52, and a content presentation section 53. These functions are realized as a result of execution of the program stored in the storage section 12 by the control section 11. This program may be provided to the client apparatus 10n via a communication network such as the Internet or in such a manner as to be stored in a computer-readable information storage medium such as an optical disc. The content selection screen presentation section 51 draws a content selection screen CS and displays the content selection screen CS in a display region of the display apparatus 15. The content selection screen CS is a screen for allowing the featured user Un to select from a plurality of types of content (hereinafter referred to as content options C) that can be provided to the featured user Un by the client apparatus 10n, and includes content images CI representing the respective content options C. The content images CI may be icon images, cover images, and the like. FIG. 3 illustrates an example of the content selection screen CS that includes six content images CI1 to CI6 representing a total of six content options C1 to C6, respectively. The content options C that are selectable on the content selection screen CS may include various types of content including, for example, game content that can be played by the featured user Un, video content such as movies, and music content. In addition, content realized by a variety of independent application programs may also be included. Examples of content realized by such application programs include chat content realized by a chat application and message content realized by a message application. Also, the content options C presented on the content selection screen CS may include information regarding a user group presented by a user group application. A user group is created by any one of the service users and is a group of which a plurality of service users are members. The server apparatus 20 manages which service user is a member of which user group. The user group application displays states of other users who are members of a user group in which the featured user Un takes part, and displays a schedule of the group.
For example, in the case where a plurality of service users intend to play the same game at the same time together, these service users can share the schedule by becoming members of one user group and registering the schedule with the user group. In the case where the featured user Un selects any one of the content options C in a state where the content selection screen CS is displayed by the content selection screen presentation section51, the preview screen presentation section52displays a preview screen PS corresponding to the selected content option C. Specifically, the featured user Un selects any one piece of content from the content options C presented as a result of an instruction operation performed on the operating device14, in a state where the content selection screen CS is displayed in the display region of the display apparatus15. When any one of the content options C is placed into a selected state, the preview screen presentation section52displays the preview screen PS regarding the content option C in a selected state. While viewing a preview of each content option C, the featured user Un selects content which the featured user Un intends to use, and then performs an operation to confirm the selection on the operating device14. In response to this confirmation operation, the content presentation section53executes a program corresponding to the selected content and starts the presentation of the content. It should be noted that details of control realized by the preview screen presentation section52will be described later. The content presentation section53executes the program corresponding to the content option C which is confirmed as a final selection by the featured user Un in a state where the content selection screen CS is presented, thus presenting the corresponding content. For example, in the case where game content is selected, the content presentation section53executes a corresponding game program. In the case where video or music content is selected, the content presentation section53executes a player program that reproduces the selected content. Also, in the case where a user group is selected as the content option C, the content presentation section53executes an application program that manages the user group, and presents content including information regarding the user group. A detailed description of the preview screen PS presented while the content selection screen CS is presented will be given below. First, when presenting the content selection screen CS before the presentation of the preview screen PS by the preview screen presentation section52, the content selection screen presentation section51acquires information regarding another user currently using each piece of content (hereinafter referred to as current user information), thus displaying current user information within the content selection screen CS. For example, in the case where the content option C1is content of a game A and where users U1and U2are currently playing the game A by using their own client apparatuses10, the content selection screen presentation section51displays two user images UI representing these two other users, as current user information in association with the content option C1of the game A. It should be noted that the user images UI may be avatar images, photograph images, or the like registered by their users in advance. InFIG.3, such user images UI are arranged side by side at an upper or lower edge of each content image CI. 
For example, a user image UI1representing the user U1and a user image UI2representing the user U2are displayed side by side at the lower edge of the content image CI1representing the content option C1. It should be noted here that each of the user images UI is arranged at a position close to a side of a perimeter of the associated content image CI, the side being closer to the center of the content selection screen CS than the perimeter thereof. In order to control the display of such current user information, when starting presentation of content, the content presentation section53of each client apparatus10notifies the server apparatus20of information identifying the content whose presentation will be started (e.g., ID (Identification) assigned to content) and information identifying a service user watching the content (e.g., user account). Also, when terminating the presentation of content, the content presentation section53notifies the server apparatus20of the termination. The server apparatus20manages which content is used by which service user at present in each of the client apparatuses10, by receiving such a notice. When starting the presentation of the content selection screen CS, the content selection screen presentation section51of the client apparatus10nsends an acquisition request for current user information to the server apparatus20. This acquisition request includes information identifying the content options C to be included in the content selection screen CS. The server apparatus20that has received the acquisition request identifies, for each content option C included in the acquisition request, the service user currently using the content and returns the information to the client apparatus10nthat has sent the request, as current user information. This reply includes at least information identifying a current user currently using each content option C. Further, the reply may include access destination information (e.g., IP (Internet Protocol) address) of the client apparatus10used by the current user and data of the user image UI representing the current user. On the basis of the current user information received from the server apparatus20, the content selection screen presentation section51of the client apparatus10ndisplays, in association with each of the content options C, the user image UI representing the current user currently using the content option C. Here, current user information sent from the server apparatus20to the client apparatus10may include only friend user information of the featured user Un. Alternatively, the current user information may include information regarding other users except friend users. Also, each service user may set, in advance, a disclosure rule such as whether to disclose his or her own current user information only to friend users or to all service users. In such a case, when receiving an acquisition request from each client apparatus10, the server apparatus20narrows down service users who are actually using the content option C for which the acquisition request has been made, to select current users to be included in a reply, on the basis of the disclosure rule and the friend setting of each of the service users. Also, in the case where there are a plurality of service users who are currently using the same content option C, the server apparatus20may include, in current user information, only information regarding one user selected according to a given criterion or may send information regarding a plurality of current users. 
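As a non-limiting illustration of how the server apparatus 20 may answer an acquisition request while honoring friend settings and disclosure rules, the following Python sketch filters the current users for each requested content option C. The data layout, field names, and the disclosure values are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class UserStatus:
    content_id: str   # content the service user is currently using
    disclose_to: str  # disclosure rule: "friends" or "all"

# Hypothetical server-side state (names and fields are assumptions).
CURRENT: Dict[str, UserStatus] = {
    "U1": UserStatus("gameA", "friends"),
    "U2": UserStatus("gameA", "all"),
    "U3": UserStatus("movieB", "friends"),
}
FRIENDS: Dict[str, Set[str]] = {"Un": {"U1", "U3"}}

def current_users(requester: str, content_ids: List[str]) -> Dict[str, List[str]]:
    # For each requested content option, list users currently using it,
    # filtered by the friend setting and each user's disclosure rule.
    reply: Dict[str, List[str]] = {cid: [] for cid in content_ids}
    for user, status in CURRENT.items():
        if status.content_id not in reply:
            continue
        if status.disclose_to == "all" or user in FRIENDS.get(requester, set()):
            reply[status.content_id].append(user)
    return reply

print(current_users("Un", ["gameA"]))  # {'gameA': ['U1', 'U2']}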
Also, in the case where the content option C is game content and where there are a plurality of friend users who take part in the same session and play the game together, information regarding these friend users may be sent as current user information. It should be noted that users currently using certain content change over time. Accordingly, while the content selection screen CS is presented, the content selection screen presentation section51may repetitively send an acquisition request for current user information to the server apparatus20on a regular basis and may update current user information to be displayed, to the latest information every time the content selection screen presentation section51sends the acquisition request. Also, as for the content option C such as a chat application, the content selection screen presentation section51may display the user image UI representing another user who exchanges messages with the featured user Un, by using the application, instead of information regarding a user currently using the content. Also, as for the content option C of a user group, the user images UI each representing another user who is a member of the user group may be displayed. In the example illustrated inFIG.3, the content image CI3representing the content option C3of a user group includes the user images UI representing other users who are members of the user group. Thereafter, in the case where the featured user Un selects any one of the content options C while the content selection screen CS is displayed, the preview screen presentation section52displays the preview screen PS corresponding to the content option C. At this time, the preview screen presentation section52displays the state of the content screen actually watched by the current user associated with the content option C, within the preview screen PS. This allows the featured user Un to view, for example, the state of a game actually being played by another user and select content which the featured user Un intends to use from now. In particular, in the case where, for example, the featured user Un intends to join the game currently played by a friend user, the featured user Un can select desired content by using the preview screen PS described above. In order to control the display of the preview screen PS described above, in the case where the featured user Un selects one of the content options C, the preview screen presentation section52of the client apparatus10nidentifies another user currently using the selected content option C, by referring to current user information received in advance from the server apparatus20. Then, the preview screen presentation section52sends a content delivery request to the client apparatus10used by the user. The content presentation section53of the client apparatus10that has received the content delivery request delivers on a real time basis, to the client apparatus10nthat has sent the request, a video representing the content screen currently displayed in the display region of the display apparatus15for presentation to the user of the client apparatus10. The preview screen presentation section52of the client apparatus10nreproduces the delivered video within the preview screen PS on a real time basis (i.e., in parallel with and at the same time as the reception of the delivered video). Such control allows the featured user Un to watch the content screen actually being watched by another user. 
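As a non-limiting illustration of the preview flow described above, the following Python sketch selects one current user of the selected content option C, requests real-time delivery of that user's content screen, and renders each received frame within the preview screen PS. The data layout, addresses, and callback signatures are illustrative assumptions.

from typing import Callable, Dict, Iterable, List, Optional, Tuple

# Hypothetical view of the current user information received in advance:
# content identifier -> (user identifier, address of that user's client apparatus).
CURRENT_USER_INFO: Dict[str, List[Tuple[str, str]]] = {
    "gameA": [("U1", "192.0.2.10"), ("U2", "192.0.2.11")],
}

def request_preview(content_id: str,
                    deliver: Callable[[str], Iterable[bytes]],
                    render: Callable[[bytes], None]) -> Optional[str]:
    # Pick one current user of the selected content option, request real-time
    # delivery of that user's content screen, and render each received frame
    # within the preview screen as it arrives.
    users = CURRENT_USER_INFO.get(content_id)
    if not users:
        return None                  # fall back to a video prepared in advance
    user_id, address = users[0]      # selection criterion is an assumption
    for frame in deliver(address):   # delivered on a real-time basis
        render(frame)
    return user_id

# Example with stub delivery and rendering callbacks.
stub_deliver = lambda addr: (f"frame{i}@{addr}".encode() for i in range(3))
request_preview("gameA", stub_deliver, lambda frame: print(frame.decode()))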
FIG.4illustrates a display example in which the preview screen PS described above is displayed in such a manner as to be superimposed on the content selection screen CS. Here, the preview screen PS is in the form of a speech balloon, pointing to one of the user images UI. This user image UI corresponds to a current user who delivers the video currently presented on the preview screen PS. As described above, it is possible to readily find out to whom the currently displayed video is presented as a content screen, by displaying the preview screen PS in association with the user image UI included in the content selection screen CS. It should be noted that, in the case where there are a plurality of other users who are currently using the selected content option C, the preview screen presentation section52may receive a video representing the content screen currently watched by one of the other users who is selected from the plurality of other users according to a given criterion and may present the video as the preview screen PS. Alternatively, the preview screen presentation section52may receive, in parallel, videos representing content screens individually from the plurality of client apparatuses10used by the plurality of other users and may present the plurality of videos as the preview screens PS at the same time.FIG.5illustrates an example in which a plurality of videos are displayed side by side as described above. In this example, the states of three other users who are playing the same game are displayed simultaneously. It should be noted that, in the example illustrated inFIG.5, three videos are arranged in order corresponding to that of the three user images UI arranged side by side along an edge portion of the content image CI. As described above, it is possible for the featured user Un to find out which video is presented to whom as a content screen, by displaying the plurality of videos in such a manner as to associate the plurality of videos with corresponding ones of the user images representing the plurality of other users. As described above, by receiving a video representing a content screen currently watched by another user for display, it is possible for the preview screen presentation section52to present the user with a preview of a content screen that can be presented by a program corresponding to the content option C, without directly executing the program by the preview screen presentation section52. It should be noted that, in the case where the content option C that is not associated with current user information is selected (e.g., in the case where no user is currently using the content option C), the preview screen presentation section52may display a video or the like prepared in advance, within the preview screen PS. What is displayed on the preview screen PS presented by the preview screen presentation section52may change depending on the type of the selected content option C. For example, in the case where content that is realized by an application program for interaction with other users, such as a chat application or a message application, is selected, the preview screen presentation section52may display, within the preview screen PS, information indicating messages exchanged in the past by using the program, messages not read by the featured user Un, or the like. Also, messages actually posted in a chat room or the like may be displayed on a real time basis. Such information can be acquired from the server apparatus20that relays messages. 
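A minimal sketch (with hypothetical helper names) of keeping each received video paired with the user image it corresponds to, so that the previews can be laid out in the same order as the user images UI along the edge of the content image CI:

def arrange_previews(current_users, request_video_stream):
    previews = []
    for user in current_users:                   # already in user-image display order
        stream = request_video_stream(user["address"])
        previews.append((user["user"], stream))  # keep the video/user pairing explicit
    return previews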
Also, in the case where the content option C corresponding to a user group is selected, the preview screen presentation section52may present the preview screen PS including information such as current situations of other users who are members of the user group (information regarding content currently used by each of the other users) and a future schedule. Such information can also be received and acquired from the server apparatus20.FIG.6illustrates a display example of the preview screen PS illustrating such a user group status. InFIG.6, the next schedule set for the user group and the remaining time until the scheduled date and time are displayed on the left on a real time basis. Also, a current status of each of four other users who are members of the user group is illustrated on the right. In particular, as for users currently using some kind of content with the client apparatuses10, videos representing screens of content viewed by the respective users are displayed side by side as, similarly to the example illustrated inFIG.5. Also, while the preview screen PS is displayed, the featured user Un may select whether to start the use of the content by activating a program corresponding to the content by himself or herself or to continue watching the video presented on the preview screen PS. In the case where an operation for selecting the continuation of watching the video presented on the preview screen PS is received, the preview screen presentation section52switches what is displayed, to a delivered video viewing mode in which a video currently delivered is displayed over the entire display region of the display apparatus15. This allows the featured user Un to concentrate on viewing a video representing the state where another user is playing. In the examples described so far, the content options C presented on the content selection screen CS are those that can be presented by the client apparatus10n,such as a game program that has already been downloaded to the client apparatus10nin advance or video data that has already been downloaded. However, the content options C presented on the content selection screen CS are not limited thereto, and the content selection screen presentation section51may include, in the content options C, content whose data does not exist in the client apparatus10n(hereinafter referred to as non-presented content). For example, there is a possibility that a game played by a friend user, a video watched by the friend user, or the like may capture the attention of the featured user Un. Accordingly, when the content selection screen presentation section51sends an acquisition request for current user information to the server apparatus20, the server apparatus20replies not only current user information of the content option C included in the acquisition request but also information identifying non-presented content such as a game actually being used by a friend user of a service user who has made the request (here, service user is featured user Un), in such a manner as to include the information identifying the non-presented content in the current user information. The content selection screen presentation section51presents the user with the non-presented content that is currently used by the friend user and that is included in the replied current user information, in such a manner as to be included in the content selection screen CS as one of the content options C. 
Data of this non-presented content does not exist in the client apparatus10n.However, by receiving a video delivered from the client apparatus10used by the friend user (i.e., video representing a content screen currently presented to the friend user), it is possible for the preview screen presentation section52to present the featured user Un with the preview screen PS of the non-presented content in a similar manner to those of the other content options C. In the case where the featured user Un performs a selection confirmation operation for using the non-presented content, the content presentation section53performs a preparatory process to allow presentation of the non-presented content to the featured user Un. This preparatory process may be, for example, a process of downloading data of the non-presented content from a provider, a process of displaying a purchase procedure screen for prompting the featured user Un to purchase the non-presented content, or the like. This allows the featured user Un to newly use the non-presented content which has been unavailable up to that point. It should be noted that, in this example, non-presented content included in current user information provided to the client apparatus10nby the server apparatus20is not limited to content currently used by a friend user and may be content currently used by another user other than a friend user. For example, the server apparatus20may present the client apparatus10nwith information regarding non-presented content having a large number of current users at that point, content set in advance as recommended content, or other content, in such a manner as to be included in the current user information. Also, in the case where the preview screen PS of non-presented content is presented, the preview screen presentation section52may present the preview screen PS in a manner different from that for other presentable content. Specifically, it is assumed that non-presented content is not owned by the featured user Un. Therefore, for example, in the case where the non-presented content is a movie or a video, it may not be preferred that the featured user Un continue watching a preview of the content. Accordingly, the preview screen presentation section52may impose a predetermined restriction on the presentation of a preview of non-presented content, such as a restriction which limits an amount of time that the featured user Un is allowed to continue watching a preview of content, to equal to or less than a predetermined amount of time, or a restriction which presents a preview at a lower resolution than that of other content. Also, the usage of the delivered video viewing mode described above may be restricted for non-presented content. In the case where the preview screen presentation section52receives a video representing a currently presented content screen from the other client apparatus10, it may take time to receive the video, and the display may not be started immediately. Further, in the case where the featured user Un gives an instruction to start the usage of non-presented content, the process of downloading data of the non-presented content may also take time. As described above, in the case where a process that requires the user to wait is performed, the client apparatus10may give a guidance display for reporting the progress level of the process (hereinafter referred to as a progress guidance display). Specific examples of such a progress guidance display will be described here. 
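The restrictions on previews of non-presented content could be expressed, under assumed limit values, roughly as follows; the PreviewPolicy type and the concrete numbers (60 seconds, half resolution) are illustrative only.

from dataclasses import dataclass
from typing import Optional


@dataclass
class PreviewPolicy:
    max_seconds: Optional[float] = None   # None: no limit on continuous watching
    resolution_scale: float = 1.0         # 1.0: same resolution as other content
    allow_viewing_mode: bool = True       # delivered video viewing mode allowed


def policy_for(content_id, owned_content_ids):
    # Content already presentable on the client apparatus gets an unrestricted preview.
    if content_id in owned_content_ids:
        return PreviewPolicy()
    # Non-presented content: cap the watch time, lower the resolution and
    # disable the delivered video viewing mode.
    return PreviewPolicy(max_seconds=60.0, resolution_scale=0.5, allow_viewing_mode=False)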
In order to report a progress level of the process that requires the user to wait, it has been common to use a progress guidance display such as a progress bar that changes what is displayed, as needed, as the process proceeds. Further, in addition to the progress guidance display described above, it is common to display a numerical value indicating a progression level of the process, such as a progress rate (ratio), an amount of processing actually completed (e.g., downloaded data size), or estimated remaining time, and update this numerical value as the process proceeds. In the case where such a progress guidance display is given, the client apparatus10checks the progression level of the process at relatively short time intervals and performs a process of updating the display if the process proceeds to or beyond a predetermined ratio. According to such a process, it is possible for the user to readily find out, by a progress guidance display, approximately how far the process has proceeded so far and approximately how much more time it will take, for example. However, in the case of a type of progress guidance display that updates what is displayed, as needed, as the process proceeds, if the progression of the process is temporarily delayed, the display may not be updated. A description of a progress guidance display used in the case where the preview screen presentation section52performs a process of receiving delivered video image data from the client apparatus10used by another user, will be given as a specific example. When starting the display of the preview screen PS, the preview screen presentation section52first receives a volume of delivered video images that allows reproduction for a predetermined time period, and starts the display of the delivered video images after the reception thereof is complete. Then, until the display of the delivered video images is started, the preview screen presentation section52gives a progress guidance display indicating the progress status of the data reception process required for the start. In this example, while a progress guidance display is given, a situation in which a received data volume increases only little by little due to congestion in the communication network, for example, may arise. Here, in the case where the display is updated for each 1% increase in the progress rate of the process (the ratio of received data to an entire data size required to start the display), if it takes a long time to receive data corresponding to 1%, what is displayed in the progress guidance display is not updated for a relatively long time. In such a situation, the user viewing the progress guidance display cannot determine whether the speed at which the process proceeds has simply temporarily slowed down or the process has come to a halt due to an internal error or the like. 
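The data-reception step described above, in which a volume of delivered video sufficient for a predetermined playback time must arrive before display starts, could be sketched as below; receive_chunk and report_progress are assumed stand-ins for the actual reception and display-update mechanisms.

def prebuffer(receive_chunk, required_bytes, report_progress):
    # Keep receiving delivered video data until enough for the predetermined
    # reproduction time has arrived, reporting the progress rate so that a
    # progress guidance display can be drawn while the user waits.
    received = 0
    while received < required_bytes:
        chunk = receive_chunk()          # may return very little data under congestion
        received += len(chunk)
        report_progress(min(100.0, 100.0 * received / required_bytes))
    # Display of the delivered video can start once the buffer is filled.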
Accordingly, in the case where a situation where the progress guidance display is not updated for a predetermined reference time period (i.e., the progress level of the process does not increase by a predetermined update threshold or more within the predetermined reference time period) arises, the preview screen presentation section52increases the number of displayed digits of the numerical value of the displayed progress level.FIG.7is a diagram illustrating an example of such a progress guidance display, and an example of a display in a normal state (before the number of displayed digits is increased) is illustrated at the top, and an example of a display in a slowed-down state (after the number of displayed digits is increased) is illustrated at the bottom. In this example, the numerical value of the progress level is displayed in percentage to ones place in a normal state, and no values to the right of a decimal point are displayed. However, in the case where this display is not updated continuously for a predetermined determination time or more, the preview screen presentation section52increases the number of displayed digits of the numerical value of the progress level to a second decimal place as illustrated at the bottom inFIG.7. If the number of displayed digits increases, the update threshold for updating the numerical value of the progress level also drops (in the example illustrated inFIG.7, the numerical value is updated if the progress rate increases by 0.01% or more). Accordingly, even in a situation where the processing speed has slowed down and where the progression of the process cannot be expressed by a progress bar, the user can confirm, from the numerical value of the progress level, that the process has not come to a halt and is progressing little by little. On the other hand, in the case where the process has come to a halt, even if the number of displayed digits increases, the situation where the numerical value of the progress rate is not updated continues. Accordingly, by increasing the number of displayed digits of the numerical value of the progress level, it becomes easier for the user to distinguish between a situation where the speed at which the process proceeds has simply slowed down and a situation where the process has come to a halt. In this display example, the number of displayed digits of the numerical value is not increased until a situation where the numerical value is not updated for a predetermined reference time period arises. The reason for this is that, while the process proceeds at a normal speed, the numbers in low places are updated at a speed that does not permit distinction by the user, and that the user is less likely to find out the progression level of the process. Also, in the case where the progress level of the process increases again by the predetermined update threshold or more within the predetermined reference time period after the number of displayed digits is increased, the preview screen presentation section52restores the display to the initial number of digits by reducing the number of displayed digits of the numerical value of the progress level. According to such control over switching between numbers of displayed digits, it is possible to give a detailed guidance regarding the progression level to the user only in the case where the user can make a distinction. 
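A simplified sketch of this digit-switching behaviour is given below. The 1% and 0.01% update thresholds and the display to the second decimal place follow the example of FIG. 7; the reference time of 3 seconds is an assumption, and the restore condition is simplified to "the coarse threshold is reached again".

import time


class ProgressGuidance:
    """Sketch of a progress display whose number of digits adapts to the progression speed."""

    def __init__(self, reference_seconds=3.0, normal_digits=0, slow_digits=2):
        self.reference_seconds = reference_seconds
        self.normal_digits = normal_digits
        self.slow_digits = slow_digits
        self.digits = normal_digits
        self.shown = 0.0                        # progress value last rendered
        self.last_advance = time.monotonic()    # last time the coarse value advanced

    def _step(self, digits):
        return 10.0 ** (-digits)                # 1% at 0 digits, 0.01% at 2 digits

    def poll(self, progress_percent):
        """Return the text to display for the current progress rate."""
        now = time.monotonic()
        coarse_advance = progress_percent - self.shown >= self._step(self.normal_digits)
        if coarse_advance:
            # Progress is advancing at a distinguishable pace: restore the initial digits.
            self.last_advance = now
            self.digits = self.normal_digits
        elif now - self.last_advance >= self.reference_seconds:
            # No coarse advance within the reference time: show more decimal places.
            self.digits = self.slow_digits
        if progress_percent - self.shown >= self._step(self.digits):
            self.shown = progress_percent
        return f"{self.shown:.{self.digits}f}%"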
It should be noted that the progress guidance display is not limited to that illustrated inFIG.7and may have a variety of modes including a numerical value of the progress level.FIG.8illustrates another example of the progress guidance display. Also, inFIG.8, an example of a display in a normal state before the number of displayed digits is increased is illustrated at the top, and an example of a display after the number of displayed digits is increased is illustrated at the bottom. Also, in the examples illustrated inFIG.8, the numerical value of the progress level is indicated by a received data size instead of a ratio (percentage). Further, in this example, displayed is a marker M that moves on a circular route according to the progress level of the process to display the progress level with animation. This marker M starts moving from an upward direction position (12 o'clock position), moves one lap around a circular route as the process proceeds, and returns to the initial 12 o'clock position when the process is complete. Further, the preview screen presentation section52may give the user a guidance regarding not only the progress level of the process at that point but also the speed at which the process proceeds at that point, by causing this marker M to flash at a cycle corresponding to the speed at which the process proceeds, for example. This makes it possible for the user to find out not only the progress level of the process but also the speed at which the process proceeds at present. In this case, the cycle may be controlled such that the more slowly the process proceeds, the shorter the cycle is. In the case where a guidance regarding the speed at which the process proceeds is given, it is easy for the user to find out that the progression speed has slowed down, by ensuring that the lower the progression speed is, the more quickly what is displayed changes (the shorter the time intervals for changing what is displayed), compared with the case of controlling the display such that the slower the progression speed is, the less often what is displayed changes. It should be noted that, in the case where the progression speed drops below a given reference value, a display that is distinguishable from other situations, such as a display where the display of the marker M itself is deleted, may be given. This makes it possible to present the user with a situation where the process has come to a halt, in an easy-to-understand manner. As described above, according to the client apparatus10of the embodiment of the present invention, it is possible to more clearly present the user with details presented by content, such as the state of a game being played, by presenting a video representing a screen viewed by another user currently using the content, as the preview screen PS corresponding to each of the content options C when the content selection screen CS that receives a selection of the content option C is presented. This makes it easier for the user to select desired content. It should be noted that embodiments of the present invention are not limited to the embodiment described above. In the description given above, for example, a layout of the content selection screen CS or the preview screen PS and what is displayed on the content selection screen CS or the preview screen PS are both merely illustrative. Also, at least some of the processes performed by the server apparatus20as described above may be performed by the client apparatus10. 
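The relation between the progression speed and the flashing cycle of the marker M might, purely as an assumed example, be mapped as follows; the reference, nominal and cycle values are illustrative constants, and returning None stands for deleting the display of the marker when the process appears to have come to a halt.

def marker_flash_cycle(speed, reference_speed=0.01, nominal_speed=1.0,
                       base_cycle_s=1.0, min_cycle_s=0.1):
    """Return the flash period in seconds for the marker M, or None to hide it."""
    if speed < reference_speed:
        return None                              # below the reference value: delete the marker
    ratio = min(speed / nominal_speed, 1.0)      # 1.0 at the normal speed or faster
    # The lower the progression speed, the shorter the cycle (the faster the flashing).
    return max(min_cycle_s, base_cycle_s * ratio)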
For example, the client apparatus10nmay acquire information regarding content currently used by the client apparatus10of another user registered in advance, directly from the client apparatus10and may present the information as current user information to the user. Also, the progress guidance display given by the preview screen presentation section52during reception of a delivered video image in the description given above is not limited thereto and may be applied to a variety of processes whose progress level is required to be presented to the user. REFERENCE SIGNS LIST 1: Information processing system10: Client apparatuses11: Control section12: Storage section13: Interface section14: Operating device15: Display apparatus20: Server apparatus51: Content selection screen presentation section52: Preview screen presentation section53: Content presentation section
DETAILED DESCRIPTION In order to make the purposes, embodiments and advantages of the disclosure clearer, the embodiments of the disclosure will be described clearly and completely below in combination with the accompanying drawings in embodiments of the disclosure. Obviously the described embodiments are only some but not all the embodiments. Based upon embodiments of the disclosure, all of other embodiments obtained by those ordinary skilled in the art without creative work shall fall within the protection scope of the disclosure. FIG.1Ashows a schematic diagram of a scenario between a display apparatus and a control device in an embodiment. As shown inFIG.1A, the communications between the control device100and the display apparatus200may be performed in a wired or wireless manner. The control device100is configured to control the display apparatus200, receive a command input from the user, and convert the command into an instruction that can be recognized and responded by the display apparatus200, which serves an intermediary media between the user and the display apparatus200. For example, the user operates the channel +/− keys on the control device100, and the display apparatus200responds to the channel +/− operations. The control device100may be a remote controller100A, which includes the infrared protocol communication or Bluetooth protocol communication and other short-range communication methods, etc., and controls the display apparatus200wirelessly or by other wired methods. The user may input commands through the keys on the remote controller, voice inputs, control panel inputs, etc. to control the display apparatus200. For example, the user may input corresponding commands through the volume +/− keys, channel keys, up/down/left/right directional keys, voice input keys, menu key, power key, etc. on the remote controller to control the functions of the display apparatus200. The control device100may also be a smart device, such as a mobile terminal100B, a tablet computer, a computer, a notebook computer, etc. For example, an application running on the smart device is used to control the display apparatus200. This application may be configured to provide the user with various controls through an intuitive User Interface (UI) on the screen associated with the smart device. In some embodiments, the mobile terminal100B and the display apparatus200may install software applications, and implement the connection and communication through the network communication protocols, achieving the purpose of one-to-one control operation and data communication. For example, the mobile terminal100B and the display apparatus200may establish an instruction protocol, and the functions of the physical keys arranged in the remote controller100A are realized by operating various function keys or virtual buttons on the user interface provided on the mobile terminal100B. The audio and video content displayed on the mobile terminal100B may also be transmitted to the display apparatus200to realize the synchronous display function. The display apparatus200may provide a broadcast receiving function and a computer supported network TV function. The display apparatus may be implemented as digital TV, Internet TV, Internet Protocol TV (IPTV), etc. The display apparatus200may be a liquid crystal display, an organic light emitting display, or a projection device. The specific type, size and resolution of the display apparatus are not limited. 
The display apparatus200communicates with a server300through various communication methods. Here, the display apparatus200may be allowed to perform the communication and connection through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server300may provide various contents and interactions to the display apparatus200. Exemplarily, the display apparatus200may send and receive information, for example: receive the Electronic Program Guide (EPG) data, receive the software updates, or access a remotely stored digital media library. FIG.1Bshows a block diagram of the configuration of the control device100. As shown inFIG.1B, the control device100includes a controller110, a memory120, a communicator130, a user input interface140, an output interface150, and a power supply160. The controller110includes a Random Access Memory (RAM)111, a Read Only Memory (ROM)112, a processor113, a communication interface and a communication bus. The controller110is configured to control the running and operations of the control device100, and the communication cooperation among internal components as well as the external and internal data processing functions. In some embodiments, when an interaction event like a user pressing a key on the remote controller100A or touching a touch panel on the remote controller100A is detected, the controller110may generate a signal corresponding to the detected event and send the signal to the display apparatus200. The memory120is used to store various operating programs, data and applications that drive and control the control device100under the control of the controller110. The memory120may store various kinds of control signal commands input from the user. The communicator130realizes the communications of control signals and data signals with the display apparatus200under control of the controller110. For example, the control device100sends a control signal (e.g., a touch signal or a button signal) to the display apparatus200via the communicator130, and the control device100may receive a signal sent from the display apparatus200via the communicator130. The communicator130may include an infrared signal interface131and a radio frequency signal interface132. For example, in the case of infrared signal interface, the command input from the user is converted into an infrared control signal according to the infrared control protocol and then sent to the display apparatus200via the infrared sending module. As another example, in the case of radio frequency signal interface, the command input from the user is converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then the modulated signal is sent to the display apparatus200via the radio frequency sending module. The user input interface140may include at least one of a microphone141, a touch pad142, a sensor143, a key144, etc., so that a user may input the command for controlling the display apparatus200to the control device100through voice, touch, gesture, pressing, etc. The output interface150outputs a user command received from the user input interface140to the display apparatus200, or outputs the image or voice signal received from the display apparatus200. Here, the output interface150may include an LED interface151, a vibration interface152that generates vibration, a sound output interface153that outputs sound, and a display154that outputs image, etc. 
For example, the remote controller100A may receive output signal such as audio, video or data from the output interface150, and display the output signal in the form of image on the display154, in the form of audio on the sound output interface153, or in the form of vibration on the vibration interface152. The power supply160is used to provide operating power support for all the elements of the control device100under control of the controller110. The power supply may include battery and related control circuit. FIG.1Cshows a block diagram of a hardware configuration of the display apparatus200. As shown inFIG.1C, the display apparatus200may include a modem210, a communicator220, a detector230, an external device interface240, a controller250, a memory260, a user interface265, a video processor270, a display275, an audio processor280, an audio output interface285, and a power supply290. The modem210receives the broadcast television signals in a wired or wireless manner, and may perform the amplification, frequency mixing, resonance and other modulation/demodulation processing, to demodulate the television audio/video signals carried in the frequency of the television channel selected by the user from multiple wireless or wired broadcast television signals, as well as additional information (e.g., EPG data). The modem210may respond to the television signal frequency selected by the user and the television signal carried by the frequency according to the user's selection under control of the controller250. According to different television signal broadcasting formats, the modem210may receive signals in many forms, such as: terrestrial digital television, cable broadcasting, satellite broadcasting, or Internet broadcasting or the like; according to different modulation types, digital modulation mode or analog modulation mode may be used; and according to different types of television signals, analog signals and digital signals may be used. In other exemplary embodiments, the modem210may also be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio/video signals after modulation and demodulation, which are input to the display apparatus200through the input/output interface240. The communicator220is a component in communication with an external device or an external server according to various types of communication protocols. For example, the display apparatus200may send the content data to an external device connected via the communicator220, or browse and download the content data from an external device connected via the communicator220. The communicator220may include a WIFI module221, a Bluetooth communication protocol module222, a wired Ethernet communication protocol module223, and other network communication protocol modules or near-field communication protocol modules, so that the communicator220can receive control signals of the control device100under the control of the controller250and implement the control signals as WIFI signals, Bluetooth signals, radio frequency signals, etc. The detector230is a component configured for the display apparatus200to collect the external environment signal or the signal interacted with the outside. The detector230may include a sound collector231; or may collect the environment sound for identifying the environment scene type. In some embodiments, the detector230may further include an image collector232. 
In some embodiments, the detector230may further include a light receiver configured to collect the ambient light intensity to allow the display apparatus200to adjust display parameters, etc. In other embodiments, the detector230may further include a temperature sensor. For example, by sensing the ambient temperature, the display apparatus200may adjust the display color temperature of the image accordingly. In some embodiments, when the environment has a high temperature, the color temperature of the image presented on the display apparatus200may be adjusted to cold color tone; when the environment has a low temperature, the image presented on the display apparatus200may be adjusted to warm color tone. The external device interface240is a component that provides for the controller250to control the data transmission between the display apparatus200and external devices. The external device interface240may be connected to external devices such as set-top box, game device, laptop, etc. in a wired/wireless manner, and may receive the data such as video signals (e.g., moving images), audio signals (e.g., music), additional information (e.g., EPG), etc. of the external devices. Here, the external device interface240may include: any one or more of a High-Definition Multimedia Interface (HDMI)241, a Composite Video Blanking Synchronization (CVBS) interface242, an analog or digital component interface243, a Universal Serial Bus (USB) interface244, a Component interface (not shown in the figure), a Red-Green-Blue (RGB) interface (not shown in the figure), etc. The controller250controls the operations of the display apparatus200and responds to the user's operations by running various software control programs (such as operating system and various applications) stored on the memory260. For example, the controller may be implemented as a System-on-a-Chip (SOC). As shown inFIG.1C, the controller250includes a Random Access Memory (RAM)251, a Read Only Memory (ROM)252, a graphics processor253, a CPU processor254, a communication interface255, and a communication bus256, wherein the RAM251, the ROM252, the graphics processor253, the CPU processor254and the communication interface255are connected through the communication bus256. The graphics processor253is used to generate various graphics objects, such as icons, operation menus, display graphics of user input commands, etc. The graphics processor253may include: an arithmetic unit configured to perform the operations by receiving various interactive instructions input from users and then display various objects according to the display attributes; and a renderer configured to generate the result of rendering various objects obtained based on the arithmetic unit and display it on the display275. The communication interface255may include a first interface to an nthinterface. These interfaces may be network interfaces connected to external devices via a network. The controller250may control the overall operation of the display apparatus200. For example, in response to receiving a user command for selecting a GUI object presented on the display275, the controller250may perform the operations related to the object selected by the user input command. For example, the controller may be implemented as an SOC (System on Chip) or an MCU (Micro Control Unit). Here, the object may be any one of objects available for selection, such as a hyperlink or an icon. 
The operations related to selected objects for example include: operations for displaying a hyperlinked page, document or image, or operations for launching applications corresponding to icons. User commands for selecting a GUI object can be commands input from various input devices (for example, a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus200or voice commands corresponding to voices from the user. The memory260is used to store various types of data, software programs or applications for driving and controlling the operations of the display apparatus200. The memory260may include a volatile and/or non-volatile memory. The term “memory” includes the memory260, the RAM251and ROM252of the controller250, or a memory card in the display apparatus200. In some embodiments, the memory260is further configured to store an application for driving the controller250in the display apparatus200; store various applications built in the display apparatus200and downloaded by the user from external devices; and store data for configuring various GUIs provided by the display275, various GUI-related objects, and visual effect images of a selector for selecting GUI objects. In some embodiments, the memory260is further configured to drive programs and related data of the modem210, communicator220, detector230, external device interface240, video processor270, display275and audio processor280, etc., for example, the external data (such as audio and video data) received from the external device interface or the user data (such as key information, voice information, touch information, etc.) received from the user interface. In some embodiments, the memory260specifically stores software and/or programs for representing the Operating System (OS), where these software and/or programs may include, for example, kernel, middleware, Application Programming Interface (API), and/or applications. FIG.1Dshows a block diagram of the architecture configuration of the operating system in the memory of the display apparatus200. The operating system architecture includes an application layer, a middleware layer and a kernel layer from top to bottom. The middleware layer may provide some standard interfaces to support the operations of various environments and systems. For example, the middleware layer may be implemented as Multimedia and Hypermedia Information Coding Expert Group (MHEG) for data broadcast-related middleware, or may be implemented as DLNA middleware for external device communication-related middleware, or may be implemented as a middleware for providing the browser environment in which each application in the display apparatus runs, etc. The kernel layer provides core system services, for example, file management, memory management, process management, network management, system security authority management, and other services. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on a Linux operating system. The kernel layer also provides the communication between system software and hardware, and provides device drive services for various hardware. The user interface265receives various user interactions. Specifically, it is used to send the user's input signal to the controller250or transmit the output signal from the controller250to the user. 
In some embodiments, the user may input a user command on the Graphical User Interface (GUI) presented on the display275, and then the user interface265receives a command input from the user through the GUI. Specifically, the user interface265may receive the user command for controlling the position of a selector in the GUI to select different objects or items. Alternatively, the user may input a user command by inputting particular speech or gesture, and then the user interface265recognizes the speech or gesture through the sensor to receive the user input command. The video processor270is used to receive the video signal and perform the video data processing according to the standard codec protocol of the input signal, to obtain the video signal that can be displayed or played directly on the display275. The video processor270includes a de-multiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, etc. The display275is used to receive image signals input from the video processor270, and display the video content, images and the menu control interface. The displayed video content may be the video content from the broadcast signal received by the modem210, or may be the video content input from the communicator220or external device interface240. The display275also presents a user control interface (UI) generated in the display apparatus200for controlling the display apparatus200. Also, the display275may include a panel for presenting images and a drive component that drives the image display. Alternatively, if the display275is a projection display, it may further include a projection device and a projection screen. The audio processor280is used to receive an external audio signal, and perform the audio data processing according to the standard codec protocol of the input signal, to obtain an audio signal that can be played in the speaker286. In some embodiments, the audio processor280may support various audio formats. For example, MPEG-2, MPEG-4, Advanced Audio Coding (AAC), High Efficiency AAC (HE-AAC) and other formats. The audio output interface285is used to receive the audio signal output from the audio processor280under the control of the controller250, and the audio output interface285may include a speaker286, or an external audio output terminal287output to a sound device of an external device. In other embodiments, the video processor270may include one or more chips. The audio processor280may also include one or more chips. In other embodiments, the video processor270and the audio processor280may be separate chips, or may be integrated into one or more chips together with the controller250. The power supply290is used to provide the power supply for the display apparatus200through the power input from an external power supply under the control of the controller250. The power supply290may be a built-in power supply circuit installed inside the display apparatus200, or may be a power supply installed outside the display apparatus200. It should be noted that, in order to select and perform related functions (e.g., image setting, sound setting, etc.) required by the image content service provided by the display apparatus200, the display apparatus200provides a plurality of menu items for selecting functions. 
Meanwhile, referring toFIG.1E, the control device100configured to control the display apparatus200may include a plurality of keys for selecting functions, e.g., one or more color keys, to provide indications for a user that the display apparatus200can perform a menu item function matching with a color key by operating the color key on the control device100. Here, there is a mapping relationship between the function corresponding to the above-mentioned menu items and the color key on the control device100. Specifically, when the display apparatus200receives a key event value corresponding to a color key on the control device100input from the user, the display apparatus200can perform a function operation corresponding to the key event value based on the mapping relationship. As such it is convenient for the user to visually match the color keys on the control device100with the functions provided by the menu items. For example, the GUI shown inFIG.2Bprovides menu items including a color key guide, where the menu item icon corresponding to the menu item 24 h+ is presented as a blue square, the menu item icon corresponding to the menu item 24 h− is presented as a yellow square, and the menu item icons corresponding to the menu items channel up and channel down are presented as CH, to allow implementation or selection of predetermined functions according to the content service provided by the GUI. On the control device100shown inFIG.1E, the color keys are arranged in the order of colors R, G, Y and B, where R represents the red key, G represents the green key, Y represents the yellow key, and B represents the blue key. The user operates a color key on the control device100(e.g., press the blue key) so that the display apparatus200performs the desired function in the menu item with blue indication (e.g., color key guide). Referring to bothFIGS.2B and1E, when the user wants to see the program menu after 24 h, the user can press the blue key (whose color indication corresponds to the blue color indication on the menu item icon) on the control device100; and the display apparatus200can receive the key event value of this key, extract the information on the menu item“24 h+” corresponding said key event value, and then implement the function of displaying the program menu after 24 h. For example, the program menu after 24 h is displayed in the GUI as shown inFIG.2C. When the user wants to see the program menu before 24 h, the user can press the yellow key (whose color indication corresponds to the yellow color indication on the menu item icon) on the control device100; and the display apparatus200can receive a key event value of this key, extract the information associated with the menu item icon with yellow color indication“24 h−”, and then implement the function of displaying the program menu before 24 h. The program menu before 24 h is displayed in the GUI as shown inFIG.2D. When the user wants to see the program menu of the next page, the user can press the key “CH∨” on the control device100; and the display apparatus200can receive a key event value of this key, extract information corresponding to said key “channel down”, and then implement the function of displaying the program menu in the next page. The program menu in the next page is displayed in the GUI as shown inFIG.2E. 
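The mapping relationship between color-key event values and the menu item functions of FIG. 2B could be represented, in a purely illustrative Python sketch, as a lookup table; the key event values and handler names below are assumptions and are not defined by the disclosure.

KEY_BLUE, KEY_YELLOW, KEY_CH_UP, KEY_CH_DOWN = "blue", "yellow", "ch_up", "ch_down"


def build_key_map(show_24h_forward, show_24h_back, page_up, page_down):
    return {
        KEY_BLUE: show_24h_forward,    # menu item "24 h+" (blue indication)
        KEY_YELLOW: show_24h_back,     # menu item "24 h-" (yellow indication)
        KEY_CH_UP: page_up,            # program menu of the previous page
        KEY_CH_DOWN: page_down,        # program menu of the next page
    }


def on_key_event(key_event_value, key_map):
    handler = key_map.get(key_event_value)
    if handler is not None:
        handler()                      # perform the function matching the received key event value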
When the user wants to see the program menu in the previous page, the user can press the key “CH∧” on the control device100; and the display apparatus200can receive a key event value of this key, extract information corresponding to said key “channel up”, and then implement the function of displaying the program menu in the previous page. The program menu in the previous page is displayed in the GUI as shown inFIG.2F. It should be noted that the menu item icons corresponding to the menu items “channel up” and “channel down” may also be presented as color keys. The user presses the color key with the desired function on the control device100, and the display apparatus200can receive a key event value of the color key, searches for the corresponding information and implements the function corresponding to the key event value received. In this way, when the menu item content is provided on the display, the user presses a color key with a color indicated function on the control device100, and the display apparatus200can receive a key event value of this color key, search for information corresponding to the color key in this context and implement the function corresponding to the key event value. It is intuitive for a user that a color key on the control device100corresponds to a menu item with this color indication to implement a desired function. In view of the aboveFIGS.1A-1E, it should be noted that the display apparatus usually displays the EPG thereon, so that the user can use the menu provided by the EPG to view the programs of channels (such as profiles of the program content, profiles of the actors and directors, etc.) or schedule to record future programs, etc. The EPG user interface arranges and presents program information in the form of matrix, and it usually presents channel information and time periods in a two-dimensional pattern and displays the program information in the EPG in size proportional to the length of the program playing time period. The process of the EPG presentation includes: referring toFIG.1C, the modem210of the display apparatus receives a broadcast signal, and the decoder (not shown in the figure) extracts the EPG information from the received broadcast signal and outputs the extracted EPG information to the internal bus. The controller250stores the EPG information output to the internal bus in the memory260, for displaying the EPG user interface. When the program image shown inFIG.2Ais currently shown on the display275of the display apparatus, an EPG display request signal input from the user via the user interface265is input to the controller250. For example, the user presses a key for display EPG interface on the remote controller, then in response to the request sent from the user, the EPG user interface inFIG.2Bis presented on the display275through the video processor270. FIG.2Ashows a schematic diagram of a program image in the display apparatus200. As shown inFIG.2A, the display apparatus may provide a display image to the display, where the display image may include at least one of image, text, video content. For example, the display image shown inFIG.2Ais a program image. FIGS.2B-2Fshow schematic diagrams of the EPG user interfaces presented on the display apparatus200by operating the control device100. 
InFIG.2A, while the program image is displayed on the display of the display apparatus, when the user inputs a command for displaying the EPG by operating the control device, the display apparatus presents the EPG user interface in response to the command input. For example, as shown inFIG.2B, the user presses a key for displaying EPG on the control device, and the display apparatus may provide the display with the EPG user interface shown inFIG.2Bin response to a command associated with the key for displaying EPG. As shown inFIG.2B, the vertical direction is the channel axis direction, where different channels are displayed in different rows. The horizontal direction is the time axis direction, where information of the programs in each row are arranged in the order of playing duration, the size of the display area for each program represents its relative playing length, the start position of the display area represents the start playing point, the end position of the display area represents the end playing point, and the program title is shown in the display area. InFIG.2B, five channels (channels a-e) are shown in the EPG user interface, and the programs of each channel are displayed in a row according to the program playing time sequence. For example,FIG.2Bshows the program menu from 15:00 to 18:00 on February 23: programs a0-a2 of channel a, programs b0-b1 of channel b, programs c0-c1 of channel c, programs d0-d2 of channel d, and programs e0-e3 of channel e. When the selector (for example, it may be implemented as a focus) lands on a program, the EPG user interface shows the Guide of this program, where the Guide may include channel name, play time period, program type and description of this program. As shown inFIG.2B, when the selector lands on the program d0, the channel d to which the program d0 belongs, play time period of the program d0, program type of the program d0 and description of the program d0 are displayed in the guide area located on the upper side of the EPG user interface. In some embodiments, when a user needs to view the program menu of a channel in the next few days, the user needs to operate the directional keys on the remote controller to move the focus to an edge of the EPG user interface. For example, referring toFIG.2B, if a user needs to view the programs after 24 hours, firstly the user needs to operate the directional keys on the remote controller to move the focus to the edge of the EPG user interface according to the operation path of d0-d1-d2, and then operate the directional keys on the remote controller again, where the EPG user interface will present the program menu between 18:00-21:00 on February 23. After that, the user operates the directional keys on the remote controller many times, and the EPG user interface will present the program menu between 15:00-18:00 on February 24. Obviously, the above operations are complicated and cumbersome, affecting user's experience. In view of the above issue, according to an embodiment, the user presses a color key on the remote controller after entering the EPG user interface, that is, in response to the user input, the EPG user interface is switched from a first presentation page to a second presentation page. In this way, the EPG user interface can directly present the program menu after the preset time unit, which allows for a user to quickly find the broadcast program to be watched, and improving user's experience. 
Specifically, after receiving an EPG display request signal input from the user via the remote controller, the controller controls the display to enter the EPG user interface as shown inFIG.2Bin response to the user input. Meanwhile, the first presentation page shows channels a-e and the program menu of the channels a-e from 15:00 to 18:00 on February 23. After receiving the key event value associated with the color key sent from the user via the remote controller, referring toFIG.2C, the controller switches the EPG user interface from the first presentation page to the second presentation page, that is, switches the EPG user interface as shown inFIG.2Bto the EPG user interface as shown inFIG.2C, based on the first presentation page presented by the EPG user interface and the preset time unit.

FIG.2Cis an exemplary EPG user interface presented after the key event of a color key. InFIG.2C, five channels (channels a-e) are shown in the EPG user interface, and the programs of each channel are displayed in a row according to the program playing time sequence.FIG.2Cshows the program menu from 15:00 to 18:00 on February 24: programs a3-a5 of channel a, programs b2-b4 of channel b, programs c2-c3 of channel c, programs d3-d5 of channel d, and programs e4-e5 of channel e.

FIG.2Dis another exemplary EPG user interface presented after the key event of a color key. InFIG.2D, five channels (channels a-e) are shown in the EPG user interface, and the programs of each channel are displayed in a row according to the program playing time sequence.FIG.2Dshows the program menu from 15:00 to 18:00 on February 22: programs a6-a7 of channel a, programs b5-b7 of channel b, programs c4-c5 of channel c, programs d6-d7 of channel d, and programs e6-e7 of channel e.

Furthermore, when the user presses a shortcut key on the remote controller, that is, the controller determines to trigger the key event of the shortcut key in response to a user input, the controller changes the EPG user interface. Exemplarily, if the presentation page currently presented on the EPG user interface is that as shown inFIG.2B, the EPG user interface is switched from the user interface as shown inFIG.2Bto the user interface as shown inFIG.2EorFIG.2F.

FIG.2Eis an exemplary EPG user interface presented after a key event of the shortcut key. InFIG.2E, five channels (channels f-j) are shown in the EPG user interface, and the programs of each channel are displayed in a row according to the program playing time sequence.FIG.2Eshows the program menu from 15:00 to 18:00 on February 23: programs f0-f2 of channel f, programs g0-g1 of channel g, programs h0-h2 of channel h, programs i0-i2 of channel i, and programs j0-j3 of channel j.

FIG.2Fis another exemplary EPG user interface presented after a key event of the shortcut key. InFIG.2F, five channels (channels k-o) are shown in the EPG user interface, and the programs of each channel are displayed in a row according to the program playing time sequence.FIG.2Fshows the program menu from 15:00 to 18:00 on February 23: programs k0-k1 of channel k, programs l0-l2 of channel l, programs m0-m2 of channel m, programs n0-n1 of channel n, and programs o0-o1 of channel o.

The embodiments of the disclosure will be further described in detail below with reference to the accompanying drawings. Referring toFIG.3A, a presentation method of an EPG user interface in a display apparatus according to an embodiment may include the following process.
Step S301: displaying a program image on a display of the display apparatus. Step S302: receiving a first user input for displaying an EPG user interface. Step S303: in response to the first user input, displaying, by the display apparatus, the EPG user interface, and presenting a first presentation page on the EPG user interface, where the first presentation page includes a first set of channels and programs corresponding to the first set of channels in a first time period;in response to a second user input, determining a time skip event associated with the second user input and determining a second time period corresponding to the time skip event based on the first time period and a preset time unit; and switching the EPG user interface from the first presentation page to a second presentation page, where the second presentation page includes the first set of channels and programs corresponding to the first set of channel in the second time period. Specifically, as shown inFIG.3B, when the step S303is performed, the following steps may be specifically performed but not limited to the following. Step S3031: in response to the first user input, showing, by the display apparatus, the EPG user interface, and presenting a first presentation page on the EPG user interface, where the first presentation page includes a first set of channels and programs corresponding to the first set of channels in a first time period. Specifically, while displaying a program image, the display apparatus responds to the first user input to show the EPG user interface. According to an embodiment, the first user input includes, but is not limited to, an input for instructing to present the EPG user interface, such as an EPG key on a remote controller, a voice command, a shortcut key for showing EPG, etc. For example, referring toFIG.2A, the display apparatus displays a program image of the program d0, and determines to present the EPG user interface as shown inFIG.2Bin response to a command associated with a key for showing EPG on the remote controller. In some embodiments, the display apparatus may determine the first presentation page on the EPG user interface by, but not limited to, the following steps. A1. The display apparatus determines a first set of channels to be presented based on a channel corresponding to the program image, a pre-stored channel list and a preset number of channels for presentation. According to an embodiment, the first set of channels to be presented may be determined by, but not limited to, the following method:before determining the first set of channels to be presented, dividing the channel list into multiple sets of channels based on the preset number of channels for presentation. For example, assuming that the number of channels for presentation is 5 and the channel list is as shown in Table 1, the channel list is divided into a channel set 1, a channel set 2 and a channel set 3 based on the number of channels for presentation being 5, where the channel set 1 includes channels a-e, the channel set 2 includes channels f-j, and the channel set 3 includes channels k-o. Further, based on the channel corresponding to the program image and the channel list, the display apparatus determines an index of this channel in the channel list, and then determines the first set of channels to be presented according to the index and the preset number of channels for presentation. 
Assuming that the number of channels for presentation is represented as pageSize and the index of the current channel in the channel list is represented as index, the serial number (currentPage) of the channel set is calculated by the following formulas: currentPage=floor(index/pageSize)+1 and focusChannel=index % pageSize, where the floor function is used to round down and focusChannel indicates the position of the current channel within its channel set. For example, the display apparatus displays the program image of the program d0, and the program d0 corresponds to the channel d. Based on the channel d and the channel list, it is determined that the index of the channel d in the channel list is 3. According to the index 3 and the number of channels for presentation being 5, the serial number of the channel set calculated by the above formula is 1, that is, the first set of channels to be presented is determined as the channel set 1.

TABLE 1 - Channel List
Index    Channel Name    Channel Number
0        Channel a       1
1        Channel b       2
2        Channel c       3
3        Channel d       4
4        Channel e       5
5        Channel f       6
6        Channel g       7
7        Channel h       8
8        Channel i       9
9        Channel j       10
10       Channel k       11
11       Channel l       12
12       Channel m       13
13       Channel n       14
14       Channel o       15

A2. The display apparatus determines the first time period based on the current time and a preset time period range. For example, assuming that the preset time period range is 3 hours and the current time is 15:40 on Feb. 23, 2019, the display apparatus determines the first time period based on the current time and the time period range, where the first time period is 15:00 to 18:00 on Feb. 23, 2019. A3. The display apparatus determines the first presentation page based on the preset channel program information, the first set of channels to be presented and the first time period, where the first presentation page includes the first set of channels and programs corresponding to the first set of channels in the first time period. As shown in FIG. 2B, the display apparatus determines the first presentation page based on the channel program information, the first set of channels to be presented and the first time period. The first presentation page includes the first set of channels and programs corresponding to the first set of channels in the first time period, where the first set of channels includes channels a-e; and the programs of the channel a are programs a0-a2, the programs of the channel b are programs b0-b1, the programs of the channel c are programs c0-c1, the programs of the channel d are programs d0-d2, and the programs of the channel e are programs e0-e3 from 15:00 to 18:00 on Feb. 23, 2019. The display apparatus displays the EPG user interface on the program image, and presents the first presentation page on the EPG user interface. For example, referring to FIG. 2B, the display apparatus presents the first presentation page on the EPG user interface, where the first presentation page includes five channels (channels a-e), and the programs of each channel are arranged and displayed as one line in the order of their playing time periods. Step S3032: in response to a second user input, determining a time skip event and switching, by the display apparatus, the EPG user interface from the first presentation page to a second presentation page associated with the time skip event, where the second presentation page comprises the first set of channels and programs corresponding to the first set of channels in the second time period.
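As a concrete illustration of the channel pagination described in step A1 and the currentPage/focusChannel formulas above, the following is a minimal Python sketch. It is not the implementation of the disclosed apparatus; the function name, the zero-based indices, and the hard-coded channel list are assumptions made only to mirror Table 1 and the example of channel d.

```python
import math

# Channel list from Table 1: index -> (channel name, channel number).
CHANNEL_LIST = [(f"Channel {c}", i + 1) for i, c in enumerate("abcdefghijklmno")]

def channel_set_for(index: int, page_size: int = 5):
    """Return (currentPage, focusChannel, channels of that set) for a channel index.

    currentPage  = floor(index / pageSize) + 1
    focusChannel = index % pageSize
    """
    current_page = math.floor(index / page_size) + 1
    focus_channel = index % page_size
    start = (current_page - 1) * page_size
    channels = CHANNEL_LIST[start:start + page_size]
    return current_page, focus_channel, channels

# Example from the description: the program image belongs to channel d (index 3),
# so the first set of channels is channel set 1 (channels a-e) and the focus is row 3.
if __name__ == "__main__":
    page, focus, channels = channel_set_for(index=3, page_size=5)
    print(page, focus, [name for name, _ in channels])
    # -> 1 3 ['Channel a', 'Channel b', 'Channel c', 'Channel d', 'Channel e']
```

With this convention, indices 0-4 map to channel set 1, indices 5-9 to channel set 2, and indices 10-14 to channel set 3, matching the division of the channel list described above.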
It should be noted that, the step S3032may be performed right after the step S3031, or the step S3032may be performed after the step S3031is performed and the channel presentation page is switched (for example, the channel presentation page is switched by the left and right directional keys on the remote controller), which is not limited in the disclosure. But for convenience of description, the case where the step S3032is performed right after the step S3031is taken as an example for illustration. According to an embodiment, when the display apparatus determines to trigger a time skip event in response to the second user input, since the second user input corresponds to a menu item 24 h+ or 24 h−, the specified time skip event may include two cases. Case 1: The specified time skip event responds to an operation of a key associated with the menu item 24 h+. Specifically, the display apparatus determines a first start point based on the first time period, where the first start point is the start point of the first time period. For example, the first time period is 15:00 to 18:00 on Feb. 23, 2019, and the display apparatus determines that the first start point is 15:00 on Feb. 23, 2019. After determining the first start point, the display apparatus determines a second start point based on the first start point and a preset time unit. For example, assuming that the preset time unit is 24 hours, the display apparatus determines that the second start point is 15:00 on Feb. 24, 2019 based on the first start point at 15:00 on Feb. 23, 2019 and the preset time unit. After determining the second start point, a second end point of the second time period is determined based on the second start point and the preset time period range. For example, based on the second start point at 15:00 on Feb. 24, 2019 and the preset time period range, the display apparatus determines that the second end point is 18:00 on Feb. 24, 2019. In some embodiments, when the display apparatus determines that the second start point is within a preset valid range, the time period from the second start point to the second end point is used as the second time period. For example, assuming that a preset valid range is from 15:00 on Feb. 23, 2019 to 24:00 on Feb. 28, 2019, the display apparatus uses the time period from 15:00 on Feb. 24, 2019 to 18:00 on Feb. 24, 2019 as the second time period when determining that the second start point is within the range from 15:00 on Feb. 23, 2019 to 24:00 on Feb. 28, 2019, that is, the second time period is 15:00 to 18:00 on Feb. 24, 2019. If the display apparatus determines that the second start point is not within the preset valid range, it alerts the user on the EPG user interface. For example, assuming that the preset valid range is from 15:00 on Feb. 23, 2019 to 24:00 on Feb. 28, 2019 and the second start point is 15:00 on Mar. 1, 2019, then the second start point is not within the valid range in this case, and the display apparatus may alert the user through a pop-up window on the EPG user interface. In some embodiments, the display apparatus switches the EPG user interface from the first presentation page to the second presentation page, where the second presentation page includes the first set of channels and programs corresponding to the first set of channels in the second time period. 
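To make the time skip computation of Case 1 concrete (Case 2, for the menu item 24 h−, simply subtracts the preset time unit instead of adding it), here is a minimal Python sketch; the worked example from the description continues below. The function and parameter names are illustrative assumptions rather than the apparatus's actual implementation.

```python
from datetime import datetime, timedelta

TIME_UNIT = timedelta(hours=24)    # preset time unit (24 h+ / 24 h- menu items)
PERIOD_RANGE = timedelta(hours=3)  # preset time period range

def shift_time_period(first_start: datetime, valid_start: datetime,
                      valid_end: datetime, forward: bool = True):
    """Return (second_start, second_end) for a time skip event, or None if the
    new start point falls outside the preset valid range (the apparatus would
    then alert the user, e.g. through a pop-up window)."""
    second_start = first_start + TIME_UNIT if forward else first_start - TIME_UNIT
    if not (valid_start <= second_start <= valid_end):
        return None
    return second_start, second_start + PERIOD_RANGE

# Example from the description: the first time period starts at 15:00 on
# Feb. 23, 2019 and the valid range runs until 24:00 on Feb. 28, 2019.
if __name__ == "__main__":
    result = shift_time_period(datetime(2019, 2, 23, 15, 0),
                               datetime(2019, 2, 23, 15, 0),
                               datetime(2019, 3, 1, 0, 0),
                               forward=True)
    print(result)  # (2019-02-24 15:00, 2019-02-24 18:00)
```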
For example, the display apparatus switches the EPG user interface from FIG. 2B to FIG. 2C, and the second presentation page includes the first set of channels and programs corresponding to the first set of channels in the second time period. Referring to FIG. 2C, the first set of channels includes channels a-e, where the programs of the channel a are programs a3-a5, the programs of the channel b are programs b2-b4, the programs of the channel c are programs c2-c3, the programs of the channel d are programs d3-d5, and the programs of the channel e are programs e4-e5 from 15:00 to 18:00 on Feb. 24, 2019. Case 2: The time skip event responds to an operation of a key associated with the menu item 24 h−. Case 2 is similar to Case 1 and is only described briefly here. Specifically, the display apparatus determines a first start point based on the first time period, where the first start point is the start point of the first time period. For example, the first time period is 15:00 to 18:00 on Feb. 23, 2019, and the display apparatus determines that the first start point is 15:00 on Feb. 23, 2019 based on the first time period. After determining the first start point, the display apparatus determines a second start point based on the first start point and a preset time unit. For example, assuming that the preset time unit is 24 hours, the display apparatus determines that the second start point is 15:00 on Feb. 22, 2019 based on the first start point at 15:00 on Feb. 23, 2019 and the preset time unit. After determining the second start point, a second end point of the second time period is determined based on the second start point and the preset time period range. For example, based on the second start point at 15:00 on Feb. 22, 2019 and the preset time period range, the display apparatus determines that the second end point is 18:00 on Feb. 22, 2019. In some embodiments, when the display apparatus determines that the second start point is within a preset valid range, the time period from the second start point to the second end point is used as the second time period. For example, the preset valid range is from 00:00 on Feb. 22, 2019 to 24:00 on Feb. 28, 2019, and the display apparatus uses the time period from 15:00 on Feb. 22, 2019 to 18:00 on Feb. 22, 2019 as the second time period when determining that the second start point is within the range from 00:00 on Feb. 22, 2019 to 24:00 on Feb. 28, 2019, that is, the second time period is 15:00 to 18:00 on Feb. 22, 2019. In some embodiments, the display apparatus switches the EPG user interface from the first presentation page to the second presentation page, where the second presentation page includes the first set of channels and programs corresponding to the first set of channels in the second time period. For example, the display apparatus switches the EPG user interface from the first presentation page to the second presentation page, as shown in FIGS. 2B and 2D, that is, the display apparatus changes the EPG user interface from FIG. 2B to FIG. 2D. Referring to FIG. 4A, a presentation method of an EPG user interface according to an embodiment may include the following process. Step S401: displaying a program image on a display of the display apparatus. Step S402: receiving a first user input for displaying an EPG user interface.
Step S403: in response to the first user input, displaying the EPG user interface, where the EPG user interface displays a first presentation page that includes a first set of channels and programs corresponding to the first set of channels in a first time period; receiving a third user input, in response to the third user input, determining a channel skip event, and determining a third set of channels based on the first set of channels and a preset number of channels for presentation; and switching the EPG user interface from the first presentation page to a third presentation page, where the third presentation page includes the third set of channels and programs corresponding to the third set of channels in the first time period. Specifically, as shown in FIG. 4B, when the step S403 is performed, the following steps may be performed, but are not limited to the following. Step S4031: in response to a first user input, presenting the EPG user interface, and presenting a first presentation page on the EPG user interface, where the first presentation page includes a first set of channels and programs corresponding to the first set of channels in a first time period. The step S4031 is performed in a similar manner as the step S3031, and will not be repeated here. Step S4032: in response to a third user input, determining to trigger a channel skip event, and switching the EPG user interface from the first presentation page to the third presentation page based on the channel skip event. According to an embodiment, when the display apparatus determines to trigger the channel skip event as a response to the third user input, since the third user input corresponds to an operation of a channel up key or channel down key on the remote controller, the channel skip event specifically includes two cases. Case 1: The channel skip event responds to an operation on the channel down key. Specifically, when presenting the first presentation page on the EPG user interface, the display apparatus determines to trigger a channel skip event in response to an operation on the channel down key, where the first presentation page comprises the first set of channels and programs corresponding to the first set of channels in the first time period. For example, the display apparatus presents the first presentation page on the EPG user interface. In this case, the first presentation page is the first presentation page shown in FIG. 2B, and the first presentation page includes channels a-e and programs corresponding to the channels a-e from 15:00 to 18:00 on Feb. 23, 2019. The display apparatus determines to trigger a channel skip event in response to an operation of the channel down key. The display apparatus may determine the third presentation page by, but not limited to, the following steps. B1. The display apparatus determines the first set of channels based on the first presentation page. For example, the first presentation page includes channels a-e and programs corresponding to the channels a-e from 15:00 to 18:00 on Feb. 23, 2019, and the first set of channels is determined as the set including channels a-e. B2. The display apparatus determines the third set of channels based on the first set of channels, a pre-stored channel list, a preset number of channels for presentation, and a preset channel update parameter.
For example, assuming that the preset channel update parameter is 5, the display apparatus determines that the third set of channels includes channels f-j based on the first set of channels, Table 1, the number of channels for presentation being 5, and the channel update parameter being 5. B3. The display apparatus determines programs corresponding to the third set of channels in the first time period based on the first time period and the preset channel program information. For example, the first time period is 15:00 to 18:00 on Feb. 23, 2019, and the preset channel program information includes programs corresponding to the channels f-j from 15:00 to 18:00 on Feb. 23, 2019. The display apparatus determines, based on the first time period and the preset channel program information, that the programs of the channel f are programs f0-f2, the programs of the channel g are programs g0-g1, the programs of the channel h are programs h0-h2, the programs of the channel i are programs i0-i2, and the programs of the channel j are programs j0-j3 from 15:00 to 18:00 on Feb. 23, 2019. B4. The display apparatus determines a third presentation page, which includes the third set of channels and the programs corresponding to the third set of channels in the first time period. Referring toFIG.2E, the display apparatus determines the third presentation page, which includes channels f-j, and the programs f0-f2 of the channel f, the programs g0-g1 of the channel g, the programs h0-h2 of the channel h, the programs i0-i2 of the channel i, and the programs j0-j3 of the channel j from 15:00 to 18:00 on Feb. 23, 2019. In some embodiments, the display apparatus switches the EPG user interface from the first presentation page to the third presentation page. Referring toFIGS.2B and2E, the display apparatus switches the EPG user interface from the first presentation page to the third presentation page, that is, the display apparatus switches the interface as shown inFIG.2Bto that as shown inFIG.2E. Case 2: The channel skip event responds to an operation of the channel up key. Specifically, when presenting the first presentation page on the EPG user interface, the display apparatus determines to trigger a channel skip event in response to an operation of the channel up key, where the first presentation page includes the first set of channels and programs corresponding to the first set of channels in the first time period. For example, the display apparatus presents the first presentation page on the EPG user interface. In this case, as shown inFIG.2B, the first presentation page includes channels a-e and programs corresponding to the channels a-e from 15:00 to 18:00 on Feb. 23, 2019. The display apparatus determines to trigger a channel skip event in response to an operation of the channel up key. The display apparatus may determine a third presentation page by, but not limited to, the following steps. C1. The display apparatus determines the first set of channels based on the first presentation page. For example, the first presentation page includes channels a-e and programs corresponding to the channels a-e from 15:00 to 18:00 on Feb. 23, 2019, and the first set of channels is determined as the set including channels a-e. C2. The display apparatus determines the third set of channels based on the first set of channels, a pre-stored channel list, a preset number of channels for presentation, and a preset channel update parameter. 
For example, assuming that the preset channel update parameter is 5, the display apparatus determines that the third set of channels includes channels k-o based on the first set of channels (channels a-e), Table 1, the number of channels for presentation and the channel update parameter. C3. The display apparatus determines programs corresponding to the third set of channels in the first time period based on the first time period and the preset channel program information. For example, the first time period is 15:00 to 18:00 on Feb. 23, 2019, and the preset channel program information includes programs corresponding to the channels k-o from 15:00 to 18:00 on Feb. 23, 2019. The display apparatus determines, based on the first time period and the preset channel program information, that the programs of the channel k are programs k0-k1, the programs of the channel l are programs l0-l2, the programs of the channel m are programs m0-m2, the programs of the channel n are programs n0-n1, and the programs of the channel o are programs o0-o1 from 15:00 to 18:00 on Feb. 23, 2019. C4. The display apparatus determines the third presentation page, which includes the third set of channels and the programs corresponding to the third set of channels in the first time period. Referring to FIG. 2F, the display apparatus determines the third presentation page, which includes channels k-o, and the programs k0-k1 of the channel k, the programs l0-l2 of the channel l, the programs m0-m2 of the channel m, the programs n0-n1 of the channel n, and the programs o0-o1 of the channel o from 15:00 to 18:00 on Feb. 23, 2019. In some embodiments, the display apparatus switches the EPG user interface from the first presentation page to the third presentation page. Referring to FIGS. 2B and 2F, the display apparatus switches the EPG user interface from the first presentation page to the third presentation page, that is, the display apparatus switches the interface shown in FIG. 2B to that shown in FIG. 2F. A full implementation scenario is described below. In response to an operation of a key for showing EPG on the remote controller, a smart TV displays the EPG user interface and presents a first presentation page on the EPG user interface. Referring to FIG. 2B, the first presentation page includes channels a-e, and programs corresponding to the channels a-e from 15:00 to 18:00 on Feb. 23, 2019. If the smart TV determines to trigger a time skip event in response to an operation of a key associated with the menu item 24 h+ on the remote controller, it determines a first start point as 15:00 on Feb. 23, 2019 based on the first time period from 15:00 to 18:00 on Feb. 23, 2019 in the first presentation page, and then determines a second start point as 15:00 on Feb. 24, 2019 based on 15:00 on Feb. 23, 2019 and the preset time unit of 24 hours. In some embodiments, when determining that the second start point at 15:00 on Feb. 24, 2019 is within the valid range from 15:00 on Feb. 23, 2019 to 24:00 on Feb. 28, 2019, the display apparatus determines the second time period as 15:00 to 18:00 on Feb. 24, 2019. Further, referring to FIG. 2C, the EPG user interface is switched from the first presentation page to the second presentation page, which includes channels a-e, and programs corresponding to the channels a-e from 15:00 to 18:00 on Feb. 24, 2019.
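Before continuing with the channel skip portion of the scenario, the following minimal Python sketch illustrates the channel skip determination described in steps B1-B4 and C1-C4 above. The wrap-around behavior at the ends of the channel list (a channel up key press on channels a-e landing on channels k-o) is inferred from FIGS. 2B and 2F; the function and parameter names are assumptions made for illustration only.

```python
# Channel list from Table 1, presented in pages of `page_size` channels.
CHANNELS = [f"Channel {c}" for c in "abcdefghijklmno"]

def skip_channel_set(first_set_start: int,
                     update_param: int = 5,
                     page_size: int = 5,
                     down: bool = True):
    """Return the third set of channels after a channel up/down key event.

    first_set_start: index of the first channel of the currently presented set.
    down=True moves to the next set (channel down key), down=False moves to the
    previous set (channel up key); both wrap around the channel list.
    """
    offset = update_param if down else -update_param
    start = (first_set_start + offset) % len(CHANNELS)
    return CHANNELS[start:start + page_size]

if __name__ == "__main__":
    # The first presentation page shows channels a-e (start index 0).
    print(skip_channel_set(0, down=True))   # channel down -> channels f-j
    print(skip_channel_set(0, down=False))  # channel up   -> channels k-o
```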
If the smart TV determines to trigger a channel skip event in response to an operation of the channel down key on the remote controller, the display apparatus determines the first set of channels as channels a-e based on the first presentation page. Based on the first set of channels (channels a-e), Table 1, the number of channels for presentation, and the channel update parameter, it is determined that the third set of channels includes channels f-j. Then, the display apparatus determines programs corresponding to the channels f-j from 15:00 to 18:00 on Feb. 23, 2019 based on the first time period from 15:00 to 18:00 on Feb. 23, 2019 and the preset channel program information. Finally, the EPG user interface is switched from the first presentation page to the third presentation page, which contains channels f-j and the programs corresponding to the channels f-j from 15:00 to 18:00 on Feb. 23, 2019. An embodiment of the disclosure provides a non-transitory memory storage medium for storing computer instructions which are configured to cause a computer to perform the above method embodiments. When determining to trigger a time skip event or a channel skip event, the display apparatus switches the EPG user interface from the first presentation page to another presentation page, so that the display apparatus can switch the presentation page in the EPG user interface directly based on the user input, without the need to receive directional key commands many times in order to move the focus multiple times and switch the presentation page in the EPG user interface, thereby reducing inconvenient repetitive operations, enabling a rapid response to presentation page switching, and thus improving the user's experience. The foregoing embodiments are provided for purposes of illustration and description, and are not intended to be exhaustive or to limit the disclosure. Individual elements or features in a specific embodiment are generally not limited to this specific embodiment, but, where applicable, may be used or interchanged in a selected embodiment even if not specifically shown or described. Likewise, many forms of variations are possible; these variations are not to be considered as departing from the scope of the appended claims of the disclosure, and all such modifications are encompassed within the scope of the appended claims of the disclosure.
DETAILED DESCRIPTION In accordance with various embodiments, mechanisms (which can include methods, systems, and media) for presenting media content are provided. In some embodiments, the mechanisms described herein can be used to cause media content to be presented on a display device connected to a streaming media device. In some embodiments, the streaming media device can be any suitable device, such as a storage device or dongle connected to the display device in any suitable manner (e.g., via an HDMI connection, and/or in any other suitable manner). In some embodiments, the media content presented on the display device can be any suitable type of media content, such as videos, television programs, movies, photos, slideshows, documents, audio content, and/or any other suitable type of media content. In some embodiments, the media content can be presented on the display device connected to the streaming media device via a mobile device (e.g., a mobile phone, a tablet computer, a wearable computer, a laptop computer, and/or any other suitable type of mobile device) that is paired with the streaming media device. For example, in some embodiments, the mobile device and the streaming media device can be paired using any suitable peer-to-peer networking connection (e.g., WiFi Direct, WiFi Aware, and/or any other suitable type of peer-to-peer networking connection). In some such embodiments, media content can be transferred to the streaming media device from the mobile device via the peer-to-peer networking connection. For example, in some embodiments, the media content can be media content that has been stored on the mobile device and is transferred to the streaming media device for presentation on the connected display device. As a more particular example, in some embodiments, the media content can be user-generated media content stored on and/or media content generated on the mobile device, such as photos and/or videos captured on the mobile device, documents generated on the mobile device, and/or any other suitable media content. As another more particular example, in some embodiments, the media content can be media content that was downloaded to the mobile device from any suitable source, such as a media content sharing and/or streaming service, from another user device, and/or from any other suitable source. In some embodiments, by transferring media content stored on the mobile device and/or previously downloaded to the mobile device, the mechanisms described herein can allow a user to cause presentation of media content on the display device connected to the streaming media device without requiring a WiFi access point for the streaming media device. In particular, in some embodiments, the mechanisms can allow a user to download media content to the mobile device (e.g., using a cellular network, using a WiFi network, and/or in any other suitable manner) and transfer the downloaded media content to the streaming media device for presentation on the display device at a later time. In some embodiments, any internet connection to be used by the streaming media device (e.g., to request keys and/or permission to present protected or encrypted content, to stream media content from a server, and/or for any other suitable purpose) can be provided by the mobile device. 
For example, in some embodiments, the mobile device can be configured to execute an HTTP proxy server, which can be used by the streaming media device as a hotspot to connect to a server (e.g., a server associated with a media content sharing and/or streaming service). In some such embodiments, access to a network via an HTTP proxy server executing on the mobile device can allow the streaming media device to request and/or receive any suitable information or media content for rendering the media content stored on the mobile device without access to a WiFi access point. For example, the streaming media device may request and/or receive permissions or access credentials for media content downloaded or streamed from certain media content streaming or sharing services, and/or information regarding the relevant codecs for playing the media content item. Access to content may therefore be improved. Turning toFIG.1, an example 100 of an information flow diagram for transferring media content to a streaming media device from a mobile device is shown in accordance with some embodiments of the disclosed subject matter. As illustrated, in some embodiments, blocks of process100can be executed on a mobile device, a streaming media device connected to a display device, and/or a server. Note that, in some embodiments, the streaming media device can be a device that is connected to any suitable display device (e.g., a television, a projector, and/or any other suitable type of display device) in any suitable manner (e.g., via an HDMI connection, and/or in any other suitable manner). At102, the mobile device can initialize a connection to the streaming media device. For example, in some embodiments, the mobile device can initialize a communication channel that communicatively couples the mobile device and the streaming media device, such as a peer-to-peer connection (e.g., a WiFi Direct connection, a WiFi Aware connection, and/or any other suitable type of connection). As a more particular example, in some embodiments, the mobile device can initialize a WiFi Direct or WiFi Aware connection with the streaming media device using any suitable API(s) to initialize the WiFi Direct or WiFi Aware connection and retrieve a Service Set Identifier (SSID) and passphrase associated with the WiFi Direct or WiFi Aware connection. In some embodiments, the mobile device can then transmit the SSID (e.g., transmitted by the mobile device using beacons, and/or in any other suitable manner), which can be detected by the streaming media device. As another example, in some embodiments the mobile device can initialize an HTTP proxy server that can execute on the mobile device. As a more particular example, in some such embodiments, the mobile device can transmit a multicast DNS (mDNS) message that can indicate any suitable information, such as an IP address associated with the mobile device, and/or any other suitable information. In some such embodiments, the streaming media device can access the internet via the HTTP proxy server executing on the mobile device, as described below in more detail. Note that, in some embodiments, the mobile device can initialize the connection to the streaming media device based on any suitable information. For example, in some embodiments, the mobile device can initialize the connection in response to receiving an indication that the connection to the streaming media device is to be initialized via a user interface presented on the mobile device. 
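As a rough sketch of the kind of announcement described above, the following Python example broadcasts the mobile device's proxy address over UDP multicast and lets the streaming media device pick it up. This is a simplified stand-in for an actual mDNS message and for WiFi Direct/WiFi Aware pairing, not the mechanism the devices are required to use; the multicast group, port, and message format are assumptions chosen only for the example.

```python
import json
import socket

MCAST_GROUP, MCAST_PORT = "239.255.0.99", 5353  # illustrative multicast endpoint

def announce_proxy(ip: str, port: int) -> None:
    """Mobile device side: announce the HTTP proxy address (mDNS-like message)."""
    message = json.dumps({"proxy_ip": ip, "proxy_port": port}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(message, (MCAST_GROUP, MCAST_PORT))

def discover_proxy(timeout: float = 5.0):
    """Streaming media device side: wait for one announcement and return it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", MCAST_PORT))
        membership = socket.inet_aton(MCAST_GROUP) + socket.inet_aton("0.0.0.0")
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
        sock.settimeout(timeout)
        data, _ = sock.recvfrom(1024)
        return json.loads(data)

# Example: the mobile device would call announce_proxy("192.168.49.1", 8080)
# while the streaming media device calls discover_proxy() to learn the address.
```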
Turning toFIGS.3A and3B, examples 300 and 320 of user interfaces for initializing connections to the streaming media device are shown in accordance with some embodiments of the disclosed subject matter. As illustrated inFIG.3A, user interface300can include a selectable input302that, when selected, can begin a device discovery process that identifies nearby display devices connected to nearby streaming media devices. In some embodiments, user interface320can be presented in response to the mobile device detecting one or more nearby display devices each connected to one or more nearby streaming media devices. For example, as illustrated inFIG.3B, user interface320can indicate detected display devices (e.g., “living room television,” “bedroom television,” and/or any other suitable detected devices). In some such embodiments, an indication322of a detected device can be a selectable input that, when selected, causes the mobile device to begin transmitting any suitable information (e.g., an SSID and/or passphrase, an mDNS message, and/or any other suitable information as described above) that can be detected by a streaming media device connected to an indicated display device. Referring back toFIG.1, at104, the streaming media device can connect to the mobile device and/or to the internet via the initialized connection. For example, in some embodiments, the streaming media device can establish a WiFi Direct or WiFi Aware connection with the mobile device using the SSID and/or passphrase transmitted by the mobile device. As another example, in some embodiments, the streaming media device can connect to the internet using the HTTP proxy server executing on the mobile device using information included in the mDNS message. At106, the mobile device can present any suitable indication that the mobile device and the streaming media device and/or a display device connected to the streaming media device have been paired. For example, in some embodiments, the mobile device can present a user interface (not shown) that states “you are connected to the living room television,” and or any other suitable message. In some embodiments, the indication can include information indicating that the mobile device and the streaming media device have successfully established a communication channel between the mobile device and the streaming media device, that the streaming media device can now access the internet via the mobile device (e.g., via an HTTP proxy server executing on the mobile device, and/or in any other suitable manner), and/or any other suitable information. At108, the mobile device can present indications of media content available for transfer to the streaming media device for presentation on a display device connected to the streaming media device. In some embodiments, the available media content can include any suitable type of media content. For example, in some embodiments, the media content can include user-generated media content that is stored on the mobile device. As a more particular example, in some embodiments, the user-generated media content can include photos captured by a camera associated with the mobile device, documents (e.g., text documents, slideshows, and/or any other suitable type(s) of documents) stored on the mobile device, videos captured by a camera associated with the mobile device, animations or other graphics generated by the mobile device, and/or any other suitable type of content. 
As another example, in some embodiments, the media content can include media content previously downloaded to the mobile device. As a more particular example, in some embodiments, the media content can include video content (e.g., videos, movies, television programs, and/or any other suitable type of video content) and/or audio content (e.g., music, audiobooks, radio programs, podcasts, and/or any other suitable type of audio content) downloaded to the mobile device from any suitable source (e.g., another user device, from a media content sharing or streaming service, and/or from any other suitable source). In some embodiments, the mobile device can present the indications of available content in any suitable manner. For example, in some embodiments, the mobile device can present an indication of a number of media content items available on the mobile device (e.g., a number of items available for transfer to the streaming media device, and/or available in any other suitable manner). Turning toFIG.3C, an example 350 of a user interface for presenting media content available for transfer to the streaming media device is shown in accordance with some embodiments of the disclosed subject matter. As illustrated, in some embodiments, user interface350can include a first selectable input352that, when selected, causes all media content items stored on the mobile device and/or all media content items stored in a particular folder of the mobile device to be transferred to the streaming media device. In some embodiments, first selectable input352can include an indication of a total number of media content items available for transfer to the streaming media device. Additionally or alternatively, in some embodiments, user interface350can include a second selectable input354that, when selected, can allow a user of user interface350to select a subset of the media content items available for transfer to the streaming media device. Note that, in some embodiments, the indications of available content can correspond to media content that is available for streaming from a media content sharing and/or streaming service by the streaming media device rather than media content that is currently stored on the mobile device. In some such embodiments, the indications of available content can be presented in any suitable manner. For example, in some embodiments, the indications of available content can be indications of media content items that are available for streaming by the streaming media device to the display device connected to the streaming media device via a media content sharing and/or streaming service. As a more particular example, in some embodiments, the indications of available media content can be presented within an application associated with the media content sharing and/or streaming service that hosts the available media content. Note that, in instances where the indications of available media content corresponds to media content that is to be streamed from a media content sharing and/or streaming service, the media content can be streamed via the mobile device to the streaming media device, for example, by the streaming media device connecting to the internet to receive content from the service via an HTTP proxy server executing on the mobile device, as described below in more detail in connection with block112. 
In other words, portions of the media content item to be streamed are downloaded and stored on the mobile phone temporarily as part of a buffering process during the streaming operation. In some embodiments, selection of selectable input 354 can cause user interface 370 of FIG. 3D to be presented. In some embodiments, user interface 370 can include any suitable indications of available media content items, such as a group of thumbnail images (such as thumbnail 372) each representing an available media content item, as illustrated in FIG. 3D. In some embodiments, each thumbnail can be selected, and an indication of selection (e.g., a check mark 374) can be presented in response to a user selecting a particular thumbnail. In some embodiments, an indication of an available media content item can include any suitable information, such as an image associated with the media content item, a name of the media content item, a date associated with the media content item (e.g., a date of creation, a date the media content item was downloaded, and/or any other suitable date), episode information, a size of the media content item, and/or any other suitable information. Note that, although the indications of available media content items are shown in FIG. 3D as thumbnail images, in some embodiments, indications can be presented in any suitable manner, such as a list of file names of the media content items, and/or any other suitable manner. Referring back to FIG. 1, at 110, the mobile device can cause selected media content items to be transferred to the streaming media device. For example, referring to FIG. 3D, the mobile device can cause media content items corresponding to thumbnail 372 and thumbnail 376 to be transferred to the streaming media device. In some embodiments, the mobile device can cause the selected media content items to be transferred to the streaming media device in any suitable manner and using any suitable technique(s). For example, in some embodiments, files corresponding to the selected media content items can be transferred via the peer-to-peer networking connection (e.g., a WiFi Direct connection, a WiFi Aware connection, and/or any other suitable peer-to-peer networking connection) established between the mobile device and the streaming media device, as described above in connection with blocks 102 and 104. Note that, in some embodiments, the mobile device can be configured to automatically synchronize and transmit particular media content items to the streaming media device. For example, in some embodiments, the mobile device can be configured to automatically transmit user-generated videos, downloaded media content items associated with a particular media content streaming service, and/or any other suitable type of media content items to the streaming media device in response to detecting the streaming media device. In some such embodiments, the mobile device can transmit media content items that have not previously been transmitted to the streaming media device at 110 rather than transmitting selected media content items. At 112, the streaming media device can receive the selected content over the peer-to-peer networking protocol. In some embodiments, the streaming media device can store the received content in any suitable manner. For example, in some embodiments, the streaming media device can store the received content in memory of the streaming media device for later presentation on the display device connected to the streaming media device.
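As an illustration of transferring selected files and storing them on the receiving side, the following Python sketch sends a single file over a plain TCP socket with a simple size header. It is a simplified stand-in for the transfer over the peer-to-peer connection described above, and the host, port, and framing are assumptions for the example rather than the protocol actually used between the devices.

```python
import os
import socket
import struct

CHUNK = 64 * 1024

def send_file(path: str, host: str, port: int) -> None:
    """Mobile device side: send one file, prefixed with an 8-byte size header."""
    size = os.path.getsize(path)
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        sock.sendall(struct.pack("!Q", size))
        while chunk := f.read(CHUNK):
            sock.sendall(chunk)

def receive_file(dest_path: str, port: int) -> None:
    """Streaming media device side: accept one connection and store the file."""
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn, open(dest_path, "wb") as out:
            header = b""
            while len(header) < 8:  # read the full size header
                part = conn.recv(8 - len(header))
                if not part:
                    raise ConnectionError("connection closed before header")
                header += part
            remaining = struct.unpack("!Q", header)[0]
            while remaining:
                chunk = conn.recv(min(CHUNK, remaining))
                if not chunk:
                    break  # peer closed early
                out.write(chunk)
                remaining -= len(chunk)

# Example: call receive_file("movie.mp4", 9000) on the streaming media device,
# then send_file("movie.mp4", "192.168.49.10", 9000) on the mobile device.
```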
In some embodiments, the streaming media device can present any suitable indication that the selected content has been received. For example, in some embodiments, the streaming media device can cause a user interface to be presented on the display device that indicates the received media content items, as shown in user interface400ofFIG.4. As illustrated inFIG.4, user interface400can include indications of received media content items402, each of which can correspond to a media content item transferred from the mobile device at block110. In some embodiments, the indications of received media content items can include any suitable information or content, such as a thumbnail image associated with the media content item, a name of the media content item, a name of a creator of the media content item, and/or any other suitable information. In some embodiments, each indication of a received media content item can be a selectable input that, when selected, causes presentation of the corresponding media content item to begin on the display device, as described below in connection with blocks114and118. Note that, in instances where the media content to be presented on the display device connected to the streaming media device is to be streamed from a media content sharing and/or streaming service, the streaming media device can receive the media content item to be streamed from the service via an HTTP proxy server executing on the mobile device. That is, rather than communicating with the service via a WiFi access point, the streaming media device can communicate with the service via the HTTP proxy server to request and/or receive the media content item. In some such embodiments, the streaming media device can receive data corresponding to the media content from the mobile device (that is, via the HTTP proxy server) at112and can store the received data in a buffer or other temporary location on the streaming media device as the media content item is being presented on the display device connected to the streaming media device. At114, the streaming media device can receive a selection of a media content item stored on the streaming media device (e.g., a media content item received by the streaming media device at112and/or a media content item previously received by the streaming media device) and can request permission to present the selected media content item. In some embodiments, the streaming media device can receive the selection of the media content item in any suitable manner. For example, in some embodiments, the streaming media device can receive the selection of the media content item via a remote control device associated with the streaming media device. As another example, in some embodiments, the streaming media device can receive the selection of the media content item via the mobile device. In some embodiments, the streaming media device can request permission to present the selected media content item. For example, in instances where the selected media content item is a media content item downloaded by the mobile device from a media content sharing or streaming service, the streaming media device can request permission to present the selected media content item on the display device connected to the streaming media device from the media content sharing or streaming service. In some embodiments, the streaming media device can request permission using any suitable information and using any suitable technique(s). 
For example, in some embodiments, the streaming media device can transmit a message to a server associated with the media content sharing or streaming service that includes any suitable information, such as information associated with a user account corresponding to the service that was used by the mobile device to download the media content item (e.g., a username associated with the user account, a password associated with the user account, and/or any other suitable information), an identifier of the selected media content item, an identifier of the mobile device, and/or any other suitable information. As another example, in some embodiments, the streaming media device can receive any suitable keys required for decryption of the selected media content item from the server in response to a request transmitted to the server. Note that, in instances where a message is transmitted to the server associated with the media content sharing or streaming service and/or in instances where a message is received from the server associated with the media content sharing or streaming service, the messages can be transmitted and/or received in any suitable manner. In some embodiments, the mobile phone can receive a request from the streaming media device and transmit said request to the media content sharing/streaming service. For example, in some embodiments, the messages can be transmitted between the streaming media device and the server using a communication network associated with the HTTP proxy server executing on the mobile device, as described above in connection with blocks 102 and 104. At 116, the server associated with the media content sharing or streaming service can grant permission to the streaming media device to cause the selected media content item to be presented on the display device. In some embodiments, as described above in connection with block 114, the server can transmit any suitable key(s) required to decrypt the selected media content item. Note that, in some embodiments, the streaming media device may not require permission to present the selected media content item. For example, in instances where the selected media content item corresponds to user-generated media content (e.g., a video recorded by the mobile device, a document created on the mobile device, and/or any other suitable user-generated content), or content generated by another user which is hosted on a media content sharing service, the streaming media device may not require permission to present the selected content. In some such embodiments, blocks 114 and 116 can be omitted.
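To illustrate how such a permission or key request could travel through the HTTP proxy server executing on the mobile device, the following Python sketch routes an HTTPS request through a proxy address. The proxy address, service URL, and request fields are hypothetical placeholders; a real media content sharing or streaming service would define its own endpoints and message format.

```python
import json
import urllib.request

# Hypothetical proxy address announced by the mobile device (see blocks 102 and 104).
PROXY = {"http": "http://192.168.49.1:8080", "https": "http://192.168.49.1:8080"}

def request_permission(service_url: str, account: str, item_id: str, device_id: str):
    """Send a permission/key request to the service via the mobile device's proxy."""
    body = json.dumps({
        "account": account,      # user account used to download the item
        "item_id": item_id,      # identifier of the selected media content item
        "device_id": device_id,  # identifier of the mobile device
    }).encode()
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXY))
    req = urllib.request.Request(service_url, data=body,
                                 headers={"Content-Type": "application/json"})
    with opener.open(req, timeout=10) as resp:
        return json.loads(resp.read())  # e.g. permission grant and decryption key(s)

# Example (hypothetical endpoint):
# grant = request_permission("https://service.example.com/permissions",
#                            account="user@example.com", item_id="abc123",
#                            device_id="mobile-001")
```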
As a more particular example, in some embodiments, a playback position of the media content item can be changed (e.g., a viewer of the media content item can fast-forward or rewind the media content item, and/or change the playback position in any other suitable manner). More detailed techniques for manipulating a playback position of the media content item are described below in connection with block212ofFIG.2. Turning toFIG.2, an example 200 of a process for receiving media content by a streaming media device and causing the media content to be presented on a display device connected to the streaming media device is shown in accordance with some embodiments of the disclosed subject matter. In some embodiments, process200can be executed by a streaming media device. Process200can begin at202by receiving information to be used to connect to a mobile device and/or to connect to the internet via the mobile device. For example, as described above in connection with blocks102and104ofFIG.1, in some embodiments, the information can include information for establishing a peer-to-peer networking connection between the streaming media device and the mobile device (e.g., a WiFi Direct connection, a WiFi Aware connection, and/or any other suitable type of connection), such as an SSID and/or passphrase, and/or any other suitable information. As another example, as described above in connection with blocks102and104ofFIG.1, in some embodiments, the information can include information for connecting to the internet via an HTTP proxy server executing on the mobile device, such as a mDNS message that indicates an IP address of the mobile device, and/or any other suitable information. In some embodiments, the streaming media device can, in response to receiving the information, establish a connection to the mobile device and/or to the internet via the mobile device, as described above in connection with block104ofFIG.1. At204, process200can receive, from the mobile device, an indication or instruction to launch an application for presenting media content. In some embodiments, the indication can be received in any suitable manner, for example, via a peer-to-peer networking connection established between the mobile device and the streaming media device, as described above in connection with block202. In some embodiments, the application for presenting media content can be any suitable application. For example, in some embodiments, the application can be a default video player, image viewer, or document viewer suitable for viewing a particular file type or type of media content (e.g., videos, images, text documents, and/or any other suitable type of media content). As another example, in some embodiments, the application can be associated with a particular media content sharing or streaming service. As a more particular example, in some embodiments, the application can be an application suitable for viewing media content hosted by the media content sharing or streaming service that has been previously downloaded and/or that is to be streamed by the streaming media device. In some embodiments, the application can be determined in any suitable manner, for example, based on the media content item selected for presentation on a display device connected to the streaming media device by a user of the mobile device, as described above in connection with block110ofFIG.1. At206, process200can request content associated with the indicated application from a server associated with the indicated application. 
The request can be in response to receiving the instructions, and can be transmitted to the media content sharing service. For example, in instances where the application is an application for presenting media content associated with a particular media content sharing and streaming service, process 200 can request the content associated with the indicated application from a server associated with the service. In some embodiments, the content can include any suitable content, such as data and/or instructions for rendering a video player window associated with the application, and/or any other suitable type of content. The request may also include a request for one or more permissions for presenting the media content. In some embodiments, process 200 can request the content associated with the application from the server using any suitable technique. The request may be sent directly to the server by the streaming media device; the bandwidth requirements associated with such a request can be small as compared to the bandwidth requirements for downloading the media content item, so such a request may be transmitted directly from the streaming media device, even if the network connection is slow and/or unreliable. In other embodiments, the streaming media device may transmit a request to the mobile phone, in response to receiving the instruction or indication, to cause the mobile phone to transmit the request to the media content sharing service to render a video. In this case, the request to the media content sharing service is transmitted via the mobile phone, which may help provide access to content when there is no network connection associated with the streaming media device. For example, in some embodiments, process 200 can transmit a message to the server via a communication network corresponding to the HTTP proxy server executing on the mobile device, as described above in connection with block 202. Note that, in some embodiments, block 206 can be omitted. For example, in instances where the application has been previously launched by the streaming media device, process 200 can access previously received content associated with the application. At 208, process 200 can receive one or more media content items from the mobile device. For example, as described above in connection with blocks 110 and 112 of FIG. 1, in some embodiments, the received one or more media content items can be media content items that were previously downloaded to the mobile device from the media content sharing or streaming service at an earlier point in time. As another example, in some embodiments, the received one or more media content items can be media content items that were stored on the mobile device and/or that were generated on the mobile device (e.g., videos or pictures captured by the mobile device, documents created on the mobile device, and/or any other suitable type of media content items). As yet another example, in some embodiments, the one or more media content items can include a media content item that is to be streamed from a media content sharing or streaming service to the streaming media device (via the mobile phone) and presented on the display device connected to the streaming media device, and so is downloaded and stored in a buffer at the mobile phone during the streaming process.
In some such embodiments, data corresponding to the media content item that is to be streamed can be received by streaming media device via an HTTP proxy server executing on the mobile device from a server associated with the media content sharing or streaming service. In some embodiments, process200can receive the one or more media content items via the peer-to-peer networking connection (e.g., a WiFi Direct connection, a WiFi Aware connection, and/or any other suitable type of connection), as described above in connection with block202. In some embodiments, process200can cause the received one or more media content items to be stored in memory of the streaming media device. In some such embodiments, the one or more media content items can be presented on the display device connected to the streaming media device in an offline mode, for example, if the peer-to-peer networking connection between the mobile device and the streaming media device is terminated and/or if the HTTP proxy server executing on the mobile device used by the streaming media device to access the internet is terminated. Additionally or alternatively, in some embodiments, in instances where the received media content items include a media content item that is to be streamed, portions of the media content item to be streamed can be stored temporarily on the streaming media device (e.g., in a buffer, and/or in any other suitable location) as the media content item to be streamed is presented. At210, process200can receive a selection of a particular media content item (e.g., one of the one or more received media content items described above in connection with block208, and/or any other suitable particular media content item) to be presented on the display device connected to the streaming media device. In some embodiments, process200can receive the selection in any suitable manner. For example, in some embodiments, process200can receive a selection via a user interface, as described above in connection with block114ofFIG.1andFIG.4. In some embodiments, process200can receive the selection via a remote control associated with the streaming media device. At212, process200can begin causing the selected media content item to be presented on the display device. In some embodiments, process200can cause the selected media content item to be presented on the display device in any suitable manner. For example, in some embodiments, the media content item can be presented within a video player window associated with the application identified as described above in connection with block204. In some embodiments, process200can cause the selected media content item to be presented in a manner in which playback of the media content item can be modified in any suitable manner. For example, in some embodiments, a viewer of the media content item can adjust a volume of the media content item, a playback position of the media content item (e.g., by fast-forwarding or rewinding the media content item, and/or in any other suitable manner), and/or modify playback of the media content item in any other suitable manner. In some embodiments, playback of the media content item can be modified using a remote control device associated with the streaming media device. 
Note that, in some embodiments, in instances where a viewer of the media content item indicates that a playback position is to be changed (e.g., rewind the media content item, fast-forward the media content item, and/or change the playback position in any other suitable manner), process 200 can change the playback position using any suitable technique or combination of techniques. For example, in instances where process 200 received an entirety of the media content item at block 208 (e.g., in instances where the media content item was stored on the mobile device and transferred fully to the streaming media device), process 200 can skip to an indicated location by identifying a requested playback position and causing playback to shift to the requested playback position. As another example, in instances where process 200 is streaming the media content item from a media content sharing and/or streaming service, process 200 can transmit a request to the service to transmit an updated portion of the media content item based on a requested playback position (e.g., an updated portion corresponding to a future playback position that has not yet been received by the streaming media device). As a more particular example, in some embodiments, process 200 can transmit the request to the service via an HTTP proxy server executing on the mobile device. It should be noted that, although the embodiments described herein generally relate to presenting media content on a media playback device connected to a streaming media device without requiring a WiFi access point for the streaming media device, this is merely illustrative. For example, an operating system executing on the streaming media device may require an update file that updates the operating system. In a more particular example, when a mobile device and a streaming media device are connected (e.g., via a peer-to-peer connection), the streaming media device can transmit an indication to the mobile device to check for updates to a current version of the operating system executing on the streaming media device when a suitable network connection is available to the mobile device. Continuing this example, in response to an associated mobile device having access to an update server (e.g., via an internet connection), the mobile device can download and/or otherwise retrieve the update file and, in turn, can transmit the update file to the streaming media device upon connecting with the streaming media device. The streaming media device can store the update file in the memory or other suitable storage device and can execute the update file, which causes the operating system of the streaming media device to be updated. Turning to FIG. 5, an example 500 of hardware for presenting media content that can be used in accordance with some embodiments of the disclosed subject matter is shown. As illustrated, hardware 500 can include a server 502, a communication network 504, a mobile device 506, a streaming media device 508, and/or a display device 510. Server 502 can be any suitable server(s) for storing media content, information, data, programs and/or any other suitable content. For example, in some embodiments, server 502 can be associated with a media content streaming or sharing service and can host any suitable media content items (e.g., videos, television programs, movies, audio content, and/or any other suitable type of media content items) that can be viewed on user devices.
As a more particular example, in some embodiments, server502can transmit selected media content items to a user device, such as mobile device506. As another example, in some embodiments, server502can grant permission for display device510connected to streaming media device508to present a particular media content item using any suitable information and/or technique(s), such as by verifying user credentials associated with a user account corresponding to a media content sharing or streaming service provided by server502. Communication network504can be any suitable combination of one or more wired and/or wireless networks in some embodiments. For example, communication network504can include any one or more of the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), and/or any other suitable communication network. Mobile device506and streaming media device508can be connected by one or more communications links to communication network504that can be linked via one or more communications links to server502. For example, in some embodiments, mobile device506can be connected to server502via a mobile network or cellular network. As another example, in some embodiments, streaming media device508can be connected to server502via mobile device506. As a more particular example, in some embodiments, streaming media device508can be connected to mobile device506via a peer-to-peer networking protocol (e.g., WiFi Direct, WiFi Aware, and/or any other suitable protocol), and can thereby be connected to server502via a proxy server executing on mobile device506, as described in more detail in connection withFIGS.1and2. The communications links can be any communications links suitable for communicating data among mobile device506, streaming media device508, and server502such as network links, dial-up links, wireless links, hard-wired links, any other suitable communications links, or any suitable combination of such links. In some embodiments, mobile device506can be any suitable type of mobile device, such as a mobile phone, a tablet computer, a laptop computer, a wearable computer, and/or any other suitable type of computer. In some embodiments, streaming media device508can be any suitable type of device for storing media content and/or for causing media content to be presented on associated display device510. In some embodiments, streaming media device508can have any suitable type of storage that can store media content transferred to streaming media device508via mobile device506. In some embodiments, streaming media device508can be connected to display device510in any suitable manner, such as via an HDMI connection, and/or in any other suitable manner. In some embodiments, display device510can be any suitable type of display device, such as a television, a projector, and/or any other suitable type of display device. Although server502is illustrated as one device, the functions performed by server502can be performed using any suitable number of devices in some embodiments. For example, in some embodiments, multiple devices can be used to implement the functions performed by server502. 
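Referring back to the proxy arrangement described above, the following is a hedged, minimal sketch of the kind of HTTP proxy server a mobile device might run so that the streaming media device can reach a remote server over the peer-to-peer link. It forwards plain HTTP GET requests only; the listening port is an arbitrary assumption, and HTTPS tunneling, authentication, and error handling are omitted.

from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

class ForwardingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In a forward proxy, the request line carries the absolute URL to fetch upstream.
        with urllib.request.urlopen(self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ForwardingProxy).serve_forever()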
Although one mobile device506, one streaming media device508, and one display device510are shown inFIG.5to avoid over-complicating the figure, any suitable number of devices, and/or any suitable types of user devices, can be used in some embodiments. Server502, mobile device506, streaming media device508, and/or display device510can be implemented using any suitable hardware in some embodiments. For example, in some embodiments, devices502and/or506-510can be implemented using any suitable general purpose computer or special purpose computer. For example, a mobile phone may be implemented using a special purpose computer. Any such general purpose computer or special purpose computer can include any suitable hardware. For example, as illustrated in example hardware600ofFIG.6, such hardware can include hardware processor602, memory and/or storage604, an input device controller606, an input device608, display/audio drivers610, display and audio output circuitry612, communication interface(s)614, an antenna616, and a bus618. Hardware processor602can include any suitable hardware processor, such as a microprocessor, a micro-controller, digital signal processor(s), dedicated logic, and/or any other suitable circuitry for controlling the functioning of a general purpose computer or a special purpose computer in some embodiments. In some embodiments, hardware processor602can be controlled by a server program stored in memory and/or storage of a server, such as server502. For example, in some embodiments, the server program can cause hardware processor602to transmit a media content item to mobile device506, verify user credentials to allow presentation of a media content item on display device510, and/or perform any other suitable functions. In some embodiments, hardware processor602can be controlled by a computer program stored in memory and/or storage604of streaming media device508. For example, the computer program can cause hardware processor602to establish a connection to mobile device506, receive and store media content transferred from mobile device506, cause presentation of a selected media content item on display device510, and/or perform any other suitable functions. Memory and/or storage604can be any suitable memory and/or storage for storing programs, data, and/or any other suitable information in some embodiments. For example, memory and/or storage604can include random access memory, read-only memory, flash memory, hard disk storage, optical media, and/or any other suitable memory. Input device controller606can be any suitable circuitry for controlling and receiving input from one or more input devices608in some embodiments. For example, input device controller606can be circuitry for receiving input from a touchscreen, from a keyboard, from one or more buttons, from a voice recognition circuit, from a microphone, from a camera, from an optical sensor, from an accelerometer, from a temperature sensor, from a near field sensor, from a pressure sensor, from an encoder, and/or any other type of input device. Display/audio drivers610can be any suitable circuitry for controlling and driving output to one or more display/audio output devices612in some embodiments. For example, display/audio drivers610can be circuitry for driving a touchscreen, a flat-panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices. 
Communication interface(s)614can be any suitable circuitry for interfacing with one or more communication networks (e.g., computer network504). For example, interface(s)614can include network interface card circuitry, wireless communication circuitry, and/or any other suitable type of communication network circuitry. Antenna616can be any suitable one or more antennas for wirelessly communicating with a communication network (e.g., communication network504) in some embodiments. In some embodiments, antenna616can be omitted. Bus618can be any suitable mechanism for communicating between two or more components602,604,606,610, and614in some embodiments. Any other suitable components can be included in hardware600in accordance with some embodiments. In some embodiments, at least some of the above described blocks of the processes ofFIGS.1and2can be executed or performed in any order or sequence not limited to the order and sequence shown in and described in connection with the figures. Also, some of the above blocks ofFIGS.1and2can be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. Additionally or alternatively, some of the above described blocks of the processes ofFIGS.1and2can be omitted. In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as non-transitory forms of magnetic media (such as hard disks, floppy disks, and/or any other suitable magnetic media), non-transitory forms of optical media (such as compact discs, digital video discs, Blu-ray discs, and/or any other suitable optical media), non-transitory forms of semiconductor media (such as flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and/or any other suitable semiconductor media), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media. Accordingly, methods, systems, and media for presenting media content are provided. Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed embodiments can be combined and rearranged in various ways.
47,420
11943516
DETAILED DESCRIPTION OF THE DRAWINGS In describing exemplary embodiments of the present disclosure illustrated in the drawings, specific terminology is employed for sake of clarity. However, the present disclosure is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents which operate in a similar manner. Exemplary embodiments of the present invention provide an approach for interactive web-browsing that utilizes a specialized browser application running on a computer or mobile device, such as a smartphone or tablet, that is able to connect to one or more live-streaming sessions, over multiple live streaming platforms, and provide a streamlined and specialized user interface therein for viewing the live-stream, participating in a chat room associated with the live-stream, playing a video game associated with the live-stream and/or remotely controlling the operation of a sex toy device in the possession of a content creator of the live-stream and/or having the operation of a sex toy device in the user's possession controlled by the content creator and/or another participant in the live-stream. The specialized browser may either be downloaded from an application repository associated with a user's device, such as APP STORE provided by Apple Inc. or GOOGLE PLAY provided by Alphabet Inc. or may be independently downloaded and installed (e.g., sideloaded) into the user's device. The specialized browser may combine features typical of a mobile browser, such as accepting URLs and rendering webpages, but the specialized browser may add to this capability, certain specialized capabilities for managing live-streaming sessions over multiple platforms, including providing customized user interfaces and consolidated alerts, and managing interactions and control over remotely operated sex toy devices. The specialized browser may be referred to herein as the Consolidated Browser. FIG.1is a schematic diagram illustrating a user interface (UI) element of the Consolidated Browser in accordance with exemplary embodiments of the present disclosure.FIG.5is a flowchart illustrating a method for interactive web browsing in accordance with an exemplary embodiment of the present disclosure, referring toFIGS.1and5, using a viewer device, such as a smartphone, a user may first initiate the installation of the Consolidated Browser from an application repository (e.g., an app store) (Step S101). The user may then execute the Consolidated Browser app after it has been installed (Step S102). Exemplary embodiments of the present disclosure may thereby provide a user an address bar where the user can enter a URL or search criteria for searching for a particular URL. The user may input a URL within the address bar (Step S103). The Consolidated Browser may determine whether the input URL corresponds to a live-streaming platform or another form of website, for example, by consulting a whitelist of supported live-streaming platforms (Step S104). If the Consolidated Browser determines that the URL corresponds to another website, then the URL will be rendered as a website, for example, using an available rendering engine such as WEBKIT provided by Apple Inc., BLINK, provided by Alphabet Inc., GECKO provided by MOZILLA, or another suitable rendering engine (Step S105). 
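A minimal sketch of the dispatch decision of Steps S103 through S105 (and the hand-off to the specialized handling described in the next paragraph) might look as follows; the whitelist contents and the callback names are illustrative assumptions, not part of the disclosure.

from urllib.parse import urlparse

# Hypothetical whitelist of specially supported live-streaming platforms.
SUPPORTED_PLATFORMS = {"livestream-a.example.com", "livestream-b.example.com"}

def handle_url(url, render_website, open_specialized_ui):
    """Render ordinary websites normally; route whitelisted platforms to the specialized UI."""
    host = urlparse(url).hostname or ""
    if host in SUPPORTED_PLATFORMS:
        open_specialized_ui(url)   # hand-off to the consolidated/specialized live-stream UI
    else:
        render_website(url)        # Step S105: ordinary rendering engine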
Where the Consolidated Browser determines that the URL corresponds to a supported live-streaming platform or another type of entertainment-enabled website for delivering content pre-registered in a specially supported website white list, rather than rendering the URL website, the Consolidated Browser will interface directly with the live-streaming platform to provide its livestreams to the Consolidated Browser's consolidated UI in which multiple live-streams across multiple different platforms may be accessed and participated in using a unified UI (Step S106) or to the Consolidated Browser's specialized UI such as a broadcast room page re-rendered likeFIG.2. Thus, exemplary embodiments of the present invention may determine whether an entered website URL corresponds to a specially supported website, and where it does, rather than rendering the website in a traditional manner, the website may be re-rendered in such a way as to provide enhanced interactive functionality within the Consolidated Browser app itself, such that the viewer/user can interact through a means of communication that is independent of the specially supported website and is handled directly by the Consolidated Browser and its assisting webservices to facilitate a more direct and engaging line of communication between and among viewer/users and the content provider/streamer. This communication may take place by text chat, voice chat, video chat, direct control of each other's sex toys, and/or by the control of cameras or other streaming accessories corresponding to the content provider/streamer or any other device that contributes to the content creation, while the viewer/user simultaneously participates in the live-stream via the associated streaming platform. FIG.2is a schematic diagram illustrating a live-stream UI of the Consolidated Browser in accordance with exemplary embodiments of the present disclosure. As can be seen from this figure, the live-stream UI may display the live stream video, a chat window for participating in a chat associated with the live-stream, and various other elements which will be described in greater detail below. As can be seen fromFIG.1, the Consolidated Browser may even maintain a list of recently/frequently visited live-streaming platforms and/or content creator subscriptions, proximate to the address bar, for example, on a same starting page as the address bar. Clicking or pressing on an icon associated with a given recently/frequently visited live-streaming platform may lead the user directly to a page showing available live streams associated with that platform. The user may then click on an available live stream to begin participation therein. The user may also choose to subscribe/follow a particular content creator and the Consolidated Browser may send an alert to the user whenever that content creator has a livestream, regardless of the platform it is on. The Consolidated Browser may accomplish this task by interacting with a central server that checks for live streams across all supported platforms. As mentioned above, the Consolidated Browser may also provide a UI for the user to connect the Consolidated Browser to a local sex toy in the possession of or in proximity to the user for the purpose of offering access/control of this local sex toy to other viewers or content creators over live-streams.FIG.3is a schematic diagram illustrating a connected hardware UI of the Consolidated Browser in accordance with exemplary embodiments of the present disclosure. 
The Consolidated Browser may provide this access/control either by interfacing with access provided by the particular live-streaming platform or by a direct connection managed by the central server that bypasses the need to go through the live-stream platform, for example, where the user is using the Consolidated Browser and the content creator is using a compatible app or service (such as the Connect APP illustrated inFIG.7) that is similarly in communication with the central server supporting the Consolidated Browser. In this respect, the content creator may make use of the central server to register a sex toy for remote control and the user may make use of the control panel functionality, such as that illustrated inFIG.8, provided by the same central server to establish access/control of a sex toy of a content creator who is live streaming on a live streaming platform, without the control necessary being managed by the live streaming platform. FIG.7is a diagram representing the Connect APP and some of the functionality made available thereby. The Connect APP, for example, may provide the content creator with a UI for connecting hardware, a UI for controlling connected sex toys, a UI for establishing gating conditions for control of their toys and cameras, a UI for controlling their own camera, a UI for starting a livestream on a streaming platform, and a UI for linking an existing livestream to the Connect APP. Other UI elements may also be included, for example, there may be a UI element for each function of the Connect APP discussed and/or described herein. FIG.8is a diagram representing a connected device control panel within the Consolidated Browser App in accordance with exemplary embodiments of the present invention. One or more of these functions may be gated by the satisfaction of conditions established by the content creator, as described above, and this control panel may feature control buttons for initiating operational modes of one or more sex toys associated with the content creator and/or other viewers. The illustrated controls are symbolic but there is no limit to how detailed the control of the devices may be. In addition to controlling the sex toys, this control panel, or a similar control panel, may be used to control the operation of the camera of the content creator, including, but not limited to, directional panning and/or rotation, and zooming in/out. The Consolidated Browser may additionally offer a filtering function to the user so that a browsing page of the Consolidated Browser may list, or graphically display, multiple broadcast rooms across different broadcast platforms. The respective content creators/models for these displayed broadcast rooms may use their devices to run software (locally or over one or more web services) to interface with the central server and locally connect their sex toys thereto so as to stream their content via different broadcast servers that correspond to broadcast platforms. As discussed above, the Consolidated Browser may provide a similar UI for each live stream regardless of which platform that live stream is hosted on. Referring back toFIG.2, this live stream UI may provide a window for playing the live-stream video, a window for showing chats, and a window for entering text to send to the chat. 
The live stream UI may also prominently show a quick tip icon that may float over the video window (although it may be moved) and may permit the user a quick way to send a monetary tip or some other transfer of points to the content creator, with the particulars of the transaction being managed by the Consolidated Browser app or its central server. Thus, the Consolidated Browser may be pre-programed with information on how to consummate a tip transaction for each of the supported platforms so as to handle tipping seamlessly to the user. In another embodiment, the quick tip icon may be used to autopilot an original html element (e.g., a tip sending button native to the live-stream platform) of the website. Another quick icon may be similarly displayed along with the quick tip icon. This icon may be a “sync to streamer” icon that allows for the operation of a user's sex toy to be synchronized to the operation of the content creator's sex toy. When activated, the Consolidated Browser may monitor sex toy activity of the content creator and then replicate this activity to the user's sex toy that is linked to the device that the Consolidated Browser is running on, for example, via Bluetooth. The live stream UI may also provide UI elements for performing all functions that are performable by directly connecting to the live stream platforms, for example, by accessing using a conventional web browser, however, each of these functionalities may be represented in a unified way so as to provider the user with a seamless experience that is consistent across diverse platforms. For example, microphone functionality (“Mic”) may be provided to allow the user to activate the microphone of the user's device so as to contribute audio to the live-stream or directly to the content creator. The microphone functionality may be used either to send a recorded message that the content creator can listen to when ready, or to send audio in real-time to be heard by the content creator and/or other viewer participants. When the Consolidated Browser is rendering a live-stream from information derived from a live stream platform, the Consolidated Browser may continue to monitor for special operating instructions of the viewer (Step S107) that may be used to perform operations specific to the Consolidated Browser, as opposed to engaging with features of the live-stream platform. These operating instructions may relate to, for example, the games played within the live stream chat room or the control of connected devices such as cameras and sex toys. The microphone feature, as well as other features described herein, may be restricted from use until a tip of a predetermined value is transferred to the content creator, or by other conditions set by the content creator. In this way, the special functionality of the Consolidated Browser may be gated by various conditions, such as a tipping condition. For example, the icons representing these features may be hidden or deactivated (e.g., grayed out) prior to the predetermined conditions being met. The predetermined conditions are not necessarily limited to tipping conditions as predetermined conditions may include, for example, earning a VIP designation within the Consolidated Browser app, purchasing an NFT privilege, connecting to a specific toy via the Consolidated Browser, etc. The Consolidated Browser may therefore check for the satisfaction of conditions when an instruction has been detected (Step S108). 
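A hedged sketch of the gating just described (Steps S107 and S108) is given below; the condition fields (minimum tip, VIP status, and so on) and the data structures are illustrative assumptions, and the actual conditions are whatever the content creator has established.

def conditions_satisfied(viewer_state, gate):
    """Step S108: check the creator-defined conditions for one gated feature."""
    if viewer_state.get("tip_total", 0) < gate.get("min_tip", 0):
        return False
    if gate.get("vip_required") and not viewer_state.get("is_vip", False):
        return False
    return True

def handle_instruction(instruction, viewer_state, gates, extend_control):
    """Step S107: on a detected instruction, act on it only if its gate passes."""
    gate = gates.get(instruction["feature"], {})
    if conditions_satisfied(viewer_state, gate):
        extend_control(instruction)   # e.g. enable the microphone, a game, or device control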
The Consolidated Browser may receive these tip conditions from the content creator and then gate access to these features in a manner consistent with the requests of the content creator submitted either through the Consolidated Browser or through a control panel accessible by a conventional browser, for example, the control panel being maintained by the central server, or though functionality of the individual platforms. Thus, when instructions have been detected (in Step S107) and it is determined that the necessary conditions have been satisfied (in Step108) then the Consolidated Browser may extend control of the interactive element to the viewer (Step S109), for example, to play a game, control a sex toy of the content creator, and/or control a camera of the content creator. The Consolidated Browser, for example through its associated central server, can directly handle text chats, interactive video games, and the sending of multimedia, among the user, the content creator, and other users accessing a same livestream using the Consolidated Browser, without going through the platform. In this way, communications associated with a live stream may circumvent the live stream platform to provide a greater level of engagement that is available only to users of the Consolidated Browser and not to other users that may be directly accessing the live stream through the platform's own web portal. The Consolidated Browser may also provide to the user a control panel to control a mode of operation of a content creator's connected sex toy. This feature may also be gated by the satisfaction of a predetermined tip. The control panel, by having both the user and the content creator utilize the Consolidated Browser, may provide a very high level of control over the sex toy that might not otherwise be possible when having to go through the live streaming platform alone, and as discussed above, this control may be managed by the central server rather than the live streaming platform. Moreover, neither the user nor the content creator need make use of browser plugins and the like. However, where the content creator is not utilizing the Consolidated Browser, the Consolidated Browser may still manage a connection between the user's control panel and a browser plugin of the content creator designed for providing remote control of the content creator's sex toy. The user may jump back and forth between the control panel and the live stream UI, for example, by the use of a fast switch icon displayed on each UI or the control panel may be implemented as a floating element on top of the live stream UI. The Consolidated Browser may also grant the content creator access to the connected sex toys of the user in a similar manner and the user can select among his registered connected sex toys to grant remote access to. According to some exemplary embodiments of the present disclosure, and as mentioned above, the Consolidated Browser may provide to the user, for example, upon the satisfaction of a predetermined tip, control over the content creator's camera. For example, pan, tilt, zoom, and camera switch functionality may be transferred to the user. This may be implemented by the content creator registering one or more cameras with the Consolidated Browser and then control over these cameras may be placed under the direction of the user within a control panel, in a manner similar to how the user may gain control to a sex toy of the content creator. 
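As a purely illustrative sketch of the control-panel path described above, a viewer-side client might post a control command to the central server, which relays it toward the content creator's device, rather than sending it through the live-streaming platform. The endpoint path, payload fields, and server address are assumptions made for this sketch.

import json
import urllib.request

CENTRAL_SERVER = "https://central-server.example.com"   # hypothetical central server

def send_toy_command(session_id, creator_id, mode, intensity):
    """Post one control-panel command for relay to the creator's registered device."""
    payload = json.dumps({
        "session": session_id,
        "creator": creator_id,
        "command": {"mode": mode, "intensity": intensity},
    }).encode("utf-8")
    request = urllib.request.Request(
        CENTRAL_SERVER + "/toy/control", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(request) as response:
        return response.status == 200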
The central server of the Consolidated Browser may be used to negotiate camera/sex toy control in a secure environment without having to open control over a website such as the web portal of the live stream platform. As discussed above, the Consolidated Browser may manage the play of interactive games between one or more users and the content creator. The Consolidated Browser may be used to render advanced graphics and/or sound associated with an interactive game occurring on the live stream platform, or the Consolidated Browser may host its own games that are played among live stream participants (e.g., the user, other viewers, and the content creator) while circumventing interaction with the platform. The user may select a game play icon from the live stream UI and may bring up a game play UI where the user can create a new game or participate in a game being played. The game may be participated in by all live stream participants utilizing the Consolidated Browser and the Consolidated Browser may also provide a rendering for other games hosted on the platform. In this way, each participant may see and interact with the game being played. Rewards for winning games may similarly be used to grant access to the sort of controls discussed above that are gated. The central server may monitor live streams, even when the user is not, so that the Consolidated Browser may be able to present to the user playbacks of prior live streams that were missed, or provide instant-replay type functionality where the user is participating in a live stream. The Consolidated Browser may even be able to provide playback of synchronized sex toy control, in addition to playback of live stream audio/video by recording toy control commands embedded within the recording of the live stream. Recordings may be maintained on the central server for a time, in a manner consistent with the requirements of the platform and content creator, and once again, this functionality may be gated by a tipping requirement. The Consolidated Browser may provide other features such as matching sex toy control to website content accessed by the Consolidated Browser as a standard website, for example, by matching sex toy control to music, movies, audiobooks, etc. The user may be able to initiate, end, and adjust this sex toy control rendering by accessing a browsing control panel. FIG.4is a schematic diagram illustrating a system for interactive web browsing in accordance with exemplary embodiments of the present disclosure. As can be seen from this figure, a creator device201may run an application or web service for connecting to a central server206of the consolidated browser. This application or web service may be, for example, a version of the Consolidated Browser, but may alternatively be a custom application developed for granting content creators direct access to the functionality of the Central Server206. The creator device201may be connected to a sex toy202via a local connection such as Bluetooth or wired USB. This connection may be managed by the application or web service discussed above. A video camera203may also be connected to the creator device201via a wired and/or wireless connection, or the video camera203may be part of the creator device201. Control of the sex toy202and/or the camera203may be managed by the creator device201and this control may be granted remotely in accordance with the Consolidated Browser. The creator device201may be connected to a wide area computer network204, such as the Internet. 
The central server associated with the Consolidated Browser app206may also be connected to the computer network204so that the content creator app/web service (referred to herein as the “Connect APP”) may contact the central server206over the computer network204so that the central server206may negotiate functionality of the Consolidated Browser, given to the viewers using the Consolidated Browser, that is not handled by the streaming platforms. While there may be any number of streaming platforms whitelisted within the Consolidated Browser, each streaming platform may operate its own server(s) for hosting the live streams on its platform. A first streaming platform server207and a second streaming platform server208are shown for simplicity and each of these servers connects to the computer network204and in that way, the central server of the consolidated browser app206is able to observe data from the various streaming platforms. There may be any number of viewers running the Consolidated Browser and each may have a viewer device204, which may also be a smartphone, etc. The viewer device204may also have a sex toy205connected thereto, for example, for performing the synchronization discussed above. As discussed above, exemplary embodiments of the present disclosure may utilize the Consolidated Browser to manage various interactions between the viewer and the content creator (“streamer”) outside of the integration of the streaming platform. An example of one such interaction is the “sync with streamer” functionality discussed above in which the Connect APP manages the synchronization of operation of a sex toy of the content creator and a sex toy of the viewer.FIG.6is a signal diagram illustrating an approach for performing such an interaction. This discussion is provided as an example of how the Consolidated Browser and related elements may perform many such interactions. First, the streamer may log in to the streaming platform using a computer and display a QR code for the purpose of allowing the streamer's smartphone running the Connect APP to engage with the streaming session (1). Next, the streamer's smartphone running the Connect APP may scan the displayed QR code to engage (2). Account information may then be sent from the streamer's PC to the Connect APP running on the smartphone (3). The Connect APP running on the smartphone may then connect to the streamer's sex toy (4) and the streamer's sex toy may return toy data pertaining to the operation thereof (5). The Connect APP running on the smartphone may then send the toy data to the streamer's PC (6). The streamer may then begin the live-streaming session with the server of the streaming (“broadcast”) platform (9) (“the first server”). The first server may interact with the streamer's PC to manage native interactions of the broadcast platform (10). Native interactions may include those interactions that the broadcast platform is known to manage, such as receiving streaming signals and relaying those signals to the viewers of the live-stream and managing basic chat room functionality such as sending and broadcasting text messages amongst the viewers and streamer and facilitating the transaction of tipping, which is the sending of points, tokens, and other representations of monetary value from the viewers to the broadcaster. 
The streamer's PC may then interact with a server of the Consolidated Browser (“second server”), via the Connect APP, to engage the second server to handle various interactions associated with the live stream that are not managed by the broadcast platform (i.e., to manage interactions that are not native to the broadcast platform) (11). The viewer, running the Consolidated Browser, may then engage with the second server to join the live stream (12). The second server may interact with the viewer's smart phone running the Consolidated Browser to negotiate interactions and to participate in those functions provided by the Consolidated Browser that are not native to the streaming platform (13&14). The Consolidated Browser may also intermediate the interaction of the viewer with the native functionality of the live stream so as to provide the unified user experience discussed above. This may include the rendering of the live stream within the Consolidated Browser and the sending of a tip from the viewer to the streamer, which may be performed through the first server managed by the broadcast platform (15). The broadcast platform may thereafter generate and/or pass instructions for sex toy control to the second server (17) whose responsibility it is to implement toy control commands. The second server may send the toy control commands to the streamer's smartphone running the Consolidated Browser (18) where the commands may then be passed back to the streamer's sex toy (19) thereby allowing the viewer to control the operation of the streamer's sex toy. The second server, associated with the Consolidated Browser, may then forward sex toy instructions that are based on the instructions being implemented by the streamer's sex toy, back to the viewer's smart phone running the Consolidated Browser. These instructions may be referred to herein as “feedback” as they are instructions for the viewer's sex toy based on the operation of the streamer's sex toy. Here, this interaction is illustrated as including passing instructions between two implementations of the second server (20) and then forwarding instructions from the second implementation of the second server to the viewer's device (21). However, it is to be understood that the second server may be embodied as a distributed server having any number of actual server computers, as is also the case for the first server associated with the broadcast platform. For example, here, the first implementation of the Second Server may be associated with the Connect App of the streamer's PC while the second implementation of the Second Server may be associated with the Consolidated Browser (shown in this figure as the “VibeMate APP”). Alternatively, a single server may be relied upon to handle all functionality of the second server. Ultimately, however, the operation of the viewer's sex toy may be controlled according to the commands sent by the viewer's device. As the commands to control the viewer's sex toy are substantially synchronized with those of the streamer's sex toy, the “sync with streamer” functionality has been performed. There may be additional viewers running the Consolidated Browser and viewing the same streaming content. 
For these additional viewers, instructions and communications from the Second Server associated with the Connect App may also be passed to a Second Server associated with their implementation of the Consolidated Browser (20) and these instructions and communications may then be passed on from the Second Server associated with their implementation of the Consolidated Browser to their implementation of the Consolidated browser (21) and in response, a sex toy of that viewer may be controlled (22) according to the passed instructions, for example, to implement the feedback discussed above. FIG.9shows an example of a computer system which may implement a method and system of the present disclosure. The system and method of the present disclosure may be implemented in the form of a software application running on a computer system, for example, a smartphone, a server, and/or a personal computer (PC), etc. The software application may be stored on a recording media locally accessible by the computer system and accessible via a hard wired or wireless connection to a network, for example, a local area network, or the Internet. The computer system referred to generally as system1000may include, for example, a central processing unit (CPU)1001, random access memory (RAM)1004, a printer interface1010, a display unit1011, a local area network (LAN) data transmission controller1005, a LAN interface1006, a network controller1003, an internal bus1002, and one or more input devices1009, for example, a keyboard, mouse etc. As shown, the system1000may be connected to a data storage device, for example, a hard disk,1008via a link1007. Exemplary embodiments described herein are illustrative, and many variations can be introduced without departing from the spirit of the disclosure or from the scope of the appended claims. For example, elements and/or features of different exemplary embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.
29,356
11943517
DETAILED DESCRIPTION The DLNA Digital Media Service (DMS) specification provides methods for DLNA client devices (also referred to as “DLNA control points” or “DLNA clients”) to engage a media renderer to play content through the exchange of a URL and a control interface. Here, the abstraction is that the URL represents a video, photo, or audio asset that exists as a stored file on some system available in the LAN. The present principles extend this abstraction to allow rendering of broadcast audio/video streams which are accessed through a tuning device. The DMS specification provides metadata definitions for EPG (Electronic Program Guide) events, allowing a client to query the DMS for information about times and channels for upcoming broadcast programming. A DMS client wishing to play the program is presented with a URL that can be used to request the program. However, broadcast programs are generally not streamed to an IP packet network, in preference for transmitting a signal on a specific frequency over terrestrial wiring, satellite transmission, or over the air RF transmission. Each of these transmission methods requires a tuning device, for example, a set top box (STB), that can lock onto the signal to receive the program. A DMS device wishing to provide a consumer with the ability to view content that is received on a tuning device must translate the program metadata available into the tuning frequency to monitor for the program requested, and present a URL to the DMS consumer that will enable the system to know which frequency to tune to in order to play the requested program. Note that a given frequency can have multiple programs encoded on it. A program ID, for example, an MPEG-2 program number, can be used to identify a program within a given frequency. The present principles solve this problem by providing a mechanism that correlates a media item, which has a metadata element containing either a station call sign or channel number for a program, to a specific broadcast frequency and a virtual channel that a service provider broadcasts the program on. This can be done by scanning the frequencies available on the broadcast TV digital connection that comes into the DLNA client device, and correlating that frequency information with the guide information that is published in the digital data stream received from the network operator on the digital connection. Note that the guide information has user oriented data (for example, program description, station call sign, channel number) and technical data (for example, virtual channel number, MPEG program ID). For example, in a DVB-T system, these data can be broadcast in an MPEG stream at a given frequency, and set into a defined MPEG program ID. Every network operator provisions this information in a somewhat custom manner that they define. A provider can also provision richer information about programming from a database accessible over an IP connection. This information will be of much greater richness than the basic information container sent in an MPEG transport stream over the DVB-T interface. Since program/channel frequencies are not universal and are specifically assigned by the broadcast providers, the present principles also provide the means for a correlation table to be provisioned in the field, allowing the device to learn its frequency correlation dynamically. FIGS.1and2illustrate an exemplary method10for using DLNA DMS service to control the playback of broadcast programs. 
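A hedged sketch of building the correlation described above is shown below; the scanning and guide-parsing helpers stand in for tuner- and operator-specific code, and the field names are assumptions chosen only to show the shape of the correlation table.

def build_correlation_table(scan_frequencies, read_guide_entries):
    """Correlate a station call sign (or channel number) with the data a tuner needs."""
    table = {}
    for frequency_mhz in scan_frequencies():              # frequencies found on the digital connection
        for entry in read_guide_entries(frequency_mhz):   # guide information carried in the data stream
            table[entry["call_sign"]] = {
                "frequency_mhz": frequency_mhz,
                "virtual_channel": entry.get("virtual_channel"),
                "program_id": entry["program_id"],        # e.g. an MPEG-2 program number
            }
    return table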
Using an MPEG stream as an exemplary data stream and DVB as an exemplary broadcasting system, we discuss how the correlation can be generated dynamically. Note that the method can be easily extended when other data stream formats or broadcasting systems (for example, ATSC, cable networks) are used. When receiving an MPEG stream, DVB triplets (i.e., original network ID, transport stream ID, and service ID) can be pulled from the MPEG stream along with the physical channel number. This is cross-referenced with a table provisioned for the system that correlates a physical channel to a user's idea of the channel (12). The metadata that is stored in the content management system (CMS) regarding events (broadcast programs) contains the call sign and virtual channel number. In order for the CMS to build the URL, it finds the DVB triplet for the virtual channel number from its provisioned table, and generates a URL (14), for example, http://192.168.1.100/MediaPlayer?source=dvb://fff.99dc.1&sink=decode. This information may be packaged in a UPnP DIDL-Lite document that is transmitted to the DLNA client as part of the standard “res” element in the “protocolInfo” property, as illustrated in TABLE 1. To gain a measure of control over the session, the URL may be encoded as a security measure such that clients cannot guess how to select URLs. 
TABLE 1
<?xml version="1.0" encoding="UTF-8"?>
<DIDL-Lite
 xmlns:dc="http://purl.org/dc/elements/1.1/"
 xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/"
 xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/
  http://www.upnp.org/schemas/av/didl-lite.xsd
  urn:schemas-upnp-org:metadata-1-0/upnp/
  http://www.upnp.org/schemas/av/upnp.xsd">
 <item id="6" parentID="3" restricted="0">
  <dc:title>ESPN Sports</dc:title>
  <dc:creator>ESPN</dc:creator>
  <upnp:class>object.item.videoItem</upnp:class>
  <res protocolInfo="stbvideo:*:video/x-ms-wma:*" size="200000">http://192.168.1.100/MediaPlayer?source=dvb://fff.99dc.1&sink=decode</res>
  <res protocolInfo="httplive:*:video/x-ms-wma:*" size="200000">http://192.168.1.100/MediaPlayer?source=dvb://fff.99dc.1&sink=httplive&transcode=VBR_4MBPS_31:720p</res>
 </item>
</DIDL-Lite>
For example, the DLNA client may present these “res” element choices to the DLNA digital media rendering (DMR) service it discovers on the STB, and this DMR service indicates that it will accept the stbvideo protocol, so the client presents the http://192.168.1.100/MediaPlayer?source=dvb://fff.99dc.1&sink=decode value to the network for processing, and the program starts playing on the STB because of the plumbing through the HTTP server to the MediaPlayer CGI program which in turn issues a “Dbussend” command for the source/sink parameters. The DLNA client receives the URL (16) and presents the URL to the HTTP service specified in the URL (18), for example, to an HTTP client or to be decoded locally. Consequently, the playback of the program is performed at its designated playback devices (18). Step18is described in further detail inFIG.2. Referring toFIG.2, the HTTP service accepts the “HTTP-Get” request (20). The URL is decoded (22) to extract the frequency and program ID information, for example, DVB triplet information, to determine which program to tune to, and to determine where the tuned channel is to be played (24). FIG.2illustrates two exemplary choices regarding where to play the broadcast programs. 
One option is to play the video locally (e.g., protocolInfo=“stbvideo . . . ”) (26), for example, on the TV connected to the set-top box. Another option is to stream out (28), for example, in http live (e.g., protocolInfo=“httplive . . . ”). That is, the digital program is directed over an IP link to remote media rendering devices. Assuming a tablet with a video playback function is used as a DLNA control point to request a TV to tune to a given channel, two exemplary use cases, namely playing locally and playing remotely, are explained in the following. In the first use case, the tablet requests the TV to tune to a channel and the TV plays that channel. Operating as a DLNA control point, the tablet discovers the DLNA DMS service, and the DLNA DMR service on the STB. The tablet requests the DMR to play the content item it finds in the DMS on the STB. The DMR recognizes resource metadata that it uses to engage the media player to tune to the channel identified in the metadata, with rendering of the video occurring on the attached TV screen. In the second use case, the tablet requests to tune to a TV channel and the tablet plays that channel. Operating as a DLNA control point, the tablet discovers the DLNA DMS service. The tablet wants to direct the video to its local screen, and sees in the resource metadata an option to request a stream in a supported network protocol (for example, HTTP, RTSP). The tablet uses the URL encoded for the network protocol it supports and does an HTTP-Get operation with the provided URL. The URL parameters are processed by the HTTP server to send the “Dbussend” command to the media player, which in turn tunes to the requested channel identified in the URL parameters, and outputs a stream according to the requested protocol. TABLE 2 is an example of a correlation table built in accordance with the present principles. TABLE 2 illustrates data that correlates a call sign (for example, CSPAN) to a frequency mapping on which the programs for that call sign/channel are broadcast. The information that the device tuner needs to lock on to the channel and select the appropriate virtual channel to decode may be represented by the “Dbussend” locators. In this case, the “Dbussend” locators include the DVB triplet information (for example, ffff.993c.1) that the media player needs in order to tune to a particular channel. Those of skill in the art will appreciate that there are many methods to formulate a command sequence including the broadcast parameters a tuner needs to locate a program, and therefore the present principles are not limited to “Dbussend” locators and they are shown and described here only as an exemplary implementation. For example, a unique string recognized by the DMS which the DMS can map to the locator can be used. 
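Referring back to the HTTP-Get handling of steps (20) through (28), the following is a minimal sketch of decoding the URL parameters and dispatching playback either to local decoding or to an HTTP live stream; the player callbacks are hypothetical stand-ins for the media player interface.

from urllib.parse import urlparse, parse_qs

def handle_media_player_request(url, play_locally, stream_httplive):
    """Decode the source/sink parameters and dispatch playback accordingly."""
    params = parse_qs(urlparse(url).query)
    source = params["source"][0]                      # e.g. "dvb://fff.99dc.1" (the DVB triplet)
    sink = params.get("sink", ["decode"])[0]
    if sink == "decode":
        play_locally(source)                          # render on the display attached to the STB
    elif sink == "httplive":
        transcode = params.get("transcode", [None])[0]
        stream_httplive(source, transcode)            # stream over the IP link to a remote renderer

# handle_media_player_request(
#     "http://192.168.1.100/MediaPlayer?source=dvb://fff.99dc.1&sink=decode",
#     play_locally=print, stream_httplive=print)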
TABLE 2
Frequency (MHz)   Channel   Virtual Channel   Dbussend Locators     Name
231               25-661    95                dvb://ffff.a12.295    CSPAN
231               25-662    33                dvb://ffff.a12.296    TBSP
231               25-663    97                dvb://ffff.a12.297    HSN
231               25-667                      dvb://ffff.a12.29b    QVC
261               30-4                        dvb://ffff.0.4        TVGUIDE
525               74-441    717               dvb://ffff.233d.1b9   KVEADT
525               74-442    710               dvb://ffff.233d.1ba   KDOCHD
525               74-443    730               dvb://ffff.233d.1bb   KPXNDT
555               79-501                      dvb://ffff.a28.1f5    KJLADT
555               79-502                      dvb://ffff.a28.1f6    KJLADT8
555               79-503                      dvb://ffff.a28.1f7    KXLADT
555               79-504                      dvb://ffff.a28.1f8    KCET
555               79-505                      dvb://ffff.a28.1f9    KTTV
555               79-506    7                 dvb://ffff.a28.1fa    KABC
555               79-507                      dvb://ffff.a28.1fb    KCOP
555               79-508                      dvb://ffff.a28.1fc    KCAL
555               79-509                      dvb://ffff.a28.1fd    KPXN
555               79-510                      dvb://ffff.a28.1fe    KBEH
555               79-511                      dvb://ffff.a28.1ff    KVEADT
555               79-512                      dvb://ffff.a28.200    KCBS
It is important to note that the frequency/call sign/channel mapping is provisioned by the broadcast provider in the service broadcast through tables that are encoded in the video stream (for example, in an MPEG-2 stream) of a digital broadcast. The present principles reads this mapping information and correlates the “Dbussend” command from the MPEG mapping tables to a broadcast program's metadata, using the call sign or channel number data within the metadata to establish the correct “Dbussend” command to send to the tuner when viewing of the program is desired. The broadcast provider selects the frequencies and channel/virtual channel that they want to transmit over a given programming channel (for example, ESPN, CSPAN). The “Dbussend Locator” string is algorithmically defined to encode the frequency and physical channel to decode from that frequency, and is presented to the tuner hardware for locking onto and decoding the signal. FIG.3illustrates an exemplary system110for implementing the present principles. The DLNA client or control point112requests (114) content from the DMS116. As discussed above, the DLNA client can be, for example, tablets, smartphones, portable music players, PCs, etc. The DMS forwards (118) the client's request for content to a content management system120. The content management system120reads its content database140with filtering to honor the client's search criteria. The content management system also accesses a channel translation table142which utilizes the program guide information304from the network operator300in seeking to fulfill the client's request. The content management system120then collects the appropriate data and formats/prepares122them into a DIDL-Lite Schema (124) (e.g., url:schemas-upnp-org:metadata-1-0/DIDL-Lite/) which is forwarded back to DMS116for transmission back to the DLNA client112. One aspect of the method of the present principles includes that the channel translation table142is referenced for each content item that is to be sent to the requesting client, with the DIDL-Lite “res” element tag being set with a http parameter string that will instruct the media player134on which channel to tune to when the client makes a control request to start playing. Within the DLNA protocol, the URL128is presented to the DLNA media renderer130, which provides a control interface (to provision for volume control, pause, etc.) to the media player134which actually implements the tuning and rendering of video/audio on the hardware. The DVB triplet information (i.e., in this example the “Dbussend Locator” string) that is encoded into the URL, passes transparently through the DLNA control point112and the DLNA media renderer130. 
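A hedged sketch of consulting a provisioned correlation table such as TABLE 2 is shown below; the two sample rows mirror entries from the table above, but the dictionary layout and function name are assumptions made for illustration only.

# Two sample rows taken from TABLE 2 above; a full table would hold one entry per service.
CORRELATION_TABLE = {
    "CSPAN": {"frequency_mhz": 231, "locator": "dvb://ffff.a12.295"},
    "KABC": {"frequency_mhz": 555, "locator": "dvb://ffff.a28.1fa"},
}

def locator_for_item(item_metadata):
    """Resolve an EPG item's call sign to the locator string the tuner needs."""
    row = CORRELATION_TABLE[item_metadata["call_sign"]]
    return row["locator"]   # e.g. embedded into the generated playback URL

# locator_for_item({"call_sign": "KABC", "title": "Evening News"}) returns "dvb://ffff.a28.1fa"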
The media player134provides an interface that will accept the encoded URL from the DLNA media renderer130and performs the tuning and playing of the requested program by accessing and tuning to the corresponding broadcast302. Media player134passes the video/audio136to the playback device138. According to one aspect of the invention, the DLNA media renderer130is provided as an example of how a request to tune can be presented to a media player from a network attached device. It also acts as an intermediary for presenting status information from the media player134to the DLNA control point112, and for receiving player control commands from the control point which are in turn presented to the media player138. In this example, the DLNA media renderer130can be a software specification that describes a message based API for interfacing with the media player. The DLNA control point112communicates with the DLNA media renderer130to enable control of rendering an audio/video asset. Other methods and/or implementations as to how to present a request to tune to a particular channel to the media player may be implemented without departing from the scope of the present invention. Those of skill in the art will appreciate that the media player134is used here as an example, and that any media player that is capable of tuning a digital broadcast can be utilized so long as it provides the appropriate interface to set the channel to tune to via the correlated media item/encoded URL. Examples of media players capable of tuning a digital broadcast can be cable set top boxes, satellite receivers, etc. It is further contemplated herein that a media player web service may also be utilized with the method of the invention to allow remote clients to configure playback for local or remote rendering. The details of media player web services are well known and understood by those of skill in the art. In accordance with one implementation, the content management system120consumes the program guide information304presented by the network operator300and merges this information with a more comprehensive information guide published via a content publishing service200via a WAN. Due to bandwidth restrictions, the program guide304transmitted by the network operator300has limited information about a scheduled program. As such, additional information is obtained from content publishing services available on the WAN (e.g., imdb.com), such as reviews, actors, directors, and program teasers that are ingested to provide the consumer with a richer information base to view about the programming. As will be appreciated, these services are not provider specific, and do not contain tuning information for acquiring the broadcast. The content management system120ingests the metadata from the publishing service200, filtering it based on what the network operator300provides, so that information about programs that are not available through the network operator are eliminated from the data presented to the client in the LAN. As will be evident from the above, the content management system includes processing capability that enables it to aggregate the sources of information from the various providers, and present a selection of programming content that is available, such that the DLNA client can review the information, make a choice to play it, and the tuning to the selection is engaged through the generated encoded URL of the invention. Thus, no external tuning steps are required as a result of this process. 
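As a minimal sketch of the merge and filtering step described above, operator-provided guide entries might be enriched with any matching metadata ingested from the WAN publishing service, so that only programs the operator actually carries are presented to clients; the matching key and field names are assumptions made for this sketch.

def merge_guide_data(operator_guide, wan_metadata):
    """Return operator-carried programs enriched with any matching WAN metadata."""
    extra_by_title = {item["title"]: item for item in wan_metadata}
    merged = []
    for program in operator_guide:                  # programs not carried by the operator never appear
        enriched = dict(program)                    # keeps call sign / channel / tuning data
        enriched.update(extra_by_title.get(program["title"], {}))
        merged.append(enriched)
    return merged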
An advantage of the system is that the client does not have to distinguish between broadcast, LAN streamed, WAN streamed, or locally stored content when selecting and making a choice of media program to watch. This resulting simplified user interface for client devices allows for developing one uniform use case as opposed to building various and multiple use cases for viewing and controlling media playback. A user, through a client device and application, can view and play content without regards to its source through these mechanisms. All examples and conditional language recited herein are intended for pedagogical purposes to aid the read in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context. In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. 
The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein. Reference in the specification to “one embodiment” or “an embodiment” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. It is to be understood that the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Preferably, the present principles may be implemented as a combination of hardware and software. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof) that is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device. It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures are preferably implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present principles. While there have been shown, described and pointed out fundamental novel features of the present principles, it will be understood that various omissions, substitutions and changes in the form and details of the methods described and devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the same. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the present principles. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or implementation of the present principles may be incorporated in any other disclosed, described or suggested form or implementation as a general matter of design choice.
It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.
DESCRIPTION OF THE EMBODIMENTS The imaging apparatus of Japanese Patent Application Laid-open No. 2014-165528 determines whether or not a pixel is saturated based on a signal including an overlapping part of visible light and near-infrared components. When saturation occurs, this apparatus cannot determine which of the visible light and the near-infrared components is causing the saturation. The apparatus therefore entails a problem that false coloring occurs because correct colors cannot be reproduced by the corrected color signals. An object of the present disclosure is to provide an image processing technique that enables acquisition of a high-quality image with less false coloring from an image containing saturated pixels. Hereinafter, the image processing apparatus of each embodiment will be described with reference to the drawings. First Embodiment Overall Configuration FIG.1is a diagram illustrating a configuration example of an image processing apparatus according to one embodiment of the present disclosure, wherein the image processing apparatus is provided with an imaging unit. The image processing apparatus of this embodiment may be understood also as an imaging apparatus. In other embodiments of the present disclosure, the image processing apparatus may be configured by an image processor alone, to which signals are input from a separate imaging unit. The image processing apparatus100illustrated inFIG.1is configured by an imaging unit101and an image processor104. The imaging unit101includes a plurality of pixels. In this embodiment, the imaging unit101includes near-infrared pixel components102provided with a color filter most sensitive to near-infrared, and visible light pixel components103provided with a color filter most sensitive to visible light. As will be described later, the imaging unit101has a plurality of pixel groups of red, green, blue, and near-infrared pixels arrayed in two rows and two columns. While it is assumed here that the imager is a CMOS image sensor, it may also be a CCD image sensor. The visible light pixel components103here have a sensitivity to the near-infrared spectral range equal to that of the near-infrared pixel components102. The image processor104includes a near-infrared image signal input unit105, a visible light image signal input unit106, a near-infrared component subtraction unit109, and a white balance processing unit110. The image processor104further includes a near-infrared level determination unit107, a saturated pixel detection unit108, a first saturation processing unit111, a second saturation processing unit112, a saturation processing switching unit113, and a saturation processing application unit114, as functional units for saturation processing. These units may be realized by a dedicated hardware circuit such as an ASIC, or may be realized by a general-purpose processor such as a CPU executing a program. The near-infrared image signal input unit105receives inputs of image signals from the near-infrared pixel components102of the imaging unit101. The visible light image signal input unit106receives inputs of image signals from the visible light pixel components103of the imaging unit101. Hereinafter, the image signals input to the input units105and106shall also be referred to as a near-infrared signal and a visible light signal, respectively. The near-infrared level determination unit107determines the signal level of the near-infrared image.
Specifically, the near-infrared level determination unit107determines whether or not the level of an output signal of each near-infrared pixel is higher than a level determination threshold. The determination results of the near-infrared level determination unit107are input to the saturation processing switching unit113. The level determination threshold may be set such that if the near-infrared level is lower than the threshold, the saturation occurring in a visible light pixel can be attributed to a visible light component. For example, the level determination threshold may be the half value of the maximum output level (saturation level) of the imaging unit101. The saturated pixel detection unit108detects a saturated visible light pixel (saturated pixel) from the visible light image signal. The detection results of the saturated pixel detection unit108are input to the saturation processing application unit114. The near-infrared component subtraction unit109subtracts an output value of a near-infrared pixel near a visible light pixel from the output signal of this visible light pixel. By the subtraction process, visible light components of respective colors, i.e., red (R), green (G), and blue (B), from which the near-infrared component has been removed, are obtained. The near-infrared pixel near the visible light pixel may be, for example, a near-infrared pixel within the same pixel group as the visible light pixel. The near-infrared component subtraction unit109may subtract a value based on output values of a plurality of neighboring near-infrared pixels of the visible light pixel (e.g., an average value) from the output signal of the visible light pixel. The white balance processing unit110processes the visible light signal after the subtraction of the near-infrared signal, adjusting the values of respective color components to achieve a color proportion that allows a white object to appear as natural white in accordance with the light source. Specifically, the white balance processing unit110performs white balance processing of multiplying the pixel value of each color contained in the image signal of the visible light pixel with a gain in accordance with the color. The gains for respective colors are predetermined. The first saturation processing unit111applies first saturation processing to the visible light image signal after the white balance processing. The first saturation processing is a clipping operation, for example. Specifically, the first saturation processing unit111replaces the pixel values of saturated visible light pixels of respective colors with predetermined values. The predetermined values are white level values, and may be determined based on a pixel value of a visible light pixel of one color (e.g., green) within the same pixel group as the pixel being processed, or may be a preset value. The second saturation processing unit112applies second saturation processing to the visible light image signal after the white balance processing. The second saturation processing is a process of determining the pixel value of a saturated visible light pixel based on pixel values of visible light pixels surrounding the saturated visible light pixel. A specific example of the second saturation processing is a process of replacing the pixel value of the saturated pixel with a pixel value obtained by interpolation of a plurality of visible light pixels surrounding the saturated visible light pixel. 
The replacement pixel value may be determined by referring only to unsaturated ones of the visible light pixels surrounding the saturated pixel. The saturation processing switching unit113selects either the first saturation processing or the second saturation processing to be applied to a saturated visible light pixel based on the determination result of the near-infrared level determination unit107. Specifically, the saturation processing switching unit113selects the first saturation processing if the output signal of a near-infrared pixel near the saturated pixel is lower than the level determination threshold, and selects the second saturation processing if the output signal is higher than the level determination threshold. The saturation processing application unit114applies the saturation processing selected by the saturation processing switching unit113to the saturated visible light pixel, while outputting the image signals of unsaturated visible light pixels as they are after the white balance processing. The output signals from the saturation processing application unit114are output to a further image processor, image output unit, image recording unit, display unit, etc., which are not shown, for downstream operations. Imager FIG.2AandFIG.2Bare illustrative diagrams of image information input from the imaging unit101of the first embodiment. FIG.2Bis a schematic diagram of a conventional (common) visible light color imager211. The color imager211has pixels212R,212G, and212B respectively provided with red (R), green (G), and blue (B) filters and arranged in the Bayer pattern. FIG.2Ais a schematic diagram of an imager203capable of acquiring both image signals of visible light and near-infrared, and adoptable in this embodiment. The imager203includes pixels2041R provided with a near-infrared filter in addition to the pixels204R,204G, and204B respectively provided with red (R), green (G), and blue (B) filters. The image signal of near-infrared is separated from image signals obtained by the imager203, and subtracted from the image signals of visible light pixels, so that images205,206, and207composed solely of red (R), green (G), and blue (B) are generated. An image208composed solely of the near-infrared component is obtained from the near-infrared pixels2041R. The imager203includes a plurality of pixel groups of first visible light pixels204R, second visible light pixels204G, third visible light pixels204B, and near-infrared pixels2041R arrayed in two rows and two columns. The arrangement of each pixel in the pixel group may be other than the configuration shown inFIG.2A. One pixel group may contain a plurality of pixels corresponding to the same color. FIG.3shows a schematic diagram of quantum efficiencies (sensitivities) of respective pixels provided with visible light and near-infrared color filters. The curves301,302,303, and304respectively represent the quantum efficiency (sensitivity) of the blue (B) pixel204B, green (G) pixel204G, red (R) pixel204R, and near-infrared (IR) pixel2041R. Reference numeral305denotes a waveband removed by a band-cut filter inserted between the imager203and the lens. This band-cut filter removes a band extending from the red range to the near-infrared range. As shown inFIG.3, the quantum efficiencies of the red (R), green (G), and blue (B) pixels match in the near-infrared range at wavelengths of at least a predetermined threshold λ2.
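As a rough illustration of the pixel-group layout just described, the following sketch splits a raw mosaic into R, G, B, and IR planes, assuming one possible 2x2 arrangement (R and G on the first row, B and IR on the second); the description notes that the in-group arrangement may differ, so the offsets below are an assumption.

```python
import numpy as np

# Minimal sketch, assuming a repeating 2x2 pixel group laid out as
#   R  G
#   B  IR
# Splitting the raw mosaic into per-channel planes mirrors how the separate
# R, G, B, and near-infrared images are obtained from the imager.

def split_rgbi_mosaic(raw):
    r  = raw[0::2, 0::2]   # first visible light pixels (R)
    g  = raw[0::2, 1::2]   # second visible light pixels (G)
    b  = raw[1::2, 0::2]   # third visible light pixels (B)
    ir = raw[1::2, 1::2]   # near-infrared pixels
    return r, g, b, ir

raw = np.random.randint(0, 4096, size=(8, 8), dtype=np.uint16)  # toy 12-bit mosaic
r, g, b, ir = split_rgbi_mosaic(raw)
print(r.shape, ir.shape)   # (4, 4) (4, 4)
```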
The band-cut filter removes the wavelengths in the near-infrared range where the quantum efficiencies of respective color pixels differ so that these wavelengths do not enter the visible light pixels and near-infrared pixels. Specifically, the band-cut filter removes the wavelengths in the range from λ1 to the wavelength λ2 mentioned above, λ1 being the shortest wavelength of the sensitivity spectrum of near-infrared pixels. By adopting such a band-cut filter, the near-infrared component contained in visible light image signals of respective colors can be made equal to the near-infrared component304contained in the near-infrared image signal. Therefore, respective visible light components alone can be acquired by subtracting the near-infrared component304from the output signals from respective visible light pixels. Influence of Pixel Saturation The problem when saturation occurs in a visible light image is explained with reference toFIG.4AtoFIG.6C. The explanation is given using an example in which an image of an apple was captured with the use of the imager203that includes the visible light pixels and near-infrared pixels as described with reference toFIG.2A. FIG.4Ashows an image acquired by red (R) pixels204R of the visible light pixels. Since a fruit surface401aof a red apple contains many red components, high output signals are obtained so that the image is bright. On the other hand, a green leaf part402acontains few red components so that the output signals are low and the image is dark. InFIG.4AandFIG.4B, the shading of the drawings represents the magnitude of output signals, i.e., the whiter, the higher the pixel value (brighter), and the blacker, the lower the pixel value (darker). FIG.4Bshows an image acquired by near-infrared pixels2041R. Since both the fruit surface401band green leaf part402breflect much of near-infrared, high output signals are obtained from both parts so that the image is bright. When there is a visible light illumination source (not shown) on the upper right side of the object (apple), the illumination may create a light reflection spot403ain the upper right on the apple surface of the visible light image inFIG.4Awhere the pixels become saturated. If, however, the illumination light is from an LED source, or the like, and does not contain near-infrared, the influence of saturation caused by the visible light illumination does not appear at the spot403bof the image acquired by near-infrared pixels inFIG.4B. On the other hand, when there is a near-infrared illumination source (not shown) located on the lower left side of the object (apple) for obtaining the near-infrared image, bright pixels due to the influence of the near-infrared illumination appear at a spot404bin the lower left on the apple surface of the image acquired by near-infrared pixels inFIG.4B. In addition, an influence404acaused by the near-infrared component appears in the lower left on the apple surface of the visible light image, too, since visible light pixels also have sensitivity to near-infrared wavelengths. As described above, there are cases where a pixel does not become saturated by either the near-infrared or the visible light component alone, but becomes saturated by the combination of the near-infrared and visible light components entering the visible light pixel. Saturation processing under such circumstances is explained below with reference toFIG.5AtoFIG.6C.
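A minimal sketch of the subtraction described above, assuming per-channel arrays already extracted from the mosaic: because the band-cut filter equalizes the near-infrared response of visible and near-infrared pixels, a visible-only component can be approximated by subtracting the neighboring IR pixel output. Array names and values are illustrative only.

```python
import numpy as np

# Sketch of the near-infrared subtraction: the output of a neighboring IR pixel
# is subtracted from the raw visible pixel output, leaving (approximately) the
# visible light component alone.

def subtract_nir(visible_plane, ir_plane):
    out = visible_plane.astype(np.float32) - ir_plane.astype(np.float32)
    return np.clip(out, 0.0, None)        # signal levels cannot go negative

r_raw = np.array([[3000.0, 3200.0], [2900.0, 3100.0]])   # R pixel outputs (toy values)
ir    = np.array([[1000.0, 1100.0], [ 950.0, 1050.0]])   # neighboring IR outputs
print(subtract_nir(r_raw, ir))   # R component with the IR overlap removed
```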
FIG.5Ais a schematic diagram showing the output levels of pixels corresponding to point A inFIG.4AandFIG.4B(unsaturated pixels) obtained from the pixels shown inFIG.2. The horizontal axis represents red (R), green (G), blue (B), and near-infrared (IR) pixels, and the vertical axis represents the output signal levels of the pixel components. Reference numeral501in the drawings represents the full scale level of 16-bit signal processing (65535LSB),502represents the pixel saturation level of the imager, and503represents the output signal level of a near-infrared pixel near point A. Reference numerals504,505, and506represent the outputs of red (R), green (G), and blue (B) pixels, respectively. Since point A is on the fruit surface of the apple, the level of red component of visible light is high. As has been explained with reference toFIG.3, the visible light image signals of red (R), green (G), and blue (B) each includes a near-infrared component overlapping each visible light component, and the near-infrared component of each color is often close to the amount of near-infrared component obtained by a neighboring near-infrared pixel. FIG.5Bshows a schematic diagram of signal levels obtained by subtracting the near-infrared component of a neighboring near-infrared pixel from the respective signal components of pixels inFIG.5A. Reference numerals604,605, and606represent values obtained by subtracting the near-infrared component503of a neighboring near-infrared pixel from the respective visible light image signals504,505, and506of red (R), green (G), and blue (B). FIG.5Cshows a schematic diagram of signal levels after the white balance processing. In the example shown here, the signals of respective colors ofFIG.5Bare multiplied with white balance coefficients such that red (R), green (G), and blue (B) are multiplied by 1.5 times, 1 time, and 2 times, respectively, in order to make white appear natural in accordance with the illumination used when the image was captured. Reference numerals704,705, and706represent values obtained by multiplying the visible light image signals604,605, and606of red (R), green (G), and blue (B) with the white balance coefficients, after the subtraction of the near-infrared component. The color information is correctly retained for unsaturated pixels as shown inFIG.5C. However, a problem arises when the same processing is performed for saturated pixels. This problem is explained below with reference toFIG.6AtoFIG.6C. FIG.6Ais a schematic diagram showing the output levels of pixels corresponding to point B inFIG.4AandFIG.4B(saturated pixels) obtained by the pixels shown inFIG.2. The horizontal axis represents red (R), green (G), blue (B), and near-infrared (IR) pixels, and the vertical axis represents the output signal levels of the pixel components. Reference numeral501in the drawings represents the full scale level of 16-bit signal processing (65535LSB),502represents the pixel saturation level of the imager, and503represents the output signal level of a near-infrared pixel. Reference numerals804and805represent expected outputs of red (R) and green (G) pixels, respectively, if the pixels are not saturated, and806represents the output of blue (B) pixel. Since the outputs804and805exceed the pixel saturation level502, the actual output levels of red (R) and green (G) pixels become equal to the saturation level502. 
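The following is a small worked example of the unsaturated case (point A), using the gains quoted above (R ×1.5, G ×1.0, B ×2.0); the numeric levels are invented for illustration and are not taken from the figures.

```python
# Worked example of an unsaturated pixel: subtract the neighboring IR level,
# then apply the white balance gains. Because no channel was clipped, the
# color balance of the scene is preserved. All numbers are assumptions.

IR = 10000.0
raw = {"R": 45000.0, "G": 33000.0, "B": 21500.0}         # all below the saturation level
gains = {"R": 1.5, "G": 1.0, "B": 2.0}

visible = {c: raw[c] - IR for c in raw}                  # near-infrared subtraction
balanced = {c: visible[c] * gains[c] for c in visible}   # white balance processing
print(balanced)   # {'R': 52500.0, 'G': 23000.0, 'B': 23000.0} -> red-dominant, as expected
```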
FIG.6Bis a schematic diagram similar toFIG.5Bof signal levels obtained by subtracting the near-infrared component of a neighboring near-infrared pixel from the respective signal components of pixels inFIG.6A. Reference numerals904,905, and906represent values obtained by subtracting the near-infrared component503of the neighboring near-infrared pixel from the respective visible light image signals of red (R), green (G), and blue (B). FIG.6Cshows a schematic diagram of signal levels after the white balance processing, similarly toFIG.5C. The content of the white balance processing is the same as that described above. In the example shown here, the signals of respective colors ofFIG.6Bare multiplied with white balance coefficients such that red (R), green (G), and blue (B) are multiplied by 1.5 times, 1 time, and 2 times, respectively. By multiplying the color signals904,905, and906by 1.5 times, 1 time, and 2 times, respectively, signals after the white balance processing denoted at1004,1005, and1006are obtained. The dotted lines denoted at1007and1008represent expected outputs after the white balance processing corresponding to804and805ofFIG.6Aif the pixels are not saturated. As shown, the signal output that is supposed to have a color balance indicated by the outputs1007,1008, and1006in actuality comes out with a different color balance indicated by the outputs1004,1005, and1006because of the pixel saturation. In the case of the example shown inFIG.6C, saturation results in a color close to purple instead of the white that should appear if there is no saturation. As demonstrated above, saturation can cause false colors and color noise and deteriorate the image quality. An object of this embodiment is to inhibit such false coloring and color noise resulting from pixel saturation, and to realize an image processing apparatus that allows for acquisition of favorable visible light images and near-infrared images. Image Processing Operation The operation in the first embodiment is now described with reference toFIG.1and with the use ofFIG.7toFIG.9.FIG.7is a flowchart for generating color image signals from output signals of visible light pixels in the first embodiment.FIG.8andFIG.9are illustrative diagrams showing the signal levels, to be used for the explanation of the operation. The flow for generating color image signals from output signals of visible light pixels starts at step S1101. This process can be started at any timing, for example, immediately after the imager203has obtained image signals of visible light pixels and near-infrared pixels. At step S1102, the near-infrared image signal input unit105acquires a pixel signal value of a near-infrared pixel and the visible light image signal input unit106acquires pixel signal values of visible light pixels. At step S1103, the near-infrared component subtraction unit109subtracts the pixel value of a neighboring near-infrared pixel from each pixel value of the visible light image signal. After that, at step S1104, the white balance processing unit110executes the white balance processing of multiplying the signal of each color with a white balance coefficient. Meanwhile, at step S1105, the saturated pixel detection unit108compares the output signal of a visible light pixel before the subtraction of near-infrared with a saturation determination threshold to determine whether or not the pixel is saturated.
At step S1106, the processing is selected in accordance with whether or not the output value of the visible light pixel is greater than the saturation determination threshold. This selection of processing is determined for each pixel. If the output value of the visible light pixel is not more than the saturation determination threshold, i.e., if the pixel is not saturated (S1106: NO), the process goes to step S1107, where the saturation processing application unit114outputs the output signal of the visible light pixel as it is after the white balance processing without applying the saturation processing. On the other hand, if, at step S1106, the output value of the visible light pixel is greater than the saturation determination threshold, i.e., if the pixel is saturated (S1106: YES), the process goes to step S1109. At step S1109, the near-infrared level determination unit107determines the level of the neighboring near-infrared image signal of the visible light pixel, i.e., compares it with a level determination threshold. A control signal based on the level determination result is input to the saturation processing switching unit113. At step S1110, the saturation processing switching unit113selects the saturation processing in accordance with the determination result at step S1109. The saturation processing switching unit113selects the first saturation processing (clipping) performed by the first saturation processing unit111if the IR pixel signal is at a low level, i.e., not more than the level determination threshold. On the other hand, if the IR pixel signal is at a high level, i.e., greater than the level determination threshold, the second saturation processing (interpolation) by the second saturation processing unit112is selected. The first saturation processing (clipping) performed by the first saturation processing unit111at step S1111is explained below. At step S1111, the first saturation processing unit111clips the R and B levels to the same level so as to match the pixel value of G in the visible light pixel signal, converting the color into white. When the level of near-infrared is low, it is likely that saturation is resulting from visible light, in which case clipping to the white level to delete the color is preferable. FIG.8Ais a schematic diagram showing levels of image signals obtained at step S1102when, while saturation is occurring, the near-infrared level is not more than the threshold. The horizontal axis represents red (R), green (G), blue (B), and near-infrared (IR) pixels, and the vertical axis represents the output signal levels of the pixel components. Reference numeral1208represents the level determination threshold for the determination of the near-infrared level. In the example ofFIG.8A, the output signal level1207of the near-infrared pixel is lower than this level determination threshold1208. Reference numerals1204and1205represent expected outputs of red (R) and green (G) pixels, respectively, if the pixels are not saturated, and1206represents the output of the blue (B) pixel. Since the outputs1204and1205exceed the pixel saturation level502, the actual output levels of red (R) and green (G) pixels become equal to the saturation level502. FIG.8Bis a schematic diagram of signal levels obtained by subtracting the near-infrared component of a neighboring near-infrared pixel from the signal components of respective pixels ofFIG.8A, i.e., after the processing at step S1103.
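Below is a minimal sketch of the per-pixel decision in steps S1105 to S1112 under assumed names and thresholds (the actual saturation and level determination thresholds are implementation dependent): detect saturation on the raw output, then choose clipping or the second saturation processing according to the neighboring IR level, with clipping pulling R and B down to the G value after white balance.

```python
# Minimal sketch of the saturation detection and processing selection. The
# threshold values and dictionary representation are assumptions.

SATURATION_THRESHOLD = 65535.0     # assumed: the imager saturation level
IR_LEVEL_THRESHOLD = 32768.0       # assumed: e.g. half of the 16-bit full scale

def is_saturated(raw_rgb):
    return any(v >= SATURATION_THRESHOLD for v in raw_rgb.values())

def clip_to_white(wb_rgb):
    clip_level = wb_rgb["G"]                      # R and B are matched to the G level
    return {c: min(v, clip_level) for c, v in wb_rgb.items()}

def apply_saturation_processing(raw_rgb, wb_rgb, ir_level, interpolate):
    if not is_saturated(raw_rgb):
        return wb_rgb                             # S1107: no saturation processing
    if ir_level <= IR_LEVEL_THRESHOLD:
        return clip_to_white(wb_rgb)              # S1111: first saturation processing
    return interpolate()                          # S1112: second saturation processing

result = apply_saturation_processing(
    {"R": 65535.0, "G": 65535.0, "B": 43000.0},   # raw outputs, R and G saturated
    {"R": 68302.5, "G": 45535.0, "B": 46000.0},   # after IR subtraction and white balance
    ir_level=20000.0,
    interpolate=lambda: None)                     # stand-in for the second processing
print(result)   # clipped to white because the IR level is low
```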
Reference numerals1304,1305, and1306represent values obtained by subtracting the near-infrared component1207of the neighboring near-infrared pixel from the visible light image signals of respective colors of red (R), green (G), and blue (B). FIG.8Cshows a schematic diagram of signal levels after the white balance processing has been applied, similarly toFIG.6C, and of the signal levels after the clipping process. In the white balance processing at step S1104, the signals of respective colors, red (R), green (G), and blue (B) inFIG.8B, are multiplied with white balance coefficients, for example, by 1.5 times, 1 time, and 2 times, respectively. Dotted lines denoted at1404,1405, and1406represent signal levels after the white balance processing. In this example, the white balance coefficient of green (G) is 1, so that the output1305is equal to the output1405. The image after the white balance processing has a different color due to the varied levels of the color pixels. Since the outputs from the pixels are saturated, the original color information is lost, so that the color after the white balance processing is likely to be a false color. Since the near-infrared component of the pixel is lower than the threshold, it is unlikely that saturation is caused by a high intensity of near-infrared, i.e., it is likely that the saturation is caused by visible light. Therefore, it can be determined that it is more appropriate to make this pixel appear white by the clipping process rather than to give this pixel a color. In the clipping operation (first saturation processing) at step S1111, as shown inFIG.8C, green, which has the lowest signal level among the color signals of this pixel after the white balance processing, is determined as the clip level1309, and the signal levels of red and blue are clipped to this level. As described above, in this embodiment, when there is pixel saturation (S1106: YES), and the level of near-infrared is low (S1110: NO), the clipping operation by the first saturation processing unit111is selected and applied. Thus, the target pixel is replaced with white image information without a color. Next, the second saturation processing (interpolation) performed by the second saturation processing unit112at step S1112is explained. The second saturation processing unit112generates an interpolation signal from pixels surrounding a saturated pixel, and replaces the signal of the saturated pixel therewith. This is because it is highly likely that saturation is occurring due to a near-infrared component, and that most of the original visible light pixel signal is lost because of saturation. Pixels surrounding a saturated pixel may be defined as pixels within a predetermined distance (e.g., within three pixels) from the saturated pixel, or defined as a predetermined number of pixels from the saturated pixel. FIG.9Ais a schematic diagram showing levels of pixel signals obtained at step S1102when saturation is occurring and the near-infrared level is greater than the threshold. The horizontal axis represents red (R), green (G), blue (B), and near-infrared (IR) pixels, and the vertical axis represents the output signal levels of the pixel components. Reference numeral1208represents the level determination threshold for the determination of the near-infrared level. In the example ofFIG.9A, the output signal level1507of the near-infrared pixel is higher than this level determination threshold1208.
Reference numerals1504and1505represent expected outputs of red (R) and green (G) pixels, respectively, if the pixels are not saturated, and1506represents the output of the blue (B) pixel. Since the outputs1504and1505exceed the pixel saturation level502of the imager, the actual output levels of red (R) and green (G) pixels become equal to the saturation level502. FIG.9Bis a schematic diagram of signal levels obtained by subtracting the near-infrared component of a neighboring near-infrared pixel from the signal components of respective pixels ofFIG.9A, i.e., after the processing at step S1103. Reference numerals1604,1605, and1606represent values obtained by subtracting the near-infrared component1507of the neighboring near-infrared pixel from the visible light image signals of respective colors of red (R), green (G), and blue (B). FIG.9Cshows a schematic diagram of signal levels after the white balance processing has been applied similarly toFIG.6Cand the signal levels after the interpolation. In the white balance processing at step S1104, the signals of respective colors, red (R), green (G), and blue (B) inFIG.9B, are multiplied with white balance coefficients, for example, by 1.5 times, 1 time, and 2 times, respectively. Dotted lines denoted at1704,1705, and1706represent signal levels after the white balance processing. In this example, the white balance coefficient of green (G) is 1, so that the output1605is equal to output1705. The image after the white balance processing has a different color due to varied levels of color pixels. Since the outputs from the pixels are saturated, the original color information is lost, so that the color after the white balance processing is likely to be a false color. Since the near-infrared component of the pixel is higher than the threshold, it is likely that saturation is caused by a high intensity of near-infrared, and that the visible light information is mostly lost. It can therefore be determined that the color information of these pixel components should not be adopted. It is not appropriate either to saturate the pixel by the clipping process to appear white since the saturation is not being caused by a high intensity of visible light. In the interpolation process at step S1112(second saturation processing), the color information of the pixel is replaced with color information obtained by interpolation of the information of a plurality of neighboring unsaturated pixels of the same color. Reference numerals1707,1708, and1709represent the corresponding image signals of red (R), green (G), and blue (B) pixels replaced by the interpolation process at step S1112. As described above, in this embodiment, when there is pixel saturation (S1106: YES), and the level of near-infrared is high (S1110: YES), the interpolation process by the second saturation processing unit112is selected and applied. When the near-infrared component is higher than the threshold, it is likely that visible light is not causing the pixel saturation. Therefore, a more appropriate image signal can be obtained by using a color matching the surrounding color rather than deleting the color information by the clipping process to replace the color with white. As described above, the saturation processing is selected in accordance with the level of near-infrared, to inhibit false coloring and to realize simultaneous acquisition of high-quality visible light image and near-infrared image. 
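A minimal sketch of the second saturation processing of the first embodiment, assuming a per-channel plane and a boolean saturation mask of the same shape: the saturated pixel is replaced with the mean of unsaturated pixels of the same color inside a small neighborhood, which corresponds to the interpolation described above. The window radius and array names are assumptions.

```python
import numpy as np

# Interpolation sketch: for each saturated pixel, average the unsaturated
# pixels of the same color within a small window and use that as the
# replacement value.

def interpolate_saturated(plane, saturated, radius=3):
    out = plane.astype(np.float32).copy()
    h, w = plane.shape
    for y, x in zip(*np.nonzero(saturated)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = plane[y0:y1, x0:x1]
        ok = ~saturated[y0:y1, x0:x1]               # refer only to unsaturated pixels
        if np.any(ok):
            out[y, x] = window[ok].mean()
    return out

plane = np.full((5, 5), 20000.0); plane[2, 2] = 65535.0
mask = plane >= 65535.0
print(interpolate_saturated(plane, mask)[2, 2])     # 20000.0, borrowed from neighbors
```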
Second Embodiment Overall Configuration FIG.10illustrates a configuration example of an image processing apparatus1800according to a second embodiment. Similar to the first embodiment, the image processing apparatus1800is configured with an imaging unit101and an image processor1804. In the following description, the elements similar to those of the first embodiment are given the same reference numerals and not described again. The image processor1804is different from that of the first embodiment in that it includes a saturated pixel flag setting unit1808instead of the saturated pixel detection unit108, and that it additionally includes a surrounding pixel information acquisition unit1816and a saturated pixel flag detection unit1815. The processing contents of the second saturation processing unit1812and saturation processing application unit1817are also different from those of the second saturation processing unit112and the saturation processing application unit114in the first embodiment. The saturated pixel flag setting unit1808carries out a process of detecting a saturated pixel from the visible light image signal input to the visible light image signal input unit106, and, when it detects a saturated pixel, sets, in the visible light image signal, a saturation flag indicating that this pixel is saturated. The surrounding pixel information acquisition unit1816acquires color information from pixels surrounding a target pixel (saturated pixel), and outputs the same to the second saturation processing unit1812. The second saturation processing unit1812will be described in detail later. The saturated pixel flag detection unit1815determines whether or not the target pixel is saturated based on the saturated pixel flag, and outputs the detection result to the saturation processing application unit1817. If the pixel is saturated, the saturation processing application unit1817applies the saturation processing selected by the saturation processing switching unit113, and, if the pixel is not saturated, outputs the pixel values of the visible light image as they are after the white balance processing. Image Processing Operation The operation according to the second embodiment is described with reference toFIG.10andFIG.11.FIG.11is a flowchart for generating color image signals from output signals of visible light pixels in the second embodiment. The flow for generating color image signals from output signals of visible light pixels starts at step S1901. This process can be started at any timing, for example, immediately after the imager203has obtained image signals of visible light pixels and near-infrared pixels. At step S1902, the near-infrared image signal input unit105acquires a pixel signal value of a near-infrared pixel and the visible light image signal input unit106acquires pixel signal values of visible light pixels. At step S1903, the saturated pixel flag setting unit1808compares the output signal of a visible light pixel before the subtraction of near-infrared with a saturation determination threshold to determine whether or not the pixel is saturated. If the output signal of the visible light pixel is higher than the saturation determination threshold, i.e., if it is determined that the pixel is saturated (S1904: YES), the process goes to step S1905, where the saturated pixel flag setting unit1808sets 1 as the saturation flag of this pixel.
On the other hand, if the output signal of the visible light pixel is not more than the saturation determination threshold, i.e., if it is determined that the pixel is not saturated (S1904: NO), the process goes to step S1906, where the saturated pixel flag setting unit1808sets 0 as the saturation flag of this pixel. While 1 and 0 of the saturation flag respectively represent saturated and unsaturated here, any values can be set as the flag. At step S1907, the near-infrared component subtraction unit109subtracts a near-infrared image signal obtained from a neighboring near-infrared pixel from each pixel value of the visible light image signal. After that, at step S1908, the white balance processing unit110executes the white balance processing of multiplying the signal of each color with a white balance coefficient. At step S1909, the processing is selected in accordance with whether or not the saturation flag of the visible light pixel is 1, i.e., whether or not the visible light pixel is saturated. This selection of processing is determined for each pixel. If the saturation flag is 0, i.e., if the pixel is not saturated (S1909: NO), the process goes to step S1910, where the saturation processing application unit1817outputs the output signal of the visible light pixel as it is after the white balance processing without applying the saturation processing. On the other hand, if, at step S1909, the saturation flag is 1, i.e., if the pixel is saturated (S1909: YES), the process goes to step S1912. At step S1912, the near-infrared level determination unit107determines the level of the near-infrared image signal of a neighboring near-infrared pixel of the visible light pixel, i.e., compares it with a level determination threshold. A control signal based on the level determination result is input to the saturation processing switching unit113. At step S1913, the saturation processing switching unit113selects the saturation processing in accordance with the determination result at step S1912. The saturation processing switching unit113selects the first saturation processing (clipping) performed by the first saturation processing unit111if the IR pixel signal is at a low level, i.e., not more than the determination threshold. On the other hand, if the IR pixel signal is at a high level, i.e., greater than the determination threshold, the second saturation processing (color interpolation) by the second saturation processing unit1812is selected. The first saturation processing (clipping) performed by the first saturation processing unit111at step S1914is the same as that of the first embodiment. Namely, the first saturation processing unit111clips the R and B levels to the same level so as to match the pixel value of G in the visible light pixel signal, converting the color into white. The second saturation processing (color interpolation) performed by the second saturation processing unit1812at step S1915is now explained. The second saturation processing unit1812detects the chromaticity of an unsaturated pixel around the target pixel (saturated pixel), acquired by the surrounding pixel information acquisition unit1816, to generate correction information, and corrects the pixel values of the saturated pixel. The correction process can be a process of correcting pixel values of a saturated pixel to achieve the same chromaticity as that of surrounding unsaturated pixels.
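The following is a rough sketch of the chromaticity-based correction of this embodiment, with assumed names and a crude brightness estimate (the sum of the saturated pixel's channels): the saturated pixel's values are rescaled so that its R:G:B proportion matches that of a surrounding unsaturated pixel supplied by the surrounding pixel information acquisition unit.

```python
# Chromaticity-matching sketch: keep an overall brightness estimate of the
# saturated pixel, but replace its color proportion with that of a surrounding
# unsaturated pixel. Names and the brightness estimate are assumptions.

def correct_to_surrounding_chromaticity(saturated_rgb, surrounding_rgb):
    total = sum(surrounding_rgb.values())
    if total == 0:
        return dict(saturated_rgb)                    # nothing to borrow from
    chroma = {c: v / total for c, v in surrounding_rgb.items()}   # proportions sum to 1
    luma = sum(saturated_rgb.values())                # crude brightness estimate
    return {c: luma * chroma[c] for c in saturated_rgb}

fixed = correct_to_surrounding_chromaticity(
    {"R": 65535.0, "G": 65535.0, "B": 40000.0},       # saturated pixel, false color
    {"R": 30000.0, "G": 24000.0, "B": 18000.0})       # nearby unsaturated pixel
print(fixed)   # same total level, but the surrounding pixel's color proportion
```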
When the near-infrared component near the saturated pixel is at a high level, it is likely that saturation is caused by the near-infrared component. In this case, it is more appropriate to correct the color in accordance with the color information of surrounding unsaturated pixels rather than saturating the pixel by the clipping process to make it appear white. As described above, the saturation processing is selected in accordance with the level of near-infrared, to inhibit false coloring and to realize simultaneous acquisition of high-quality visible light image and near-infrared image. This embodiment is different from the first embodiment mainly in the content of the second saturation processing and in the method of transmitting the detection result of the saturated pixel to the saturation processing application unit114. These modifications need not be applied to the first embodiment in this combination. One of these processes may be the same as that of the first embodiment. Other Embodiments The contents of saturation processing to be applied are not limited to the processes described in the first and second embodiments. The second saturation processing that is applied when the level of near-infrared is high may be other than the process described above, as long as the process determines the pixel values of a saturated pixel based on pixel values of a neighboring pixel of the saturated pixel. For example, the color information of a saturated pixel may be estimated from pixel values of pixels surrounding the saturated pixel using a preconfigured database and replaced with this estimated color information. Alternatively, the color information of a saturated pixel may be replaced with color information obtained by inputting pixel values of pixels surrounding the saturated pixel in a machine learning model designed to estimate color information of the center pixel from the pixel values of the surrounding pixels. While an output signal of an infrared pixel is subtracted from output signals of visible light pixels using the near-infrared component subtraction unit109in the embodiments described above, this process may be omitted. In this case, too, false coloring can be inhibited by switching over the saturation processing in accordance with which of the near-infrared component and visible light component is causing saturation. Embodiment(s) of the present invention can also be realized by a computer of a system or an apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., an application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). 
The computer may comprise one or more processors (e.g., a central processing unit (CPU), or a micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and to execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), a digital versatile disc (DVD), or a Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like. The present disclosure allows acquisition of a high-quality image with less false coloring from an image containing saturated pixels. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
DESCRIPTION OF EMBODIMENT An embodiment of the present invention will be described referring to the drawings. An information processing system1according to this embodiment of the present invention includes an information processing device10and a controller device30as an example of a device that a user operates with his or her hand. Here, the information processing device10is coupled to the controller device30and a display device40. The information processing device10is a device that processes information output by the controller device30, and may be, for example, a home game machine, a mobile game machine, a personal computer, a smartphone, a tablet, or the like. Further, the information processing device10may be configured integrally with the controller device30. As illustrated inFIG.1, the information processing device10includes a control unit11, a storage unit12, and an interface unit13. The control unit11includes at least one processor such as a central processing unit (CPU), and performs various kinds of information processing by executing programs stored in the storage unit12. Note that specific examples of the processing performed by the control unit11in the present embodiment will be described later. The storage unit12includes at least one memory device such as a random access memory (RAM), and stores the programs executed by the control unit11and data processed by the programs. The interface unit13is an interface for data communication with the controller device30and the display device40. The information processing device10is coupled to each of the controller device30and the display device40via the interface unit13by means of either a wired line or a wireless line. Specifically, the interface unit13transmits, to the display device40, video data supplied by the information processing device10. Further, the interface unit13outputs, to the control unit11, information having been received from the controller device30. The controller device30according to an example of the present embodiment is a device that is worn on a hand of a user, as illustrated as an example inFIGS.2and3, and that receives an operation by the user and outputs information associated with the operation.FIGS.2and3are schematic perspective views of the controller device30illustrating examples of its external views.FIG.2is a schematic perspective view of the controller device30as viewed from a side on which a thumb of a user is located when the user has grasped the controller device30(this side being referred to as a front side for the sake of convenience), andFIG.3is a schematic perspective view of the controller device30as viewed from a side on which fingers ranging from an index finger to a little finger of the user are located when the user has grasped the controller device30(this side being referred to as a back side). Note that the sizes, size ratios, arrangements, and the like of individual portions of the controller device30that are illustrated inFIGS.2and3are just examples, and the example of the present embodiment is not limited to the illustrated sizes, size ratios, arrangements, and the like. In the example ofFIGS.2and3, the controller device30includes a body portion31and a holding fixture32, and a user wears the controller device30on his or her own hand by passing the palm of his or her own hand through the holding fixture32. Here, the holding fixture32may include a belt or the like that has a length adjustable to the size of the hand of the user. 
On the housing surface of the body portion31of the controller device30, a plurality of sensors33that each detect a positional relation with the hand (and fingers) of the user are arranged along the housing surface shape of the body portion31. The examples ofFIGS.2and3illustrate a state in which the plurality of sensors33are arranged in a matrix on the housing surface of the body portion31. Further, the controller device30may include a switch/button34operable by, for example, an index finger, or any other finger. In this case, the sensors33may also be arranged on the surface of the switch/button34. Here, each of the sensors33may be a sensor, such as an infrared sensor or an electrostatic capacitance sensor, which is capable of detecting the spatial position of an object that is present within its detectable range. In the present embodiment, the information processing device10identifies the positions of individual joints of a hand of a user that are detection targets by using the results of the detection by the sensors33. Further, in the present embodiment, a hand and fingers targeted for the detection by the sensors33are assumed to be individual fingers of a hand on which the user wears the controller device30. Note that, here, the user is assumed to wear the controller device30on any one of the hands of the user, but the user may wear mutually different controller devices30on both the left and right hands of the user. The controller device30of the present embodiment outputs information representing the results of the detection by the sensors33to the information processing device10every predetermined timing (periodically, for example, every 1/60 seconds, or the like). Next, the operation of the control unit11of the information processing device10according to the present embodiment will be described. As illustrated as an example inFIG.4, the control unit11of the present embodiment includes a reception unit21, an input image generation unit22, an estimation processing unit23, and an output unit24. Here, the reception unit21receives the information representing the results of the detection by the sensors33from the controller device30. The input image generation unit22generates an image in which an image element having been drawn with a pixel value determined on the basis of a value represented by the result of detection by each of the sensors33is arranged at a position corresponding to a position at which each of the sensors33is arranged, on a mapping space (hereinafter referred to as an unfolded-figure space) obtained by plane-unfolding at least a portion which constitutes the housing surface of the controller device30and on which the sensors33are arranged. In an example of the present embodiment, the input image generation unit22generates an image in which the housing surface of the controller device30is unfolded on a plane, as illustrated as examples inFIGS.5and6. Such an image of an unfolded state may be generated in such a way that, for example, as illustrated as an example inFIG.5, a portion X resulting from projection of the front side of the surface of the controller device30and a portion Y resulting from projection of the back side thereof are each arranged on a single plane. Further, in another example, the input image generation unit22may generate an unfolded figure in such a way as to, as illustrated as an example inFIG.6, make virtual seams (cuts) in the housing surface of the controller device30. 
In generating such unfolded figures, various kinds of widely-known processing for creating a UV space (processing for circular-cylindrical mapping, spherical mapping, or any other kind of mapping, processing for forming virtual seams, and any other kind of processing), which are used when textures are pasted to a virtual three-dimensional object, can be employed, and thus, detailed description thereof is omitted here. The input image generation unit22identifies a position corresponding to each of the sensors33that are arranged on the housing surface, on an acquired unfolded figure of the housing surface of the controller device30. Further, the input image generation unit22generates an input image by arranging, at the identified position corresponding to each of the sensors33, an image element having been drawn with a pixel value determined on the basis of a value represented by the result of detection by a corresponding sensor33(i.e., on the basis of the information having been received by the reception unit21). Here, the image element may be, for example, an image of a graphic having the shape of a sensor33on the unfolded figure, or may be an image of a graphic having the shape of a rectangle or the like of a predetermined size. Further, the pixel value of the image element is, for example, a one-dimensional value such as luminance, and may be determined such that the closer a finger or a hand is located, the higher the luminance is. In this example, the image generated by the input image generation unit22is two-dimensional grayscale image data. The estimation processing unit23receives, as input thereto, the input image having been generated by the input image generation unit22, and performs processing for estimating information associated with the position of each of joints of a hand of a user who operates the controller device30, by using the input image. Specifically, the estimation processing unit23may make the above estimation by using a neural network having machine-learned a relation between the input image and information related to the information associated with the position of each of the joints of the hand of the user. As such a neural network, it is sufficient just to use, for example, a multi-layer neural network, and the configuration of the neural network is not particularly limited to any kind. The neural network may be, for example, a convolutional neural network (CNN; convolutional network). Further, the neural network is machine-learned in, for example, the following manner. In order to collect data for the learning, a user who causes the neural network to be machine-learned, for example, pastes markers to points whose positions are to be estimated on his or her own hand, and makes various poses of the hand in a state of wearing the controller device30on the hand while making measurements using an apparatus that measures three-dimensional positions of the points from the positions of the markers having been imaged with a camera or the like and any other element necessary for the measurement. The user then acquires information representing two sets of results at a time point when each of the poses has been made, that is, the results of a corresponding one of the measurements and the results of the detection by the individual sensors33. Here, it is sufficient if the points whose positions are to be measured are the following two groups of twenty points each, for example.
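As an illustration of the input image generation described above, the following sketch assumes each sensor has a precomputed (u, v) position on the unfolded figure and draws its detection value as a small rectangular image element whose luminance grows as a finger approaches; the image size, element size, and coordinate convention are assumptions.

```python
import numpy as np

# Sketch of the input image generation: each sensor's detection value is drawn
# as a small rectangle at its (u, v) position on the unfolded housing surface,
# producing a 2-D grayscale image fed to the estimation processing.

def make_input_image(sensor_uv, sensor_values, size=(64, 96), element=2):
    h, w = size
    img = np.zeros((h, w), dtype=np.float32)
    for (u, v), value in zip(sensor_uv, sensor_values):
        x, y = int(u * (w - 1)), int(v * (h - 1))     # unfolded-figure coordinates
        img[max(0, y - element):y + element + 1,
            max(0, x - element):x + element + 1] = value   # luminance = proximity
    return img

uv = [(0.25, 0.5), (0.75, 0.5)]            # two sensors; positions are illustrative
values = [0.9, 0.1]                        # closer object -> higher detection value
print(make_input_image(uv, values).shape)  # (64, 96)
```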
The first one of the two groups is as follows: (1) Distal interphalangeal joints of fingers ranging from index finger to little finger (DIP: four points), (2) Proximal interphalangeal joints of fingers ranging from index finger to little finger (PIP: four points), (3) Interphalangeal joint of thumb (IP: one point), (4) Metacarpophalangeal joints of fingers ranging from thumb to little finger (MP: five points), (5) Carpometacarpal joints (CM: five points), and (6) Wrist joint (Radiocarpal joint: one point). The second one of the two groups is as follows: (1) Tips of terminal phalanges of fingers ranging from thumb to little finger (Tips of fingers: five points), (2) Distal interphalangeal joints of fingers ranging from index finger to little finger (DIP: four points), (3) Proximal interphalangeal joints of fingers ranging from index finger to little finger (PIP: four points), (4) Interphalangeal joint of thumb (IP: one point), (5) Metacarpophalangeal joints of fingers ranging from thumb to little finger (MP: five points), and (6) Carpometacarpal joint of thumb (CM: one point). The above points (position estimation target points, which may include any appropriate point not corresponding to a medical joint) will hereinafter be referred to as “joints” for the sake of description. Note that the measurement results acquired here are values represented in a world coordinate system (X, Y, Z), and thus are preliminarily converted into values in a finally acquisition-desired coordinate system. In an example of the present embodiment, the finally acquisition-desired coordinate system is defined such that a predetermined single point that moves with the controller device30(a predetermined on-device point such as the center of a circumscribed rectangle for the controller device30) is defined as the origin of the coordinate system, an axis extending from the origin in a longitudinal direction of the controller device30is defined as a z-axis, a plane whose normal line corresponds to the z-axis is defined as an xy-plane, and in the circumscribed rectangle for the controller device30, an axis extending in a direction of the normal line of a plane at the front side, this direction being a direction extending from the plane at the front side toward a plane at the back side, is defined as a y-axis, and an axis lying within the xy-plane and being orthogonal to the y-axis is defined as an x-axis (seeFIG.2). In processing for a conversion from the world coordinate system, in which the measurements have been made, to the xyz coordinate system (hereinafter referred to as local coordinates), widely-known methods can be employed, and thus, detailed description of the processing for the conversion is omitted here. Further, for the purpose of the conversion, a user may acquire information representing a posture of the controller device30(namely, information representing the position of the origin and the directions of the x-axis, the y-axis, and the z-axis) together with coordinate information regarding the above individual points of the hand of the user.
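A minimal sketch of the world-to-local conversion mentioned above, assuming the posture of the controller device is available as an origin and three unit axis vectors expressed in world coordinates; a measured joint position is then re-expressed in local coordinates by a translation followed by a rotation.

```python
import numpy as np

# World-to-local conversion sketch: subtract the origin of the controller-fixed
# coordinate system, then project onto its x/y/z axes. All values are assumed.

def world_to_local(p_world, origin, x_axis, y_axis, z_axis):
    R = np.stack([x_axis, y_axis, z_axis])        # rows are unit axis vectors
    return R @ (np.asarray(p_world) - np.asarray(origin))

origin = np.array([0.1, 0.2, 0.3])                # controller origin in world coordinates
x_axis = np.array([1.0, 0.0, 0.0])
y_axis = np.array([0.0, 1.0, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])
print(world_to_local([0.15, 0.25, 0.9], origin, x_axis, y_axis, z_axis))
```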
A computer that performs machine-learning of the neural network used by the estimation processing unit23of the information processing device10(the computer may be the information processing device10itself) acquires sets of two kinds of information, each of the sets being acquired, in such a way as described above, at a time point when a corresponding one of the plurality of poses has been made, one of the two kinds of information being information representing the position of each of the joints of the hand of the user in the local coordinates, the other one thereof being the results of the detection by the individual sensors33. Further, the computer performs the following processing while sequentially selecting each of the acquired sets. That is, the computer that performs machine-learning processing generates an input image on the basis of information that represents the results of the detection by the individual sensors33and that is included in a selected set, by performing the same processing as that performed by the input image generation unit22of the information processing device10. The computer acquires differences between two kinds of information, one of the two kinds of information being information that is output by the neural network upon input of the generated input image into the neural network (here, this information is assumed to be information of the same dimensions as those of the information representing the position of each of the joints of the hand of the user in the local coordinates (here, the dimensions being 60 dimensions=20×3 dimensions)), the other one of the two kinds of information being the information representing the position of each of the joints of the hand of the user in the local coordinates, this information being included in the selected set. Further, the computer controls weights among individual layers of the neural network on the basis of the acquired differences. This control is known as what is called backpropagation processing, and thus, the description thereof is omitted here. The computer sequentially updates the weights among the individual layers of the neural network by the above processing to thereby perform the machine-learning of the neural network such that the neural network outputs the result of estimation of the position of each of the joints of the hand of the user on the basis of the abovementioned input image. Note that, here, described is an example in which machine learning is performed so as to allow the neural network to directly estimate the information representing the position of each of the joints of the hand of the user in the local coordinates, but the present embodiment is not limited to this example. For example, the machine leaning may be performed so as to allow the neural network to, in the same space as that of the input image (the two-dimensional space obtained by unfolding the housing surface of the controller device30, that is, the unfolded-figure space in the present embodiment), estimate both a point (a closest proximity point) that lies on the housing surface of the controller device30and is closest to the position of each of the abovementioned joints in distance within a three-dimensional space and a distance from the closest proximity point to the each of the joints. 
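Before the heatmap-based alternative mentioned at the end of the previous paragraph is elaborated below, the direct-regression variant of the learning procedure can be sketched as follows, here in PyTorch. The network architecture, optimizer, loss function, and hyperparameters are placeholder choices made for this sketch only; the embodiment merely requires some multi-layer (for example, convolutional) network whose weights are updated by backpropagation from the differences between its 60-dimensional output and the measured joint positions.

```python
import torch
import torch.nn as nn

class JointRegressor(nn.Module):
    """A small convolutional regressor mapping the single-channel input image to
    a 60-dimensional vector (20 joints x 3 local coordinates)."""
    def __init__(self, n_joints=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_joints * 3)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, samples, epochs=10, lr=1e-3):
    """samples yields (input_image, joint_xyz) pairs as torch tensors: the image
    built from one pose's sensor values and the 20 x 3 local-coordinate target."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # the "differences" between network output and measurements
    for _ in range(epochs):
        for image, target in samples:
            pred = model(image.unsqueeze(0).unsqueeze(0).float())  # (1, 1, H, W)
            loss = loss_fn(pred, target.view(1, -1).float())       # compare 60-dim vectors
            opt.zero_grad()
            loss.backward()   # backpropagation adjusts the weights among the layers
            opt.step()
```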
In this case, the output of the neural network is information for each of the joints of the hand of the user, this information including both a heatmap image representing a probability that each of points in the space of the abovementioned unfolded figure becomes a closest proximity point of a corresponding joint and a distance from a face on the space of the unfolded figure (namely, the surface of the controller device30) to the corresponding joint. The estimation processing unit23according to the present embodiment that uses the neural network having been machine-learned in such a way as described above inputs, to the neural network having been machine-leaned, the input image generated by the input image generation unit22on the basis of the information that the reception unit21has received from the controller device30. Further, the estimation processing unit23acquires the output of the neural network. Here, in the case where the neural network directly estimates, on the basis of the input image, the information representing the position of each of the joints of the hand of the user in the local coordinates, the estimation processing unit23outputs the output of the neural network to the output unit24as it is. Further, in the case where the neural network estimates, on the basis of the input image and for each of the joints of the hand of the user, information representing both a closest proximity point that lies on the unfolded-figure space (the same space as that of the input image) and that is closest to each of the joints and a distance from the surface of the controller device30to each of the joints, the estimation processing unit23acquires, from the above two kinds of information, information representing the position of each of the joints of the hand of the user in the local coordinates by making a coordinate conversion. Specifically, in this example, the estimation processing unit23acquires the result of estimation of a closest proximity point for each of the joints by means of statistical processing or the like by using a heatmap image having been acquired for each of the joints. For example, the estimation processing unit23acquires an estimated value of the closest proximity point by calculating a weighted average of a probability that each of points that are represented by the heatmap image and that lie on the two-dimensional space obtained by unfolding the housing surface of the controller device30becomes the closest proximity point (the weighted average being calculated by multiplying, for each of the points, the probability by coordinate values thereof, calculating the sum of a resultant value of the multiplication for each of the points, and dividing the sum by the number of the points). Alternatively, the estimation processing unit23may determine a point whose probability of being the closest proximity point is maximum from among the individual points represented by the heatmap image, as the result of estimation of the closest proximity point. Next, the estimation processing unit23acquires values in the local coordinates of the controller device30with respect to the closest proximity point having been estimated for each of the joints, and acquires values of a vector representing the direction of a normal line that extends from the housing surface of the controller device30and is located at a position lying on the housing surface and represented by the acquired values in the local coordinates. The values of the vector may be approximately calculated in the following manner. 
That is, the estimation processing unit23sets, in a virtual manner, a cylindrical column having a center line passing through the origin of the local coordinates and extending in a direction parallel to the z-axis, acquires an intersection point R that is a point of intersection of the cylindrical column with a line segment interconnecting a point Q lying in the local coordinates and corresponding to a closest proximity point at which a target normal line is to be acquired and a point P lying on the centerline and being located closest to the point Q and that is located on a side closer to the point Q, and may determine a normal line vector of the cylindrical column at the intersection point R (a vector with its start point located at the point P and its end point located at the point R), as an acquisition target vector V(Q). Further, in the case where the shape of the housing surface of the controller device30is represented by a three-dimensional graphic model such as a mesh model, the estimation processing unit23may determine a normal line vector of the graphic model at the point Q lying in the local coordinates and corresponding to the closest proximity point at which a target normal line is to be acquired, as the acquisition target vector V(Q). The estimation processing unit23acquires, for each joint i=1, 2, . . . 20, coordinate information Xi representing the position of the joint i on the local coordinate system by, according to the following formula, multiplying a unit vector vi parallel to a vector V(Qi) having been acquired, in such a way as described above, from a point Qi (i=1, 2, . . . 20) lying in the local coordinates and corresponding to a closest proximity point of the joint by a distance ri that is a distance up to the corresponding joint i and that is output by the neural network and adding a resultant value of the multiplication to a value of the point Qi. Xi=Qi+ri·vi Here, any one of X, Q, and v is a three-dimensional quantity representing coordinate values. The output unit24provides, to processing for an application program or any other kind of processing, the above coordinate information in the local coordinate system that has been acquired by the estimation processing unit23and that is associated with each of the joints of the hand of the user who operates the controller device30. For example, in processing for a game application program that the information processing device10is executing, the information processing device10receives, from the output unit24, the coordinate information associated with each of the joints of the hand of the user in the local coordinate system, and performs processing for controlling the actions of characters and any other kind of processing on the basis of the coordinate information. 
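The coordinate computation carried out by the estimation processing unit23in this alternative can be summarized in a short Python/NumPy sketch: a probability-weighted estimate of the closest proximity point from each heatmap (with the maximum-probability pixel as the other option mentioned above), the virtual-cylinder approximation of the surface normal, and the relation Xi=Qi+ri·vi. The mapping from unfolded-figure (u, v) coordinates back to a point Q on the housing surface in local coordinates depends on the actual housing geometry, so it appears here only as a hypothetical uv_to_local callback.

```python
import numpy as np

def closest_point_uv(heatmap, use_argmax=False):
    """Estimate the closest-proximity point (u, v) for one joint.

    heatmap[v, u] is the probability that pixel (u, v) is the closest point. The
    weighted average is computed here as a probability-weighted mean of the
    coordinates, normalised by the total probability mass; taking the pixel of
    maximum probability is the alternative described above."""
    if use_argmax:
        v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        return float(u), float(v)
    h, w = heatmap.shape
    vv, uu = np.mgrid[0:h, 0:w]
    mass = heatmap.sum()
    return float((heatmap * uu).sum() / mass), float((heatmap * vv).sum() / mass)

def cylinder_normal(q_local):
    """Unit normal at local-coordinate point Q using the virtual cylindrical column
    whose center line passes through the origin along the z-axis. P = (0, 0, Qz) is
    the closest point on the axis, so V(Q) points along the radial component of Q,
    and the cylinder radius cancels out after normalisation."""
    q = np.asarray(q_local, dtype=float)
    radial = np.array([q[0], q[1], 0.0])
    n = np.linalg.norm(radial)
    if n == 0.0:
        raise ValueError("Q lies on the cylinder axis; the normal is undefined")
    return radial / n

def joint_positions(heatmaps, distances, uv_to_local):
    """Xi = Qi + ri * vi for each joint i.

    heatmaps    : one 2D array per joint.
    distances   : the per-joint distances ri output by the neural network.
    uv_to_local : hypothetical function mapping an unfolded-figure point (u, v) to
                  the corresponding surface point Q in local coordinates."""
    out = []
    for hm, r in zip(heatmaps, distances):
        q = np.asarray(uv_to_local(*closest_point_uv(hm)), dtype=float)
        out.append(q + r * cylinder_normal(q))
    return np.stack(out)  # shape: (number of joints, 3)
```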
Note that, in the case where the information processing device10is configured integrally with the controller device30, instead of the execution of the game application program or the like by the information processing device10, the information processing device10may transmit, to a different information processing device that is executing the game application program or the like, the coordinate information associated with each of the joints of the hand of the user in the local coordinate system, and the different information processing device may perform processing for various kinds of applications such as processing for controlling the actions of characters by using the coordinate information associated with each of the joints of the hand of the user in the local coordinate system and received from the information processing device10that is configured integrally with the controller device30. [Operation] The information processing system1according to the embodiment of the present invention basically includes the above configuration, and operates in the following manner. When a user makes actions of clenching and unclenching his or her hand in a state of wearing the controller device30on his or her hand, each of the plurality of sensors33arranged on the housing surface of the controller device30detects the spatial position of a partial surface constituting the surface of the hand of the user and falling within a detection range of the each of the sensors33. Further, the controller device30outputs the result of the detection by each of the sensors33. The information processing device10is assumed to preliminarily retain information for configuring a machine-leaned neural network (this information including information for identifying a model of the neural network and information representing weights among individual layers in the identified model). Here, the neural network is assumed to be preliminarily machine-leaned so as to estimate both a heatmap image that represents a closest point (closest proximity point) of each of predetermined joints of the hand within the same unfolded-figure space as that of an input image (this unfolded-figure space being a two-dimensional space obtained by unfolding the housing surface of the controller device30) and a distance from the closest proximity point to each of the joints. Note that the joints of the hand are assumed to correspond to the abovementioned twenty points. The information processing device10receives the results of detection by the sensors33from the controller device30, and generates an input image in which an image element having been drawn with a pixel value determined on the basis of the value of the result of detection by each of the sensors33is arranged at a position corresponding to a position at which each of the sensors33is arranged, on a mapping space obtained by plane-unfolding the housing surface of the controller device30. Further, the information processing device10inputs the generated input image into a machine-learning completed neural network, and acquires, for each of the joints of the hand of the user, an estimated result including information representing both a heatmap image representing a probability that each of points on an unfolded-figure space (the same space as that of the input image) becomes a closest proximity point of the each of the joints and a distance from the surface of the controller device30to each of the joints. The information processing device10acquires, for each joint i (i=1, 2, . . . 
20), a closest proximity point Ni by using the acquired heatmap and calculating a weighted average of a probability represented by the heatmap image, that is, a probability that each of points on the two-dimensional space obtained by unfolding the housing surface of the controller device30becomes a closest proximity point (the weighted average being calculated by multiplying, for each of the points, the probability by coordinate values thereof, calculating the sum of a resultant value of the multiplication for each of the points, and dividing the sum by the number of the points). Further, the information processing device10converts the coordinates of the closest proximity point Ni having been estimated for each joint i (i=1, 2, . . . 20) (the above coordinates being coordinates in the coordinate system of the unfolded figure) into values represented by local coordinates of the controller device30. Specifically, the information processing device10acquires a vector (normal line vector V(Qi)) representing the direction of a normal line that extends from the housing surface of the controller device30and that is located at a point Qi lying on the housing surface and corresponding to the closest proximity point Ni. Here, information regarding a three-dimensional graphic model representing the shape of the housing surface of the controller device30is assumed to be preliminarily retained and the normal line vector V(Qi) at the point Qi lying on the housing surface of the controller device30and corresponding to the above closest proximity point Ni is assumed to be acquired on the basis of the information regarding the three-dimensional graphic model. The information processing device10acquires, for each joint i=1, 2, . . . 20, coordinate information Xi representing the position of the joint i on the local coordinate system by, according to the following formula, multiplying a unit vector vi parallel to the vector V(Qi) having been acquired, in such a way as described, from the point Qi (i=1, 2, . . . 20) lying in the local coordinates and corresponding to the closest proximity point of the each joint by a distance ri that is a distance up to the corresponding joint i and that is output by the neural network, and adding a resultant value of the multiplication to a value of the point Qi. Xi=Qi+ri·vi Further, the information processing device10uses the coordinate information Xi (i=1, 2, . . . 20) in the local coordinate system, which has been acquired here and which is associated with each of the joints of the hand of the user, and performs processing for an application program, such as processing for controlling the actions of characters in a game application. In such a way as described above, the present embodiment enables a shape of a hand of a user who operates a device with the hand to be estimated on the basis of the results of detection by sensors arranged on the device. REFERENCE SIGNS LIST 1: Information processing system10: Information processing device11: Control unit12: Storage unit13: Interface unit21: Reception unit22: Input image generation unit23: Estimation processing unit24: Output unit30: Controller device31: Body portion32: Holding fixture33: Sensor34: Button40: Display device
29,980
11943520
DETAILED DESCRIPTION OF THE INVENTION A description of embodiments of the present invention will now be given with reference to the Figures. It is expected that the present invention may take many other forms and shapes, hence the following disclosure is intended to be illustrative and not limiting, and the scope of the invention should be determined by reference to the appended claims. FIG.1depicts a vehicle back-up camera2protected from glare by a sun shield12. While shield12is shown fixed in a position above a lens, some embodiments utilize a gimbal or circular track to allow movement of shield12to a position to either side of the lens if the sun is lower in the sky. Shield12may also be pivoted to be below the lens in situations where the sun is reflected off of snow or water. This embodiment is especially desirable when using the back-up camera to aid in attaching a trailer or driving a trailer into the water under a boat. Shield12also protects the lens from rain and snow accumulation, thereby keeping the lens dry so that it is clear when activated. FIG.2illustrates an embodiment of the present invention utilizing a protective cover8that functions like an eyelid. Cover8pivots about a horizontal axis and is usually in a closed position covering the lens and shielding the lens from contact with the elements.FIG.2shows cover8in a semi-closed position andFIG.3shows the cover in the fully closed position. Cover8is activated simultaneously with the camera to quickly uncover the lens. When cover8is closed, it protects the lens from dust, rain and snow and other elements that might distort or impact the picture sent to an on-board monitor (not shown). Cover8may have incorporated therein a hood so that when cover8is withdrawn, a hood attached to the edge of cover8rotates into a position above the lens. It will be appreciated that a permanent shield could also be used with or in place of the hood. When a heavy snow has fallen and accumulated on a bumper or other vehicular structure that would otherwise come into contact with the lens when cover8is activated, the hood may provide a scooping action to brush accumulated debris or snow away from the lens as cover8is retracted. FIG.4shows the lens with a water nozzle14located in a position where water can be directed onto the surface of the lens. An air nozzle16is likewise positioned to allow air to be directed over the surface of the lens. In some embodiments, water is squirted through nozzle14and thereafter, air is squirted through nozzle16to dry the lens. If dust has accumulated, then a user may elect to use only nozzle16to blow off the lens. It will be appreciated that the cleaning system incorporating nozzles14and16may be employed in addition to any other features used in other embodiments of the present invention. FIG.5shows an embodiment with a heating element18. Heating element18may be positioned so as to encircle the lens, or be placed to heat the back of the lens, or both. In icy conditions, water may freeze onto the surface of the lens. Heating element18will melt the ice and either dry the moisture or, if nozzle16is incorporated into the embodiment, allow the melted ice to be blown off of the lens by nozzle16. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. 
The scope of the invention is, therefore, indicated by the appended claims, rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
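Purely as an illustration of how the features described in this embodiment might be sequenced, the following Python sketch orders the retraction of the cover, lens washing, drying, and de-icing around camera activation. The driver objects and their methods (open, close, on, off, spray, blow) are hypothetical placeholders, since the disclosure describes the hardware behavior but not a control interface; which steps run mirrors the conditions discussed above (water then air for washing, air alone for dust, the heating element for ice).

```python
def activate_backup_camera(camera, cover, water_nozzle, air_nozzle, heater,
                           dusty_only=False, icy=False,
                           spray_seconds=0.5, dry_seconds=1.0):
    """One possible activation sequence for the lens-protection features.

    All arguments except the flags and timings are hypothetical driver objects;
    the methods used below stand in for whatever actuators a real
    implementation would expose."""
    cover.open()                      # eyelid-style cover 8 retracts as the camera starts
    camera.on()
    if icy:
        heater.on()                   # heating element 18 melts ice on or behind the lens
    if dusty_only:
        air_nozzle.blow(dry_seconds)  # dust only: blow the lens clear with nozzle 16
    else:
        water_nozzle.spray(spray_seconds)  # wash the lens with nozzle 14 ...
        air_nozzle.blow(dry_seconds)       # ... then dry it with nozzle 16
    if icy:
        heater.off()

def deactivate_backup_camera(camera, cover):
    camera.off()
    cover.close()                     # a closed cover 8 keeps the lens dry and clean
```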
3,684
11943522
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS <Endoscope> As illustrated inFIG.1, endoscopes9,9A to9F of embodiments of the present invention include an insertion portion8B in which image pickup apparatuses1,1A to1F for endoscope (hereinafter, also referred to as “image pickup apparatuses1,1A to1F”) are disposed at a rigid distal end portion8A, an operation portion8C disposed on a base end side of the soft insertion portion8B, and a universal cord8D extending from the operation portion8C. The image pickup apparatus1is disposed at the distal end portion8A of the insertion portion8B of the endoscope9, and outputs an image pickup signal. The image pickup signal outputted from the image pickup apparatus1is transmitted to a processor by way of a cable inserted through the universal cord8D. A drive signal from the processor to the image pickup apparatus1is also transmitted by way of the cable inserted through the universal cord8D. As will be described later, the image pickup apparatuses1,1A to1F have a small external size in an optical axis orthogonal direction, have high performance, and can be easily manufactured. Thus, less-invasive and high-performance endoscopes9,9A to9F can be easily manufactured. Note that the endoscopes9,9A to9F may be rigid endoscopes and can be applied to medical use or industrial use. First Embodiment As illustrated inFIG.2andFIG.3, the image pickup apparatus1for endoscope of the present embodiment includes an optical member10, an image pickup member40, and a resin50which is an adhesive agent. In the following description, the drawings based on the respective embodiments are schematically illustrated. Relationships between thicknesses and widths of each portion, ratios of thicknesses of respective portions, relative angles, and the like, are different from actual relationships, ratios, relative angles, and the like. Some relationships of dimensions and ratios may be different between the drawings. Illustration of some components may be omitted. The optical member10is a stacked optical system in which a plurality of optical devices11to15are stacked. The optical member10has a rectangular parallelepiped shape having an entrance surface10SA, an exit surface10SB facing the entrance surface10SA, and four side surfaces10SS (10SS1,10SS2,10SS3and10SS4). The optical device11includes a glass plate11A and a resin lens11B. The optical device12includes a glass plate12A and resin lenses12B and12C. The optical device13includes a glass plate13A, an aperture13B and a resin lens13C. The optical device14includes a glass plate14A and a resin lens14B. The optical device15is an infrared cut filter device having a function of blocking infrared light. In other words, the optical devices11to14are hybrid lens devices in which resin lenses which are formed with a resin and which have aspheric surfaces are disposed on the glass plates. A configuration of the optical member10is designed in accordance with specifications of the image pickup apparatus. The image pickup member40includes an image pickup device30having a light receiving surface30SA and a back surface30SB facing the light receiving surface30SA. A cover glass20which protects the light receiving surface30SA is bonded on the light receiving surface30SA using a transparent resin (not illustrated). 
An upper surface20SA of the cover glass20faces the light receiving surface30SA and the back surface30SB of the image pickup device30, and the upper surface20SA, the light receiving surface30SA and the back surface30SB have the same external sizes in the optical axis orthogonal direction as external sizes of the entrance surface10SA and the exit surface10SB. Note that the image pickup member40may be the image pickup device30to which the cover glass20is not bonded. The image pickup device30includes a light receiving member31formed with a CCD, or the like, on the light receiving surface30SA, and includes an external electrode32connected to the light receiving member31on the back surface30SB facing the light receiving surface30SA. The image pickup device30may be either a surface irradiation type image sensor or a backside irradiation type image sensor. The resin50is disposed between the exit surface10SB of the optical member10and the upper surface20SA of the cover glass20of the image pickup member40. The resin50is a transparent ultraviolet curable resin which fills an optical path between the optical member10and the image pickup member40. The uncured resin50in a liquid state is disposed between the optical member10and the image pickup member40. Then, the resin50is subjected to curing processing by irradiation of ultraviolet light in a state where a thickness is adjusted. In other words, a thickness D50of the resin50is adjusted so that an object image, light of which is focused by the optical member10, is formed on the light receiving surface30SA. As will be described later, the optical member10is a wafer level optical portion manufactured by cutting a stacked wafer10W (seeFIG.5) in which a plurality of optical wafers respectively including a plurality of optical devices are stacked and bonded using an adhesive agent, and thus, the optical member10can be easily manufactured. Note that the four side surfaces10SS of the optical member10which is a wafer level optical member are cut surfaces. As already described above, the optical member10which is a wafer level optical member has a focal length different for each stacked wafer. However, in the image pickup apparatus1which is manufactured using a manufacturing method which will be described later, a position of the image-forming plane of the optical member10matches a position of the light receiving surface30SA of the image pickup member40, so that high performance can be achieved, and a favorable image can be obtained. <Manufacturing Method> A manufacturing method of the image pickup apparatus1will be described next along a flowchart illustrated inFIG.4. <Step S10> Manufacturing Process As illustrated inFIG.5, a stacked optical wafer10W is manufactured by optical wafers11W to15W being stacked and bonded. Note that inFIG.5and the like, lines indicated with reference numeral CL are cut lines in a separation process S60. In the optical wafer11W, a lens layer11BW including a plurality of resin lenses11B is disposed on a glass wafer11AW. In the optical wafer12W, a lens layer12BW including a plurality of resin lenses12B and a lens layer12CW including a plurality of resin lenses12C are disposed on a glass wafer12AW. In the optical wafer13W, an aperture layer13BW including a plurality of apertures13B and a lens layer13CW including a plurality of resin lenses13C are disposed on a glass wafer13AW. In the optical wafer14W, a lens layer14BW including a plurality of resin lenses14B is disposed on a glass wafer14AW. 
The lens layer11BW, the lens layer12BW, the lens layer12CW, the lens layer13CW and the lens layer14BW are disposed by, for example, applying a transparent resin for a lens, pressing a mold having a predetermined shape against the transparent resin for the lens, and curing the transparent resin for the lens by irradiating the transparent resin with ultraviolet light (UV). In other words, the resin lens is formed using a mold. Note that in place of the lens layer11BW connected to the plurality of resin lenses11B, for example, a plurality of separated resin lenses11B may be disposed on the glass wafer11AW. The optical wafer15W is an infrared cut filter wafer. Although not illustrated, alignment marks are respectively disposed at outer peripheral portions of the optical wafers11W to15W, and the optical wafers11W to15W are stacked in a state where the optical wafers11W to15W are positioned on the basis of the alignment marks. Further, the stacked optical wafer10W is manufactured by the optical wafers11W to15W being bonded using an ultraviolet curable adhesive which is disposed in advance and which is not illustrated. Meanwhile, a plurality of light receiving members31are formed on the light receiving surface30SA of the semiconductor wafer using a publicly-known semiconductor manufacturing technology. An image pickup wafer40W is manufactured by the cover glass wafer20W being bonded to the light receiving surface30SA (seeFIG.7). A transparent resin wafer may be used in place of the cover glass wafer20W. In other words, a member which protects the light receiving surface30SA is not limited to the cover glass wafer20W (cover glass20), but may be a resin plate as long as the resin plate is transparent. <Step S20> Measurement Process As illustrated inFIG.6, a position of an image-forming plane FP is measured, the image-forming plane FP being a plane on which an object image, light of which is focused by the optical member10included in the stacked optical wafer10W, is formed. Note that as illustrated inFIG.7, in the manufacturing method of the image pickup apparatus1, the cover glass wafer20W is disposed between the exit surface10SB of the stacked optical wafer10W and the light receiving surface30SA of the semiconductor wafer30W, and further, the resin50W is disposed. Thus, the method preferably further includes a correction process of correcting the measured position of the image-forming plane FP in view of a refractive index, or the like, as well as thicknesses of the cover glass wafer20W and the resin50W. More strictly, a transparent resin (not illustrated) which bonds the cover glass20on the light receiving surface30SA is also disposed between the exit surface10SB and the light receiving surface30SA, and thus, correction may be performed in the correction process in view of a thickness, a refractive index, or the like, of the transparent resin (not illustrated). For example, the position of the image-forming plane FP, that is, a length L from the exit surface10SB to the image-forming plane FP is actually measured by a measurement light being incident from the entrance surface10SA of the stacked optical wafer10W. The plurality of optical members10included in one stacked optical wafer10W have substantially the same length L from the exit surface10SB to the image-forming plane FP. It is therefore only necessary to perform measurement for one optical member10among the plurality of optical members10. 
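One plausible form of the correction mentioned above is the usual paraxial parallel-plate rule, under which a layer of thickness t and refractive index n takes up only t/n of the in-air image distance. The sketch below applies that rule to turn the measured length L and the fixed layers in the path (cover glass and bonding resin) into an estimate of the resin thickness used later in the interval adjustment; both the rule and the example numbers are assumptions for illustration, not values from the embodiment.

```python
def resin_thickness(length_L_mm, fixed_layers, n_resin):
    """Paraxial estimate of the resin 50 thickness D50 that places the
    image-forming plane FP on the light receiving surface 30SA.

    length_L_mm  : length L from the exit surface 10SB to FP, measured in air.
    fixed_layers : [(thickness_mm, refractive_index), ...] for the layers already
                   in the optical path (cover glass 20, bonding resin, ...).
    n_resin      : refractive index of the adjustable resin 50.

    Uses the parallel-plate approximation (a plate of thickness t and index n
    consumes only t/n of the in-air image distance), which is one way to read
    the correction "in view of a refractive index, or the like" described above.
    """
    remaining_air_path = length_L_mm - sum(t / n for t, n in fixed_layers)
    if remaining_air_path <= 0:
        raise ValueError("fixed layers already exceed the measured image distance")
    return n_resin * remaining_air_path

# Illustrative numbers only: L = 1.20 mm in air, a 0.30 mm cover glass of index 1.52,
# and a resin of index 1.50 give D50 = 1.50 * (1.20 - 0.30 / 1.52), roughly 1.50 mm.
```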
<Step S30> Stacking Process As illustrated inFIG.7, the stacked optical wafer10W and the image pickup wafer40W are disposed at an interval, and the resin50W is disposed between the stacked optical wafer10W and the image pickup wafer40W. The resin50W which is a transparent ultraviolet curable resin such as a silicone resin and an epoxy resin and which is not cured, is in a liquid state. Thus, the thickness of the resin50W is variable. The stacked optical wafer10W and the image pickup wafer40W may be stacked in a state where the resin50W having a predetermined thickness is disposed at at least one of the stacked optical wafer10W or the image pickup wafer40W, or the resin50W may be injected between the stacked optical wafer10W and the image pickup wafer40W after the stacked optical wafer10W and the image pickup wafer40W are disposed at a predetermined interval. <Step S40> Interval Adjustment Process The interval between the optical member10and the image pickup member40, that is, the thickness D50of the resin50is adjusted so that the position of the image-forming plane FP measured in the measurement process S20becomes the position of the light receiving surface30SA. The thickness D50of the resin50is adjusted so that a total value of the thickness D50of the resin50and the thickness of the cover glass20becomes the length L. Note that it goes without saying that in a case where the position of the image-forming plane FP measured in step S20is corrected, the interval is adjusted on the basis of the corrected position of the image-forming plane FP. <Step S50> Curing Process The stacked optical wafer10W and the image pickup wafer40W are fixed in a state of the interval adjusted in the interval adjustment process S40by the resin50W being irradiated with ultraviolet light. Note that in a case where the resin50W is an ultraviolet curable and thermoset resin, the resin50W is further subjected to heat processing after ultraviolet irradiation. Note that it is necessary to maintain relative positions of the stacked optical wafer10W and the image pickup wafer40W, that is, an interval, until curing of the resin50W is completed. Thus, as the resin50W, an ultraviolet curable resin or an ultraviolet curable and thermoset resin which is cured in a short time period is more preferable than a thermoset resin which requires time for curing. In a case where the resin50W shrinks through curing processing, the thickness of the resin50W is preferably set so that the thickness after the curing processing becomes the interval between the optical member10and the image pickup member40in view of a shrinkage ratio of the resin50W. <Step S60> Separation Process As illustrated inFIG.8, the bonded wafer10W is separated into a plurality of image pickup apparatuses1by being cut along cut lines CL. The separation process may be, for example, a cut process through laser dicing or a process of forming cutting grooves through sandblasting or etching. According to the manufacturing method of the image pickup apparatus of the present embodiment, the optical member10is a wafer level optical member having the resin lens11B, or the like, so that the optical member10is inexpensive and can be easily manufactured. Further, the position of the image-forming plane on which an object image, light of which is focused by the optical member10, is formed matches the light receiving surface30SA of the image pickup member40, so that high performance can be achieved. 
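Returning to the shrinkage remark in the curing process above, a simple linear-shrinkage model gives the required pre-cure thickness directly. The model and the numbers in the comment are assumptions for illustration; the text only says the thickness is set in view of the shrinkage ratio of the resin50W.

```python
def uncured_resin_thickness(target_cured_mm, shrinkage_ratio):
    """Thickness at which the liquid resin 50W should be set before curing so that,
    after shrinking by `shrinkage_ratio` (e.g. 0.03 for 3 percent), the cured
    thickness equals the interval required between the optical member 10 and the
    image pickup member 40. Assumes the shrinkage acts linearly on the thickness."""
    if not 0.0 <= shrinkage_ratio < 1.0:
        raise ValueError("shrinkage ratio must be in [0, 1)")
    return target_cured_mm / (1.0 - shrinkage_ratio)

# Example with illustrative values: a 0.90 mm cured gap and 3 percent shrinkage
# call for roughly 0.93 mm of uncured resin (0.90 / 0.97 is about 0.928).
```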
Further, the manufacturing method of the present embodiment includes the manufacturing process S10, the measurement process S20, the stacking process S30, the interval adjustment process S40and the curing process S50which are performed in a state of the stacked optical wafer10W in which the optical wafers including the plurality of optical members10are stacked, and the image pickup wafer40W including the plurality of image pickup members40, and further includes the separation process S60of cutting the bonded wafer10W into a plurality of image pickup apparatuses1after the curing process S50. As already described above, the plurality of optical members10included in one stacked optical wafer10W have substantially the same length L from the exit surface10SB to the image-forming plane FP. Thus, even if the measurement process S20, the stacking process S30, the interval adjustment process S40and the curing process S50are performed in a wafer state, a high-performance image pickup apparatus1can be obtained. The manufacturing method of the image pickup apparatus of the present embodiment can collectively manufacture a plurality of image pickup apparatuses1, so that it is possible to achieve high manufacturing efficiency and provide an inexpensive image pickup apparatus. Modifications of First Embodiment Image pickup apparatuses1A and1B of modifications 1 and 2 of the first embodiment, and modifications 1 and 2 of manufacturing methods of the image pickup apparatuses1A and1B will be described next. The image pickup apparatuses1A and1B and the manufacturing methods of the image pickup apparatuses1A and1B are similar to the image pickup apparatus1and the manufacturing method of the image pickup apparatus1and have the same functions, and thus, the same reference numerals will be assigned to components having the same functions and description will be omitted. Modification 1 of First Embodiment In the manufacturing method of the image pickup apparatus1A of the present modification, a plurality of image pickup members40each including the cover glass20and the image pickup device30are manufactured in the manufacturing process S10. Meanwhile, the stacked optical wafer10W in which a plurality of optical wafers11W to15W are stacked is manufactured. The measurement process S20is performed on the stacked optical wafer10W. Then, as illustrated inFIG.9, the interval adjustment process S40and the curing process S50are performed on each of the plurality of image pickup members40and the stacked optical wafer10W. Note that, for example, a plurality of separated resin lenses11B are disposed on the glass wafer11AW in the optical wafer11W. Alignment marks AM are disposed among the plurality of resin lenses11B to manufacture a stacked optical wafer. In a case where the image pickup wafer40W is fixed on the stacked optical wafer10W using the resin50W, a defective image pickup apparatus is manufactured in a case where the image pickup wafer40W includes a defective image pickup device30. The manufacturing method of the image pickup apparatus1A has the effects of the manufacturing method of the image pickup apparatus1, and further has an effect of a higher yield ratio than a yield ratio of the manufacturing method of the image pickup apparatus1because the image pickup apparatus1A is manufactured using only the image pickup member40evaluated as a non-defective item. 
Further, the manufacturing method of the image pickup apparatus1A enables manufacturing of an image pickup apparatus using commercially available image pickup members40which are separated into pieces or using image pickup members40with different specifications. Modification 2 of First Embodiment With the manufacturing method of the image pickup apparatus1B of the present modification, the optical members10which are separated into pieces and the image pickup members40which are separated into pieces are manufactured in the manufacturing process S10. Note that in a case where a plurality of optical members10which are separated into pieces from one stacked optical wafer10W are used, the measurement process S20may be performed on one of the plurality of optical members10. As illustrated inFIG.10, the interval adjustment process S40and the curing process S50are performed on one optical member10and one image pickup member40. For example, in a state where an interval between the optical member10and the image pickup member40is adjusted to a predetermined interval L, the resin50is injected into a gap and curing processing is performed. The manufacturing method of the image pickup apparatus1B has the effects of the manufacturing method of the image pickup apparatus1, and is further, particularly appropriate for a manufacturing method of an image pickup apparatus for endoscope which is a product of large item small scale production. Second Embodiment An image pickup apparatus1C and a manufacturing method of the image pickup apparatus1C of a second embodiment will be described next. The image pickup apparatus1C and the manufacturing method of the image pickup apparatus1C are similar to the image pickup apparatus1and the manufacturing method of the image pickup apparatus1and have the same functions, and thus, the same reference numerals will be assigned to components having the same functions, and description will be omitted. As illustrated inFIG.11, the image pickup apparatus1C further includes a spacer51which roughly adjusts an optical path length, between the optical member10and the image pickup member40. The spacer51is, for example, a glass plate having a thickness D51. The optical member10is fixed to the spacer51by a transparent ultraviolet curable resin52A disposed between the optical member10and the spacer51. Meanwhile, the image pickup member40is fixed to the spacer51by a transparent ultraviolet curable resin52B disposed between the image pickup member40and the spacer51. In a case where a length (interval) L between the optical member10and the image pickup member40is long, it is not easy to fill space only with the resin, and there is a case where distortion may occur, and optical characteristics may degrade. As a result of the image pickup apparatus1C including the spacer51, thicknesses of the resins52A and52B become thin. Thus, the image pickup apparatus1C has the effects of the image pickup apparatus1, and further, can be easily manufactured and has favorable optical characteristics. Note that an optical path length D51of the spacer51is preferably set slightly smaller than the length L on the basis of the position of the image-forming plane FP measured in the measurement process S20, that is, the length L, because such setting makes the thickness D52A of the resin52A and the thickness D52B of the resin52B thinner, facilitates manufacturing and leads to favorable optical characteristics. 
Further, the optical path length D51may be finely adjusted by the uncured resin52B disposed between the image pickup member40and the spacer51after the optical member10is fixed to the spacer51, or may be finely adjusted by the uncured resin52A disposed between the optical member10and the spacer51after the image pickup member40is fixed to the spacer51. Modifications of Second Embodiment Image pickup apparatuses1D and1E and manufacturing methods of the image pickup apparatuses1D and1E of modifications of the second embodiment will be described next. The image pickup apparatuses1D and1E and the manufacturing methods of the image pickup apparatuses1D and1E are similar to the image pickup apparatus1and the manufacturing method of the image pickup apparatus1, and the like, and have the same functions, and thus, the same reference numerals will be assigned to components having the same functions, and description will be omitted. Modification 1 of Second Embodiment In the manufacturing method of the image pickup apparatus1D of modification 1 of the second embodiment illustrated inFIG.12, an interval between the optical member10D and the image pickup member40is adjusted by a resin spacer50D1and a resin50D2in the interval adjustment process S40. In a similar manner to the optical member10, a plurality of hybrid optical devices are stacked in the optical member10D. The resin spacer50D1is manufactured in a similar manner to the resin lens11B, or the like, by applying a transparent resin, pressing a mold having a predetermined shape against the transparent resin, and curing the transparent resin by irradiating the transparent resin with ultraviolet (UV) light. In other words, the resin spacer50D1is molded. A thickness D50D1of the resin spacer50D1is set so that an optical path length including the resin spacer50D1becomes slightly smaller than the length L on the basis of the position of the image-forming plane FP measured in the measurement process S20, that is, the length L. In the image pickup apparatus1D, after the resin spacer50D1is disposed at the optical member10D, an interval between the resin spacer50D1and the image pickup member40is finely adjusted to be the length L, and an uncured resin50D2is injected to a gap. The interval between the optical member10D and the image pickup member40is fixed through curing processing of the resin50D2. In other words, the length of the gap becomes the thickness D50D2of the resin50D2. Modification 2 of Second Embodiment In the manufacturing method of the image pickup apparatus1E of modification 2 of the second embodiment illustrated inFIG.13, the interval between the optical member10E and the image pickup member40is adjusted by a resin spacer50E1and a resin50E2in the interval adjustment process S40. In a similar manner to the optical member10, a plurality of hybrid optical devices are stacked in the optical member10E. The resin spacer50E1is molded in a similar manner to the resin spacer50D1. After the resin spacer50E1having a thickness D50E1is disposed at the image pickup member40, an interval between the optical member10E and the resin spacer50E1is finely adjusted. Then, the interval between the optical member10E and the image pickup member40is fixed through curing processing of the resin50E2injected to the gap. In other words, a length of the gap becomes the thickness D50E2of the resin50E2. 
Note that in place of the resin spacers50D1and50E1, a resin spacer in which an optical path is space, which is formed with a transparent resin or an opaque resin, and which is molded may be used. Third Embodiment An image pickup apparatus1F and a manufacturing method of the image pickup apparatus1F of a third embodiment will be described next. The image pickup apparatus1F and the manufacturing method of the image pickup apparatus1F are similar to the image pickup apparatuses1,1A to1E and the manufacturing methods of the image pickup apparatuses1,1A to1E and have the same functions, and thus, the same reference numerals will be assigned to components having the same functions and description will be omitted. As illustrated inFIG.14andFIG.15, an optical member10F of the image pickup apparatus1F includes a first optical member10A and a second optical member10B. The first optical member10A includes a resin lens11F, a glass plate12F, a resin lens13F and a glass plate14F. The second optical member10B includes a resin spacer16F, a resin lens17F and a glass plate18F. The resin lens11F, the resin lens13F, the resin spacer16F and the resin lens17F are manufactured by cutting a molded resin wafer. The first optical member10A is bonded to the second optical member10B with a resin15F. A size of an entrance surface10SA orthogonal to the optical axis O of the first optical member10A having a rectangular parallelepiped shape is greater than a size of an exit surface10SB orthogonal to the optical axis O of the second optical member10B having a rectangular parallelepiped shape, and thus, the optical member10F is a bright optical system which is capable of focusing more light on an object image. A frame-like spacer53formed with, for example, silicon is disposed at the cover glass20of the image pickup member40. The optical member10F including the second optical member10B inserted into the spacer53can move in an optical axis direction. In the manufacturing method of the image pickup apparatus1F, an interval between the optical member10F and the image pickup member40is adjusted by the optical member10F inserted into the spacer53moving in the optical axis direction in the interval adjustment process S40. Then, the interval between the optical member10F and the image pickup member40is fixed by the resin54being subjected to curing processing. The interval can be adjusted in a state where an optical axis of the optical member10F matches an optical axis of the image pickup member40, so that the image pickup apparatus1F can be easily manufactured. Note that a frame-like spacer may be disposed at the optical member10F, and the interval between the optical member10F and the image pickup member40may be adjusted by the image pickup member40inserted into the spacer moving in the optical axis direction. In the image pickup apparatus1F, the resin54which fixes the optical member10F and the image pickup member40with the spacer53provided between the optical member10F and the image pickup member40does not block an optical path, and thus, the resin54does not have to be a transparent resin, but is preferably an ultraviolet curable resin or an ultraviolet curable and thermoset resin which can be cured in a short time period. In a case where the spacer53is formed with a light blocking material, the resin54is preferably an ultraviolet curable and thermoset resin. 
Further, it goes without saying that also in the manufacturing methods of the image pickup apparatuses1C and1F, the interval adjustment process S40, and the like, are performed in a state of the stacked optical wafer10W and the image pickup wafer40W in a similar manner to the manufacturing methods of the image pickup apparatuses1and1A. Further, it goes without saying that endoscopes9A to9F including the image pickup apparatuses1A to1F have the effects of the endoscope9, and further respectively have the effects of the image pickup apparatuses1A to1F. The present invention is not limited to the above-described embodiments and the like, and various changes, combinations and applications are possible within a range not deviating from the gist of the invention.
28,208
11943523
DETAILED DESCRIPTION Hereinafter, example embodiments of the present inventive concepts will be described as follows with reference to the accompanying drawings. It will be understood that the same reference numerals are assigned to the same or similar constituent elements throughout the specification. In some example embodiments, when a certain part with a layer, film, region, plate, etc. is said to be “on” another part, the part may be “above” or “below” the other part. In some example embodiments, when a certain part with a layer, film, region, plate, etc. is said to be “on” another part, the part may be “indirectly on” or “directly on” the other part. When a certain part is said to be “indirectly on” another part, an interposing structure and/or space may be present between the certain part and the other part such that the certain part and the other part are isolated from direct contact with each other. Conversely, when a certain part is said to be “directly on” another part, it means that there is no other part between the certain part and the other part such that the certain part is in direct contact with the other part. It will be understood that elements and/or properties thereof (e.g., structures, surfaces, directions, or the like), which may be referred to as being “perpendicular,” “parallel,” “coplanar,” or the like with regard to other elements and/or properties thereof (e.g., structures, surfaces, directions, or the like) may be “perpendicular,” “parallel,” “coplanar,” or the like or may be “substantially perpendicular,” “substantially parallel,” “substantially coplanar,” respectively, with regard to the other elements and/or properties thereof. Elements and/or properties thereof (e.g., structures, surfaces, directions, or the like) that are “substantially perpendicular” with regard to other elements and/or properties thereof will be understood to be “perpendicular” with regard to the other elements and/or properties thereof within manufacturing tolerances and/or material tolerances and/or have a deviation in magnitude and/or angle from “perpendicular,” or the like with regard to the other elements and/or properties thereof that is equal to or less than 10% (e.g., a. tolerance of ±10%). Elements and/or properties thereof (e.g., structures, surfaces, directions, or the like) that are “substantially parallel” with regard to other elements and/or properties thereof will be understood to be “parallel” with regard to the other elements and/or properties thereof within manufacturing tolerances and/or material tolerances and/or have a deviation in magnitude and/or angle from “parallel,” or the like with regard to the other elements and/or properties thereof that is equal to or less than 10% (e.g., a. tolerance of ±10%). Elements and/or properties thereof (e.g., structures, surfaces, directions, or the like) that are “substantially coplanar” with regard to other elements and/or properties thereof will be understood to be “coplanar” with regard to the other elements and/or properties thereof within manufacturing tolerances and/or material tolerances and/or have a deviation in magnitude and/or angle from “coplanar,” or the like with regard to the other elements and/or properties thereof that is equal to or less than 10% (e.g., a. tolerance of ±10%). 
It will be understood that elements and/or properties thereof may be recited herein as being “the same” or “equal” as other elements, and it will be further understood that elements and/or properties thereof recited herein as being “the same” as or “equal” to other elements may be “the same” as or “equal” to or “substantially the same” as or “substantially equal” to the other elements and/or properties thereof. Elements and/or properties thereof that are “substantially the same” as or “substantially equal” to other elements and/or properties thereof will be understood to include elements and/or properties thereof that are the same as or equal to the other elements and/or properties thereof within manufacturing tolerances and/or material tolerances. Elements and/or properties thereof that are the same or substantially the same as other elements and/or properties thereof may be structurally the same or substantially the same, functionally the same or substantially the same, and/or compositionally the same or substantially the same. It will be understood that elements and/or properties thereof described herein as being the “substantially” the same encompasses elements and/or properties thereof that have a relative difference in magnitude that is equal to or less than 10%. Further, regardless of whether elements and/or properties thereof are modified as “substantially,” it will be understood that these elements and/or properties thereof should be construed as including a manufacturing or operational tolerance (e.g., ±10%) around the stated elements and/or properties thereof. When the terms “about” or “substantially” are used in this specification in connection with a numerical value, it is intended that the associated numerical value include a tolerance of ±10% around the stated numerical value. When ranges are specified, the range includes all values therebetween such as increments of 0.1%. FIG.1is an exploded perspective diagram illustrating an image sensor module1000A according to some example embodiments. It will be understood that “module” and “device” may be used interchangeably herein, and therefore an image sensor module according to any of the example embodiments may be interchangeably referred to as an image sensor device. Referring toFIG.1, the image sensor module1000A may include a first module100A (e.g., first device) and a second module200(e.g., second device, or “optical module,” also referred to herein interchangeably as an “optical device”) mounted on an upper portion of the first module100A. The first module100A may include a substrate110, an image sensor120, first and second stiffeners130and140, and an optical filter150. The substrate110may include a flexible substrate, a rigid substrate, and/or a flexible-rigid substrate. For example, the substrate110may be configured as a flexible-rigid substrate including a first rigid substrate110a, a second rigid substrate110c, and a first flexible substrate110bconnecting the first rigid substrate110ato the second rigid substrate110c. The image sensor120and a plurality of passive devices114may be mounted on or in the first rigid substrate110a. The plurality of passive devices114may include passive devices such as, for example, resistors, capacitors, diodes, transistors, relays, and electrically erasable programmable read only memories (EEPROM). A connector CNT connected to another electronic device may be disposed on the second rigid substrate110c. 
The first flexible substrate110bmay electrically connect the first and second rigid substrates110aand110cto each other. The image sensor120may be electrically connected to another electronic device through the connector CNT. The image sensor120may be mounted on (e.g., directly or indirectly on) an upper surface110U of the substrate110or on an upper surface of the second stiffener140. For example, as illustrated inFIG.1, the substrate110may have a cavity110H (e.g., the substrate110may include one or more inner surfaces110S at least partially defining a cavity110H extending at least partially through the substrate110in the Z-axis direction) in which the image sensor120is at least partially accommodated (e.g., located), and the image sensor120disposed (e.g., located) in the cavity110H may be mounted on an upper surface140U of the second stiffener140. In some example embodiments, the image sensor120may be mounted on the upper surface of the substrate110(seeFIG.10). The image sensor120may include a sensor region121having a pixel array Px and a logic region122disposed on a lower surface of the sensor region121. For example, the image sensor120may include a complementary metal oxide semiconductor (CMOS). The image sensor120may further include a storage region123disposed on a lower surface of the logic region122. The storage region123may store image data obtained by the sensor region121and processed by the logic region122. For example, the storage region123may include a volatile memory device such as a dynamic RAM (DRAM) or a static RAM (SRAM), or a nonvolatile memory device such as a phase change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), a flash memory, and the like. The first and second stiffeners130and140may include a ceramic material or a metal material which may control warpage of the substrate110and may withstand an external impact. For example, the first and second stiffeners130and140may include iron (Fe) or a metal alloy (e.g., stainless steel) including iron (Fe). The first stiffener130may be mounted on the upper surface of the substrate110and may occupy 40% or more of a plane area (e.g., cross-sectional area and/or surface area in a plane extending in the X-axis direction and the Y-axis direction) of the substrate110, 50% to 80% of the plane area, for example. In the case of the large-area image sensor120, a lens and a substrate110having an area corresponding thereto may be required. The first stiffener130may be mounted on a spare area of the substrate110other than a mounting area of the passive device114and the optical filter150. The first stiffener130may relieve stress acting on the image sensor120along with the second stiffener140disposed on (e.g., below) the lower surface110L of the substrate110(e.g., attached to the lower surface110L of the substrate110). The image sensor120requiring a large-area substrate110on which the first stiffener130may be mounted may have an optical format of 1/1.33 inch or more. For example, the image sensor120may have an optical format ranging from 1/1.33 inch to 1/1 inch. The optical format may be defined as a value obtained by dividing a diagonal length (in mm) of the sensor image region by 16. For example, the optical format of the image sensor120having a pixel size of 0.8 μm and a number of pixels of 108 MP may be calculated as 1/1.33 inch. For example, the optical format of the image sensor120having a pixel size of 1.2 μm and a number of pixels of 50 MP may be calculated as 1/1.31 inch. 
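The optical-format rule stated above (diagonal length of the sensor image region in mm divided by 16) is easy to check numerically. The short sketch below assumes a 4:3 pixel array, which is an assumption since the aspect ratio is not stated, and with it reproduces both worked examples.

```python
import math

def optical_format_inches(pixel_um, megapixels, aspect=(4, 3)):
    """Optical format = diagonal of the pixel array (in mm) / 16."""
    ax, ay = aspect
    n_pixels = megapixels * 1e6
    height_px = math.sqrt(n_pixels * ay / ax)
    width_px = n_pixels / height_px
    diagonal_mm = math.hypot(width_px, height_px) * pixel_um * 1e-3
    return diagonal_mm / 16.0

# optical_format_inches(0.8, 108) -> ~0.750 inch, i.e. 1/1.33 inch
# optical_format_inches(1.2, 50)  -> ~0.765 inch, i.e. 1/1.31 inch
```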
The optical filter150may be disposed in an upper portion of the image sensor120and may be supported by the first stiffener130(or a “first support member”131described later). The optical filter150may be fixed to the first stiffener130(or a “first support member”131described later) by an adhesive. The optical filter150may filter infrared or near-infrared rays and may thus improve image quality of the image sensor120. For example, the optical filter150may include an IR filter. The second module200(or “optical module”) may be disposed in a path of light incident to the optical filter150and the image sensor120such that light incident in one direction (e.g., in the Z-axis direction) may be incident to the image sensor120. Restated, and as shown in at leastFIG.1, the second module200may overlap the image sensor120and the optical filter150in the vertical direction that extends perpendicular to the upper surface110U of the substrate110(e.g., Z-axis direction), such that the second module200is configured to direct incident light through the second module200and further to the image sensor120through the optical filter150. The second module200may include a lens assembly210and a lens housing220. The lens assembly210may be on (e.g., directly or indirectly on, vertically overlapping, etc.) the optical filter150and may include at least one or more lenses. For example, the lens assembly210may include a plurality of lenses arranged in a vertical direction (Z-axis direction). The lens housing220may be on (e.g., directly or indirectly on, vertically overlapping, etc.) the substrate110and/or the second support member132and may be configured to accommodate and support (e.g., structurally support the structural load, or weight, of) the lens assembly210. The lens housing220may include a holder unit supporting the lens assembly210and a driving unit driving the lens assembly210in an optical axis direction (e.g., the Z axis direction). FIG.2is a plan diagram illustrating a portion of elements of a first module in the image sensor module1000A illustrated inFIG.1.FIGS.3A and3Bare cross-sectional diagrams along lines IIIA-IIIA′ and IIIB-IIIB′ inFIG.2, respectively.FIGS.3C and3Dare enlarged diagrams illustrating region “C” illustrated inFIG.3A.FIG.2does not illustrate the first flexible substrate110b, the second rigid substrate110c, and the optical filter150illustrated inFIG.1, and illustrates a bonding relationship among the substrate110, the image sensor120, and the first stiffener130on a plane. Referring toFIGS.2,3A and3B, a first module100Aa in an example may include a substrate110, an image sensor120, a first stiffener130(or “upper stiffener”), a second stiffener140(or “lower stiffener”), and an optical filter150. As shown in at leastFIG.1, substrate110may include an upper surface110U on which first and second pads110P1and110P2spaced apart from each other (e.g., isolated from direct contact with each other) are disposed (e.g., the first and second pads110P1and110P2may be directly or indirectly on the upper surface110U), a lower surface110L opposite to the upper surface, and a cavity110H accommodating the image sensor120. Restated, the substrate110may include, in addition to the upper surface110U and the lower surface110L opposite to each other, one or more inner surfaces110S that at least partially define a cavity110H extending through the substrate110in a vertical direction extending perpendicular to the upper surface110U (e.g., the Z-axis direction). 
In an example, the cavity110H may extend from the upper surface110U to the lower surface110L of the substrate110and may penetrate (e.g., extend through an entire thickness in the vertical direction Z of) the substrate110. One or more passive devices114may be disposed between the first stiffener130and an outer edge of the substrate110and may be on (e.g., directly or indirectly on) the upper surface of the substrate110. As illustrated inFIG.2, the passive device114may be disposed between the first stiffener130or the second support member132and the edge of the substrate110on a plane. As shown, the first and second pads110P1and110P2may be isolated from direct contact with each other in a horizontal direction extending parallel to the upper surface110U of the substrate110(e.g., the X-axis direction and/or the Y-axis direction). Referring toFIGS.3C and3D, the substrate110may have a multilayer structure including a plurality of insulating layers111and one or a plurality of wiring layers112(e.g., at least one wiring layer). The plurality of insulating layers111may include a thermosetting resin such as an epoxy resin, a thermoplastic resin such as a polyimide resin, a resin in which an inorganic filler and/or a glass fiber (or a glass cloth or a glass fabric) is impregnated in the thermosetting resin or the thermoplastic resin, such as prepreg, Ajinomoto build-up film (ABF), FR-4, bismaleimide triazine (BT), or the like. The one or plurality of wiring layers112(e.g., the at least one wiring layer) may include a signal pattern, a power pattern, and a ground pattern. As shown inFIGS.3C-3D, the first pad110P1of the substrate110may be electrically connected to at least one of a signal pattern, a power pattern, or a ground pattern of the one or plurality of wiring layers112(e.g., the ground pattern in the uppermost wiring layer112inFIG.3C), and the second pad110P2may be electrically connected to the ground pattern (e.g., the uppermost wiring layer112shown inFIG.3C). The one or plurality of wiring layers112and the first and second pads110P1and110P2may include a metal material such as copper (Cu), aluminum (Al), silver (Ag), tin (Sn), gold (Au), nickel (Ni), lead (Pb), titanium (Ti), or alloys thereof. The one or plurality of wiring layers112and the first and second pads110P1and110P2disposed on different levels may be electrically connected to each other through a conductor via penetrating the insulating layer111. As described herein, a "level" of an element may refer to a vertical distance of the element from the upper surface110U of the substrate110in the vertical direction (e.g., the Z-axis direction). The first pad110P1and the second pad110P2may be insulated from each other or may be electrically connected to each other through the wiring layer112of the substrate110. For example, as illustrated inFIG.3C, when the first pad110P1is connected to the first wiring layer112P1including a signal pattern, a power pattern, or a ground pattern, and the second pad110P2is connected to the second wiring layer112P2including a ground pattern, the first pad110P1and the second pad110P2may be insulated from each other. For example, as illustrated inFIG.3D, when the first pad110P1and the second pad110P2are connected to the third wiring layer112P3including the same ground pattern, the first pad110P1and the second pad110P2may be electrically connected to each other. 
The first pad110P1and the second pad110P2may be protected by a solder mask, and may have a solder mask defined (SMD) shape or a non-solder mask defined (NSMD) shape. The image sensor120may be disposed on or in the substrate110. The image sensor120may be understood to be on the substrate110when the image sensor120is on (e.g., directly on) a surface of the substrate110, for example as shown inFIGS.8,12-13, and14A-14B. The image sensor120may be understood to be in the substrate when the image sensor120is located at least partially between the upper and lower surfaces110U and110L of the substrate110in the vertical direction (e.g., Z-axis direction), for example as shown inFIGS.3A-3D. For example, the image sensor120may be accommodated in the cavity110H of the substrate110(e.g., located within the cavity110H and between the upper and lower surfaces110U and110L in the vertical direction, or Z-axis direction) and may be mounted on (e.g., directly or indirectly on) the second stiffener140covering the lower surface110L of the substrate110(e.g., the image sensor120may be directly on the upper surface140U of the second stiffener140). The image sensor120may include a connection pad120P disposed on an upper surface, and the connection pad120P may be electrically connected to the first pad110P1of the substrate110. For example, the connection pad120P may be electrically connected to the first pad110P1by a bonding wire W. Thus, it will be understood that the image sensor120may be electrically connected to the first pad110P1(e.g., via connection pad120P and bonding wire W). In an example, the image sensor120may be a large-sized sensor having an optical format of 1/1.33 inch or more. For example, the image sensor120may have an optical format in a range of 1/1.33 inch to 1 inch. The first stiffener130may be disposed on the upper surface of the substrate110and may have an opening130H corresponding to (e.g., partially, entirely, and/or exactly overlapping in the vertical direction, or Z-axis direction) the cavity110H. Restated, the first stiffener130may include one or more inner surfaces130S that may at least partially define the opening130H extending partially or completely through the first stiffener130(e.g., in the vertical direction Z). The first stiffener130may include a first support member131disposed adjacent to the cavity110H and supporting the optical filter150and a second support member132surrounding an external side of the optical filter150. The first support member131may be disposed between the substrate110and the optical filter150(e.g., in the Z-axis direction) and may support a lower portion of the optical filter150. As shown, optical filter150may be mounted on (e.g., may be directly or indirectly on) the first support member131, such that the first support member131is configured to support a structural load (e.g., weight) of the optical filter150. As shown, the optical filter150may be understood to be “on” (e.g., indirectly on) both the substrate110and the image sensor120, for example such that the optical filter150may at least partially overlap at least the image sensor120and/or the substrate110in the vertical direction (e.g., Z-axis direction). As shown, the optical filter150may be understood to be “on” (e.g., indirectly on) both the upper stiffener130and the image sensor120, for example such that the optical filter150may at least partially overlap at least the image sensor120and/or the upper stiffener130in the vertical direction (e.g., Z-axis direction). 
For example, the optical filter150may completely overlap the image sensor120in the vertical direction and partially overlap the substrate110in the vertical direction. As shown, the optical filter150may cover (e.g., completely overlap in the vertical direction, or Z-axis direction) both the cavity110H and the opening130H. A thickness131hof the first support member131between the substrate110and the optical filter150in the vertical direction (Z-axis direction) may determine a distance between the image sensor120and the optical filter150, and thus, the thickness may be varied in some example embodiments. The thickness131hof the first support member131between the substrate110and the optical filter150in the vertical direction (Z-axis direction) may be greater than a thickness of the image sensor120in the vertical direction. The second support member132may be adjacent to the first support member131on the substrate110and may be further adjacent to an edge of the substrate110than the first support member131(e.g., may be closer to an outer edge110E of the substrate110than the first support member131in a horizontal direction extending parallel to the upper surface110U, such as the X-axis direction and/or the Y-axis direction). The second support member132may surround an edge or a side surface of the optical filter150. For example, the thickness132hof the second support member132in the vertical direction (Z-axis direction) may be greater than the thickness131hof the first support member131in the vertical direction (Z-axis direction), and the upper surface of the second support member132may be disposed on a level the same as or higher than a level of the upper surface of the optical filter150. In an example, the first and second support members131and132may be integrated with each other (e.g., may be separate portions of a single, continuous piece of material). The first support member131may extend in a direction (X-axis and Y-axis directions) horizontal to the upper surface of the substrate110and may have an opening130H exposing the first region110-1of the upper surface of the substrate110and the image sensor120. Restated, the first support member131may have one or more inner surfaces130S that at least partially define the opening130H which may expose the image sensor120and a first region of the upper surface110U in the vertical direction (e.g., Z-axis direction). The first region110-1may be a region of the upper surface of the substrate110more adjacent to (e.g., closer to) the image sensor120than the second support member132, the region which does not overlap the first and second support members131and132in the vertical direction (Z-axis direction). As shown inFIGS.3A-3B, the second support member132may extend from one side of the first support member131in a direction (Z-axis direction) perpendicular to the upper surface of the substrate110(e.g., the vertical direction as described herein). In this case, for example as shown inFIG.3A, the first pad110P1may be disposed on (e.g., directly on) the first region110-1of the upper surface of the substrate110exposed in the vertical direction (Z-axis direction) through the first support member131or the opening130H of the first stiffener130. The second pad110P2may be disposed on (e.g., directly on) the second region110-2of the upper surface of the substrate110which is not exposed in the vertical direction (Z-axis direction) through the first support member131or the opening130H of the first stiffener130. 
The first and second support members131and132may be attached to the second region110-2, and the second region110-2may surround the first region110-1(e.g., in the X-axis direction and/or Y-axis direction) and may overlap the first and second support members131and132in the vertical direction (Z-axis direction). The first pad110P1does not overlap the first and second support members131and132or the first stiffener130in the vertical direction (Z-axis direction), and the second pad110P2may overlap the first and second support members131and132or the first stiffener130in the vertical direction (Z-axis direction). For example, referring toFIG.2, a plane area of the opening130H may be greater than a plane area of the cavity110H in a direction (X-axis and Y-axis directions) horizontal to the upper surface of the substrate110. Restated, the cross-sectional area of the opening130H in a first plane extending in the X-axis direction and the Y-axis direction and thus extending parallel to the upper surface110U of the substrate110may be greater than a cross-sectional area of the cavity110H in a second plane that may be the same or different plane as the first plane and also extending parallel to the upper surface110U of the substrate110in the X-axis direction and the Y-axis direction. The cavity110H may be disposed in the plane area of the opening130H (e.g., may overlap said plane area in the vertical direction), and the first pad110P1may be disposed adjacent to the cavity110H in a direction (X-axis and Y-axis directions) horizontal to the upper surface of the substrate110(e.g., the horizontal direction) and may be exposed through the opening130H in the direction (Z-axis direction) perpendicular to the upper surface of the substrate110(e.g., the vertical direction). The first stiffener130may be electrically connected to the second pad110P2on the upper surface of the substrate110through an electrical connection structure160. The electrical connection structure160may include a plurality of electrical connection members161electrically connecting the first stiffener130to the second pad110P2, and an insulating member162surrounding the electrical connection member161between the first stiffener130and the substrate110. For example, at least one of the first or second support members131or132may be electrically connected to the second pad110P2. The electrical connection member161may be disposed between (e.g., directly between) at least one of the first or second support members131or132(e.g., at least one support member) and the substrate110and may directly contact the second pad110P2, and the insulating member162may surround the electrical connection member161between the at least one of the first or second support members131or132(e.g., the at least one support member) and the substrate110, thereby enabling the at least one support member of the first or second support members131or132to be electrically connected to the second pad110P2. The electrical connection member161may include a metal material such as copper (Cu), aluminum (Al), silver (Ag), tin (Sn), gold (Au), nickel (Ni), lead (Pb), titanium (Ti), or alloys thereof. For example, the electrical connection member161may include tin (Sn) or a low melting point metal alloy including tin (Sn). The insulating member162may include an insulating material such as a non-conductive film. 
The at least one support member (e.g., the at least one of the first or second support members131or132) electrically connected to the second pad110P2may include iron (Fe) or a first metal alloy that includes iron (Fe). The second stiffener140may be attached to the lower surface of the substrate110. As illustrated inFIGS.3C and3D, the image sensor120and the substrate110may be adhered to the upper surface of the second stiffener140by an adhesive member AD. The adhesive member AD may include, for example, an insulating adhesive including an epoxy resin. In an example, a thickness140hof the second stiffener140in the vertical direction (Z-axis direction) may be substantially the same as, or smaller than, a thickness131hof the first support member131between the optical filter150and the substrate110in the vertical direction (Z-axis direction). The optical filter150may be mounted on the first stiffener130. The optical filter150may be aligned with the image sensor120in the optical axis direction (e.g., Z axis direction), and may cover the cavity110H of the substrate110and the opening130H of the first stiffener130. The optical filter150may be fixed to the first stiffener130by an adhesive including an epoxy resin, for example. FIGS.4A,4B,4C, and4Dare cross-sectional diagrams illustrating a process of manufacturing a first module100Aa illustrated inFIG.3A. Referring toFIG.4A, the substrate110and the image sensor120may be attached to the second stiffener140, and the image sensor120and the substrate may be electrically connected to each other. The second stiffener140may be configured as, for example, a metal plate including stainless steel. The substrate110and the image sensor120may be attached to the second stiffener140by an insulating adhesive such as a non-conductive film. The image sensor120may be connected to the substrate110by a wire-bonding process. The connection pad120P of the image sensor120may be connected to the first pad110P1of the substrate110. The substrate110may be configured as a substrate for a semiconductor package including a printed circuit board (PCB), a ceramic substrate, a glass substrate, and a tape wiring board. The substrate110may have a cavity110H accommodating the image sensor120. The cavity110H may be formed by a laser drilling process or an etching process, for example. Referring toFIG.4B, an insulating member162may be disposed on the upper surface of the substrate110. The insulating member162may be disposed on the second pad110P2on the upper surface of the substrate110. The insulating member162may be disposed to expose the first pad110P1. The insulating member162may be attached to and cured on the substrate110through heat treatment. Thereafter, a through-hole162H penetrating the insulating member162and exposing at least a portion of the second pad110P2may be formed. The through-hole162H may be formed using a laser drill. Referring toFIG.4C, an electrical connection member161may be disposed in the through-hole162H inFIG.4B. The electrical connection member161may include, for example, solder. The electrical connection structure160formed on the substrate110may include an electrical connection member161on the second pad110P2and an insulating member162surrounding the electrical connection member161. A portion of the electrical connection member161may be coupled to the electrical connection member161in the through-hole162H while being attached to the lower surface of the first stiffener130attached later. 
Referring toFIG.4D, a first stiffener130may be disposed on the electrical connection structure160. The first stiffener130may include, for example, stainless steel. The first stiffener130may be coupled to the electrical connection structure160by a reflow process. To relieve stress acting on the image sensor120, the first stiffener130may have a plane area of a certain level or more (e.g., 40% or more of the plane area of the substrate) on the substrate110. The insulating member162may be formed in a range corresponding to the plane area of the first stiffener130. FIG.5is a cross-sectional diagram illustrating a modified example of a first module100Ab in an image sensor module1000A according to some example embodiments. Referring toFIG.5, in the modified example, a first stiffener130may include a first support member131and a second support member132separated from each other. For example, the first support member131and the second support member132may be separated from each other (e.g., isolated from direct contact with each other) in a horizontal direction that extends parallel to the upper surface110U of the substrate110(e.g., X-axis direction and/or Y-axis direction), and the second support member132may extend in the horizontal direction (X-axis and Y-axis directions) to surround both an outer side surface131S of the first support member131and an outer side surface150S of the optical filter150in the horizontal direction (e.g., at least partially overlaps the outer side surface131S of the first support member131and the outer side surface150S of the optical filter150in the X-axis direction and/or the Y-axis direction). Differently from the second support member132, the first support member131may include an insulating material. For example, the first support member131may include an insulating material such as an epoxy molding compound (EMC), and the second support member132may include a conductive material such as stainless steel. Alternatively, the first support member131may include a conductive material the same as or different from that of the second support member132. The first support member131may not be electrically connected to the substrate110, and the second support member132may be electrically connected to the substrate110and may be connected to a ground pattern. FIG.6is a cross-sectional diagram illustrating a modified example of a first module100Ac in an image sensor module1000A according to some example embodiments. Referring toFIG.6, in the modified example, an uppermost surface S2of the first stiffener130may be disposed on a level lower than a level of an upper surface S1of the optical filter150. For example, the upper surface S2of the second support member132may be disposed between the lower surface and the upper surface S1of the optical filter150. Since the upper surface S2of the second support member132and the upper surface S1of the optical filter150have a difference h therebetween, a process of attaching the optical filter150may be easily performed. FIGS.7A and7Bare cross-sectional diagrams illustrating a modified example of a first module100Ad-1or100Ad-2in an image sensor module1000A according to some example embodiments. Referring toFIGS.7A and7B, in the modified example, the first stiffener130and the second stiffener140may have different thicknesses. A thickness of the first stiffener130may be determined according to a distance between the image sensor120and the optical filter150. 
The thickness of the first stiffener130may be defined as a thickness of the first support member131between the optical filter150and the substrate110. When the thickness and a region of the first stiffener130are sufficiently secured, sizes of the first module and the image sensor module may be reduced by reducing the thickness of the second stiffener140. For example, as illustrated inFIG.7A, a thickness140h-1of the second stiffener140in the vertical direction (Z-axis direction) may be less than a thickness131h-1of the first support member131, disposed between the optical filter150and the substrate110, in the vertical direction (Z-axis direction). When the thickness of the first stiffener130is not sufficiently secured, by increasing the thickness of the second stiffener140, stress acting on the substrate110and the image sensor120may be reduced. For example, as illustrated inFIG.7B, a thickness140h-2of the second stiffener140in the vertical direction (Z-axis direction) may be greater than a thickness131h-2of the first support member131disposed between the optical filter150and the substrate110in the vertical direction (Z-axis direction). FIG.8is a cross-sectional diagram illustrating a modified example of a first module100Ae in an image sensor module1000A according to some example embodiments. Referring toFIG.8, in the modified example, the cavity110H of the substrate110may have a recess structure concave from an upper surface to a lower surface of the substrate110. For example, the cavity110H may be formed by (e.g., at least partially defined by) an internal bottom surface110HS1of the substrate110disposed between the upper surface and the lower surface of the substrate110and an internal side surface110HS2of the substrate110connecting the upper surface of the substrate110to the internal bottom surface110HS1. Restated, the one or more inner surfaces110S of the substrate110that at least partially define the cavity110H may include an internal bottom surface110HS1of the substrate110that is between the upper surface110U and the lower surface110L, and an internal side surface110HS2of the substrate110that connects the upper surface110U to the internal bottom surface110HS1. The image sensor120may be mounted on (e.g., directly or indirectly on) the internal bottom surface110HS1of the cavity110H. Since a distance to the optical filter150or the lens may be reduced by the thickness of the substrate110remaining in a lower portion of the cavity110H, the image sensor120may secure a distance to the optical filter150by adjusting a thickness131hof the first support member131. In an example, since the image sensor120is mounted on the internal bottom surface110HS1of the cavity110H, the second stiffener140on the bottom surface of the substrate110may not be provided. FIGS.9A and9Bare cross-sectional diagrams illustrating an example of combination of a first module100A and a second module200in an image sensor module1000A (e.g., modules100Af-1or100Af-2) according to some example embodiments. Referring toFIGS.9A and9B, the second module200may be mounted on the substrate110or the first stiffener130(or the second support member132) so as to be on (e.g., directly on, indirectly on, vertically overlapping, etc.) the image sensor120and the optical filter150. 
The second module200may include a lens assembly210including a plurality of lenses aligned in an optical axis direction (Z axis direction, also referred to herein as a vertical direction extending perpendicular to the upper surface110U of the substrate110) and a lens housing220accommodating the lens assembly210. The lens assembly210may be aligned with (e.g., overlapping) the image sensor120and the optical filter150in the optical axis direction (e.g., the Z axis direction). The lens housing220may include a holder unit supporting the lens assembly210and a driving unit driving the lens assembly210in the optical axis direction (e.g., the Z axis direction). The lens housing220may be mounted on the substrate110or the second support member132. For example, as illustrated inFIG.9A, the lens housing220may be mounted on an outer region of the upper surface of the substrate110and may accommodate the optical filter150and the first stiffener130. The lens housing220may be fixed to the substrate110by an adhesive. For example, as illustrated inFIG.9B, the first stiffener130may extend to an edge of the substrate110and may occupy a region larger than the example inFIG.9A, and the lens housing220may be mounted on an uppermost surface (or the upper surface of the second support member132) of the first stiffener130. In this case, by sufficiently securing the area of the first stiffener130, the thickness140hof the second stiffener140on the lower surface of the substrate110may be reduced. Accordingly, the image sensor module may have a reduced size, and warpage acting on the substrate110and stress acting on the image sensor120may be effectively relieved. FIG.10is an exploded perspective diagram illustrating an image sensor module1000B according to some example embodiments. Referring toFIG.10, the image sensor module1000B may include a first module100B and a second module200(or “optical module”) mounted on an upper portion of the first module100B. In an example, differently from the first module100A inFIG.1, in the first module100B, a cavity110H (inFIG.1) may not be formed in the substrate110, and the stiffener140(inFIG.1) disposed in a lower portion of the substrate110may not be provided. The image sensor120may be mounted on the upper surface of the substrate110. In this case, to compensate for the distance between the image sensor120and the optical filter150reduced by the thickness of the substrate110, the thickness of the first stiffener130disposed around the image sensor120may be increased. Accordingly, even though the second stiffener140(inFIG.1) on the lower surface of the substrate110is not provided, warpage and stress acting on the image sensor120may be effectively relieved. Since the elements of the first module100B and the second module200are the same as those described in the aforementioned example embodiments described with reference toFIG.1, detailed descriptions thereof will not be provided. FIG.11is a plan diagram illustrating a portion of elements of a first module100B in the image sensor module1000B illustrated inFIG.10.FIG.12is a cross-sectional diagram taken along line XII-XII′ inFIG.11.FIG.11does not illustrate the first flexible substrate110b, the second rigid substrate110c, and the optical filter150illustrated inFIG.10and illustrates a bonding relationship among the substrate110, the image sensor120, and the first stiffener130on a plane. Referring toFIGS.11and12, a first module100Ba in an example may include a substrate110, an image sensor120, a first stiffener130, and an optical filter150. 
The substrate110may have a plate shape having no cavity formed therein. The image sensor120may be mounted on the upper surface of the substrate110. The first stiffener130may be disposed on the upper surface of the substrate110on which the second pad110P2is disposed. The first stiffener130may surround the image sensor120on the upper surface of the substrate110, and may include a first support member131supporting a lower portion of the optical filter150and a second support member132extending from one side of the first support member131in the vertical direction (Z axis direction) and surrounding at least a portion of a side surface of the optical filter150. A thickness131hof the first stiffener130or the first support member131between the substrate110and the optical filter150may be greater than a thickness of the image sensor120mounted on the substrate110. For example, the thickness131hof the first support member131may be determined in consideration of a thickness of the image sensor120, a distance between the image sensor120and the optical filter150, a height of the bonding wire W, and the like. In this case, the thickness of the first stiffener130may be sufficiently secured, such that warpage and stress acting on the image sensor120may be effectively relieved without the second stiffener140(inFIG.1) on the lower surface of the substrate110. FIG.13is a cross-sectional diagram illustrating a modified example of a first module100Bb in an image sensor module1000B according to some example embodiments. Referring toFIG.13, in the modified example, the first module100Bb may further include a second stiffener140disposed on a lower surface of the substrate110. Even when the image sensor120is mounted on the substrate110, by disposing the second stiffener140on the lower surface of the substrate110, warpage and stress acting on the image sensor120may be reduced. In an example, the thickness131hof the first support member131may be sufficiently secured in consideration of the thickness of the image sensor120, the distance between the image sensor120and the optical filter150, the height of the bonding wire W, and the like. Accordingly, by configuring the thickness140hof the second stiffener140to be smaller than the thickness131hof the first support member131, an increase in the height of the first module100Bb may be reduced. FIGS.14A and14Bare cross-sectional diagrams illustrating an example of combination of a first module100B and a second module200(e.g., module100Bc-1or100Bc-2) in an image sensor module1000B according to some example embodiments. Referring toFIGS.14A and14B, similarly to the example described with reference toFIGS.9A and9B, a second module200may be disposed on a substrate110or a first stiffener130(or a second support member132). For example, as illustrated inFIG.14A, the lens housing220may be mounted on an outer region of the upper surface of the substrate110and may accommodate the optical filter150and the first stiffener130. The lens housing220may be fixed on the substrate110by an adhesive. For example, as illustrated inFIG.14B, the lens housing220may be mounted on an uppermost surface (or an upper surface of the second support member132) of the first stiffener130. In this case, by sufficiently securing an area of the first stiffener130, stress acting on the image sensor120may be effectively relieved without the second stiffener140on the lower surface of the substrate110. 
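For the module100Ba described above, in which the image sensor120sits directly on the upper surface of the substrate110, the statement that the thickness131hof the first support member131is chosen in view of the image sensor thickness, the sensor-to-filter distance, and the bonding wire height can be read as a simple height budget. The Python sketch below expresses one such budget; the function name and the numerical values are hypothetical and are used only for illustration, not as dimensions from this description.

    def first_support_thickness_mm(sensor_mm, wire_loop_mm, filter_gap_mm):
        # One simple budget for 131h: the optical filter resting on the first support
        # member should clear the image sensor and its bonding wire loop by the desired gap.
        return sensor_mm + wire_loop_mm + filter_gap_mm

    # Hypothetical dimensions, for illustration only.
    print(first_support_thickness_mm(sensor_mm=0.6, wire_loop_mm=0.15, filter_gap_mm=0.25))  # 1.0 mm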
FIG.15is a graph illustrating an effect of reduction of warpage and stress by a first stiffener.FIG.15illustrates simulation results obtained by measuring the magnitude of warpage and stress in first experimental example EX1 and second experimental example EX2. In the first experimental example EX1, warpage and stress acting on the image sensor were measured when only the second stiffener140was applied. In the second experimental example EX2, warpage and stress acting on the image sensor were measured when the first and second stiffeners130and140were applied. In the first experimental example EX1, a structure in which the second support member132surrounding an external side of the optical filter150is provided in the first module100Ab, illustrated inFIG.5, was applied. In the second experimental example EX2, a structure of the first stiffener130in which the first and second support members131and132inFIGS.3A and3Bare integrated was applied. In the first and second experimental examples EX1 and EX2, an image sensor having an optical format of 1/1.12 inch was used as the test object. Referring toFIG.15, in the second experimental example EX2, warpage acting on the image sensor was reduced by about 45% and the stress was reduced by about 34% as compared to the first experimental example EX1. These results indicate that, when a first stiffener including the second support member132(inFIG.3A) is introduced on the upper surface of the substrate, warpage and stress acting on the image sensor are effectively reduced. FIG.16is an exploded perspective diagram illustrating an electronic device including a plurality of image sensor modules M1, M2, and M3.FIG.17is a cross-sectional diagram illustrating the plurality of image sensor modules M1, M2, and M3illustrated inFIG.16taken along line XVII-XVII′.FIG.18is a cross-sectional diagram illustrating an example in which an image sensor module according to some example embodiments is applied to the image sensor module M1among the image sensor modules illustrated inFIG.17. Referring toFIG.16, an electronic device10may include a first cover11forming one surface of the electronic device10, a second cover12forming the other surface of the electronic device10, a main board13disposed between the first and second covers11and12, and a plurality of image sensor modules M1, M2, and M3mounted on the main board13. The electronic device10may include a smart phone, a laptop computer, a tablet computer, or a personal digital assistant (PDA). The plurality of image sensor modules M1, M2, and M3may be configured as image sensor modules providing images of different resolutions. For example, the first image sensor module M1may be configured as a high-resolution module including a large-sized image sensor, and the second and third image sensor modules M2and M3may be configured as modules including image sensors having sizes smaller than that of the first image sensor module M1. The most appropriate module among the plurality of image sensor modules M1, M2, and M3may be selected depending on imaging conditions. The first image sensor module M1may have a size greater than sizes of the second and third image sensor modules M2and M3since the first image sensor module M1may need an optical module having a size corresponding to the large-sized image sensor. In the description below, the thicknesses (height in the Z-axis direction) of the plurality of image sensor modules M1, M2, and M3will be compared with one another with reference toFIG.17. 
The main board13, and/or any portions thereof may include, may be included in, and/or may be implemented by one or more instances of processing circuitry such as hardware including logic circuits; a hardware/software combination such as a processor executing software; or a combination thereof. For example, the processing circuitry more specifically may include, but is not limited to, a central processing unit (CPU), an arithmetic logic unit (ALU), a graphics processing unit (GPU), an application processor (AP), a digital signal processor (DSP), a microcomputer, a field programmable gate array (FPGA), and programmable logic unit, a microprocessor, application-specific integrated circuit (ASIC), a neural network processing unit (NPU), an Electronic Control Unit (ECU), an Image Signal Processor (ISP), and the like. In some example embodiments, the processing circuitry may include a non-transitory computer readable storage device, for example a solid state drive (SSD), storing a program of instructions, and a processor (e.g., CPU) configured to execute the program of instructions to implement the functionality and/or methods performed by some or all of the electronic device10and/or any portion thereof. Referring toFIG.17, the first image sensor module M1may include an image sensor120M1having an optical format of 1/1.33 inch or more and an optical module200M1spaced apart from the image sensor120M1by a particular (or, alternatively, predetermined) distance OPL (hereinafter “optical distance”), for example. To reduce a height T0of the image sensor module M1while the optical distance OPL between the image sensor120M1and the optical module200M1is maintained, the image sensor120M1may be accommodated in the cavity110H of the substrate110. The optical module200M1may have a lens assembly210M1and a lens housing220M1having size corresponding to the large-sized image sensor120M1, and the substrate110to which the optical module200M1is attached may also have a large area. In this case, to control warpage caused by a difference in thermal expansion coefficient, the thickness140h0of the lower stiffener140to which the image sensor120M1is attached may need to be maintained to be a particular (or, alternatively, predetermined) level. Accordingly, the thickness T0of the first image sensor module M1may be determined by heights of the optical module200M1and the lower stiffener140. The second and third image sensor modules M2and M3may include, for example, image sensors120M2and120M3having an optical format of less than 1/1.33 inch and optical modules200M2(having lens assembly210M2and lens housing220M2) and200M3(having lens assembly210M3and lens housing220M3) corresponding thereto. Accordingly, the thicknesses T2and T3of the second and third image sensor modules M2and M3may be smaller than the thickness T0of the first image sensor module M1. The second and third image sensor modules M2and M3may have different structures and different thicknesses. As described above, the sizes of the image sensor120M1,120M2, and120M3may determine the thickness of the image sensor module, and when image sensor modules M1, M2, and M3having different thicknesses are mounted in the same electronic device, a specific image sensor module may protrude further than the other image sensor modules. 
For example, the thickness T0of the first image sensor module M1may be greater than the thicknesses T2and T3of the second and third image sensor modules M2and M3, and the first image sensor module M1may protrude by a difference in thickness between the first image sensor module M1and the second and third image sensor modules M2and M3. In the description below, changes in the thickness of the first image sensor module M1when the image sensor module of some example embodiments is applied to the first image sensor module M1will be described with reference toFIG.18. Referring toFIG.18, the first image sensor module M1may include an upper stiffener130disposed on the substrate110. The upper stiffener130may include a first support member131supporting a lower portion of the optical filter150and a second support member132extending in the vertical direction (Z-axis direction) from one side of the first support member131. In the image sensor module M1in the example, the upper stiffener130may control warpage acting on the substrate110and the image sensor120M1. Accordingly, the thickness140h1of the lower stiffener140may be reduced while the optical distance OPL between the image sensor120M1and the optical module200M1is maintained. For example, the thickness T1of the first image sensor module M1inFIG.18may be smaller than the thickness T0of the first image sensor module M1inFIG.17. The thickness difference D1between the first image sensor module M1and the second and third image sensor modules M2and M3inFIG.18may also be smaller than the thickness difference D0between the first image sensor module M1and the second and third image sensor modules M2and M3inFIG.17. Accordingly, in some example embodiments, a phenomenon in which one image sensor module visibly protrudes from the electronic device due to the difference in thickness among the plurality of image sensor modules M1, M2, and M3may be reduced. FIG.19is a cross-sectional diagram illustrating comparison between image sensor modules M1a(including image sensor120a, lower stiffener140a, and optical module200ahaving lens assembly210aand lens housing220a) and M1b(including image sensor120b, lower stiffener140b, and optical module200bhaving lens assembly210band lens housing220b) before and after an upper stiffener is applied according to some example embodiments.FIG.19illustrates changes in optical distances OPLa and OPLb and in thicknesses210haand210hbof a lens assembly caused by a difference in thicknesses140haand140hbof lower stiffeners140aand140bbefore and after an upper stiffener130is applied to the first image sensor module M1illustrated inFIG.16. Referring toFIG.19, a 1-1st image sensor module M1abefore the upper stiffener130is applied may require a particular (or, alternatively, predetermined) optical distance OPLa and a particular (or, alternatively, predetermined) thickness140haof the lower stiffener140awithin a specific thickness Ta. In a 1-2nd image sensor module M1bafter the upper stiffener130is applied, the optical distance OPLb may be further secured by reducing the thickness140hbof the lower stiffener140bwithin the same thickness Tb as that of the 1-1st image sensor module M1a. Accordingly, in the 1-2nd image sensor module M1b, the optical module200bmay include a lens assembly210bhaving an increased thickness210hbby combining lenses of various specifications. 
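The trade-off described with reference toFIGS.17through19amounts to a simple thickness budget: for a fixed module height, whatever height the lower stiffener does not consume is available to the optical module (lens assembly plus optical distance). The Python sketch below illustrates this relation with purely hypothetical dimensions; none of the numbers are taken from the original description.

    def optical_budget_mm(module_height_mm, lower_stiffener_mm):
        # Height left over for the optical module (lens assembly + optical distance OPL)
        # once the lower stiffener is accounted for, at a fixed overall module height.
        return module_height_mm - lower_stiffener_mm

    # Hypothetical numbers, for illustration only.
    module_height = 9.0             # fixed target height, Ta = Tb (mm)
    stiffener_without_upper = 0.4   # thicker lower stiffener needed when it alone controls warpage
    stiffener_with_upper = 0.2      # thinner lower stiffener allowed when the upper stiffener shares the load
    print(optical_budget_mm(module_height, stiffener_without_upper))  # 8.6 mm available to the optics
    print(optical_budget_mm(module_height, stiffener_with_upper))     # 8.8 mm, freeing room for OPL or a thicker lens stack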
In some example embodiments, an optical module200bof various specifications (e.g., the number of lenses, the size of the lenses, and the like) and an image sensor120bcorresponding thereto may be mounted in the 1-2nd image sensor module M1bwithout increasing its thickness Tb beyond the thickness Ta of the 1-1st image sensor module M1ahaving the image sensor120a. For example, the 1-1st image sensor module M1aand the 1-2nd image sensor module M1bmay have different focal lengths and different aperture values. When a camera application is executed in the electronic device10, a preview screen may first be displayed using the 1-1st image sensor module M1a. When a user changes an imaging mode in the camera application or executes a zooming operation such that the zooming magnification reaches a particular (or, alternatively, predetermined) reference magnification, a preview screen generated by the 1-2nd image sensor module M1bmay be displayed. As described above, by mounting the image sensor modules M1aand M1bof various specifications, having different aperture values, different focal lengths, and the like, in a single electronic device10, camera performance of the electronic device10may improve. According to the aforementioned example embodiments, by introducing a stiffener on the upper surface of the module substrate, an image sensor module with reduced stress acting on the image sensor may be provided. While some example embodiments have been illustrated and described above, it will be apparent to those skilled in the art that modifications and variations could be made without departing from the scope of the present inventive concepts as defined by the appended claims.
Throughout the drawings and the detailed description, the same reference numerals refer to the same elements. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience. DETAILED DESCRIPTION Hereinafter, exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known after an understanding of the disclosure of this application may be omitted for increased clarity and conciseness, noting that omissions of features and their descriptions are also not intended to be admissions of their general knowledge. The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application. Although terms such as “first,” “second,” and “third” may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples. Throughout the specification, when an element, such as a layer, region, or substrate is described as being “on,” “connected to,” or “coupled to” another element, it may be directly “on,” “connected to,” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when an element is described as being “directly on,” “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. The terminology used herein is for the purpose of describing particular examples only, and is not to be used to limit the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. 
As used herein, the terms “include,” “comprise,” and “have” specify the presence of stated features, numbers, operations, elements, components, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, elements, components, and/or combinations thereof. Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and after an understanding of the disclosure of this application. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of this application, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. Also, in the description of example embodiments, detailed description of structures or functions that are thereby known after an understanding of the disclosure of the present application will be omitted when it is deemed that such description will cause ambiguous interpretation of the example embodiments. Hereinafter, examples will be described in detail with reference to the accompanying drawings, and like reference numerals in the drawings refer to like elements throughout. One or more examples may provide a camera module that blocks reflected light, and obtains a high-quality image by applying a filter and a filter actuator to perform a polarization function. FIG.1is a perspective view of an example camera module10, in accordance with one or more embodiments, andFIG.2is an exploded perspective view of the example camera module10, in accordance with one or more embodiments. Referring toFIGS.1and2, the example camera module10may include a housing510having an internal space, a lens module200including one or more lenses, an image sensor300provided in the internal space of the housing510and configured to receive light passing through the lens module200, a filter120that polarizes the light passing through the lens module200, a filter actuator100that moves the filter120, a circuit board400electrically connected to the image sensor300, and a cover520that covers the upper side of the housing510. The housing510may be combined with the cover520to constitute an appearance of the camera module10. The housing510may have an internal space to accommodate the lens module200, the image sensor300, the filter120, the filter actuator100, the circuit board400, etc. In one or more examples, a plurality of housings510may be provided and coupled to each other. In an example, a first housing (not shown) that accommodates the lens module200may be provided independently from a second housing (not shown) that accommodates the image sensor300, and the first housing (not shown) and the second housing (not shown) may be coupled to each other to constitute an overall appearance of the camera module10. That is, the housing510illustrated inFIG.1is merely an example, and in one or more examples, the lens module200and the image sensor300or the filter actuator100may be accommodated in separate housings, respectively. In an example, the cover520may cover the upper side of the housing510to separate the internal space of the housing510from the outside of the camera module10. An incident hole521may be formed in at least a portion of the cover520, and light incident through the incident hole521may be incident on the lens module200. 
In one or more examples, the lens module200may include one or more lenses that image a subject. When a plurality of lenses is arranged, the plurality of lenses may be mounted inside the lens module200, and may be aligned in an optical axis direction (e.g., Z-axis direction). The lens module200may include one or more cylindrical lens barrels each having a hollow shape. In one or more examples, the camera module10may include a lens driver (not shown) that moves the lens module200. The lens driver (not shown) may move the lens module200in a direction of an optical axis (Z-axis) to perform a focusing function or a zoom function, or may move the lens module200in a direction perpendicular to the optical axis (Z-axis) (e.g., X-axis or Y-axis) to perform an optical image stabilization function. Alternatively, the lens driver (not shown) may rotate the lens module200about the optical axis (Z-axis), or rotate about an axis (e.g., X-axis or Y-axis) perpendicular to the optical axis (Z-axis) to perform an optical image stabilization function. That is, the lens driver (not shown) may include a focusing device that performs a focusing operation and an optical image stabilization device or optical image stabilizer that performs optical image stabilization. The image sensor300may convert light incident through the lens module200into an electrical signal. In an example, the image sensor300may include a charge coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS), as only examples. The image sensor300may be electrically connected to the circuit board400, and accordingly, the electrical signal converted by the image sensor300may be output to the outside through the circuit board400. In one or more examples, the image sensor300may be aligned with the lens module200in the optical axis direction (Z-axis direction) on the circuit board400. The image sensor300may receive light incident in the optical axis direction (Z-axis direction) through the lens module200, and convert the incident light into an electrical signal. The example camera module10may include a sensor driver (not shown) that moves the image sensor300. The sensor driver (not shown) may move the image sensor300in the optical axis direction (Z-axis direction) or in a direction (e.g., X-axis or Y-axis direction) intersecting, or perpendicular to, the optical axis to perform a focusing operation or an optical image stabilization operation. In an example, the sensor driver may perform an optical image stabilization operation by moving the image sensor300on a plane (X-Y plane) perpendicular to the optical axis. Alternatively, the sensor driver may perform an optical image stabilization operation by rotating the image sensor300about the optical axis (Z-axis), or rotating the image sensor300about an axis (X-axis or Y-axis) perpendicular to the optical axis (Z-axis). The camera module10, in accordance with one or more examples, may move the filter120in the internal space of the housing510. In one or more examples, the filter120may include a polarization filter that polarizes light passing through the lens module200. That is, in one or more examples, the filter120may allow the passage of only light vibrating in a specific direction coincident with a polarization axis with respect to the incident light. The filter120may include a filter of various types as well as the polarization filter. 
In an example, the filter120may include an infrared filter that blocks light having a wavelength in an infrared region with respect to the light incident through the lens module200. The filter120, in accordance with one or more examples, may be movably provided in the internal space of the housing510, and in an example, may be provided to be movable in a space formed between the lens module200and the image sensor300. A driving force to move the filter120may be provided by the filter actuator100. In one or more examples, the filter actuator100may be provided in the internal space of the housing510to move the filter120to be positioned between the lens module200and the image sensor300or to be positioned at another location. In an example, the filter actuator100may linearly move the filter120in a direction (e.g., X-axis direction) intersecting the optical axis (Z-axis) to be positioned above the image sensor300or not to be positioned above the image sensor300. When the filter120is positioned above the image sensor300, the incident light passing through the lens module200may be polarized, and the polarized light may be incident on the image sensor300. On the other hand, when the filter120is not positioned above the image sensor300, incident light may be incident on the image sensor300in a non-polarized state. However, a moving direction of the filter120is not limited to what has been described above, and the filter120may move in the internal space of the housing510in various directions. That is, in one or more examples, the camera module10may appropriately move the filter120, so that polarized or non-polarized light is incident on the image sensor300. The example camera module10may obtain a clear image by blocking a flare or unnecessary reflected light with respect to the incident light through the filter120to perform a polarization operation. In an example, when the example camera module10is implemented as a camera for a vehicle, it is possible to obtain a clear image of an obstacle or a lane positioned around the vehicle by blocking reflected light having an indiscriminate phase. Hereinafter, a filter120and a filter actuator100, in accordance with one or more examples, will be described in detail with reference toFIGS.3and4. FIG.3illustrates an exploded perspective view of a filter actuator100, in accordance with one or more embodiments, andFIGS.4A and4Bare views of examples illustrating the driving of the filter actuator100to be described with reference toFIG.3. Since a camera module10, a filter120, and a filter actuator100to be described with reference toFIGS.3,4A, and4Binclude the features of the camera module10, the filter120, and the filter actuator100described above with reference toFIGS.1and2, an overlapping description thereof will not be repeated. The filter actuator100, in accordance with one or more embodiments, may be provided in an internal space of a housing510to move the filter120which polarizes incident light. The filter actuator100, in accordance with one or more embodiments, may include a filter120, a tray110that accommodates and supports the filter120, a driving device or driver130that moves the tray110, and a guide device140that guides the movement of the tray110. The driver130may generate a driving force to move the tray110, and may include, for example, a driving motor131and a lead screw132as illustrated inFIG.3. 
In the camera module in accordance with one or more embodiments, the filter120may be disposed to be movable with respect to the lens module200or the image sensor300in a space formed between the lens module200and the image sensor300. In an example, the filter120may move in the internal space of the housing510to cover or not to cover an upper side of the image sensor300. That is, the filter120may be inserted into, or removed from, the space between the lens module200and the image sensor300. In one or more examples, the filter120may reciprocate between any two locations in the internal space of the housing510. In an example, the filter120may be provided to be movable between a first position facing the image sensor300and a second position disposed away from the image sensor300. The filter120may be disposed to face the image sensor300at the first position, and thus, incident light passing through the lens module200may be incident on the image sensor300after passing through the filter120. The filter120may be disposed away from the space between the image sensor300and the lens module200at the second position, and thus, incident light passing through the lens module200may be incident on the image sensor300without passing through the filter120. In an example,FIG.4Aillustrates an example state in which the filter120is disposed in the first position between the lens module200and the image sensor300, andFIG.4Billustrates an example state in which the filter120is disposed in the second position away from the space between the lens module200and the image sensor300. In one or more examples, the filter actuator100may be provided to linearly move the filter120in a direction intersecting the optical axis (Z-axis) between the first position and the second position. Accordingly, as illustrated inFIGS.4A and4B, the filter120may reciprocate in a space or an area formed between the lens module200and the image sensor300to be inserted into, or removed from, the space between the lens module200and the image sensor300. In one or more examples, both the first position and the second position of the filter120may be formed in the internal space of the housing510. That is, the filter120may be provided to move only in the internal space of the housing510. Accordingly, the filter120may be protected from an external environment of the housing510or the camera module, and penetration of foreign substances from the outside of the housing510can be prevented while the filter120is inserted or removed. In one or more examples, the filter120may move between the lens module200and the image sensor300in a direction (e.g., X-axis direction) intersecting the optical axis. The filter120may be provided in parallel with the image sensor300, and thus may be disposed to face the image sensor300at a predetermined distance by moving in a direction (X-axis direction) intersecting the optical axis. In one or more examples, the filter120may have a width equal to or greater than a light receiving portion of the image sensor300. Accordingly, when the filter120covers the upper side of the image sensor300, incident light passing through the lens module200may be incident on the image sensor300after entirely passing through the filter120. However, the filter120may not necessarily have a greater width than the light receiving portion of the image sensor300. In an example in which the filter120is positioned between the lens module200and the image sensor300, it is beneficial that the filter120be wide enough to allow incident light to pass therethrough. 
The filter120may include a polarization filter that allows passage of light vibrating or travelling in a direction coincident with a polarization axis. Therefore, in one or more examples, the camera module may selectively perform a polarization function as necessary by positioning the filter120between the lens module200and the image sensor300when it is necessary to polarize incident light, and removing the filter120from the space between the lens module200and the image sensor300when it is not necessary to polarize incident light. In one or more examples, the filter120may be accommodated in the tray110, and may be implemented to be movable in the internal space of the housing510. The tray110may surround and support the filter120, which is relatively thin and light, so that the filter120is moved precisely and stably inside the housing510. In an example, as illustrated inFIG.3, the tray110may be provided in the shape of a plate having a through hole in the optical axis direction (Z-axis direction), and the filter120may be accommodated in the through hole of the tray110. Accordingly, the filter120and the tray110may move together inside the housing510. However, the configuration of the tray110is not limited to what has been described above. In an example, the tray110may be integrally formed with the filter120. The tray110may include connectors111and112connected to the driver130that generates a driving force to move the filter120or the guide device140to guide the movement of the filter120. In an example, as illustrated inFIG.3, the tray110may include a first connector111connected to the driver130, and a second connector112connected to the guide device140. In one or more examples, the first connector111and the second connector112may be provided on different sides of the tray110, respectively. In an example, the first connector111may be provided at a first side edge of the tray110, and the second connector112may be provided at a second side edge of the tray110opposite to the first side edge of the tray110. In one or more examples, the first connector111and the second connector112may have different sizes. In an example, a length of the second connector112connected to the guide device140may be greater than a length of the first connector111to sufficiently secure an area contacting the guide device140. Alternatively, the first connector111connected to the driving device130may be shorter and thicker than the second connector112to reduce friction and to reliably receive the driving force. However, the specific shapes of the first connector111and the second connector112are not limited to what has been described above, and the first connector111and the second connector112may be provided in various shapes and lengths. The example filter actuator100may include a driving device130that moves the tray110in which the filter120is accommodated. The driving device130may generate a driving force to move the filter120. In an example, as illustrated inFIG.3, the driving device130may include a driving motor131that generates a rotational driving force, and a lead screw132connected to the driving motor131to transmit the driving force. Hereinafter, the example driving device130of the filter actuator100will be described on the premise that it includes a driving motor131and a lead screw132. However, the configuration of the driving device130is not limited thereto, and in some other embodiments, the driving device130may include a coil and a magnet to move the filter120based on an electromagnetic force. 
The driving device130, in accordance with one or more examples, may move the filter120based on a rotational driving force of the driving motor131. The driving motor131may be fixedly provided with respect to the housing510. In an example, as illustrated inFIG.3, the driving motor131may be fixedly provided on an upper side of the circuit board400. The driving motor131may be electrically connected to the circuit board400to receive electrical energy from an external or internal power source of the camera module (10inFIG.1or2), so that the driving motor131may be driven. A screw-shaped worm133may be further provided on a driving shaft of the driving motor131, and the driving motor131may be connected to the lead screw132through the worm133. Accordingly, a rotation axis of the driving shaft of the driving motor131and a rotation axis of the lead screw132may be perpendicular to each other. Additionally, a high rotation speed of the driving motor131may be appropriately reduced through an operation of the worm133, and an appropriate driving force of the driving motor131may be transmitted to the lead screw132. The lead screw132may be supported by a plurality of support members135, and may be rotatably provided inside the housing510. The lead screw132may rotate by receiving a rotational driving force from the driving motor131. According to the rotation of the lead screw132, the tray110and the filter120connected thereto may move along the lead screw132in a direction intersecting the optical axis (Z-axis). In one or more examples, the lead screw132may extend in a direction (e.g., an X-axis direction) perpendicular to the optical axis, and may be provided to be rotatable about a rotation axis passing through the center of the lead screw. In a non-limiting example, the lead screw132may be a screw-type shaft with a thread formed on an outer circumferential surface thereof. The lead screw132may be provided to penetrate through the first connector111of the tray110. In this example, the first connector111of the tray110may include a nut corresponding to the thread of the lead screw132in a portion through which the lead screw132penetrates, so that the first connector111of the tray110is screw-coupled to the lead screw132. Accordingly, as illustrated inFIG.4B, according to the rotation of the lead screw132, the tray110may move linearly in a rotation axis direction of the lead screw132. However, the configuration of the connection between the lead screw132and the tray110is not limited to what has been described above. In an example, a thread may be formed in the first connector111of the tray110, and a nut corresponding to the thread of the first connector111may be provided on the lead screw132. In an example, a lubricating fluid may be applied to the portion where the lead screw132and the first connector111contact each other to reduce friction. A worm wheel134may be provided at one end of the lead screw132to be screw-coupled to the worm133of the driving motor131. That is, the driving motor131and the lead screw132may be connected to each other through a worm gear device (that is, the worm133and the worm wheel134), and the lead screw132may rotate about an axis intersecting the rotation axis of the driving motor131as the driving motor131rotates. The other end of the lead screw132may be rotatably coupled to an inner surface of the housing510via the support member135. In one or more examples, the filter actuator100may include a guide device140that guides the movement of the tray110. 
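As a rough illustration of how the worm-gear and lead-screw drive described above converts motor rotation into linear travel of the tray110, the following Python sketch computes the travel for an assumed worm reduction ratio and screw lead; the numerical values and the function name are illustrative assumptions and are not taken from the disclosure.

def tray_travel_mm(motor_revolutions, worm_reduction=20.0, screw_lead_mm=0.5):
    """Linear tray travel for a given number of driving-motor revolutions.

    worm_reduction and screw_lead_mm are assumed example values,
    not values from the disclosure.
    """
    screw_revolutions = motor_revolutions / worm_reduction  # worm gear reduces rotation speed
    return screw_revolutions * screw_lead_mm                # lead screw converts rotation into travel

# Example: 200 motor revolutions -> 10 screw turns -> 5.0 mm of tray travel
print(tray_travel_mm(200))

The sketch only captures the kinematics; the actual travel range would be bounded by the first and second positions of the filter described above.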
The guide device140may include a guide shaft141extending in a direction of an axis (the X-axis direction) perpendicular to the optical axis, and the tray110may move in the direction perpendicular to the optical axis (the Z-axis direction) along the guide shaft141. In an example, as illustrated inFIG.3, the guide shaft141may be provided in the shape of a bar extending in a direction (e.g., the X-axis direction) perpendicular to the optical axis (the Z-axis direction). A first end and a second end of the guide shaft141may be fixed to the housing510via respective support members142, and at least a portion between the first end and the second end of the guide shaft141may penetrate through the second connector112of the tray110. Accordingly, the tray110may slidably move along an extension direction of the guide shaft141. In one or more examples, the guide shaft141and the lead screw132may extend in a direction to be parallel with each other. Thus, since the tray110may be connected to both the guide shaft141and the lead screw132, which are provided in parallel with each other, the linear movement of the tray110may be stable. In one or more examples, a friction reducing member (not shown) that reduces friction between the guide shaft141and the second connector112may be further provided. In a non-limiting example, the friction reducing member (not shown) may include a bearing or a bush. Alternatively, a lubricating fluid may be applied between the guide shaft141and the second connector112to reduce friction. In one or more examples, a filter actuator100may rotate a polarization axis of a filter120to polarize incident light in a more effective manner. Hereinafter, a filter actuator100that rotates a polarization axis will be described with reference toFIGS.5through7. FIG.5is an exploded perspective view of an example tray110, in accordance with one or more embodiments, andFIG.6is a perspective view of an example filter actuator100including the example tray, in accordance with one or more embodiments.FIGS.7A and7Bare views of examples illustrating the driving of the example filter actuator100, in accordance with one or more embodiments. The filter actuator100to be described below with reference toFIGS.5through7may include all the components of the example filter actuator100described above with reference toFIGS.1through4, with a tray and a rotation guide143to rotate a polarization axis120aof a filter120being added thereto, and thus, the description based onFIGS.5through7overlapping with that based onFIGS.1through4will not be repeated. The tray included in the filter actuator100, in accordance with one or more embodiments, may further include a rotation tray113that is rotatably provided. In an example, as illustrated inFIG.5, the tray may include a movement tray110provided to be movable in the internal space of the housing (510inFIG.6), and a rotation tray113provided to be rotatable with respect to the movement tray110. In an example, the movement tray110may include all the features of the tray110described above with reference toFIGS.3and4, while the rotation tray113is provided therewith. Thus, for the position at which the movement tray110is provided, or the connection structure between the movement tray110and the other components, etc., the description of the tray110ofFIGS.3and4may be referred to. 
In an example, the lead screw132may be connected to the first connector111of the movement tray110to transmit a driving force, and the guide shaft141may be connected to the second connector112of the movement tray110to guide the movement of the movement tray110. In one or more examples, the rotation tray113may be rotatably coupled to the movement tray110. That is, as the movement tray110moves, the rotation tray113may move together in the same direction as the movement tray110, and at the same time, rotate with respect to the movement tray110. The rotation tray113may have a hollow section or area, and the filter120may be inserted into the hollow area of the rotation tray113to rotate together with the rotation tray113with respect to the movement tray110. In one or more examples, the rotation tray113may be rotated by a driving force to move the movement tray110. That is, the rotation tray113may be provided to rotate with respect to the movement tray110as the movement tray110moves. Accordingly, the guide device140, in accordance with one or more embodiments, may further include a rotation guide143. The rotation guide143may be connected to the rotation tray113, and may rotate the rotation tray113according to the movement of the movement tray110. That is, the guide device140, in accordance with one or more embodiments, may include a guide shaft141connected to the movement tray110and a rotation guide143connected to the rotation tray113. In an example, as illustrated inFIG.6, the guide device140may include a guide shaft141and a rotation guide143spaced apart from each other and extending in parallel with each other. The guide shaft141and the rotation guide143may be fixed to the housing by the same support member142. In one or more examples, the rotation tray113may have a friction portion113aon an outer circumferential surface thereof, and the rotation guide143may be provided in contact with the friction portion113aof the rotation tray113. As the movement tray110is moved by the driving motor131, a rolling frictional force may be generated between the friction portion113aof the rotation tray113and the rotation guide143, and the rotation tray113may be rotated with respect to the movement tray110by the rolling frictional force. Therefore, both movement and rotation of the filter120can be implemented through a single driving motor (e.g.,131inFIG.6). In one or more examples, while the movement tray110moves between a first position (e.g., a position of the tray110inFIG.4A) and a second position (e.g., a position of the tray110inFIG.4B), the rotation tray113may also rotate continuously accordingly. Since the rolling friction force may be continuously generated between the rotation tray113and the rotation guide143while the movement tray110is moving, a moving distance of the movement tray110and an amount of rotation of the rotation tray113may be positively proportional to each other. That is, in one or more examples, a rotation angle of the polarization axis120aof the filter120may increase in proportion to the moving distance of the filter120. Each ofFIGS.7A and7Billustrates a state in which the filter120rotates as the movement tray110and the rotation tray113move. As illustrated inFIGS.7A and7B, when the movement tray110moves along the guide shaft141in a direction (e.g., X-axis direction) perpendicular to the optical axis (e.g., Z-axis inFIG.6), the rotation tray113may rotate in engagement with the rotation guide143. 
Accordingly, the polarization axis120aof the filter120accommodated in the rotation tray113may also rotate together. As the moving distance of the movement tray110increases, the rotation angle of the rotation tray113may also increase. That is, according to the camera module10, in one or more examples, the polarization axis120acan be rotated at a desired angle by adjusting the moving distance of the filter120. In one or more examples, through holes of the trays110and113that allow passage of incident light may be formed to be larger than the light receiving portion of the image sensor300. That is, the filter120accommodated in the through holes of the trays110and113may be provided to entirely cover the light receiving portion of the image sensor. Accordingly, an angle of the polarization axis120amay be adjusted in a state where the filter120covers the light receiving portion of the image sensor300by moving by a predetermined distance in a direction perpendicular to the optical axis (e.g., the Z-axis inFIG.6). In a state where the filter120covers the light receiving portion of the image sensor300, the angle of the polarization axis120amay be adjusted in a range of 0 to 180 degrees with respect to a moving direction of the filter120. Accordingly, the camera module according to some embodiments may selectively allow passage of only light vibrating in a desired direction with respect to incident light passing through the lens module. In one or more examples, a structure that increases a friction force between the friction portion113aof the rotation tray113and the rotation guide143may be further provided. In an example, as illustrated inFIG.6, the friction portion113aof the rotation tray113may include teeth, and correspondingly, the rotation guide143may be provided to engage the teeth of the rotation tray113. The rotation guide143may include a rack gear extending in one direction for firm engagement with the teeth. Accordingly, it is possible to more precisely control an amount of rotation of the rotation tray113and the filter120. However, the driving of the rotation tray113is not limited to the above-described configuration. That is, a driving force to rotate the rotation tray113may be generated by a separate rotation driving device. In an example, apart from the driving motor131that moves the movement tray110, another driving device connected to the rotation tray113to transmit a rotational driving force may be further provided. Since the camera module10according to one or more examples may appropriately block unnecessary reflected light, particularly light incident from an external environment, it is possible to obtain high-quality image data even in a harsh external environment such as rainy weather. That is, in the camera module10according to one or more examples, the filter120that performs a polarization operation may be inserted between the lens module200and the image sensor300to provide clearer image information. Additionally, in an environment where polarization is not needed, the filter120may be moved to a position distant from the image sensor300to secure a sufficient amount of light for photographing. In the camera module10in accordance with one or more embodiments, the polarization axis120amay be continuously rotated in a state where the filter120is positioned above the image sensor300. Accordingly, reflected light can be blocked in a more effective manner, and high-quality image data can be obtained in any environment. 
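The proportional relationship between the moving distance of the movement tray110and the rotation of the polarization axis120adescribed above can be sketched as follows in Python; the rolling radius is an assumed illustrative value, and the transmission calculation simply applies the standard Malus's law for an ideal linear polarizer rather than any measured characteristic of the filter120.

import math

def polarization_axis_angle_deg(travel_mm, rolling_radius_mm=3.0):
    """Rotation of the polarization axis for a given tray travel, assuming the
    rotation tray rolls without slipping along the rack-type rotation guide.
    rolling_radius_mm is an assumed value, not one from the disclosure."""
    return math.degrees(travel_mm / rolling_radius_mm)  # arc length = radius * angle

def transmitted_fraction(angle_deg):
    """Ideal-polarizer transmission of linearly polarized light whose plane of
    vibration makes angle_deg with the polarization axis (Malus's law)."""
    return math.cos(math.radians(angle_deg)) ** 2

for travel in (0.0, 2.0, 4.0):
    theta = polarization_axis_angle_deg(travel)
    print(f"travel {travel:.1f} mm -> axis rotated {theta:5.1f} deg, "
          f"transmission {transmitted_fraction(theta):.2f}")

Under these assumptions, adjusting the travel of the movement tray110sets the axis angle directly, which is the behavior the rack-and-teeth engagement described above is intended to make repeatable.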
As set forth above, the camera module according to one or more embodiments may block unnecessary reflected light, and may obtain high-quality image information by implementing the filter polarizing incident light. The camera module, in accordance with one or more embodiments, may selectively apply the filter according to various examples by moving the filter to be positioned above the image sensor, or by moving the filter so that it is not positioned above the image sensor. The camera module, in accordance with one or more embodiments, may protect the filter and the filter actuator from an environment outside the camera module by placing the filter and the filter actuator between the lens module and the image sensor inside the camera module. The camera module, in accordance with one or more embodiments, may be structurally simple and small by placing the filter actuator inside the housing. While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
36,627
11943525
DETAILED DESCRIPTION While we can sell bare camera chip cubes, we also propose prefabricated camera/cable/illuminator assemblies. In making these assemblies, we leverage our experience with microelectronics, ball-bonding, precision molding, and thin-film processing. These assemblies, as illustrated inFIG.1, include a camera chip cube102mounted on an interposer104, along with at least one light-emitting diode (LED)106,108. Camera chip cube102includes an image sensor110, a spacer112, and a lens114. Camera chip cube102is contained within a housing116, with LEDs106,108at the bottom of tubular light guide structures118,120within housing116. Across the top of the housing116, covering both the camera chip cube102and the LEDs106,108with their light guide structures118,120, is sealed a transparent window122. Attached to the interposer104is a cable containing several conductors that are electrically coupled through conductors within interposer104to ball-type bondpads of camera chip cube102to power the camera chip cube and receive images from the camera chip cube102as a serial analog or digital signal. Conductors of the cable are also coupled through conductors of the interposer to provide power to LEDs106or108, either directly in some embodiments or through transistors within camera chip cube102in alternative embodiments. In typical embodiments, cable124has five to seven wires and terminates in a connector130, connector130being adapted for connection to an ancillary cable-module adapter board that provides power to the camera/cable/illuminator assembly, receives images therefrom, stores and processes the images for display, and displays the processed images to a user. Each LED illuminator200(FIG.2) with LED106,108,202and light guide structure118,120,204is formed in an opening206in housing116,208. An interior surface of the opening206is lined with a reflective metal coating209to reduce light loss through sides of the light guide structure204into housing208. A microlens array210is disposed atop opening206and is formed on an underside surface of transparent window122to shape the light as it leaves the light guide structure to illuminate objects, if any, in front of camera chip cube102. An alternative embodiment300(FIG.3A) of the cable, camera, and LED illuminator assembly with light guide structures has a reflective surface deposited on an exterior of the camera chip cube302. This embodiment can be more compact than the embodiment ofFIG.1because, while housing304surrounds the light guide structure318,320and camera chip cube302, there is no need for housing304to separate camera chip cube302from light guide structure318. As a result, transparent window322may be smaller than the transparent window122, and the interposer324ofFIG.3Amay be smaller than interposer104ofFIG.1. Components inFIG.3Aare similar to and perform a similar function to components ofFIG.1having the same reference numbers; the connector is not shown inFIG.3Afor simplicity. In alternative embodiments otherwise resembling those ofFIGS.1,2and3A, a single lens352(FIG.3B) replaces microlens array210of each light guide structure118,120,318,320. Single lens352or microlens array210is typically formed by molding onto transparent window322,122prior to attaching transparent window122,322to housing304,116. These microlens arrays210and single lenses352serve to transform a Lambertian light distribution in the light guides into a flat-top light distribution to provide good illumination for the camera. 
In alternative embodiments otherwise resembling those ofFIGS.1,2,3, and3B, space between reflective-coated walls of the light guide structures318,118,204is filled with a phosphor. In a particular embodiment, the LEDs106,108are blue-light LEDs and the phosphor converts blue light into a broader-spectrum white light to provide camera chip cube302with white illumination so, should the chip-cube camera be a color camera, camera chip cube302can provide color images. Hemoglobin absorbs significant short-wavelength visible light but allows some longer wavelengths through. In alternative embodiments, chip-cube camera302has a red-green-blue-infrared 4-filter tiling pattern of color filters on photodiodes of its image sensor and can provide red-green-blue-infrared four-color video images. In this embodiment, there may be one or more white LEDs or blue LEDs with associated phosphor provided for color imaging, and one or more infrared LEDs for longer-wavelength infrared imaging to provide short-range imaging through blood. In alternative embodiments, chip-cube camera302has a red/green/blue/fluorescent emissions four-filter tiling pattern of color filters on photodiodes of its image sensor and can provide red-green-blue-fluorescence four-color video images. In this embodiment, there may be one or more white LEDs or blue LEDs with associated phosphor and a fluorescent-emissions blocking filter provided for color imaging, and one or more fluorescent-stimulus wavelength LEDs to provide for imaging of fluorophores in medical imaging. In an alternative embodiment, a precut piece of graded-index optical fiber may be inserted into light guide structure318,118,204. With such techniques, the interposer may have a diameter less than 2.1 millimeters with cameras resembling the OVM4946 cameras, or less than 1.7 millimeters with OVM6948 cameras. In a round-interposer400(FIG.4) embodiment there are a first LED402, an optional second LED404, an optional third LED406, and an optional fourth LED408, the LEDs flanking a camera cube410. Each LED is at the base of a light guide structure420in a housing having the same external shape as interposer400. In a square-interposer500(FIG.5) embodiment, there also lie beneath the transparent window a first LED402, an optional second LED404, an optional third LED406, and an optional fourth LED408, the LEDs flanking a camera chip cube510. In this embodiment, camera chip cube510has a reflective outer surface and is enclosed in a housing having a cavity520and dimensions similar to those of square interposer500, cavity520having a reflective inner surface; the space between the inner surface of cavity520and outer surface of camera chip cube510serving as a light guide structure. The interposer, camera chip cube, and cable assembly600forms an end of endoscope700with endoscope body702and operating handle706that may include controls for steering wires, and a connector708to an electronic digital image display & processing system710that displays images for guidance to a physician or other user. In another particular embodiment, an endoscope head750has an interposer and housing752having an arcuate shape with camera chip cube754surrounded by four LEDs756,758each at the base of a light guide structure760as previously described. Arcuate interposer and housing752are positioned adjacent a lumen762of endoscope750. 
In another particular embodiment, for use in small-diameter endoscope heads like endoscope head800(FIG.8), the interposer and housing has truncated isosceles trapezoidal shape. In these embodiments, a short parallel side804of the cavity is configured to be positioned against a curved interior side of endoscope head800; short parallel side804is adjacent camera cube806. Long parallel side808is configured to be positioned more centrally to endoscope head800and is adjacent to camera chip cube806and two LEDs810,812, one LED being positioned on each side of camera chip cube806and positioned nearer to long parallel side808than to short parallel side804. Isosceles side814,816, extend downward from short parallel side804at a 45-degree angle towards, but do not meet, long parallel side808, and terminate in a vertical truncation side818,820after providing room for LEDs810,812. Use of the truncated isosceles trapezoidal shaped interposer and housing may provide more room for endoscope lumens825or other functional portions of endoscope head800than may be available with a square interposer and housing. In some embodiments of an endoscope head850, the camera chip cube806has a reflective outer surface and interposer and housing802has a cavity830lined with a reflective coating, so space between housing802and camera chip cube806serves as a light guide. The light guides in the housing herein described permit the camera cube to be bonded to camera bondpads of an interposer, and the light-emitting diodes (LED) to be bonded to LED bondpads of the interposer, with the LED bondpads at the same height as the camera bondpads while directing light onto objects in front of the camera chip cube without a shadow being cast on those objects by the camera chip cube. In the embodiments herein described, the housing and light guides extend from the interposer and LEDs to the height of the camera chip cube, the light guides extending to a top of the housing. Combinations The cavity interposer, camera cube, LEDs, and cable herein described may be configured in a number of ways. Among configurations anticipated by the inventors are: An electronic camera assembly designated A including: a camera chip cube bonded to camera bondpads of an interposer; at least one light-emitting diode (LED) bonded to LED bondpads of the interposer at the same height as the camera bondpads; and a housing extending from the interposer and LEDs to the height of the camera chip cube, and light guides extending from the LEDs to a top of the housing. An electronic camera assembly designated AA including the electronic camera assembly designated A further comprising a cable coupled to the interposer. An electronic camera assembly designated AB including the electronic camera assembly designated A or AA wherein the camera chip cube has footprint dimensions of less than three and a half millimeters square. An electronic camera assembly designated AC including the electronic camera assembly designated AB wherein the camera chip cube has footprint dimensions of less than two millimeters square. An electronic camera assembly designated AD including the electronic camera assembly designated A, AA, AB, or AC wherein the light guides are topped with an array of microlenses. An electronic camera assembly designated AE including the electronic camera assembly designated A, AA, AB, or AC wherein the light guides are each topped with a singular lens. 
An electronic camera assembly designated AF including the electronic camera assembly designated A, AA, AB, AC, AD, or AE wherein the light guides are formed between a reflective inner surface of a cavity of the housing and a reflective outer surface of the camera chip cube. An electronic camera assembly designated AG including the electronic camera assembly designated A, AA, AB, AC, AD, or AE wherein the light guides are formed by reflective surfaces of cavities in the housing. An electronic camera assembly designated AH including the electronic camera assembly designated A, AA, AB, AC, AD, AE, AF, or AG wherein the light guides are filled with a phosphor. An electronic camera assembly designated AI including the electronic camera assembly designated A, AA, AB, AC, AD, AE, AF, AG, or AH wherein the interposer has an arcuate shape. An electronic camera assembly designated AJ including the electronic camera assembly designated A, AA, AB, AC, AD, AE, AF, AG, or AH wherein the interposer has a truncated isosceles trapezoid shape. Changes may be made in the above methods and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween. It is also anticipated that steps of methods may be performed in an order different from that illustrated and still be within the meaning of the claims that follow.
12,077
11943526
DETAILED DESCRIPTION The presently disclosed subject matter is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed invention might also be embodied in other ways, to include different steps or elements similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the term “step” may be used herein to connote different aspects of methods employed, the term should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. These sections may be present in some industrial processes such as that which may be found in a paper production facility, but the systems and methods disclosed herein are equally applicable to other industrial settings. In general, this disclosure describes how a light-emitting diode (LED) array may be designed to include both infra-red (IR) and white illuminators. A camera clock and LED driver are synchronized together such that, during the capture time of the camera (on-time), the IR LEDs are on and, during the off-time of the camera frame capture, the white LEDs are on. A common frame rate of a camera may be 60 frames per second. This frame rate equates to an image capture every 16,666 microseconds. For each frame capture, the camera is actively acquiring the image on the sensor (a period called the exposure or shutter speed) for only a fraction of the overall cycle. The sensor is then idle until the next frame capture. The IR LED array is set to fire at the beginning of the frame capture for some duration equal to or less than the exposure time. If the camera is set to expose for 200 microseconds, there are over 16,000 microseconds per frame cycle during which the IR LED is off. The combination IR and white LED system will fire the white LED just after the IR and will stop just before the next frame capture. In this way the camera sensor is exposed only to IR LED energy while the white LED provides visible light to the same area for general purpose human visible illumination.FIG.1andFIG.2provide more detail on this disclosed concept. FIG.1depicts a block diagram illustrating a system100for providing simultaneous machine vision illumination control and human vision illumination control in accordance with embodiments of the present disclosure. The system100includes a control device102, a camera104, a first illumination source106, and a second illumination source108. The system100also includes a manufacturing process110that produces steam112. The system100further includes a human114that is located to monitor the manufacturing process110. The control device102provides a camera control output signal116. The control output signal116provides camera timing information to the camera104. The control device102also provides a first illumination control output signal118that provides first illumination timing information and first intensity level information for the first illumination source. The control device102additionally provides a second illumination control output signal120that provides second illumination timing information and second intensity level information for the second illumination source. In some embodiments, timing information and intensity level information may be provided as separate signals. The control device102may be a computing device. 
For example, the control device102may be a personal computer (PC) with a specialized controller card, a microcontroller, or the like. The control device102may also be an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), discrete logic/circuitry, or a combination thereof. The camera104includes a charge-coupled device (CCD) that acts as an image sensor for capturing high resolution images of the manufacturing process110. In other embodiments the camera104includes a complementary metal-oxide-semiconductor (CMOS) sensor or an N-type metal-oxide-semiconductor (NMOS) sensor for capturing the high resolution images. The camera is configured to operate at one or more specific frame rates. Typical frame rates may be 24 frames-per-second (fps), 30 fps, 60 fps, 120 fps, etc. The images may have a pixel resolution of 1280×720, 1920×1080, 3840×2160, 7680×4320, etc. The camera may also be configured to provide the control device102with a plurality of images as requested via a camera interface. The camera interface may be an Ethernet interface. For example, the Ethernet interface may be GigE, Dual GigE, 5GigE, 10 GigE, or the like. In other embodiments, the camera interface may be a Camera Link HS interface, a CoaXPress® interface, a Universal Serial Bus (USB) 3.0 interface, or the like. The first illumination source106may include a first LED array and the second illumination source may include a second LED array. In some embodiments, the first LED array and the second LED array may be housed in a single lighting structure and positioned to illuminate the manufacturing process110and also the steam112. The first illumination source (e.g. the first LED array) may be configured to provide a first frequency band of illumination and the second illumination source (e.g. the second LED array) may be configured to provide a second frequency band of illumination. The first and second frequency bands of illumination may be mutually exclusive. For example, the first frequency band of illumination may be centered in a range between 820 nanometers and 880 nanometers. This wavelength provides the camera104with the ability to capture images of the manufacturing process110, while not being obscured by the steam112. The second frequency band of illumination may be centered at a wavelength range between 380 nanometers and 740 nanometers. The wavelengths within this wavelength range provide the human114with the ability to still see the steam112while monitoring the manufacturing process110and thus prevent injury. In some embodiments (not shown inFIG.1), the control device102may be configured to provide additional control signals to additional cameras and additional illumination sources. Additionally, the camera104may be decoupled from the control device102, and be configured to sync to the first illumination source via an image capture process. Basically, the camera104would sync its internal frame rate and a CCD exposure time to the pattern provided by the first illumination source. In other embodiments, the control device102may be embedded in the camera104, the first illumination source106, or the second illumination source108. FIG.2depicts a timing diagram200illustrating the control signals ofFIG.1provided for the camera, the first illumination source, and the second illumination source in accordance with embodiments of the present disclosure. 
The camera control output signal116, the first illumination control output signal118, and the second illumination control output signal120are each depicted as one of a voltage signal or a current signal, and may be provided by controlled voltage sources and/or controlled current sources from the control device102. The camera control output signal116provides an active high signal during a CCD exposure time. The first illumination control output signal118and the second illumination control output signal120provide both timing for on/off and luminance levels for the first and second illumination sources. A cycle begins with the second illumination source on (i.e. providing illumination for human vision), the first illumination source off, and the CCD exposure off. After the second illumination source turns off, the CCD exposure begins. Next the first illumination source turns on to a luminance level based on an amplitude of either current or voltage level of the first illumination control output signal118(i.e. providing illumination for the machine vision). The first illumination source may be configured to provide a lumen level between 100,000 lumens and 500,000 lumens based on the first illumination intensity information (i.e. first luminance level). After the first illumination source turns off, the CCD exposure time ends. Next the second illumination source turns on to a luminance level based on an amplitude of either current or voltage level of the second illumination control output signal120(i.e. providing illumination again for human vision). The second illumination source may also be configured to provide a lumen level between 100,000 lumens and 500,000 lumens based on the second illumination intensity information (i.e. second luminance level). In other embodiments, the first illumination source may be configured to provide a lumen level between 1000 lumens and 100,000 lumens based on the first illumination intensity information (i.e. first luminance level). Similarly, the second illumination source may also be configured to provide a lumen level between 1000 lumens and 100,000 lumens based on the second illumination intensity information (i.e. second luminance level). Basically, the first and second illumination timing information comprises active cycle times (i.e. the time between a rising edge and a falling edge of each waveform). As depicted, the first illumination timing information and second illumination timing information are mutually exclusive (i.e. active cycles do not overlap). FIG.3depicts another block diagram illustrating a system300for providing simultaneous machine vision illumination control and human vision illumination control in accordance with embodiments of the present disclosure. The system300includes the system100ofFIG.1with the addition of a local area network (LAN)302, a wide area network (WAN)304, and a computing device306. The camera control output signal116, the first illumination control output signal118, and the second illumination control output signal120are each provided as digital signals over the LAN302. The computing device306may be configured to provide setup information to the control device102over the WAN304and LAN302. In some embodiments, the WAN304is the Internet and the computing device306may be one or more servers in a cloud computing environment. In other embodiments, the computing device306may be a personal computer, or the like. 
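As a minimal sketch of the interleaved timing described above forFIG.2, the following Python snippet lays out one frame cycle using the 60 fps frame rate and 200 microsecond exposure given as examples earlier; the guard gap between switching the two sources is an illustrative assumption, not a value from the disclosure.

FRAME_RATE_FPS = 60          # example frame rate from the description
EXPOSURE_US = 200            # example CCD exposure time from the description
GUARD_US = 50                # assumed settling gap between switching the two sources

def frame_schedule():
    """Return one frame period with the IR and white LED on-windows (microseconds)."""
    frame_period_us = 1_000_000 // FRAME_RATE_FPS           # about 16,666 us per frame
    ir_window = (0, EXPOSURE_US)                             # IR LED covers the exposure
    white_window = (EXPOSURE_US + GUARD_US,                  # white LED fills the idle time
                    frame_period_us - GUARD_US)
    return frame_period_us, ir_window, white_window

period, ir, white = frame_schedule()
print(f"frame period : {period} us")
print(f"IR LED on    : {ir[0]}-{ir[1]} us (during exposure)")
print(f"white LED on : {white[0]}-{white[1]} us (sensor idle)")

The printout illustrates why the two active cycles can remain mutually exclusive: the exposure occupies only a small fraction of each frame, leaving most of the period available for human-vision illumination.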
The setup information may be used by the control device102when configuring the camera control output signal116, the first illumination control output signal118, and the second illumination control output signal120. The computing device306may also be configured to receive a plurality of images from the camera104and generate the setup information based on processing the plurality of images. The computing device may use a software development kit (SDK) and process the plurality of images using machine vision algorithms that leverage both central processing units (CPUs) and graphics processing units (GPUs). In some embodiments, the LAN302may be an Industrial Ethernet (IE). The LAN302may include standard Ethernet network infrastructure including switches, hubs, and repeaters. In this embodiment, the camera control output signal116, the first illumination control output signal118, and the second illumination control output signal120may be provided using one or more real-time protocols associated with IE, such as the Institute of Electrical and Electronics Engineers (IEEE) 1588-2008 (or later) standard titled “IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems”. The computing device306may also be located directly on the LAN302. In certain embodiments, the computing device306and the control device102may be the same device. FIG.4depicts a block diagram illustrating a server400for providing the computing device306of the system300ofFIG.3in accordance with embodiments of the present disclosure. The server400may include at least one of a processor402, a main memory404, a database406, an enterprise network interface408, and an administration user interface (UI)410. The processor402may be a multi-core server class processor suitable for hardware virtualization. The processor may support at least a 64-bit architecture and a single instruction multiple data (SIMD) instruction set. The main memory404may include a combination of volatile memory (e.g. random access memory) and non-volatile memory (e.g. flash memory). The database406may include one or more hard drives. The enterprise network interface408may provide one or more high-speed communication ports to enterprise switches, routers, and/or network storage appliances. The enterprise network interface408may include high-speed optical Ethernet, InfiniBand (IB), Internet Small Computer System Interface (iSCSI), and/or Fibre Channel interfaces. The administration UI410may support local and/or remote configuration of the server400by a network administrator. FIG.5depicts a block diagram illustrating a personal computer500for providing the computing device306ofFIG.3in accordance with embodiments of the present disclosure. The personal computer500may include at least a processor504, a memory506, a display508, a user interface (UI)510, and a network interface512. The personal computer500may include an operating system such as a Windows® OS, a Macintosh® OS, a Linux® OS, or the like. The memory506may include a combination of volatile memory (e.g. random access memory) and non-volatile memory (e.g. solid state drive and/or hard drives). The display508may be an external display (e.g. computer monitor) or internal display (e.g. laptop). The UI510may include a keyboard and a pointing device (e.g. mouse). The network interface512may be a wired Ethernet interface or a Wi-Fi interface. 
In summary, a first illumination source is configured to provide machine vision illumination for a manufacturing process; and a second illumination source is configured to provide human vision illumination for the manufacturing process. In certain embodiments, the manufacturing process may be a paper manufacturing process. The various techniques described herein may be implemented with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the disclosed embodiments, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed invention. In the case of program code execution on programmable computers, the computer will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device and at least one output device. One or more programs are preferably implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations. The described methods and apparatus may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as an EPROM, a gate array, a programmable logic device (PLD), a client computer, a video recorder or the like, the machine becomes an apparatus for practicing the presently disclosed invention. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates to perform the processing of the presently disclosed invention. While the embodiments have been described in connection with the preferred embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiment for performing the same function without deviating therefrom. Therefore, the disclosed embodiments should not be limited to any single embodiment, but rather should be construed in breadth and scope in accordance with the appended claims.
16,545
11943527
DESCRIPTION OF THE EMBODIMENTS The following describes embodiments of the present invention with reference to the accompanying drawings. The same components are assigned the same reference numerals throughout various embodiments, and duplicative descriptions are omitted. The embodiments may be modified or combined with each other as appropriate. An example of a configuration of an image capturing and display apparatus100according to some embodiments will be described with reference toFIGS.1A and1B.FIG.1Ais a plan view of the image capturing and display apparatus100, andFIG.1Bis a cross-sectional view of the image capturing and display apparatus100. The image capturing and display apparatus100has a pixel region111inside a dotted line110and a peripheral circuit region112outside the dotted line110. A plurality of pixels101are disposed in an array in the pixel region111. A vertical scanning circuit102is disposed in the peripheral circuit region112. In addition, a power supply circuit (not shown) and so on are also disposed in the peripheral circuit region112. Conductive lines103are each disposed for a plurality of pixels that are lined up in a row direction. The pixels101are supplied with control signals from the vertical scanning circuit102via the conductive lines103. InFIG.1A, one conductive line103is shown for each row of pixels. However, if each pixel is to be supplied with a plurality of kinds of control signals, a plurality of conductive lines103are respectively disposed for the control signals. The image capturing and display apparatus100has an upper surface100band a lower surface100athat is opposite the upper surface100b. The upper surface100bmay also be referred to as a top surface, and the lower surface100amay also be referred to as a back surface or a bottom surface. Each pixel101emits, from the upper surface100bof the image capturing and display apparatus100, light of an intensity corresponding to the intensity of incident light entering through the lower surface100aof the image capturing and display apparatus100. Therefore, the lower surface100amay also be referred to as an incidence surface, and the upper surface100bmay also be referred to as a light-emitting surface. A specific example of the configuration of a pixel101will be described with reference to the equivalent circuit diagrams shown inFIGS.2A to4B.FIG.2Ashows an equivalent circuit diagram for a pixel101a, which is one specific example of the pixels101. The pixel101aincludes a photoelectric conversion element201, an amplifier transistor202, a light-emitting element203, a reset transistor204, and a reset transistor205. One end of the photoelectric conversion element201is connected to the gate of the amplifier transistor202, and the other end is connected to ground. The node between the photoelectric conversion element201and the gate of the amplifier transistor202functions as a floating diffusion FD. One primary electrode of the amplifier transistor202is connected to a power supply line through which a voltage VDD is supplied, and the other primary electrode is connected to the light-emitting element203. One end of the light-emitting element203is connected to the amplifier transistor202, and the other end is connected to ground. The light-emitting element203may be connected to another constant voltage source instead of being connected to ground. The floating diffusion FD is connected to a power supply line through which a voltage V1is supplied, via the reset transistor204. 
The gate of the reset transistor204is supplied with a control signal RES1from the vertical scanning circuit102. The node between the light-emitting element203and the amplifier transistor202is connected to a power supply line through which a voltage V2is supplied, via the reset transistor205. The gate of the reset transistor205is supplied with a control signal RES2from the vertical scanning circuit102. The photoelectric conversion element201converts incident light from the outside of the image capturing and display apparatus100(from the lower surface100ain the example shown inFIG.1B) to an electrical charge signal. The photoelectric conversion element201is, for example, a photodiode, an organic photoelectric conversion element, an inorganic photoelectric conversion element, or the like. Examples of the material of the photodiode include silicon, germanium, indium, gallium, arsenic, and so on. Examples of a photoelectric conversion layer include a PN junction type, in which a P-type semiconductor layer and an N-type semiconductor layer are joined to each other, a PIN type, in which a semiconductor layer with a high electrical resistance is sandwiched between a P-type semiconductor layer and a N-type semiconductor layer, an avalanche type, which utilizes avalanche breakdown, and so on. Each pixel101inFIG.1Aincludes a photoelectric conversion element201. Therefore, it can be said that the pixel region111is defined as a region in which a plurality of photoelectric conversion elements201are disposed in an array. An organic photoelectric conversion element has, for example, a structure in which at least one organic thin film layer (organic photoelectric conversion layer) that performs photoelectric conversion is disposed between a pair of electrodes. An organic photoelectric conversion element may have a structure in which there are a plurality of organic thin film layers between a pair of electrodes. An organic photoelectric conversion layer may be made of a single material or a mix of a plurality of materials. An organic thin film layer may be formed through a vacuum deposition process, a coating process, or the like. An inorganic photoelectric conversion element is, for example, a quantum dot photoelectric conversion element that uses a quantum dot thin film layer that contains a fine semiconductor crystal, instead of an organic photoelectric conversion layer, a perovskite photoelectric conversion element that includes a photoelectric conversion layer made of a transition metal oxide or the like that has a perovskite structure, instead of an organic photoelectric conversion layer, or the like. The light-emitting element203emits light of an intensity corresponding to an electrical charge signal acquired by the photoelectric conversion element201. The light-emitting element203is, for example, an inorganic LED (light emitting diode), an organic LED (an OLED, an organic light emitting diode, an organic EL, an organic electroluminescent element), or the like. Examples of the material of an inorganic LED include aluminum, gallium, arsenic, phosphorus, indium, nitrogen, selenium, zinc, diamond, a zinc oxide, a perovskite semiconductor, and so on. A pn junction structure formed using these materials emit light with energy (a wavelength) corresponding to the band gap of the materials. An organic LED includes, for example, a light-emitting layer that contains at least one type of organic light-emitting material between a pair of electrodes. 
An organic LED may include a plurality of light-emitting layers, and may have a structure in which there are a plurality of organic thin film layers. A light-emitting layer may be made of a single material or a mix of a plurality of materials. Light from a light-emitting layer may be fluorescence or phosphorescence, and may be monochromatic light emission (blue, green, red, etc.) or white light emission. An organic thin film layer may be formed through a vacuum deposition process, a coating process, or the like. The amplifier transistor202constitutes an amplifier circuit that amplifies an electrical charge signal acquired by the photoelectric conversion element201. The reset transistor204resets the voltage across the photoelectric conversion element201to the initial state, upon being turned ON. The reset transistor205resets the voltage across the light-emitting element203to the initial state, upon being turned ON. FIG.2Bshows an equivalent circuit diagram for a pixel101b, which is one specific example of the pixels101. The pixel101bis different from the pixel101ain that the pixel101bfurther includes a transfer transistor206between the photoelectric conversion element201and the amplifier transistor202, and may be otherwise the same as the pixel101a. As the transfer transistor206is provided, it is possible to reduce noise, mainly so-called kTC noise, which may be generated in the reset transistor204due to variations in the reset level. The gate of the transfer transistor206is supplied with a control signal TX from the vertical scanning circuit102. That is, the vertical scanning circuit102functions as a driving circuit that generates a control signal TX that switches the transfer transistor206to ON and OFF. The node between the transfer transistor206and the gate of the amplifier transistor202functions as the floating diffusion FD. A signal path used to transmit a signal from the photoelectric conversion element201to the light-emitting element203may further be provided with a buffer circuit (not shown) between the floating diffusion FD and the amplifier transistor202. With a buffer circuit, it is possible to suppress the influence of a noise charge that may be generated due to a contact connection between silicon and metal. FIG.2Cshows an equivalent circuit diagram for a pixel101c, which is one specific example of the pixels101. The pixel101cis different from the pixel101bin that the pixel101cfurther includes a buffer circuit207and a reset transistor208, and may be otherwise the same as the pixel101b. One end of the buffer circuit207is connected to the amplifier transistor202, and the other end is connected to the light-emitting element203. The node between the light-emitting element203and the buffer circuit207is connected to a power supply line through which a voltage V3is supplied, via the reset transistor208. The gate of the reset transistor208is supplied with a control signal RES3from the vertical scanning circuit102. The buffer circuit207corrects a current (an electrical charge signal) flowing from the amplifier transistor202to the light-emitting element203. The buffer circuit207is, for example, a gamma correction circuit, a two-dimensional shading correction circuit, a variation correction circuit, a feedback reset circuit, or the like. FIG.3Ashows an equivalent circuit diagram for a pixel101d, which is one specific example of the pixels101.
The pixel101dis different from the pixel101bin that the pixel101dfurther includes a capacitor element301and a reset transistor302, and may be otherwise the same as the pixel101b. The capacitor element301is connected between the transfer transistor206and the amplifier transistor202. The node between the transfer transistor206and the capacitor element301functions as the floating diffusion FD. The node between the capacitor element301and the amplifier transistor202is connected to a power supply line that supplies a voltage V4, via the reset transistor302. The gate of the reset transistor302is supplied with a control signal RES4from the vertical scanning circuit102. As the capacitor element301is provided, it is possible to supply the amplifier transistor202with, as a signal, a change in the voltage across the capacitor element301caused by an electrical charge transferred to the floating diffusion FD. As a result, it is possible to set the power supply voltage of the photoelectric conversion element201and the power supply voltage of the light-emitting element203so as to be independent of each other. FIG.3Bshows an equivalent circuit diagram for a pixel101e, which is one specific example of the pixels101. The pixel101eis different from the pixel101bin that the pixel101efurther includes a capacitor element303and a capacitance adding transistor304, and may be otherwise the same as the pixel101b. One end of the capacitor element303is connected to the capacitance adding transistor304, and the other end is connected to ground. The capacitance adding transistor304is connected between the capacitor element303and the floating diffusion FD. The gate of the capacitance adding transistor304is supplied with a control signal ADD from the vertical scanning circuit102. Upon the capacitance adding transistor304being turned ON, the capacitance of the capacitor element303is added to the capacitance of the floating diffusion FD. Thus, the electrical charge signal to be supplied to the light-emitting element203is changed. FIG.4Ashows an equivalent circuit diagram for a pixel101f, which is one specific example of the pixels101. The pixel101fis different from the pixel101bin that the pixel101ffurther includes a clip transistor401, and may be otherwise the same as the pixel101b. The node between the light-emitting element203and the amplifier transistor202is connected to a power supply line through which a voltage V6is supplied, via the clip transistor401. The gate of the clip transistor401is connected to a power supply line through which a voltage V7is supplied. If light of an intensity that is significantly higher than the saturation level of the photoelectric conversion element201is incident to the image capturing and display apparatus100, there is the possibility of the voltage level of the floating diffusion FD or the light-emitting element203dropping below a voltage level that allows for normal operation. To suppress such a voltage level drop, the pixel101fincludes a clip circuit. The reset transistor204functions as a clip circuit for the floating diffusion FD. Also, the clip transistor401functions as a clip circuit for the node between the light-emitting element203and the amplifier transistor202. As the clip transistor401is included in the pixel101f, it is possible to prevent an unexpected voltage from being applied to the light-emitting element203. In the pixel101f, a voltage that is applied to the clip transistor401is shared in each row or region.
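By way of illustration only, the following numerical sketch is not part of the disclosed embodiments; the capacitance values, the charge value, the clip level, and the function name are hypothetical assumptions chosen to show two effects described above: connecting the added capacitor to the floating diffusion FD reduces the voltage swing produced by a given transferred charge (pixel 101e), and a clip level keeps a node from dropping below a voltage level that allows for normal operation (pixel 101f).

    # Illustrative sketch only; all values and names are hypothetical assumptions.
    def fd_voltage(v_reset, charge_coulombs, c_fd, c_add=0.0, add_on=False, v_clip=None):
        """Voltage at the floating diffusion FD after a charge transfer.

        With the capacitance adding transistor (control signal ADD) ON, the added
        capacitor is connected in parallel with the FD capacitance, so the same
        charge produces a smaller voltage swing. If a clip level is given, the
        node is prevented from dropping below it, as a clip circuit would do.
        """
        total_c = c_fd + (c_add if add_on else 0.0)
        voltage = v_reset - charge_coulombs / total_c
        if v_clip is not None:
            voltage = max(voltage, v_clip)
        return voltage

    Q = 1.6e-15  # roughly 10,000 electrons (hypothetical)
    print(fd_voltage(3.3, Q, c_fd=2e-15))                            # 2.5 V
    print(fd_voltage(3.3, Q, c_fd=2e-15, c_add=6e-15, add_on=True))  # 3.1 V
    print(fd_voltage(3.3, 10 * Q, c_fd=2e-15, v_clip=1.0))           # clipped at 1.0 V

A smaller swing per unit charge is why turning the capacitance adding transistor 304 ON changes the electrical charge signal that is supplied to the light-emitting element 203.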
FIG.4Bshows an equivalent circuit diagram for a pixel101g, which is one specific example of the pixels101. The pixel101gis different from the pixel101bin that the pixel101gfurther includes a clip transistor402, and may be otherwise the same as the pixel101b. The node between the light-emitting element203and the amplifier transistor202is connected to a power supply line through which a voltage V8is supplied, via the clip transistor402. The clip transistor402is connected to the node between the light-emitting element203and the amplifier transistor202. The clip transistor402functions as a clip circuit for the node between the light-emitting element203and the amplifier transistor202. In each pixel101g, a clip operation is performed according to the voltage at the input unit of the light-emitting element203. Each of the specific examples of pixels shown inFIGS.2A to4Bincludes the reset transistors204and205and the amplifier transistor202. Alternatively, at least one of the reset transistors204and205and the amplifier transistor202may be omitted. The configurations of the pixels are not limited to those shown inFIGS.2A to4B, and a circuit element may be added as appropriate, or the configurations may be combined. For example, it is possible to make a modification such as sharing a circuit element between a plurality of light-receiving elements or a plurality of light-emitting elements or providing a selection switch and connecting a circuit element to any of a plurality of elements. Next, an example of operations of the image capturing and display apparatus100will be described with reference to the timing charts shown inFIGS.5A and5B. InFIGS.5A and5B, RES1, RES2, and TX denote levels of control signals that are generated by the vertical scanning circuit102, V_FD denotes the value of a voltage at the floating diffusion FD, and I_AN denotes the value of a current flowing through the light-emitting element203. FIG.5Ais a timing chart in the case where the pixels101of the image capturing and display apparatus100are the pixels101a. The reset transistor204is ON when the control signal RES1is at the high level, and is OFF when the control signal RES1is at the low level. The relationship between the reset transistor205and the control signal RES2is the same as above. Before a time point t11, both of the reset transistors204and205are ON, and therefore the voltage at the floating diffusion FD has been reset to the voltage V1and the voltage at the node between the amplifier transistor202and the light-emitting element203has been reset to the voltage V2. The voltage V1and the voltage V2may be the same value. At the time point t11, the vertical scanning circuit102switches the control signals RES1and RES2from ON to OFF. As a result, the photoelectric conversion element201generates an electrical charge signal corresponding to the intensity of incident light. The electrical charge thus generated may be an electron or a hole. The voltage at the floating diffusion FD changes according to the value of the electrical charge signal thus generated. A current corresponding to the amount of change in the voltage at the floating diffusion FD flows between the source and the drain of the amplifier transistor202. This current is supplied to the light-emitting element203, and thus the light-emitting element203emits light of an intensity corresponding to the amount of current. Thereafter, at a time point t12, the vertical scanning circuit102switches the control signals RES1and RES2from OFF to ON.
As a result, the voltage at the floating diffusion FD is reset to the voltage V1, and the voltage at the node between the amplifier transistor202and the light-emitting element203is reset to the voltage V2. FIG.5Bis a timing chart in the case where the pixels101of the image capturing and display apparatus100are the pixels101b. The transfer transistor206is ON when the control signal TX is at the high level, and is OFF when the control signal TX is at the low level. Before a time point t21, both of the reset transistors204and205are ON, and therefore the voltage at the floating diffusion FD has been reset to the voltage V1and the voltage at the node between the amplifier transistor202and the light-emitting element203has been reset to the voltage V2. The voltage V1and the voltage V2may be the same value. At the time point t21, the vertical scanning circuit102switches the control signals RES1and RES2from ON to OFF. As a result, the photoelectric conversion element201generates an electrical charge signal corresponding to the intensity of incident light, and stores the electrical charge signal in the photoelectric conversion element201. Also, the voltage at the floating diffusion FD changes according to noise. At a time point t22, the vertical scanning circuit102switches the control signal TX from OFF to ON. As a result, the electrical charge stored in the photoelectric conversion element201is transferred to the floating diffusion FD, and a current corresponding to the amount of change in the voltage at the floating diffusion FD flows between the source and the drain of the amplifier transistor202. This current is supplied to the light-emitting element203, and thus the light-emitting element203emits light of an intensity corresponding to the amount of current. At a time point t23, the vertical scanning circuit102switches the control signal TX from ON to OFF, and thereafter, at a time point t24, the vertical scanning circuit102switches the control signals RES1and RES2from OFF to ON. As a result, the resetting operations described with reference toFIG.5Aare performed. Even if the pixels101of the image capturing and display apparatus100are any of the pixels101cto101g, processing may be performed according to the timing chart shown inFIG.5B. In such cases, the timing with which the vertical scanning circuit102switches the control signals RES3and RES4may also be the same as the timing with which the vertical scanning circuit102switches the control signals RES1and RES2. Therefore, the same conductive line may be shared to supply the control signals RES1to RES4to one pixel. In other words, the vertical scanning circuit102may supply control signals to the respective gates of the reset transistors204,205,208, and302through a common conductive line. With this configuration, the number of conductive lines can be reduced, and the flexibility in laying out the pixels can be increased. The vertical scanning circuit102may switch the level of the control signal RES1that is supplied to the reset transistor204of each of the plurality of rows of pixels, at the same time. That is, the vertical scanning circuit102may switch the respective reset transistors204of the pixels101included in the pixel region to ON or OFF all at once, thereby resetting the voltages at the respective floating diffusions FD all at once (at the same time). The same applies to the control signals RES2to RES4. 
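As a reading aid only (not part of the disclosed embodiments; the class and method names are hypothetical assumptions), the drive sequence of FIG. 5B described above can be summarized by the following sketch, which steps one pixel 101b through the time points t21 to t24.

    # Illustrative sketch of the FIG. 5B sequence for one pixel 101b.
    # Signal levels: True = high level (transistor ON). Names are hypothetical.
    class Pixel101bSequenceModel:
        def __init__(self):
            self.res1 = True   # reset transistor 204: FD held at V1
            self.res2 = True   # reset transistor 205: light-emitting node held at V2
            self.tx = False    # transfer transistor 206
            self.events = []

        def run_frame(self):
            # t21: release the resets; the photoelectric conversion element
            # accumulates charge, and FD changes only with noise.
            self.res1, self.res2 = False, False
            self.events.append("t21: accumulate charge")
            # t22: transfer the stored charge to FD; the current corresponding
            # to the FD voltage change drives the light-emitting element.
            self.tx = True
            self.events.append("t22: transfer charge, emit light")
            # t23: end of the transfer.
            self.tx = False
            self.events.append("t23: end transfer")
            # t24: reset FD to V1 and the light-emitting node to V2.
            self.res1, self.res2 = True, True
            self.events.append("t24: reset")
            return self.events

    print(Pixel101bSequenceModel().run_frame())

When all of the pixels are driven with the same timing, as described above for the reset transistors and in the following paragraph for the transfer transistors, this sequence simply runs for every pixel in the pixel region at once.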
Similarly, the vertical scanning circuit102may switch the level of the control signal TX that is supplied to the transfer transistor206of each of the plurality of rows of pixels, at the same time. That is, the vertical scanning circuit102may switch the respective transfer transistors206of the pixels101included in the pixel region to ON or OFF all at once, thereby transferring the electrical charge signals stored in the respective photoelectric conversion elements201, all at once (at the same time). Thus, as a result of the vertical scanning circuit102driving all of the pixels in the pixel region with the same timing, the refresh rate of the image capturing and display apparatus100can be improved. In any of the above-described pixels101ato101g, signal paths for transmitting signals from a plurality of photoelectric conversion elements201to a plurality of light-emitting elements203lie within the pixel region111. Therefore, compared to a case where a signal acquired by the photoelectric conversion element201is read out of the pixel region, and an image is displayed after data processing is performed on an image acquired from the pixel array, it is possible to shorten the period of time from when light enters the photoelectric conversion element201to when the light-emitting element203emits light. Next, an example of the configuration of the pixel101bwill be described with reference to the cross-sectional view shown inFIG.6. The image capturing and display apparatus100includes two substrates600and650that are stacked on each other. The substrate600includes a photoelectric conversion element, and the substrate650includes a light-emitting element. The substrate600includes a semiconductor layer601, an insulation layer604, a color filter layer611, and a microlens612. The semiconductor layer601includes a plurality of impurity regions, including impurity regions602and603. The impurity region602constitutes the photoelectric conversion element201. The impurity region603functions as the floating diffusion FD. In addition, the plurality of impurity regions also include regions that constitute the reset transistor204. The color filter layer611is in a Bayer arrangement, for example, and breaks incident light up into colors red, green, and blue. Instead of the color filter layer611, an element that performs photoelectric conversion on red light, an element that performs photoelectric conversion on green light, and an element that performs photoelectric conversion on blue light may be disposed in the pixel region111. Electrodes605and606, a conductive pattern607, plugs608and609, and a connection portion610are formed in the insulation layer604. The electrode605functions as the gate of the transfer transistor206. The electrode606functions as the gate of the reset transistor204. The floating diffusion FD is connected to the connection portion610via the plug608, a portion of the conductive pattern607, and the plug609. As described above, the photoelectric conversion element201performs photoelectric conversion on incident light entering through the lower surface100aof the image capturing and display apparatus100. That is, the substrate600is a back side illumination type substrate. Alternatively, the substrate600may be formed as a top side illumination type substrate. The substrate650includes a semiconductor layer651and an insulation layer653. The semiconductor layer651includes a plurality of impurity regions, including an impurity region652. 
The impurity region652functions as the node between the amplifier transistor202and the light-emitting element203. The plurality of impurity regions also include regions that constitute the reset transistor205. Electrodes654and655, a conductive pattern656, a plug658, a light-emitting layer657, and a connection portion659are formed in the insulation layer653. The light-emitting layer657constitutes the light-emitting element203. A light-emitting layer657that emits red light, a light-emitting layer657that emits green light, and a light-emitting layer657that emits blue light may be disposed in the pixel region111. Alternatively, the substrate650may include a color filter on the light-emitting layer657, and the color filter may convert white light emitted from the light-emitting layer657into the individual colors. The electrode654functions as the gate of the amplifier transistor202. The electrode655functions as the gate of the reset transistor205. A portion of the conductive pattern656is connected to the connection portion659via the plug658. The connection portion610and the connection portion659may be directly connected to each other, or connected via a micro bump. If the substrates600and650have a thickness of several hundred micrometers or smaller, a supporting substrate may be attached to each of the substrates600and650in order to ensure the strength of the image capturing and display apparatus100. For example, a transparent supporting substrate that is formed using glass, plastic, quartz, or the like may be attached to at least one of the lower side of the substrate600and/or the upper side of the substrate650. The supporting substrate may be attached by using a device layer transfer method. According to this method, for example, the substrate600is generated on a base with a porous structure region therebetween, and thereafter the substrate600is separated so as to be disposed on the transparent supporting substrate. In the structure shown inFIG.6, the image capturing and display apparatus100is constituted by two substrates600and650that are stacked on each other. Alternatively, the constituent elements shown inFIG.6may be formed using a single semiconductor substrate through a series of processes. In this case, a penetrating electrode that connects a portion of the conductive pattern656and a portion of the conductive pattern607to each other is formed. InFIG.6, the photoelectric conversion element201, the floating diffusion FD, the reset transistor204, and the transfer transistor206are formed in the substrate600, and the amplifier transistor202, the reset transistor205, and the light-emitting element203are formed in the substrate650. Alternatively, the floating diffusion FD, the reset transistor204, and the transfer transistor206may be formed in the substrate650. With this arrangement, it is possible to increase the area of the impurity region602in the photoelectric conversion element201formed in the substrate600. In the structure shown inFIG.6, the photoelectric conversion element201receives incident light from the lower side of the image capturing and display apparatus100, and the light-emitting element203emits light toward the upper side of the image capturing and display apparatus100. Therefore, the photoelectric conversion element201and the light-emitting element203have a positional relationship in which they do not interfere with each other. 
In this way, the photoelectric conversion element201and the light-emitting element203are disposed on different surfaces, and thus light from the light-emitting element203and light from the outside can be separated from each other at the photoelectric conversion element201, and a large number of pixels can be disposed in the pixel region111. Next, an example of a configuration of an image capturing and display apparatus700according to some embodiments will be described with reference toFIGS.7A and7B.FIG.7Ais a cross-sectional view of the image capturing and display apparatus700. A plan view of the image capturing and display apparatus700may be similar to the plan view of the image capturing and display apparatus100shown inFIG.1A, and therefore it is omitted. The image capturing and display apparatus700has a pixel region711inside a dotted line710and a peripheral circuit region712outside the dotted line710. A plurality of pixels701are disposed in an array in the pixel region711. A vertical scanning circuit102is disposed in the peripheral circuit region712. In addition, a power supply circuit (not shown) and so on are also disposed in the peripheral circuit region712. Conductive lines103are each disposed for a plurality of pixels that are lined up in a row direction. The pixels701are supplied with control signals from the vertical scanning circuit102via the conductive lines103. The image capturing and display apparatus700has an upper surface700band a lower surface700athat is opposite the upper surface700b. Each pixel701emits, from the upper surface700bof the image capturing and display apparatus700, light of an intensity corresponding to the intensity of incident light entering through the upper surface700bof the image capturing and display apparatus700. Therefore, the upper surface700bmay also be referred to as an incidence surface and a light-emitting surface. FIG.7Bshows a cross-sectional view that focuses on one pixel701. An equivalent circuit diagram for the pixel701may be similar to the equivalent circuit diagram for the pixel101.FIG.7Billustrates a case in which the pixel701is the pixel101bshown inFIG.2B. The image capturing and display apparatus700may be formed using a single substrate. The image capturing and display apparatus700includes a semiconductor layer750, an insulation layer757, a color filter layer762, and a microlens763. The semiconductor layer750includes a plurality of impurity regions, including impurity regions751and753. The impurity region751constitutes the photoelectric conversion element201. The impurity region753functions as the floating diffusion FD. In addition, the plurality of impurity regions also include regions that constitute the reset transistors204and205and the amplifier transistor202. By including a color filter for the photoelectric conversion element and a color filter for the light-emitting element in the same color filter layer762, it is possible to reduce the influence of variations in the thickness of the filters, and improve image quality. Electrodes752,754,755, and756, a conductive pattern759, a light-shielding portion761, a light waveguide758, and a light-emitting layer760are formed in the insulation layer757. The electrode752functions as the gate of the transfer transistor206. The electrode754functions as the gate of the reset transistor204. The electrode755functions as the gate of the amplifier transistor202. The electrode756functions as the gate of the reset transistor205. The light-emitting layer760constitutes the light-emitting element203. 
The light-shielding portion761is located between the light-emitting layer760and the impurity region751, and prevents light emitted from the light-emitting layer760from reaching the impurity region751. The light-shielding portion761is formed using metal or a polarizing member, for example. The light waveguide758collects incident light entering the insulation layer757via the microlens763, to the impurity region751. The color filter layer762, the microlens763, the light waveguide758, and so on may be omitted. As described above, the photoelectric conversion element201performs photoelectric conversion on incident light entering through the upper surface700bof the image capturing and display apparatus700. That is, the substrate of the image capturing and display apparatus700is a top side illumination type substrate. Alternatively, the substrate of the image capturing and display apparatus700may be formed as a back side illumination type substrate. FIGS.8A to8Cillustrate specific examples of planar layouts of impurity regions751and light-emitting layers760.FIGS.8A to8Ceach show the positions of impurity regions751and light-emitting layers760in the upper surface700bof the image capturing and display apparatus700in plan view, and focus on nine pixels701arranged in three rows and three columns. In the layout shown inFIG.8A, rectangular impurity regions751and rectangular light-emitting layers760are arranged. In the layout shown inFIG.8B, rectangular (e.g. square) impurity regions751and L-shaped light-emitting layers760are arranged. In the layout shown inFIG.8C, frame-shaped light-emitting layers760are respectively positioned around rectangular (e.g. square) impurity regions751. In this way, in any of the layouts, the impurity regions751and the light-emitting layers760do not overlap each other in plan view of the upper surface700bof the image capturing and display apparatus700. In the above-described image capturing and display apparatus100, the impurity regions602that constitute the photoelectric conversion elements201and the light-emitting layers657that constitute the light-emitting elements203may be disposed so as not to overlap each other in plan view of the upper surface100bof the image capturing and display apparatus100. A long-wavelength component of incident light to the impurity region602easily passes through a silicon layer. Such an arrangement makes it possible to prevent light passing through the impurity region602from having an influence on the light-emitting layer760. Next, an example of a configuration of an image capturing and display apparatus900according to some embodiments will be described with reference toFIG.9. A plan view and a cross-sectional view of the image capturing and display apparatus900may be similar to the plan view and the cross-sectional view of the image capturing and display apparatus100shown inFIG.1A, and therefore they are omitted.FIG.9is an equivalent circuit diagram for four pixels arranged in two rows and two columns. A plurality of pixels901are disposed in an array in the pixel region of the image capturing and display apparatus900. The pixels901are different from the pixels101of the image capturing and display apparatus100in that the pixels901each include a switch transistor902and a switch transistor903. Furthermore, in the pixel region, a light-emitting element904and a buffer circuit905are disposed for every four pixels901that are arranged in two rows and two columns.
In this way, the image capturing and display apparatus900includes, in the pixel region, light-emitting elements203that are disposed at the same pitch as the photoelectric conversion element201, and light-emitting elements904that are disposed at a pitch different from the aforementioned pitch. InFIG.9, a light-emitting element904and a buffer circuit905are disposed for every four pixels901. However, they may be disposed for every set of pixels901that includes another number of pixels901.FIG.9illustrates points that differ from the pixel101bshown inFIG.2B. However, such differences may be applied to other pixels shown inFIGS.2A to4B. The switch transistor902is connected between the amplifier transistor202and the light-emitting element203. The gate of the switch transistor902is supplied with a control signal CH1from the vertical scanning circuit102. The switch transistor903is connected between the amplifier transistor202and the buffer circuit905. The gate of the switch transistor903is supplied with a control signal CH2from the vertical scanning circuit102. The light-emitting element904is connected between the buffer circuit905and ground. The configuration and the functions of the buffer circuit905may be similar to those of the buffer circuit207. The configurations and the functions of the light-emitting element904may be similar to those of the light-emitting element203. The light-emitting element904may produce colors using the same method as the light-emitting element203, or a different method. For example, the light-emitting element203may emit light of each color using a color filter, and the light-emitting element904may emit light of each color by itself. The image capturing and display apparatus900can operate in two modes, namely a high-resolution mode and a low-resolution mode. In the high-resolution mode, the vertical scanning circuit102sets the control signal CH1to the high level to turn ON the switch transistor902of each pixel901, and sets the control signal CH2to the low level to turn OFF the switch transistor903of each pixel901. In this case, signals acquired by the photoelectric conversion elements201of the pixels901are supplied to the light-emitting elements203, and the light-emitting elements203emit light. On the other hand, no signals are supplied to the light-emitting elements904, and therefore the light-emitting elements904do not emit light. In this way, in the high-resolution mode, one light-emitting element203emits light for each pixel901. In the low-resolution mode, the vertical scanning circuit102sets the control signal CH1to the low level to turn OFF the switch transistor902of each pixel901, and sets the control signal CH2to the high level to turn ON the switch transistor903of each pixel901. In this case, signals acquired by the respective photoelectric conversion elements201of four pixels901are integrated into one, and are thus supplied to the light-emitting element904, and the light-emitting element904emits light. On the other hand, no signals are supplied to the light-emitting elements203, and therefore the light-emitting elements203do not emit light. In this way, in the low-resolution mode, one light-emitting element904emits light for every four pixels901. In this way, the switch transistors902and903function as switch elements that switch the transmission destination of signals acquired by one photoelectric conversion element201to a light-emitting element203or the light-emitting element904. 
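The following sketch is an illustration only (the function names and the use of a simple sum for integrating the four signals are hypothetical assumptions, not taken from the disclosure); it summarizes how the control signals CH1 and CH2 select the transmission destination of the pixel signals in the high-resolution and low-resolution modes described above.

    # Illustrative sketch only; names and the summing step are hypothetical.
    HIGH_RESOLUTION = "high"
    LOW_RESOLUTION = "low"

    def control_levels(mode):
        """Return (CH1, CH2) levels for the requested mode (True = high level)."""
        if mode == HIGH_RESOLUTION:
            return True, False   # switch transistor 902 ON, switch transistor 903 OFF
        return False, True       # switch transistor 902 OFF, switch transistor 903 ON

    def route_signals(mode, signals_2x2):
        """Route the signals of a 2 x 2 block of pixels 901.

        In the high-resolution mode each signal drives its own light-emitting
        element 203; in the low-resolution mode the four signals are integrated
        into one and drive the shared light-emitting element 904.
        """
        ch1, ch2 = control_levels(mode)
        if ch1:
            return {"elements_203": list(signals_2x2), "element_904": None}
        return {"elements_203": None, "element_904": sum(signals_2x2)}

    print(route_signals(HIGH_RESOLUTION, [0.2, 0.4, 0.1, 0.3]))
    print(route_signals(LOW_RESOLUTION, [0.2, 0.4, 0.1, 0.3]))

The region-based operation described next amounts to applying different (CH1, CH2) levels to different portions of the pixel region.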
The image capturing and display apparatus900may further be configured to cause pixels901included in a portion of the pixel region to operate in the low-resolution mode, and pixels901included in another region to operate in the high-resolution mode. Specifically, the image capturing and display apparatus900may be configured to set the control signal CH2for pixels901included in the region that is desired to operate in the low-resolution mode to the high level, and set the control signal CH1for pixels901included in the region that is desired to operate in the high-resolution mode to the high level. FIG.10is a cross-sectional view focusing on a portion of the pixel region of the image capturing and display apparatus900. An impurity region1001constitutes the photoelectric conversion element201. A light-emitting layer1002constitutes the light-emitting element203. A light-emitting layer1003constitutes the light-emitting element904. The impurity region1001performs photoelectric conversion on incident light entering through the lower surface of the image capturing and display apparatus900, and the light-emitting layer1002and the light-emitting layer1003each emit light toward the upper surface of the image capturing and display apparatus900. The light-emitting layer1003is disposed over four pixels that are arranged in two rows and two columns. An example of a configuration of an image capturing and display apparatus1100according to some embodiments will be described with reference toFIG.11.FIG.11is a plan view of the image capturing and display apparatus1100. A cross-sectional view of the image capturing and display apparatus1100is similar to the cross-sectional view of the image capturing and display apparatus100shown inFIG.1B, and therefore it is omitted. The image capturing and display apparatus1100has a pixel region inside a dotted line1110and a peripheral circuit region outside the dotted line1110. A plurality of pixels1101are disposed in an array in the pixel region. A vertical scanning circuit1102, a horizontal scanning circuit1105, and a control circuit1106are disposed in the peripheral circuit region. In addition, a power supply circuit (not shown) and so on are also disposed in the peripheral circuit region. Conductive lines1103are each disposed for a plurality of pixels that are lined up in a row direction. The pixels1101are supplied with control signals from the vertical scanning circuit1102via the conductive lines1103. InFIG.11, one conductive line1103is shown for each row of pixels. However, if each pixel is to be supplied with a plurality of kinds of control signals, a plurality of conductive lines1103are respectively disposed for the control signals. Conductive lines1104are each disposed for a plurality of pixels that are lined up in a column direction. Signals are read out from the pixels1101to the horizontal scanning circuit1105via the conductive lines1104. The control circuit1106controls operations of the vertical scanning circuit1102based on signals read out to the horizontal scanning circuit1105. A specific example of the configuration of a pixel1101will be described with reference to the equivalent circuit diagram shown inFIG.12A. The pixels1101includes two sets that each include the photoelectric conversion element201, the amplifier transistor202, the reset transistor204, and the transfer transistor206of the pixel101bshown inFIG.2B. 
One primary electrode of the amplifier transistor202in one set is connected to the reset transistor205and the light-emitting element203as in the pixel101b. One primary electrode of the amplifier transistor202in the other set is connected to the conductive lines1104. AlthoughFIG.12Aillustrates a configuration based on the example shown inFIG.2B, each set may have a configuration of the specific example of any of the other pixels shown inFIGS.2A to4B. In the image capturing and display apparatus1100, the light-emitting element203emits light corresponding to an electrical charge signal acquired by one photoelectric conversion element201, while the horizontal scanning circuit1105reads out a signal acquired by the other photoelectric conversion element201. The control circuit1106may generate image data from signals read out by the horizontal scanning circuit1105, and store the image data. One photoelectric conversion element201may detect visible light, and the other photoelectric conversion element201may detect infrared light. FIGS.12B and12Care cross-sectional views each focusing on one pixel1101of the image capturing and display apparatus1100. The image capturing and display apparatus1100includes two substrates1201and1202that are stacked on each other. Each substrate may be a top side illumination type substrate or a back side illumination type substrate. The substrate1201includes an impurity region1205that constitutes a photoelectric conversion element. The substrate1202includes an impurity region1204that constitutes a photoelectric conversion element, and a light-emitting layer1203that constitutes a light-emitting element. The impurity region1204and the impurity region1205may respectively constitute the upper photoelectric conversion element201and the lower photoelectric conversion element201shown inFIG.12A, or the other way around. In the embodiment shown inFIG.12B, both the impurity regions1204and1205perform photoelectric conversion on incident light from the upper side of the image capturing and display apparatus1100, and the light-emitting layer1203emits light toward the upper side of the image capturing and display apparatus1100. In the embodiment shown inFIG.12C, the impurity region1204performs photoelectric conversion on incident light from the upper side of the image capturing and display apparatus1100, the impurity region1205performs photoelectric conversion on incident light from the lower side of the image capturing and display apparatus1100, and the light-emitting layer1203emits light toward the upper side of the image capturing and display apparatus1100. In the example shown inFIG.12C, the impurity region1204constitutes the lower photoelectric conversion element201. When the image capturing and display apparatus1100is mounted on a wearable device such as a pair of glasses, the upper surface of the image capturing and display apparatus1100is located so as to face the eyes of the user. Therefore, the control circuit1106may perform gaze detection using image data acquired by the lower photoelectric conversion element201. Using components of the image capturing and display apparatus900, the control circuit1106may display a region where the user's gaze is detected in the pixel region of the image capturing and display apparatus1100at high resolution, and display the other region at low resolution. Also, the impurity region1205may be formed in a large portion of the substrate1201(e.g. 80% or more of the surface of the substrate in plan view). 
Also, the light-emitting layer1203may be formed in the substrate1201and the substrate1202may be removed where appropriate. In the above-described various embodiments, a photoelectric conversion element201and a light-emitting element203correspond one to one to each other. Alternatively, two or more photoelectric conversion elements201and one light-emitting element203may correspond to each other. Specifically, light of an intensity that is based on the sum of electrical charge signals acquired by two or more photoelectric conversion elements201may be emitted by one light-emitting element203. Furthermore, one photoelectric conversion element201and two or more light-emitting element203may correspond to each other. Specifically, light of an intensity that is based on an electrical charge signal acquired by one photoelectric conversion element201may be separately emitted by two or more light-emitting elements203. Furthermore, the correspondence between photoelectric conversion element(s)201and light-emitting element(s)203may be a mix of the above. For example, an electrical charge signal acquired by one photoelectric conversion element201that detects blue light may be separately supplied to two or more light-emitting elements203that emit blue light. Furthermore, in the same image capturing and display apparatus, electrical charge signals acquired by two or more photoelectric conversion elements201that detect green light may be integrated into one and supplied to one light-emitting element203that emits green light. Such configurations realize light-reception and light-emission by an optimum array for each color. Application examples of image capturing and display apparatuses according to the above-described embodiments will be described with reference toFIGS.13A to13C. The image capturing and display apparatuses can be applied to a wearable device such as smart glasses, an HMD, or smart contact lenses. An image capturing and display apparatus that can be used in such an application example includes a photoelectric conversion element that can perform photoelectric conversion on visible light, and a light-emitting element that can emit visible light. FIG.13Aillustrates a pair of glasses1300(smart glasses) according to one application example. An image capturing and display apparatus1302is mounted on a lens1301of the pair of glasses1300. The image capturing and display apparatus1302may be the above-described image capturing and display apparatus100, for example. The pair of glasses1300also includes a control apparatus1303. The control apparatus1303functions as a power supply that supplies power to the image capturing and display apparatus1302, and also controls operations of the image capturing and display apparatus1302. An optical system for collecting light onto the image capturing and display apparatus1302is formed in the lens1301. FIG.13Billustrates a pair of glasses1310(smart glasses) according to one application example. The pair of glasses1310includes a control apparatus1312, and an image capturing and display apparatus is mounted in the control apparatus1312. This image capturing and display apparatus may be the above-described image capturing and display apparatus100, for example. An optical system for projecting light emitted from the image capturing and display apparatus in the control apparatus1312is formed in a lens1311. The optical system projects an image onto the lens1311upon receiving this light. 
The control apparatus1312functions as a power supply that supplies power to the image capturing and display apparatus, and also controls operations of the image capturing and display apparatus. FIG.13Cillustrates a contact lens1320(a smart contact lens) according to one application example. An image capturing and display apparatus1321is mounted on a contact lens1320. The image capturing and display apparatus1321may be the above-described image capturing and display apparatus100, for example. The contact lens1320also includes a control apparatus1322. The control apparatus1322functions as a power supply that supplies power to the image capturing and display apparatus1321, and also controls operations of the image capturing and display apparatus1321. An optical system for collecting light onto the image capturing and display apparatus1321is formed in the contact lens1320. The image capturing and display apparatuses are also applicable to a night vision apparatus, a monitoring apparatus, binoculars, a telescope, a medical detector, and so on. An image capturing and display apparatus that can be used in such an application example includes a photoelectric conversion element that can perform photoelectric conversion on visible light and light other than visible light (ultraviolet light, infrared light, and so on), and a light-emitting element that can emit visible light. In such an application example, light that cannot be easily perceived by human eyes is displayed as visible light. The image capturing and display apparatuses are also applicable to monitoring and a security apparatus. An image capturing and display apparatus that can be used in such an application example includes a photoelectric conversion element that can perform photoelectric conversion on visible light, and a light-emitting element that can emit light other than visible light (ultraviolet light, infrared light, and so on). With such an application example, information regarding a subject can be made invisible. In a device according to any of the application examples, the period of time from light reception to light emission is short in the image capturing and display apparatuses according to the embodiments. Therefore, the user can use the device without feeling that something is amiss. The above-described embodiments make it easier to apply the image capturing and display apparatus to another apparatus. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
51,444
11943528
DESCRIPTION OF THE EMBODIMENTS Embodiments of the present disclosure will herein be described with reference to the drawings. The configurations described in the following embodiments are only examples and the present disclosure is not limited to the illustrated configurations. Embodiment FIG.1is a diagram illustrating an example of the configuration of a system in an embodiment. The system of the present embodiment includes an imaging apparatus100, an information processing apparatus200, a display210, and a network300. The imaging apparatus100and the information processing apparatus200are connected to each other over the network300. The network300is realized by multiple routers, switches, cables, and so on conforming to a communication standard, such as Ethernet (registered trademark). The network300may be realized by the Internet, a wired local area network (LAN), a wireless LAN, a wide area network (WAN), or the like. The imaging apparatus100is an apparatus that captures an image and functions as an imaging apparatus capable of changing the imaging range. The imaging apparatus100transmits image data about an image that is captured, information about an imaging date and time when the image is captured, identification information for identifying the imaging apparatus100, and information about the imaging range of the imaging apparatus100to an external apparatus, such as the information processing apparatus200, over the network300. The information processing apparatus200is, for example, a client apparatus, such as a personal computer, in which programs for realizing the functions of processes described below are installed. Although one imaging apparatus100is provided in the system in the present embodiment, multiple imaging apparatuses100may be provided in the system. Specifically, the multiple imaging apparatuses100may be connected to the information processing apparatus200over the network300. In this case, for example, the information processing apparatus200determines which imaging apparatus100, among the multiple imaging apparatuses100, has captured the transmitted image using the identification information associated with the transmitted image. The display210is composed of a liquid crystal display (LCD) or the like and displays the image captured by the imaging apparatus100and so on. The display210is connected to the information processing apparatus200via a display cable conforming to a communication standard, such as high-definition multimedia interface (HDMI) (registered trademark). The display210and the information processing apparatus200may be provided in a single casing. The imaging apparatus100according to the present embodiment will now be described with reference toFIG.2andFIG.3,FIG.2is an example of the external view of the imaging apparatus100according to the present embodiment.FIG.3is a diagram illustrating an example of the functional blocks of the imaging apparatus100and the information processing apparatus200according to the present embodiment. Among the functional blocks of the imaging apparatus100illustrated inFIG.3, the respective functions of an image processing unit112, a system control unit113, a pan-tilt-zoom control unit114, a storage unit115, a communication unit116, and so on are realized in the following manner. Specifically, the respective functions are realized by a central processing unit (CPU)700in the imaging apparatus100, which executes computer programs stored in a read only memory (ROM)720in the imaging apparatus100. 
The CPU700and the ROM720will be described below with reference toFIG.7. The direction to which the optical axis of a lens101is directed is an imaging direction of the imaging apparatus100. The light flux that has passed through the lens101forms an image on an imaging device in an imaging unit111in the imaging apparatus100. A lens drive unit102is composed of a drive system driving the lens101and varies the focal length of the lens101. The lens drive unit102is controlled by the pan-tilt-zoom control unit114. A pan drive unit103is composed of a mechanical drive system that performs a pan operation and a motor, which is a drive source. The pan drive unit103controls rotational driving for rotationally driving the imaging direction of the imaging apparatus100in a pan direction105. The pan drive unit103is controlled by the pan-tilt-zoom control unit114. A tilt drive unit104is composed of a mechanical drive system that performs a tilt operation and a motor, which is a drive source. The tilt drive unit104controls rotational driving for rotationally driving the imaging direction of the imaging apparatus100in a tilt direction106. The tilt drive unit104is controlled by the pan-tilt-zoom control unit114. The imaging unit111is composed of an imaging device (not illustrated), such as a charge coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) sensor. The imaging unit111performs photoelectric conversion of an image of a subject, which is formed through the lens101, to generate an electrical signal. The image processing unit112performs a process to convert the electrical signal subjected to the photoelectric conversion in the imaging unit111into a digital signal and image processing, such as encoding, to generate image data. The pan-tilt-zoom control unit114controls the pan drive unit103, the tilt drive unit104, and the lens drive unit102based on instructions supplied from the system control unit113to control pan-tilt-zoom of the imaging apparatus100. The storage unit115stores (holds), for example, information indicating the imaging range. The communication unit116communicates with the information processing apparatus200via an interface (I/F)740, which will be described below with reference toFIG.7. For example, the communication unit116transmits the image data about an image captured by the imaging apparatus100to the information processing apparatus200over the network300. In addition, the communication unit116transmits information indicating the current imaging range of the imaging apparatus100. Furthermore, the communication unit116receives a control command, which is transmitted from the information processing apparatus200and which is used to control the imaging apparatus100, and supplies the received control command to the system control unit113. The system control unit113controls the entire imaging apparatus100in accordance with the processes performed by the CPU700described below with reference toFIG.7to perform, for example, the following process. Specifically, the system control unit113analyzes the control command, which is transmitted from the information processing apparatus200and which is used to control the imaging apparatus100, to perform the process corresponding to the control command. In addition, the system control unit113issues an instruction about a pan-tilt-zoom operation to the pan-tilt-zoom control unit114. 
Furthermore, the system control unit113adds information about the imaging time when the image data has been captured and the information about the imaging range to the image data in transmission of the image data generated by the image processing unit112to the information processing apparatus200. The imaging range in the present embodiment is defined by a pan value, a tilt value, and a zoom value of the imaging apparatus100. The pan value is the angle of the imaging direction (the optical axis) in the pan direction105of the imaging apparatus100when one drive end of the pan drive unit103is set to 0°. The tilt value is the angle of the imaging direction (the optical axis) in the tilt direction106of the imaging apparatus100when one drive end of the tilt drive unit104is set to 0°. The zoom value of the imaging apparatus100when an image is captured by the imaging apparatus100is calculated from the focal length of the lens101. Furthermore, the system control unit113performs a process concerning a shot operation described below. For example, the system control unit113calculates parameters for realizing the shot operation from the current imaging range and a target imaging range, which is the imaging range to be attained, and further calculates a time required for the shot operation. At this time, the system control unit113calculates the difference (the amount of drive) between the current imaging range of the imaging apparatus100and the target imaging range of the shot operation for each of the pan value, the tilt value, and the zoom value. Furthermore, the system control unit113calculates the following parameters in driving of the pan-tilt-zoom from the current imaging range to the target imaging range. Specifically, the system control unit113calculates a pan required time, which is the time required to drive the pan, a tilt required time, which is the time required to drive the tilt, and a zoom required time, which is the time required to drive the zoom. Then, the system control unit113calculates the maximum value, among the pan required time, the tilt required time, and the zoom required time, as the required time to the target imaging range. Information processing in the information processing apparatus200according to the present embodiment will now be described with reference to the functional blocks of the information processing apparatus200illustrated inFIG.3. The respective functions of the information processing apparatus200are realized in the following manner using the ROM720and the CPU700, which will be described below with reference toFIG.7. Specifically, the respective functions illustrated inFIG.3are realized by the CPU700in the information processing apparatus200, which executes the computer programs stored in the ROM720in the information processing apparatus200. A display control unit201displays an image captured by the imaging apparatus100and a setting screen concerning the shot operation in the display210. An operation accepting unit202accepts information about an operation by a user with an input device (not illustrated), such as a keyboard, a mouse, or a touch panel. For example, buttons, the mouse, or a joystick is assumed as an input unit, which accepts various operations by the user or the like. Here, for example, the display control unit201displays the setting screen used to make settings concerning the shot operation in the display210and the operation accepting unit202accepts information about a user's operation on the setting screen displayed in the display210. 
A system control unit203transmits the control command to a remote camera via a communication unit204in response to the operation by the user or the like. The communication unit204transmits various setting commands supplied from the system control unit203and the control command for the imaging apparatus100to the imaging apparatus100via the I/F740described below with reference toFIG.7. In addition, the communication unit204receives image data transmitted from the imaging apparatus100and a response from the imaging apparatus100to the command transmitted from the information processing apparatus200to the imaging apparatus100and supplies the image data and the received response to the system control unit203. A storage unit205stores information about the shot operation, an image acquired by the communication unit204, and so on. The system control unit203creates the various setting commands based on the user's operations accepted by the operation accepting unit202and the control command and transmits the created commands to the imaging apparatus100via the communication unit204. An example of a user interface when the shot operation is performed will now be described with reference toFIG.4. A Setting screen401illustrated inFIG.4is displayed in the display210by the display control unit201and is used to accept the user's operations for settings concerning the shot operation. The Setting screen401in the present embodiment has a function to register a shot number associated with the setting of the target imaging range and a function to perform the shot operation. In order to use the function to register the shot number, the user controls the imaging range by controlling at least one of the pan, the tilt, and the zoom of the imaging apparatus100to set a desired imaging range, inputs the shot number with Registration of shot number402, and touches a Registration button403. At this time, information about the shot number input with the Registration of shot number402is transmitted from the information processing apparatus200to the imaging apparatus100. The imaging apparatus100stores the current imaging range as the target imaging range of the shot operation in the storage unit115in association with the value of the shot number input with the Registration of shot number402. Information about the pan value, the tilt value, and the zoom value, which indicates the target imaging range, is held as information about the stored target imaging range. The storage unit205in the information processing apparatus200may also hold the input shot number in association with the information about the target imaging range. In addition, the following process is performed as the function to perform the shot operation on the Setting screen401. Specifically, the operation accepting unit202accepts the user's operation to select one number, among the numbers registered with the Registration of shot number402, with Selection of shot number404and accepts the user's operation to specify a moving time with Input of shot time405. Upon acceptance of the user's operation to touch an Execution button406, the information processing apparatus200transmits information about the shot number selected with the Selection of shot number404and information about the moving time specified with the Input of shot time405to the imaging apparatus100. The imaging apparatus100performs the shot operation in the moving time specified for the shot operation in accordance with the information transmitted from the information processing apparatus200. 
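As an illustration of the registration and execution functions of the Setting screen 401 described above (a sketch only; the data structure and the function names are hypothetical assumptions), the imaging apparatus can be thought of as keeping a table from shot number to target imaging range, given as a pan value, a tilt value, and a zoom value, and driving toward the registered target over the specified moving time when execution is requested.

    # Illustrative sketch only; the registry and names are hypothetical.
    shot_registry = {}  # shot number -> (pan value [deg], tilt value [deg], zoom value)

    def register_shot(shot_number, current_imaging_range):
        """Store the current imaging range as the target of the shot number."""
        shot_registry[shot_number] = current_imaging_range

    def execute_shot(shot_number, moving_time_s, drive_pan_tilt_zoom):
        """Drive toward the registered target imaging range over moving_time_s."""
        target_imaging_range = shot_registry[shot_number]
        drive_pan_tilt_zoom(target_imaging_range, moving_time_s)

    register_shot(1, (130.0, 35.0, 2.0))
    execute_shot(1, moving_time_s=8.0,
                 drive_pan_tilt_zoom=lambda target, t: print("move to", target, "in", t, "s"))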
A process concerning the shot operation according to the present embodiment will now be described with reference to a flowchart inFIG.5. Step S501to Step S510in the flowchart inFIG.5are performed by the information processing apparatus200. Step S501to Step S510are performed by the functional blocks illustrated inFIG.3, which are realized by the CPU700in the information processing apparatus200, which executes the computer programs stored in the ROM720in the information processing apparatus200. Step S511to Step S518are performed by the imaging apparatus100and are performed by the functional blocks illustrated inFIG.3, which are realized by the CPU700in the imaging apparatus100, which executes the computer programs stored in the ROM720in the imaging apparatus100. Referring toFIG.5, in Step S501, the system control unit203determines whether the shot number of the shot operation is newly registered. The process goes to Step S503if the shot number of the shot operation is not newly registered (No in Step S501), and the process goes to Step S502if the shot number of the shot operation is newly registered (Yes in Step S501). At this time, the system control unit203determines that the shot number is newly registered when the operation accepting unit202accepts the user's operation to input the shot number with the Registration of shot number402and touch the Registration button403, as described above with reference toFIG.4, and the process goes to Step S502. In Step S502, the information about the shot number is transmitted to the imaging apparatus100. The imaging apparatus100stores the current imaging range as the target imaging range in association with the transmitted information about the shot number. When the information about the imaging range of the imaging apparatus100is transmitted to the information processing apparatus200, if needed, with the captured image, the following processing may be performed in Step S502. Specifically, in Step S502, the system control unit203may transmit the information about the shot number and the information about the imaging range at the time when the shot number is registered to the imaging apparatus100. Here, the imaging apparatus100records the transmitted information about the imaging range as the target imaging range associated with the transmitted shot number. The imaging apparatus100stores information about the pan value, the tilt value, and the zoom value in the target imaging range in the storage unit115in association with the transmitted shot number. Multiple target imaging ranges may be registered in Step S502. The shot number and the information about the pan value, the tilt value, and the zoom value in the target imaging range, which is associated with the shot number, are stored in the storage unit115for each of the multiple registered shot numbers. In Step S503, the operation accepting unit202accepts the user's operation to select the shot number of the target imaging range when the shot operation is performed from the registered shot numbers with the Selection of shot number404. In Step S504, the operation accepting unit202accepts the user's operation to specify the moving time in the shot operation with the Input of shot time405. The order of Step S503and Step S504is not limited to this and the user's operation to select the shot number may be performed after the moving time in the shot operation is specified.
In Step S505, the information processing apparatus200acquires information about the time required to realize the shot operation to cause the imaging apparatus100to reach the target imaging range associated with the shot number specified in Step S503. At this time, for example, the information processing apparatus200transmits a hypertext transfer protocol (HTTP) request for acquiring the required time to realize the shot operation of the specified shot number to the imaging apparatus100and acquires the information about the required time, transmitted from the imaging apparatus100, as an HTTP response. The required time in the present embodiment is the time required to reach the target imaging range corresponding to the selected shot number from the current imaging range of the imaging apparatus100when the pan-tilt-zoom of the imaging apparatus100is driven at maximum speeds. In Step S506, the system control unit203in the information processing apparatus200compares the required time to reach the target imaging range corresponding to the selected shot number, which is acquired in Step S505, with the moving time specified by the user in Step S504. In Step S507, the display control unit201displays information about the result of the comparison in Step S506. For example, a case is assumed in which the required time to reach the target imaging range corresponding to the shot number specified in Step S503from the current imaging range is 10 seconds and the moving time specified in Step S504is eight seconds. Since the required time is longer than the specified moving time in this case, the display control unit201displays information (for example, a message) indicating that the shot operation is not capable of being realized within the specified moving time in Step S507. When the required time acquired in Step S505is 10 seconds and the moving time specified in Step S504is 12 seconds, the required time is shorter than the specified moving time. In this case, the display control unit201displays information (for example, a message) indicating that the shot operation is capable of being realized within the specified moving time. In Step S508, the system control unit203determines whether a setting value concerning the shot operation is updated. For example, when the operation accepting unit202accepts the user's operation to update the moving time specified with the Input of shot time405, the system control unit203determines that the setting value is updated (Yes in Step S508) and the process goes to Step S509. In Step S509, the system control unit203updates the setting value. For example, when the operation accepting unit202accepts the user's operation to update the moving time specified with the Input of shot time405, the system control unit203updates the moving time to the updated value. In Step S510, when the operation accepting unit202accepts the user's operation to touch the Execution button406, the communication unit204transmits the information about the shot number selected in Step S503and the information about the moving time specified in Step S504to the imaging apparatus100. The imaging apparatus100performs the shot operation in accordance with the transmitted information. A process of calculating the required time in the imaging apparatus100will now be described with reference to Step S511to Step S518inFIG.5. In Step S511, the system control unit113determines whether the shot number is newly registered by the information processing apparatus200.
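A minimal client-side sketch of Step S505to Step S507described above is given below. The embodiment only states that the required time is obtained as an HTTP response, so the URL path and the JSON field name used here are illustrative assumptions rather than a documented interface.

```python
import json
import urllib.request

def fetch_required_time(camera_host: str, shot_number: int) -> float:
    # Step S505: request the required time for the specified shot number from the camera.
    # The endpoint path and the "required_time_sec" field are assumptions for this sketch.
    url = f"http://{camera_host}/shot/required_time?shot={shot_number}"
    with urllib.request.urlopen(url) as resp:
        return float(json.loads(resp.read())["required_time_sec"])

def compare_with_moving_time(required_sec: float, moving_sec: float) -> str:
    # Steps S506/S507: compare the required time with the user-specified moving time
    # and build the message to display.
    if required_sec > moving_sec:
        return (f"Shot operation cannot be completed within {moving_sec} s "
                f"(at least {required_sec} s is required).")
    return f"Shot operation can be completed within {moving_sec} s."

# e.g. required time 10 s versus specified moving time 8 s -> "cannot be completed" message
print(compare_with_moving_time(10.0, 8.0))
```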
For example, when the information about the shot number is transmitted from the information processing apparatus200to the imaging apparatus100in Step S502, in Step S511, the system control unit113determines that the shot number is newly registered (Yes in Step S511) and the process goes to Step S513. In Step S513, the system control unit113stores the imaging range at the time when the shot number is registered as the target imaging range in the storage unit115in association with the information about the shot number that is newly registered. If the shot number is not newly registered (No in Step S511), the process goes to Step S512. In Step S512, the system control unit113acquires the information about the current imaging range, that is, the information about the current pan value, the current tilt value, and the current zoom value to determine whether at least one of the pan, the tilt, and the zoom is changed. If at least one of the pan, the tilt, and the zoom is changed (Yes in Step S512), the process goes to Step S514. If none of the pan, the tilt, and the zoom is changed (No in Step S512), the process goes to Step S518. If at least one of the pan, the tilt, and the zoom is still being changed at the time of the determination in Step S512, the system control unit113may continue to determine that no change is completed (No in Step S512) until the changing is stopped, and the process may go to Step S518. In Step S514, the system control unit113acquires the information about the pan value, the tilt value, and the zoom value in the target imaging range as the information about the target imaging range corresponding to the registered shot number from the storage unit115. At this time, the system control unit113may acquire only the information about the target imaging range corresponding to the shot number that is newly registered in Step S513from the storage unit115. Alternatively, the system control unit113may acquire the information about the target imaging range corresponding to each of all the shot numbers that are registered. For example, a case is assumed in which a first shot number and information about a first target imaging range corresponding to the first shot number, and a second shot number and information about a second target imaging range corresponding to the second shot number are stored in the storage unit115. In this case, the system control unit113may acquire the information about the first target imaging range corresponding to the first shot number and the information about the second target imaging range corresponding to the second shot number in Step S514. In Step S515, the system control unit113calculates the amount of drive required to reach the target imaging range based on the target imaging range acquired in Step S514and the current imaging range of the imaging apparatus100for each of the pan, the tilt, and the zoom. When the pan value in the current imaging range is 30 degrees and the pan value in the target imaging range is −150 degrees, the system control unit113calculates the amount of drive in the pan drive unit103as 180 degrees. When the tilt value in the current imaging range is 0 degrees and the tilt value in the target imaging range is 40 degrees, the system control unit113calculates the amount of drive in the tilt drive unit104as 40 degrees.
When the zoom value in the current imaging range is 20 degrees and the zoom value in the target imaging range is 50 degrees, the system control unit113calculates the amount of drive in the lens drive unit102as 30 degrees. In Step S516, the system control unit113calculates the required time of each of the pan, the tilt, and the zoom to reach the target imaging range based on the amount of drive of each of the pan drive system, the tilt drive system, and the zoom drive system, calculated in Step S515, and the maximum speed defined for each of the pan drive system, the tilt drive system, and the zoom drive system. For example, a case is assumed in which the amount of drive in the pan drive unit103calculated in Step S515is 180 degrees and the maximum speed defined for the pan drive unit103is 30 degrees/sec. In this case, the system control unit113calculates the required time to reach the target imaging range from the current imaging range in the pan direction105(the pan required time) as six seconds when the pan drive unit103is driven at the maximum speed. Similarly, a case is assumed in which the amount of drive in the tilt drive unit104is 40 degrees and the maximum speed defined for the tilt drive unit104is 20 degrees/sec. In this case, the system control unit113calculates the required time to reach the target imaging range from the current imaging range in the tilt direction106(the tilt required time) as two seconds when the tilt drive unit104is driven at the maximum speed. Similarly, a case is assumed in which the amount of drive in the lens drive unit102is 30 degrees and the maximum speed defined for the lens drive unit102is 15 degrees/sec. In this case, the system control unit113calculates the required time to reach the target imaging range from the current imaging range for the zoom (the zoom required time) as two seconds when the lens drive unit102is driven at the maximum speed. In Step S517, the system control unit113calculates the longest required time, among the pan required time, the tilt required time, and the zoom required time calculated in Step S516, as the required time from the current imaging range to the target imaging range in the shot operation. In the shot operation, the required time from the current imaging range to the target imaging range is calculated in accordance with the drive system having the longest required time to concurrently start and terminate the respective drive systems. For example, a case is assumed in which the pan required time is six seconds, the tilt required time is two seconds, and the zoom required time is two seconds. In this case, the system control unit113calculates the required time of the shot operation from the current imaging range to the target imaging range as six seconds and transmits the information about the calculated required time of the shot operation to the information processing apparatus200in association with the shot number of the target imaging range. The description of Step S515to Step S517above covers the case in which the required time of the shot operation from the current imaging range to a certain target imaging range is calculated. However, when multiple different target imaging ranges are acquired in Step S514, Step S515to Step S517are performed for each of the multiple different target imaging ranges to calculate the required time of the shot operation from the current imaging range to each of the multiple different target imaging ranges.
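Assuming the maximum speeds given in the example above (30 degrees/sec for the pan, 20 degrees/sec for the tilt, and 15 degrees/sec for the zoom), the arithmetic of Step S515to Step S517can be sketched as follows; the function names are illustrative and not taken from the embodiment.

```python
def axis_required_time(amount_deg: float, max_speed_deg_per_sec: float) -> float:
    # Time needed to cover the amount of drive when the axis is driven at its maximum speed.
    return abs(amount_deg) / max_speed_deg_per_sec

def shot_required_time(pan_amount: float, tilt_amount: float, zoom_amount: float,
                       pan_max: float = 30.0, tilt_max: float = 20.0,
                       zoom_max: float = 15.0) -> float:
    # Steps S515 to S517: per-axis required times at maximum speed, then the overall
    # required time is the longest of the three (all axes start and finish together).
    pan_t = axis_required_time(pan_amount, pan_max)
    tilt_t = axis_required_time(tilt_amount, tilt_max)
    zoom_t = axis_required_time(zoom_amount, zoom_max)
    return max(pan_t, tilt_t, zoom_t)

# Values from the example: pan 180 deg at 30 deg/s, tilt 40 deg at 20 deg/s,
# zoom 30 deg at 15 deg/s -> 6 s, 2 s, 2 s -> required time 6 s.
print(shot_required_time(180.0, 40.0, 30.0))  # 6.0
```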
For example, a case is assumed in which the information about the first target imaging range and the second target imaging range is acquired in Step S514. In this case, the system control unit113calculates the required time of the shot operation from the current imaging range to the first target imaging range and the required time of the shot operation from the current imaging range to the second target imaging range. At this time, the system control unit113controls the communication unit116so that the required time of the shot operation to the first target imaging range is transmitted to the information processing apparatus200in association with the first shot number corresponding to the first target imaging range. In addition, the system control unit113controls the communication unit116so that the required time of the shot operation to the second target imaging range is transmitted to the information processing apparatus200in association with the second shot number corresponding to the second target imaging range. As described above, in Step S505, the system control unit203in the information processing apparatus200acquires the required time corresponding to the shot number specified by the user in Step S503from the information received from the imaging apparatus100. The processing from Step S515to Step S517may be performed only for the target imaging range corresponding to the shot number selected by the user in Step S503. Specifically, upon selection of the shot number by the user in Step S503, the information about the shot number is transmitted to the imaging apparatus100. In the processing from Step S515to Step S517, the required time of the shot operation may be calculated only for the target imaging range corresponding to the transmitted shot number. In Step S518, the system control unit113is on standby for a predetermined time period before making a transition to Step S511. The standby time may be set in consideration of the load at the camera side or the responsiveness or may be set by the user. Information about the result of comparison displayed by the display control unit201in Step S507will now be described with reference toFIG.6AtoFIG.6C. A case is assumed in which the moving time input with the Input of shot time405is "five seconds" and the required time to the target imaging range corresponding to the shot number "3" selected with the Selection of shot number404is "six seconds". In this case, as illustrated inFIG.6A, the display control unit201displays information601indicating that the shot operation is unavailable within the moving time input with the Input of shot time405. The display control unit201may display information indicating a factor causing the unavailability of the shot operation within the specified moving time, as illustrated inFIG.6A. At this time, not only the information indicating the required time "six seconds" to the target imaging range corresponding to the shot number "3" but also the information about the pan required time, the tilt required time, and the zoom required time in the target imaging range are transmitted from the imaging apparatus100to the information processing apparatus200. The system control unit203in the information processing apparatus200compares the specified moving time with each of the pan required time, the tilt required time, and the zoom required time.
The system control unit203identifies the drive system whose required time exceeds the moving time based on the result of comparison and determines that the identified drive system is the factor causing the unavailability of the shot operation within the moving time. For example, if the pan required time is longer than the specified moving time, the system control unit203identifies the pan drive unit103as the factor causing the unavailability of the shot operation within the moving time. At this time, the display control unit201displays information indicating the pan drive as the factor causing the unavailability of the shot operation within the specified moving time, as illustrated inFIG.6A. If the required time to the target imaging range is longer than the specified moving time, the shot operation is not capable of being completed within the moving time. Accordingly, the display control unit201may gray out the Execution button406and the system control unit203may not accept the execution of the shot operation. A case is assumed in which the moving time input with the Input of shot time405is "10 seconds" and the required time to the target imaging range corresponding to the shot number "3" selected with the Selection of shot number404is "six seconds". In this case, the display control unit201displays information602indicating that the shot operation to the target imaging range is available within the specified moving time in Step S507, as illustrated inFIG.6B. At this time, when the Execution button406is touched by the user, the system control unit203transmits the information about the specified moving time "10 seconds" and the information about the shot number "3" selected with the Selection of shot number404to the imaging apparatus100. The imaging apparatus100performs the shot operation so as to reach the target imaging range corresponding to the shot number "3" within the specified moving time "10 seconds". Here, a case is assumed in which the amount of drive in the pan drive unit103is 180 degrees, the amount of drive in the tilt drive unit104is 40 degrees, and the amount of drive in the lens drive unit102is 30 degrees as the amounts of drive of the respective drive systems from the current imaging range to the target imaging range, which are calculated in Step S515. In this case, the system control unit113calculates the speeds of the respective drive systems at which the drive of the respective drive systems is completed within the specified moving time "10 seconds". Specifically, the system control unit113calculates the driving speed in the pan drive unit103as 18 degrees/sec, the driving speed in the tilt drive unit104as four degrees/sec, and the driving speed in the lens drive unit102as three degrees/sec. The system control unit113concurrently starts the control of the respective drive systems and concurrently terminates the control of the respective drive systems in accordance with the calculated speeds of the respective drive systems to perform the shot operation to the target imaging range. A case is assumed in which the information about the shot number in the target imaging range and the information about the required time to the target imaging range are transmitted from the imaging apparatus100to the information processing apparatus200for each of the multiple different target imaging ranges from the current imaging range.
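A minimal sketch of this speed planning, together with the identification of the drive system that prevents completion within the specified moving time, is given below under the same numeric assumptions; the function names are illustrative.

```python
def plan_axis_speeds(amounts_deg: dict, moving_time_sec: float) -> dict:
    # Drive each axis at amount / moving_time so that the pan, the tilt, and the zoom
    # all start and finish together within the specified moving time.
    return {axis: abs(amount) / moving_time_sec for axis, amount in amounts_deg.items()}

def blocking_axes(axis_required_times: dict, moving_time_sec: float) -> list:
    # Axes whose required time at maximum speed exceeds the specified moving time;
    # these are reported as the factor preventing the shot operation within the moving time.
    return [axis for axis, t in axis_required_times.items() if t > moving_time_sec]

amounts = {"pan": 180.0, "tilt": 40.0, "zoom": 30.0}
print(plan_axis_speeds(amounts, 10.0))                              # {'pan': 18.0, 'tilt': 4.0, 'zoom': 3.0}
print(blocking_axes({"pan": 6.0, "tilt": 2.0, "zoom": 2.0}, 5.0))   # ['pan']
```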
In this case, the display control unit201may display the required time to each of the multiple different target imaging ranges on the Setting screen401. For example, the display control unit201displays the first required time (the required time "10 seconds") to the first target imaging range corresponding to the first shot number (the shot number "3") with the first shot number, as illustrated in a message603inFIG.6C. In addition, the display control unit201displays the second required time (the required time "20 seconds") to the second target imaging range corresponding to the second shot number (the shot number "4") with the second shot number, as illustrated in the message603inFIG.6C. As described above, in the present embodiment, the required time to the target imaging range is calculated based on the current imaging range and the target imaging range and the information indicating the result of comparison between the calculated required time and the moving time specified by the user is displayed. This enables the user to determine in advance whether the shot operation is available within the specified moving time, prior to the execution of the shot operation. Other Embodiments The hardware configuration of the imaging apparatus100for realizing the respective functions of the embodiment will now be described with reference toFIG.7. Although the hardware configuration of the imaging apparatus100is described below, the information processing apparatus200has the same hardware configuration. The imaging apparatus100according to the present embodiment includes the CPU700, a random access memory (RAM)710, the ROM720, a hard disk drive (HDD)730, and the I/F740. The CPU700controls the entire imaging apparatus100. The RAM710temporarily stores the computer programs executed by the CPU700. In addition, the RAM710provides a working area used by the CPU700to perform the processes. The RAM710functions as, for example, a frame memory or a buffer memory. The ROM720stores programs used by the CPU700to control the imaging apparatus100and so on. The HDD730is a storage device recording image data and so on. The I/F740communicates with an external apparatus via the network300in accordance with transmission control protocol/Internet protocol (TCP/IP), HTTP, or the like. Although the example is described above in which the CPU700performs the processes, at least part of the processes performed by the CPU700may be performed by dedicated hardware. For example, a process to display a graphical user interface (GUI) or image data in the display210may be performed by a graphics processing unit (GPU). A process to read out program code from the ROM720and load the program code into the RAM710may be performed by direct memory access (DMA) functioning as a transfer apparatus. The present disclosure is capable of being realized by a process to read out and execute a program realizing one or more functions of the above embodiment by one or more processors. The program may be supplied to a system or an apparatus including the processor via a network or a storage medium. The present disclosure is capable of being realized by a circuit (for example, an application specific integrated circuit (ASIC)) realizing one or more functions of the above embodiment. Each component in the imaging apparatus100may be realized by the hardware illustrated inFIG.7or may be realized by software. Another apparatus may have one or more functions of the imaging apparatus100according to the above embodiment.
For example, the information processing apparatus200may have one or more functions of the imaging apparatus100according to the above embodiment. Although the present disclosure is described using the embodiments, the above embodiments are only examples to embody the present disclosure and the technical scope of the present disclosure is not limitedly interpreted by the embodiments. In other words, the present disclosure may be embodied in various aspects within the technical idea or the main features thereof. For example, embodiments resulting from combination of the respective embodiments are included in the disclosure of the present specification. According to the embodiments described above, it is possible to provide a technique to enable the user to determine in advance whether the imaging apparatus is capable of reaching the target imaging range from the current imaging range within the specified moving time. Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)), a flash memory device, a memory card, and the like. While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2021-058390 filed Mar. 30, 2021, which is hereby incorporated by reference herein in its entirety.
41,239
11943529
DETAILED DESCRIPTION In the following, embodiments will be described in detail with reference to the drawings as appropriate. However, a detailed description more than necessary may be omitted. For example, a detailed description of already well-known matters and an overlapping description for substantially the same configuration may be omitted. This is to avoid the unnecessary redundancy of the following description and to facilitate understanding by those skilled in the art. It should be noted that the inventor provides the accompanying drawings and the following description for a person skilled in the art to fully understand the present disclosure. Thus, the drawings and the description are not intended to limit the subject matter defined in the claims. First Embodiment In the first embodiment, as an example of the imaging apparatus according to the present disclosure, a box-type digital camera that does not include a display in the apparatus itself will be described. 1. Configuration A configuration of a digital camera according to the first embodiment will be described with reference toFIGS.1to2.FIG.1is a diagram for illustrating an overview of a digital camera10according to the present embodiment. The digital camera10of the present embodiment is configured to be connectable to various external apparatuses in, for example, a box-shaped apparatus body. With this digital camera10, it is possible for the user to easily extend desired functions or the like. FIG.1illustrates a state in which an external monitor30is connected to the digital camera10of the present embodiment via a communication cable. The digital camera10of the present embodiment is not particularly provided with a monitor, a viewfinder, or the like in the apparatus body. When desiring to check a live view image captured in real time by the digital camera10, a setting menu, or the like, the user connects the external monitor30to the digital camera10to cause the external monitor30to perform desired display, for example. In addition, the digital camera10is an interchangeable lens type and is configured so that the interchangeable lens20can be mounted, for example. 1-1. Configuration of Digital Camera FIG.2is a diagram showing a configuration of a digital camera10according to the present embodiment. The digital camera10of the present embodiment includes a mount120for the interchangeable lens20, an image sensor140, a user interface150, an image processor160, a buffer memory170, and a controller180, for example. Furthermore, the digital camera10includes a flash memory130, a card slot190, and various connection terminals110,112, and114. The various connection terminals110to114include an HDMI terminal110, an SDI terminal112, and a USB terminal114, for example. The interchangeable lens20includes an optical system such as a zoom lens, a focus lens, and a diaphragm, a driver, and the like. The zoom lens is a lens for changing the magnification of the subject image formed by the optical system. The focus lens is a lens for changing the focus state of the subject image formed on the image sensor140. The zoom lens and the focus lens are formed of one or more lenses. The driver of the interchangeable lens20includes configurations each for driving a corresponding one element of the optical system, such as the focus lens. For example, the driver of the focus lens includes a motor, and moves the focus lens along the optical axis of the optical system under the control of the controller180.
The configuration for driving each element of the optical system in the driver of the interchangeable lens20can be achieved by a DC motor, a stepping motor, a servo motor, an ultrasonic motor, or the like. The mount120is an attachment unit for detachably mounting the interchangeable lens20in the digital camera10. The mount120includes an interface circuit that performs various data communication between the digital camera10and the interchangeable lens20. The image sensor140captures a subject image incident via the optical system of the interchangeable lens20to generate RAW data, for example. The RAW data is an example of image data in RAW format corresponding to a state of an imaging result by the image sensor140. For example, the RAW data includes information on the light quantity exposed for each pixel in the Bayer array, and indicates an image of an imaging result. The image sensor140performs an imaging operation of an image constituting each frame of a moving image, for example. The image sensor140is an example of an image sensor in the present embodiment. The image sensor140generates RAW data of a new frame at a predetermined frame rate (e.g., 30 frames/sec). The RAW data generation timing and the electronic shutter operation in the image sensor140are controlled by the controller180. As the image sensor140, various image sensors such as a CMOS image sensor, a CCD image sensor, or an NMOS image sensor can be used. The user interface150is a general term for operation members that receive an operation (instruction) from a user. The user interface150includes buttons, levers, dials, switches, and the like that receive user operations. A specific example of the user interface150will be described below. The image processor160performs predetermined processing on the RAW data output from the image sensor140to generate image data of a shooting result. For example, the image processor160may perform processing for development and may generate an image to be displayed on the external monitor30or the like. Examples of the predetermined processing include debayer processing, white balance correction, gamma correction, YC conversion processing, electronic zoom processing, compression processing, expansion processing, and the like, but are not limited thereto. The image processor160may be configured with a hard-wired electronic circuit, or may be configured with a microcomputer, a processor, or the like using a program. The buffer memory170is a recording medium that functions as a work memory for the image processor160and the controller180. The buffer memory170is implemented with a dynamic random-access memory (DRAM) or the like. The flash memory130is a non-volatile recording medium. Each of the memories130and170is an example of a storage in the present embodiment. The controller180controls the overall operation of the digital camera10. The controller180uses the buffer memory170as a work memory during a control operation or an image processing operation. The controller180includes a CPU or an MPU, and the CPU or MPU implements a predetermined function by executing a program (software). The controller180may include a processor including a dedicated electronic circuit designed to achieve a predetermined function instead of the CPU or the like. That is, the controller180can be implemented with various processors such as a CPU, an MPU, a GPU, a DSP, an FPGA, and an ASIC. The controller180may include one or more processors.
The card slot190can mount the memory card200, and accesses the memory card200based on the control from the controller180. The digital camera10can record image data in the memory card200and read the recorded image data from the memory card200. In the present embodiment, an example in which the external monitor30is connected to the HDMI terminal110of the digital camera10will be described. The external monitor30is an example of an external apparatus that displays an image and the like. The external monitor30has a function of performing development processing on an image of RAW data input from the digital camera10in a displayable manner, for example. In addition, the external monitor30may have a recorder function of recording, for example, moving image data or the like indicating a displayed image in a recording medium. In the digital camera10, the HDMI terminal110is a connection terminal connected to an external apparatus such as the external monitor30via, for example, a communication cable and for outputting a digital signal such as an image signal in data communication conforming to the HDMI standard. The HDMI terminal110is an example of an output interface in the digital camera10of the present embodiment. The SDI terminal112is a connection terminal for outputting a digital signal to the connected external apparatus in conformity with the SDI standard. A display apparatus such as a monitor32different from the external monitor30connected to the HDMI terminal110is connected to the SDI terminal112, for example. A plurality of monitors may be connected to the digital camera10according to various uses. The USB terminal114is a connection terminal that performs data communication between the connected external apparatus and the digital camera10in conformity with the USB standard. For example, an information terminal34functioning as a display apparatus of a personal computer (PC), a smartphone, or the like is connected to the USB terminal114. Each of the terminals112and114is an example of a communication interface in the digital camera10. The information terminal34may transmit various control signals to the digital camera10. Hereinafter, the communication connection between the digital camera10and the information terminal34may be referred to as a Tether connection. The output interface and the communication interface of the digital camera10are not particularly limited to the above. For example, the digital camera10may be provided with an interface circuit that performs data communication in conformity with a communication standard corresponding to the various terminals110to114, and the output interface or the communication interface may include such an interface circuit. In addition, the communication standards are not limited to the above communication standards, and various wired or wireless communication standards may be adopted. 1-2. User Interface A specific example of the user interface150in the digital camera10will be described with reference toFIG.3.FIG.3is a diagram illustrating a user interface150and a setting menu of the digital camera10. FIG.3shows a shooting button151, a selection button154, and a plurality of function buttons (Fn buttons)156and158as an example of the user interface150provided on the side surface of the digital camera10. When receiving an operation by the user, the user interface150transmits various instruction signals to the controller180. For example, the shooting button151is a depression type button for starting/ending shooting recording of a moving image. 
The controller180controls recording of moving image data in the memory card200or the like in response to the depressing operation of the shooting button151, for example. The selection button154includes depression type buttons provided in the up, down, left, and right directions and a depression type MENU/SET button155provided at the center. For example, when the user depresses the MENU/SET button155with the digital camera10being in a predetermined operation mode, the controller180causes the external monitor30to display a setting menu. FIG.3illustrates a state in which a setting menu screen is displayed on the external monitor30connected to the digital camera10. The setting menu is an example of setting information for setting various operations such as shooting in the digital camera10. The user can select various menu items or options displayed on the screen of the setting menu or move the cursor by depressing any one of the selection buttons154of the up, down, left, and right directions. When the MENU/SET button155is depressed in a state where a specific menu item is selected in the setting menu, the controller180causes the external monitor30to display an option of changing the setting of the selected menu item. Furthermore, when the MENU/SET button155is depressed in a state where a specific option is selected, the controller180establishes the setting of the selected menu item reflecting the option. Each of the Fn buttons156and158is a depression type button to which a specific function in the digital camera10can be assigned by, for example, a user operation in a setting menu. In the present embodiment, a case where a function of turning ON/OFF the RAW output setting described below is assigned to the Fn button156will be described. 2. Operation The operation of the digital camera10configured as described above will be described in the following. 2-1. Outline of Operation The digital camera10of the present embodiment has a RAW output setting that is a setting for outputting RAW data such as a moving image to the external monitor30connected to the HDMI terminal110, for example. When the RAW output setting is applied, a moving image of RAW data during shooting in the digital camera10can be displayed or recorded on the external monitor30. For example, as shown inFIG.3, the setting menu of the digital camera10includes a menu item for allowing the user to select ON/OFF of whether or not to apply the RAW output setting. In this case, the user can cause the external monitor30to display a setting menu from a state in which the RAW output setting of the digital camera10is OFF, and turn ON the RAW output setting from the setting menu displayed on the external monitor30. With regard to the ON/OFF of the RAW output setting as described above, the inventors of the present application have identified the following problem through diligent research. That is, in a state where the RAW output setting is turned ON, the setting menu cannot be displayed on the external monitor30as the output destination of the RAW data. This is because OSD information such as a setting menu cannot be superimposed on the output RAW data. In order for the user to use the setting menu in this state, it is conceivable to additionally prepare a display apparatus (e.g., the monitor32or the information terminal34) different from the external monitor30that is the output destination of the RAW data and connect the display apparatus to the digital camera10.
However, when the additional display apparatus as described above is not prepared, once the user temporarily turns ON the RAW output setting, a situation may arise in which the RAW output setting cannot be returned to OFF. Thus, in the digital camera10of the present embodiment, an input interface that responds to a user's instruction regarding RAW output setting is provided separately from the setting menu. Accordingly, in the digital camera10of the present embodiment, the RAW output setting can be returned to OFF by the input interface when the user desires even without using the display of the setting menu in the state where the RAW output setting is ON. Hereinafter, the operation of the digital camera10will be described in detail. 2-2. Details of Operation In the digital camera10of the present embodiment, the Fn button156, to which the ON/OFF function of the RAW output setting is assigned, is an example of the above-described input interface. The operation of the digital camera10when the assignment is performed in advance will be described with reference toFIGS.4to5. FIG.4is a flowchart illustrating an operation of RAW output setting in the digital camera10. The processing shown in the flow inFIG.4is started in a state where the RAW output setting of the digital camera10is OFF, and is executed by the controller180. Hereinafter, a case where the external monitor30is connected to the HDMI terminal110and the other display apparatuses32and34are not connected to the digital camera10will be described. First, the controller180detects an input of a user operation for switching the RAW output setting from OFF to ON in the digital camera10(S1). For example, the user operation to be detected in step S1may be an operation of depressing the Fn button156or an operation of RAW output setting in a setting menu. When the ON operation of the RAW output setting is not input (NO in S1), the controller180does not particularly output the RAW data of the imaging result of the image sensor140to the external monitor30even when the RAW data is generated, and repeats the detection in step S1. At this time, various information other than RAW data may be output from the digital camera10to the external monitor30via the HDMI terminal110. For example, image data obtained by performing development processing on RAW data, or data for displaying setting information such as a setting menu may be output as an image signal. On this occasion, in the digital camera10, the development processing described above, processing of superimposing a setting menu on a developed image, or the like are performed. When the ON operation of the RAW output setting is input (YES in S1), the controller180causes the external monitor30to display a screen of a predetermined caution message by data communication via the HDMI terminal110, for example (S2). A display example in step S2will be described with reference toFIG.5. FIG.5shows a display example of a caution message screen displayed on the external monitor30connected to the digital camera10. In the present example, the caution message screen is displayed as a pop-up or the like on the external monitor30under the control (S2) of the controller180, and includes a caution message50and YES/NO options51and52. The caution message50includes three notification contents as shown inFIG.5, for example.
The first notification contents are a warning of notifying the user beforehand that the setting menu is not displayed on the external monitor30connected to the HDMI terminal110in the ON state of the RAW output setting. The second notification contents are a proposal for urging the user to separately prepare a display apparatus for displaying the setting menu by the SDI terminal112or the Tether connection. The third notification contents are confirmation for causing the user to select whether or not to change the RAW output setting from OFF to ON. For example, in a state where the caution message screen shown inFIG.5is displayed on the external monitor30, the controller180receives a user operation of selecting any one of the YES/NO options51and52using the selection button154or the like (S3). For example, when the option52of “NO” is selected by the user operation and the option51of “YES” is not selected (NO in S3), the controller180controls the external monitor30so that the external monitor30deletes the caution message screen displayed from the HDMI terminal110, and returns to the processing in step S1. On the other hand, when the option51of “YES” is selected by the user operation (YES in S3), the controller180turns ON (i.e., applies) the RAW output setting to the digital camera10(S4). For example, the controller180performs various setting changes for enabling the digital camera10to output RAW data of a moving image from the HDMI terminal110to the external monitor30. On this occasion, when a display apparatus different from the external monitor30is not particularly connected to the digital camera10, the controller180may set the user operation for displaying the setting menu as disabled. The controller180outputs the moving image RAW data generated in the digital camera10to the external monitor30via the HDMI terminal110(S5). For example, in the digital camera10, the image sensor140performs an imaging operation and sequentially generates RAW data indicating an imaging result. The controller180outputs RAW data for each frame at a predetermined frame rate to the external monitor30as moving image RAW data, for example. The moving image RAW data may be generated by appropriately performing processing different from the development processing on the imaging result of the image sensor140in the image processor160, for example. In the ON state of the RAW output setting, the controller180detects an input of a user operation of depressing the Fn button156at any time (S6). The detection target in step S6is a user operation for switching the RAW output setting from ON to OFF (i.e., canceling the RAW output setting). When the depressing operation on the Fn button156is not input in the ON state of the RAW output setting (NO in S6), the controller180continues to output the moving image RAW data from the HDMI terminal110to the external monitor30(S5). On the other hand, when the depressing operation on the Fn button156is input in the ON state of the RAW output setting (YES in S6), the controller180turns OFF (i.e., cancels) the RAW output setting in the digital camera10(S7). For example, in step S7, the controller180stops outputting of the moving image RAW data from the HDMI terminal110to the external monitor30. The controller180may output, to the external monitor30, an image signal indicating the developed image data in YC format or the like instead of RAW format. 
The controller180controls the digital camera10to be in the OFF state of the RAW output setting (S7), and ends the processing shown in the present flow. According to the above processing, even when the menu is not displayed on the external monitor30with the moving image RAW data being output from the HDMI terminal110in the digital camera10to the external monitor30, the RAW output setting can be turned OFF at any time by the depressing operation on the Fn button156(S5to S7). The cancellation of the RAW output setting can be performed even in a state where a display apparatus different from the external monitor30cannot be prepared and no menu can be displayed. This makes it easier for the user to use the RAW output setting. In the above description, as illustrated inFIG.5in step S2, an example in which three notification contents are included in the caution message50has been described. The caution message50displayed in step S2is not particularly limited to the example inFIG.5, and may include any one or two notification contents without including all of the three notification contents, for example. 2-2-1. Forced OFF Processing In the digital camera10, a case is assumed where the ON/OFF function of the RAW output setting described above is not assigned to the Fn button156due to user setting or the like. The digital camera10of the present embodiment receives an instruction to forcibly turn OFF the RAW output setting even in such a case. The forced OFF processing will be described with reference toFIG.6. FIG.6is a flowchart for illustrating forced OFF processing of the digital camera10. The processing shown in the flowchart inFIG.6is executed by the controller180in parallel with the processing illustrated inFIG.4in a state where the RAW output setting is ON, for example. First, the controller180detects whether or not a predetermined reset operation is input in the digital camera10(S21). The reset operation is a user operation preset to forcibly return the operation setting related to information output or the like of the digital camera10to an initial state or the like. For example, the reset operation is an operation of simultaneously depressing a plurality of buttons155,156, and158in a specific combination. When the reset operation is not input (NO in S21), the controller180does not particularly change the setting of the digital camera10, and repeats the detection in step S21. For example, when the RAW output setting is in the ON state, outputting of the moving image RAW data is continued. When the reset operation is input (YES in S21), the controller180switches the RAW output setting of the digital camera10from ON to OFF (S22). The processing in step S22is performed similarly to step S7inFIG.4. In addition, the controller180resets a setting different from the RAW output setting regarding the information output from the digital camera10, for example (S23). For example, the digital camera10has an information display setting being a setting for preselecting an apparatus caused to display information such as a setting menu (seeFIG.3). In step S23, the controller180forcibly changes the information display setting to "AUTO" being the initial setting, for example. The "AUTO" of the information display setting is a setting state in which a target for displaying setting information such as a setting menu is automatically selected from connection destinations of the HDMI terminal110and the SDI terminal112, for example.
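A minimal sketch of the flow inFIG.4(S1to S7) and of the forced OFF processing (S21to S23) described above is given below; the class name, the confirm callback, and the string values of the information display setting are illustrative assumptions, and the remaining setting states of the information display setting are described in the next paragraph.

```python
class RawOutputController:
    """Minimal sketch of the FIG.4 flow (S1 to S7) and the forced OFF reset
    (S21 to S23). Class, method, and callback names are illustrative."""

    def __init__(self, confirm):
        self.raw_output_on = False   # RAW output setting (OFF at the start of FIG.4)
        self.info_display = "AUTO"   # information display setting: AUTO / HDMI / SDI / OFF
        self.confirm = confirm       # shows the caution message (S2) and returns True for YES

    def request_raw_output_on(self):
        # S2/S3: display the caution message and apply the setting only on YES (S4).
        if self.confirm("Setting menu will not be displayed while RAW output is ON. Proceed?"):
            self.raw_output_on = True  # S5 (outputting the moving image RAW data) is not sketched here

    def on_fn_button(self):
        # S6/S7: the Fn button cancels the RAW output setting without any menu display.
        self.raw_output_on = False

    def on_reset_operation(self):
        # S21 to S23: a preset simultaneous button press forces the RAW output setting
        # OFF and returns the information display setting to its initial state.
        self.raw_output_on = False
        self.info_display = "AUTO"

camera = RawOutputController(confirm=lambda message: True)  # e.g. the user selects YES
camera.request_raw_output_on()   # RAW output setting is now ON
camera.on_fn_button()            # cancelled at any time by the Fn button
```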
For example, in addition to “AUTO”, the information display setting has setting states such as “HDMI” for fixing an apparatus to be displayed to a connection destination of the HDMI terminal110, “SDI” for fixing an apparatus to be displayed to a connection destination of the SDI terminal112, and “OFF” for not displaying information to any connection destination. When resetting the setting for the information output of the digital camera10(S23), the controller180ends the processing shown in the present flow. According to the forced OFF processing of the digital camera10described above, for example, even in an emergency situation where the RAW output setting is turned ON with the ON/OFF function of the RAW output setting not assigned to the Fn button156, the RAW output setting can be forcibly turned OFF (S21to S22). In addition, resetting the information display setting together with the RAW output setting (S23) allows a setting menu to be displayed on the connection destination of the digital camera10, such as the external monitor30, after the reset operation, for example. As described above, it is possible for the user to collectively handle, by the reset operation, the emergency situation in which the setting menu or the like cannot be displayed. Thus, it is possible for the user to easily use the settings of various outputs of the digital camera10. 3. Effects and the Like As described above, the digital camera10according to the present embodiment is an example of a digital camera10that does not include a display that displays a setting menu (seeFIG.3) being an example of setting information for operation setting in the own apparatus. The digital camera10includes an image sensor140being an example of an image sensor, an HDMI terminal110being an example of an output interface, and a controller180. The image sensor140captures a subject image to generate image data. The HDMI terminal110is connected to an external monitor30being an example of an external apparatus, to output image data. The controller180controls a RAW output setting being an example of an output setting for outputting image data in a RAW format being an example of a predetermined format, that is, RAW data from the HDMI terminal110to the external monitor30. In a state where the RAW output setting is applied (ON), the setting menu for canceling the RAW output setting is not displayed on the external monitor30being the output destination of the RAW data by the HDMI terminal110. The digital camera10further includes a user interface150such as an Fn button156as an example of an input interface that receives an instruction to cancel (OFF) the RAW output setting from the state where the RAW output setting is applied. According to the above digital camera10, even when the setting menu is not displayed on the external monitor30in a state where the RAW output setting is turned ON, the input interface that responds to the instruction to cancel the RAW output setting can avoid a situation where the RAW output setting cannot be returned to OFF. Accordingly, in the digital camera10in which the setting menu and the like are not displayed, it is possible to suppress a situation in which it is difficult to use the RAW output setting, and possible to facilitate use of the setting of outputting data to the external apparatus. 
In the digital camera10of the present embodiment, in a state where the RAW output setting is applied (S4to S6, S21), the input interface responds to the instruction to cancel the RAW output setting without causing the external display apparatuses32and34to display a setting menu for canceling the RAW output setting (S7, S22). Accordingly, the user can return the RAW output setting of the digital camera10to OFF without particularly preparing the additional display apparatuses32and34in addition to the external monitor30being the output destination of the moving image RAW data, and the RAW output setting can be easily used. In the digital camera10of the present embodiment, when an instruction to apply the RAW output setting is input in a state where the RAW output setting is not applied (YES in S1), the controller180causes the external monitor30being an output destination to display a predetermined message such as a caution message50(S2). The predetermined message includes at least one of a notification of giving an advance notice that the setting menu is not displayed on the external monitor30being the output destination when the RAW output setting is applied, a notification of urging preparation of display apparatuses32and34different from the external monitor30being the output destination, or a notification of confirming whether or not to apply the RAW output setting (seeFIG.5). The notification allows the user to easily use the RAW output setting. The digital camera10of the present embodiment further includes a user interface150that receives a user operation according to a setting menu displayed on the external monitor30in a state where the RAW output setting is not applied. That is, the external monitor30can display a setting menu in a state where the RAW output setting is not applied. When the RAW output setting is OFF, the user can display a setting menu on the external monitor30to use various settings. In the digital camera10of the present embodiment, the input interface includes an Fn button156as an example of an operation member to which a function of switching whether or not to apply the RAW output setting is assigned, in the operation member in the user interface150provided in the digital camera10. An instruction to cancel the RAW output setting when the RAW output setting is ON can be input using the Fn button156to which the above function is assigned (S6). In the digital camera10of the present embodiment, the user interface150being an example of the input interface receives an instruction to cancel the output setting according to a predetermined user operation such as a preset reset operation regardless of whether or not the switching function is assigned to the Fn button156(S21). Accordingly, even when the RAW output setting is turned ON with the function described above not assigned to the Fn button156, the RAW output setting can be forcibly returned to OFF. In the digital camera10of the present embodiment, the predetermined format is a RAW format corresponding to a state where the image sensor140has performed imaging. The target image data output from the output interface in the RAW output setting is, for example, moving image RAW data. The image data being an output target is not necessarily limited to moving image RAW data, and may be still image RAW data, for example. The predetermined format may be a format incapable of superposing OSD or GUI (e.g. caption or setting information) with the data such as RAW data. 
In the digital camera10of the present embodiment, the HDMI terminal110being an example of an output interface outputs image data to the external monitor30in conformity with the HDMI standard. The communication standard used by the output interface is not necessarily limited to the HDMI standard, and may be any of various communication standards over which image data in the predetermined format, such as RAW data, can be output. Other Embodiments As described above, the first embodiment has been described as an exemplification of the technique disclosed in the present application. However, the technique in the present disclosure is not limited thereto, and can also be applied to embodiments in which changes, substitutions, additions, omissions, and the like are made as appropriate. In addition, it is also possible to combine each component described in the above embodiment to form a new embodiment. Thus, in the following, other embodiments will be exemplified. In the first embodiment described above, an operation example when only the external monitor30being the output destination of the moving image RAW data is connected to the digital camera10has been described. In the present embodiment, in addition to the external monitor30, a display apparatus such as the monitor32or the information terminal34may be connected to the digital camera10. For example, with the moving image RAW data being output from the HDMI terminal110to the external monitor30in the digital camera10, the setting menu of the digital camera10may be separately displayed on the display apparatuses32and34connected to the various terminals112and114. In this case, a user operation for checking or changing various settings such as video quality of moving image RAW data in the digital camera10can be input using the displayed setting menu. In addition, a user operation of turning OFF the moving image RAW setting may be input. As described above, the digital camera10of the present embodiment further includes the various connection terminals112and114as an example of a communication interface different from the output interface, and the user interface150. The communication interface is connected to the external display apparatuses32and34separately from the output destination of the output interface to perform data communication, and the user interface150receives a user operation corresponding to the setting menu displayed on the display apparatus in a state where the RAW output setting is applied and the setting menu is not displayed on the external monitor30being the output destination. Accordingly, by using the various display apparatuses32and34in combination with the external monitor30that is the output destination of the moving image RAW data, the user can easily use the digital camera10, for example by being able to use the setting menu. For example, an instruction to turn OFF the moving image RAW setting from the setting menu may be input by a user operation of the user interface150according to the setting menu, or may be input to the digital camera10via the connection terminal114from an operation of the information terminal34or the like. In the above embodiments, the example in which the ON/OFF function of the RAW output setting is assigned to the Fn button156has been described. To the Fn button156, the function of turning ON the RAW output setting does not need to be assigned, and only the function of turning OFF the RAW output setting may be assigned. 
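The combined use of the output interface and the separate communication interface described in this embodiment can be illustrated, purely as a sketch, as follows. The port names and route labels are hypothetical placeholders for the HDMI terminal110, the SDI terminal112, and the USB terminal114.

```python
# Illustrative routing sketch; port names and labels are hypothetical placeholders.

def route_outputs(raw_output_on, connected_ports):
    """Decide what each connected interface carries in the present example."""
    routes = {"HDMI": "moving-image RAW data" if raw_output_on
              else "image data + setting menu"}
    if raw_output_on:
        # The setting menu cannot be overlaid on the RAW stream, so it is sent to a
        # separately connected display apparatus instead.
        for port in ("SDI", "USB"):
            if port in connected_ports:
                routes[port] = "setting menu (OSD)"
    return routes


print(route_outputs(True, {"SDI", "USB"}))
# {'HDMI': 'moving-image RAW data', 'SDI': 'setting menu (OSD)', 'USB': 'setting menu (OSD)'}
```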
When the digital camera10is in an operation state where ON/OFF of the RAW output setting cannot be executed even with various display apparatuses used, the operation of the above Fn button156may be disabled. In the above embodiments, the Fn button156has been exemplified as an example of the input interface in the digital camera10. In the present embodiment, the input interface is not limited to the Fn button156, and may be various user interfaces150in the digital camera10. The input interface is not limited to the user interface150, and may be configured to receive an instruction to cancel RAW output setting by voice input, for example. Alternatively, an interface circuit connected to a remote control unit or the like to which such a cancelation instruction can be input may be used. In the above embodiments, the setting menu has been exemplified as an example of the setting information in the digital camera10. In the digital camera10of the present embodiment, the setting information is not limited to the setting menu, and may be various information that can be operated and set in the digital camera10, or may be various on-screen display (OSD) information. In the above embodiments, the external monitor30has been described as an example of the external apparatus being the output destination of the output interface. In the present embodiment, the external apparatus is not particularly limited to the external monitor30, and may be an information terminal such as a PC or an external recorder, for example. In the above embodiments, an example of the configuration of the digital camera10has been described as an example of the imaging apparatus. In the present embodiment, the digital camera10is not particularly limited to the above configuration, and may have various configurations. For example, the digital camera10does not need to be an interchangeable lens type, and may be an integrated lens type. In addition, the digital camera10does not need to be particularly a box camera. As described above, the embodiments are described as the exemplification of the technique in the present disclosure. To that end, the accompanying drawings and the detailed description are provided. Therefore, among the components described in the accompanying drawings and the detailed description, not only the component essential for solving the problem, but also the component not essential for solving the problem may be included in order to exemplify the above technique. Therefore, it should not be recognized that these non-essential components are essential immediately because these non-essential components are described in the accompanying drawings and the detailed description. In addition, since the above embodiment is for illustrating the technique in the present disclosure, various changes, substitutions, additions, omissions, and the like can be made within the scope of the claims or the equivalent thereof. The present disclosure is applicable to an imaging apparatus that does not include a display unit for displaying setting information for operation setting.
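As a final illustration for this embodiment, accepting the cancel instruction from any of the input interfaces mentioned above (the Fn button156, a reset operation, voice input, or a remote control unit) might be sketched as follows; the source identifiers and state layout are hypothetical.

```python
# Illustrative sketch only; source names are hypothetical.

CANCEL_SOURCES = {"fn_button", "reset_operation", "voice_command", "remote_control"}

def on_cancel_instruction(source, state):
    """Turn the RAW output setting OFF when a cancel instruction arrives from any accepted source."""
    if source in CANCEL_SOURCES and state.get("raw_output_on"):
        state["raw_output_on"] = False
    return state


print(on_cancel_instruction("voice_command", {"raw_output_on": True}))   # {'raw_output_on': False}
```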
37,753
11943530
DETAILED DESCRIPTION Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings. The electronic device and method according to various embodiments of the disclosure can provide a zoom in/out speed to meet a user's intention by setting various sections corresponding to at least one camera. FIG.1illustrates an electronic device101in a network environment100according to various embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input device150, a sound output device155, a display device160, an audio module170, a sensor module176, an interface177, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one (e.g., the display device160or the camera module180) of the components may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components may be implemented as single integrated circuitry. For example, the sensor module176(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device160(e.g., a display). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor120may load a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor123(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. Additionally or alternatively, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. The auxiliary processor123may control at least some of functions or states related to at least one component (e.g., the display device160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active state (e.g., executing an application). 
According to an embodiment, the auxiliary processor123(e.g., an ISP or a CP) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input device150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input device150may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen). The sound output device155may output sound signals to the outside of the electronic device101. The sound output device155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing a record, and the receiver may be used for an incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of the speaker. The display device160may visually provide information to the outside (e.g., a user) of the electronic device101. The display device160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display device160may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input device150, or output the sound via the sound output device155or a headphone of an external electronic device (e.g., an electronic device102) directly (e.g., wiredly) or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. 
A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102). According to an embodiment, the connecting terminal178may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, ISP, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. The communication module190may include one or more CPs that are operable independently from the processor120(e.g., the AP) and supports a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or Infrared Data Association (IrDA)) or the second network199(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN)). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multi components (e.g., multi chips) separate from each other. The wireless communication module192may identify and authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. 
According to an embodiment, the antenna module197may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., PCB). According to an embodiment, the antenna module197may include a plurality of antennas. In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190(e.g., the wireless communication module192) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102and104may be a device of a same type as, or a different type, from the electronic device101. According to an embodiment, all or some of the operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above. It should be appreciated that various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. 
It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of, or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and does not limit the components in other aspect (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element. As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program140) including one or more instructions that are stored in a storage medium (e.g., internal memory136or external memory138) that is readable by a machine (e.g., the electronic device101). For example, a processor (e.g., the processor120) of the machine (e.g., the electronic device101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a complier or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Wherein, the term “non-transitory” simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™), or between two user devices (e.g., smart phones) directly. 
If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. FIG.2illustrates an electronic device including a camera according to an embodiment. Referring toFIG.2, the electronic device200(e.g., the electronic device101inFIG.1) includes a processor210(e.g., the processor120inFIG.1), a memory220(e.g., the memory130inFIG.1), at least one camera230(e.g., the camera module180inFIG.1), and a display240(e.g., the display device160inFIG.1). The above-listed components of the electronic device are exemplary only. The electronic device according to various embodiments may further include any component for performing a particular function, or may not include one or more of the above-listed components. The electronic device according to various embodiments may include at least some of the configuration and/or functions of the electronic device101shown inFIG.1. The processor210may perform various instructions to adjust a camera magnification. When a first user input is detected from the display240, the processor210may control to display, on at least a portion of the display240, a zoom bar UI for adjusting the camera magnification. Controlling the electronic device200through the processor210executing various instructions will be described throughout this disclosure. The processor210is a component capable of controlling the respective components of the electronic device200and processing various data. The processor210may include at least some of the configuration and/or functions of the processor120shown inFIG.1. The processor210may be functionally, operatively, and/or electrically connected to components of the electronic device200, including the memory220, the at least one camera230, and the display240. The memory220may store various software programs (or applications, application programs, etc.), data, and instructions for the operation of the electronic device200. At least some of such programs may be downloaded from external servers through wireless or wired communication. The instructions may be defined by, for example, but not limited to, a camera application, or may be defined by a framework, a hardware abstraction layer, or an operating system. The memory220may include at least some of the configuration and/or functions of the memory130shown inFIG.1. 
The memory220may store instructions that cause the processor210, upon executing the instructions, to identify a current camera magnification in response to an execution of a camera application of the electronic device200, to receive a first user input for adjusting the camera magnification (or utilizing a camera zoom function), and to display a UI for adjusting the camera magnification in response to receiving the first user input. In addition, the instructions may cause the processor210to calculate a scroll speed on the displayed UI by using an acceleration factor corresponding to the received first user input or a second user input in a section allocated to the identified camera magnification, and to adjust the camera magnification by using the calculated scroll speed. The electronic device200may include the at least one camera230, for example, a first camera, a second camera, a third camera, and/or the like. The respective cameras may have different angles of view. The first camera may be ultra-wide, the second camera may be wide, and the third camera may be a tele-camera. This is, however, exemplary only and not to be considered as a limitation. Alternatively, each camera may have any other angle of view and thus be named differently. The electronic device200may acquire an image using any one of the cameras230at a specific magnification and display it on the display240. The camera230may be composed of various essential or optional sub-components. The camera230may include at least some of the configuration and/or functions of the camera module180shown inFIG.1. The display240may display a variety of information, data, and/or contents under the control of the processor210. The display240may include at least some of the configuration and/or functions of the display device160illustrated inFIG.1. The display240may be a touch-sensitive display based on a touch panel or a touch screen. The display240may include a touch sensor, which may be implemented in any one of various manners including an in-cell manner and an on-cell manner. In addition, the display240may detect various types of user input entered thereon. For example, the display240may detect a user's direct touch input through the touch sensor, and/or detect a user's indirect touch input using a stylus pen, a touch pencil, a bluetooth low energy pencil (BLE pen), etc. FIG.3is a flow diagram illustrating a method for adjusting a camera magnification of an electronic device, according to an embodiment. InFIG.3, respective operations corresponding to respective blocks depicted in the flow diagram may be performed sequentially, but this is not necessary. For example, the order of such operations may be changed at least in part, and at least two operations may be performed in parallel or concurrently. In addition, at least one operation may be omitted if necessary. The electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may execute a camera application at a user's request to start shooting. At step301, the processor executes (e.g., the processor120inFIG.1or the processor210inFIG.2) may recognize the execution of the camera application of the electronic device. At step302, the processor receives a first user input from the display (e.g., the touch screen) or a button (e.g., a hardware button for volume control) equipped on the electronic device. 
The first input, received by the processor at step302, may include an input of selecting a camera (e.g., the camera module180inFIG.1or the camera230inFIG.2) or a shooting mode, an input of clicking a volume-up button, an input of clicking a volume-down button, a pinch-in input, a pinch-out input, and/or an input (e.g., a gesture input) using the BLE pen. The instructions stored in the memory may cause the processor to display a preview screen on a portion of the display at step304when the first user input received at step302is a hold input for selecting a camera, a click input of a volume-up/down button, or a pinch-in/out input. The above types of the first user input for displaying the preview screen are exemplary only and not to be considered as a limitation. Any other type of the first user input for displaying the preview screen may be defined and used. At step303, the instructions stored in the memory cause the processor to identify a current camera magnification. The identified camera magnification will be used at step309for calculating a scroll speed for a section allocated to each camera. At step304, the instructions stored in the memory cause the processor to display the preview screen on a portion of the display as described above. The preview screen offers a real-time image to assist shooting. Normally, the preview screen may occupy the entire area of or a selected portion of the display. At step305, the instructions stored in the memory cause the processor to display, on a portion of the display, a zoom bar UI for adjusting the camera magnification and a related UI showing specific camera magnification indications. At step305, the above-mentioned UIs may be overlaid partially on the preview screen being previously displayed. Alternatively, the above UIs may be displayed outside the preview screen. The zoom bar UI displayed at step305may be implemented, for example, in the form of a gradation bar. Any form of the zoom bar UI is possible, such as a form according to a user setting, a form offered by or selected in related software, and/or a default form provided by a manufacturer of the electronic device. The related UI showing specific camera magnification indications displayed at step305may be implemented, for example, so as not to overlap with the zoom bar UI. This UI may be set to show a plurality of specific camera magnifications, for example, five camera magnifications. The instructions stored in the memory may cause the processor to selectively show default camera magnifications or frequently used camera magnifications in the related UI. The specific camera magnification indications may be a 0.5 magnification, a 1.0 magnification, a 2.0 magnification, a 4.0 magnification, and an 8.0 magnification. This is exemplary only and may be varied depending on the performance or types of cameras equipped in the electronic device. The zoom bar UI may show only a part of the entire range of camera magnifications that the electronic device can provide. In addition, a currently displayed range of the zoom bar UI may be changed after the camera magnification is adjusted in response to the first user input or in response to a second user input to be described later. At step306, the instructions stored in the memory cause the processor to receive the first user input or the second user input. The first user input received at step306may include an input of performing a scroll or navigation through the zoom bar UI simultaneously with displaying the zoom bar UI. 
For example, at step306, the processor receives, as the first user input, an input of clicking a volume-up button, an input of clicking a volume-down button, a pinch-in input, a pinch-out input, or an input using the BLE pen. The volume-related inputs and the pinch-related inputs may be regarded as the second user input as well as the first user input, so that the processor may perform steps302and306simultaneously. At step307, the instructions stored in the memory cause the processor to determine whether the received user input is a selection of a specific camera magnification in the related UI. If the previously received first user input relates to camera magnification adjustment, the processor may determine, at step307, whether the second user input is to select a specific camera magnification in the related UI. If the previously received first user input relates to the volume-related input or the pinch-related input, the processor may determine, at step307, that the received user input is not a selection of a specific camera magnification in the related UI. In this case, the processor may perform a scroll or navigation through the zoom bar UI, based on the received second or first user input. When it is determined at step307that the received user input is a selection of a specific camera magnification in the related UI, the instructions cause the processor to identify the selected specific camera magnification at step311. When it is determined at step307that the received user input is not a selection of a specific camera magnification in the related UI, the instructions cause the processor to perform step308. At step308, the instructions stored in the memory cause the processor to identify an acceleration factor in a section allocated to the camera magnification corresponding to the received first or second user input. Such sections allocated to the respective camera magnifications and related acceleration factors will be described below with reference toFIG.4. At step309, the instructions stored in the memory cause the processor to calculate a scroll speed on the zoom bar UI by using the acceleration factor identified at step308. In this disclosure, a scroll or scrolling on the zoom bar UI may also be referred to as a navigation or navigating. The acceleration factor is defined as an amount of change in camera magnification with respect to the size (e.g., the speed of a touch or button input, or the distance of a drag input) of the user input (e.g., the first user input or the second user input). For example, when different acceleration factors are used in response to a specific user input, scroll speeds on the zoom bar UI may be different. At step310, the instructions stored in the memory cause the processor to adjust the camera magnification while navigating the zoom bar UI by using the scroll speed calculated at step309. At step311, the instructions stored in the memory cause the processor to identify the specific camera magnification selected through the second user input. At step312, the instructions stored in the memory cause the processor to calculate a scroll speed on the zoom bar UI from the current camera magnification identified at step303to the selected camera magnification identified at step311. The calculating process of step312may follow the calculating process of steps308and309. 
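The branch at step307and the two calculation paths that follow it (steps308to310and steps311to312) may be summarized in the following minimal sketch. The function signature, the dictionary keys, and the distance-to-magnification scale are assumptions made only for illustration.

```python
# Minimal sketch of the branch at step 307; names and the 0.01 scale are hypothetical.

def handle_zoom_input(user_input, current_mag, acceleration_factor):
    """Return a new camera magnification for one user input on the zoom bar UI."""
    if user_input.get("selected_indication") is not None:
        # Steps 311-312: navigate from the current magnification to the touched indication.
        return user_input["selected_indication"]
    # Steps 308-310: scroll using the acceleration factor of the section containing
    # the current magnification.
    factor = acceleration_factor(current_mag, user_input["speed"])
    scroll = user_input["distance"] * factor          # scroll distance, regarded as the scroll speed
    step = 0.01 * scroll                              # hypothetical distance-to-magnification scale
    return current_mag + step if user_input["zoom_in"] else current_mag - step


# e.g. a "fast" pinch-out of distance 50 at 1.0x with an acceleration factor of 1.5:
print(handle_zoom_input({"selected_indication": None, "speed": "fast",
                         "distance": 50, "zoom_in": True},
                        1.0, lambda mag, speed: 1.5))   # 1.75
```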
Additionally or alternatively, the calculating process may be set to perform rapid navigating at the maximum speed, using the greatest factor among the acceleration factors in the section allocated to each camera magnification. FIG.4illustrates a zoom bar UI and sections allocated to magnifications of at least one camera of an electronic device, according to an embodiment. The plurality of cameras (e.g., the camera module180inFIG.1or the camera230inFIG.2) of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may have different magnification-adjustable sections. The plurality of cameras may have different functions and/or different performances depending on their uses, so that sections in which camera magnification can be adjusted may be different. The plurality of cameras may include a first camera, a second camera, and a third camera. This is, however, exemplary only. The plurality of cameras equipped in the electronic device may be increased, if necessary, up to the nth camera (“n” means an arbitrary natural number). The first camera may be an ultra-wide camera, the second camera may be a wide camera, and the third camera may be a tele-camera. The first camera may have a wider viewing angle than the second camera, and the second camera may have a wider viewing angle than the third camera. The electronic device may utilize the first camera at a lower magnification than the second camera, and may utilize the second camera at a lower magnification than the third camera. In other words, the electronic device may capture an image using any one of the plurality of cameras in accordance with a camera magnification set by the user (or a default camera magnification). The memory (e.g., the memory130inFIG.1, or the memory220inFIG.2) of the electronic device may store data for sections400divided and allocated to a plurality of camera magnifications. The allocated sections may be set to provide convenience when navigating through the zoom bar UI460for adjusting the camera magnification, and may be set with different scroll speeds (or navigating speeds) for the respective sections for efficient magnification adjustment between a low magnification and a high magnification. The sections divided and allocated to the camera magnifications may be a first section410, a second section420, a third section430, a fourth section440, and a fifth section450. This is, however, exemplary only, and the number of the sections may be extended to be equal to or greater than the number of cameras equipped in the electronic device. At least some of the sections (e.g., the third section430, the fourth section440, and the fifth section450) may be set to capture an image by the same camera. The allocated sections for the camera magnifications may be set to allocate a magnification adjustment range of the first camera to the first section, allocate a magnification adjustment range of the second camera to the second section, and allocate a magnification adjustment range of the third camera to the third section. The fourth section may be set to allocate a magnification adjustment range using the third camera and a magnification adjustment function of the camera application together. The fifth section may be set to allocate a magnification adjustment range up to the maximum magnification adjustment range (i.e., a limit value of the magnification adjustment) of the electronic device together with the third camera. 
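Purely as an illustration of the allocation just described, the sections ofFIG.4might be represented by a table such as the following sketch; the magnification ranges reuse the example values given in this description, and the function name is hypothetical.

```python
# Illustrative data layout for the allocated sections 410-450 of FIG. 4.
# The ranges and camera assignments follow the examples given in the text.

SECTIONS = [
    # (section,  camera used in the section,    magnification range)
    ("first",  "first camera (ultra-wide)",     (0.25, 0.5)),
    ("second", "second camera (wide)",          (0.5, 1.0)),
    ("third",  "third camera (tele)",           (1.0, 2.0)),
    ("fourth", "third camera + app zoom",       (2.0, 4.0)),   # "hybrid"
    ("fifth",  "third camera + digital zoom",   (4.0, 8.0)),   # up to the maximum magnification
]

def section_for(magnification):
    """Return the allocated section and the camera used for a camera magnification."""
    for name, camera, (low, high) in SECTIONS:
        if low <= magnification <= high:
            return name, camera
    raise ValueError("magnification outside the adjustable range")


print(section_for(3.0))   # ('fourth', 'third camera + app zoom')
```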
Although the allocated sections for the camera magnifications are illustrated by expressing the low magnification section as a low number and the high magnification section as a high number, this is exemplary only. Alternatively, the allocated sections may be expressed by letters, expressed in the reverse order, or expressed in any other manner. The types of the first user input that can be received by the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The first user input may include an input of selecting a camera (or a shooting mode) from among the plurality of cameras, an input of clicking a volume-up button, an input of clicking a volume-down button, a pinch-in input, a pinch-out input, and/or an input using the BLE pen. The processor may receive, at step305inFIG.3, the first user input as an input for displaying the zoom bar UI. The input for displaying the zoom bar UI at step305may be the first user input of selecting one of the plurality of cameras. In addition, this input may be a touch input (e.g., a tap input or a touch-and-hold input for a certain time) on a desired camera icon. The processor does not adjust the camera magnification immediately upon receiving the first user input (i.e., the touch input) for selecting the camera, but displays the zoom bar UI for camera magnification adjustment. The types of the second user input that can be received by the processor may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The second user input may include an input of clicking a volume-up button, an input of clicking a volume-down button, a pinch-in input, a pinch-out input, a swipe input, a long press input on the zoom bar UI, a touch on a specific camera magnification indication in the separately displayed UI, and/or an input using the BLE pen. When any one of an input of clicking a volume-up button, an input of clicking a volume-down button, a pinch-in input, a pinch-out input, or an input using the BLE pen is received as the first user input, the processor may regard this input as the second user input as well as the first user input. That is, from among the above-listed types of the first user input, the input of selecting a camera may be regarded as an input for displaying the zoom bar UI (i.e., step305inFIG.3) before adjusting the camera magnification. In contrast, the volume-up/down button click input, the pinch-in/out input, and/or the BLE pen input may be regarded as an input for displaying the zoom bar UI and simultaneously adjusting the camera magnification. When the allocated section is changed from an n1 section to an n2 section during the camera magnification adjustment, the processor may select an n2 camera used in the n2 section to acquire an image. For example, when the first camera is used to acquire an image in the first section and then the camera magnification is changed to the second section in response to the first or second user input, the processor may change the used camera to the second camera to acquire an image in the second section. The processor may acquire an image using the first camera in the first section, acquire an image using the second camera in the second section, and acquire an image using the third camera in the third section. 
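The camera switching described above, where the camera in use changes only when the allocated section changes from an n1 section to an n2 section, may be sketched as follows; the section boundaries reuse the example ranges of this description, and the names are hypothetical.

```python
# Illustrative sketch of switching the camera in use when the section changes.

SECTION_BOUNDARIES = [("first", 0.5), ("second", 1.0), ("third", 2.0),
                      ("fourth", 4.0), ("fifth", 8.0)]
CAMERA_BY_SECTION = {"first": "first camera", "second": "second camera",
                     "third": "third camera", "fourth": "third camera", "fifth": "third camera"}

def active_camera(magnification):
    """Camera used to acquire the image at this magnification."""
    for section, upper in SECTION_BOUNDARIES:
        if magnification <= upper:
            return CAMERA_BY_SECTION[section]
    return CAMERA_BY_SECTION["fifth"]

def on_magnification_changed(old_mag, new_mag):
    """Switch the camera in use only when the allocated section changes (n1 -> n2)."""
    if active_camera(old_mag) != active_camera(new_mag):
        return active_camera(new_mag)   # e.g. first camera -> second camera
    return active_camera(old_mag)


print(on_magnification_changed(0.4, 0.8))   # 'second camera'
```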
Referring toFIG.4, the fourth section440may be referred to as a hybrid, which is set to use a magnification adjustment function of the camera application together with the third camera. In addition, the fifth section450may be referred to as digital, which is set to use a digital zoom of the electronic device together with the third camera. The digital zoom is to obtain a zoom effect by enlarging a partial screen of image data for shooting through digital processing (e.g., cropping). The digital zoom may be utilized as a supplementary means of optical zoom, and is available using software without any change in hardware of the electronic device. The sections allocated to the camera magnifications or cameras in the zoom bar UI are exemplary only and not to be considered as a limitation. The sections may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The sections allocated to the camera magnifications may be displayed in the form of gradations on the zoom bar UI. However, the allocated sections may not be one-to-one matching with the actual camera magnifications, and the navigating speeds on the camera magnifications may be different for the sections. In the zoom bar UI, the gradations may have the same interval. However, a difference in the camera magnification indicated by the gradation interval may be varied depending on the allocated sections. That is, the camera magnification corresponding to one gradation may be set differently for each section for efficient display of the zoom bar UI. For example, if the first section is allocated to the first camera capable of adjusting the camera magnification from 0.25 to 0.5, and if the second section is allocated to the second camera capable of adjusting the camera magnification from 0.5 to 2.0, the number of gradations on the zoom bar UI may be set differently for the first section and the second section. Differently setting the size of the camera magnification corresponding to one gradation may provide convenience for adjusting the camera magnification. This example for the first and second sections may be similarly applied to all of the sections. This may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The length of the section allocated to each camera magnification may be set as “the length per section”=“the number of gradations”דthe pixels of gradation interval”. This may be set or customized when the electronic device is initially provided, when the software is updated, and/or by the user. The first, second, third, fourth, and fifth sections may be allocated to camera magnifications of 0.25 to 0.5, 0.5 to 1.0, 1.0 to 2.0, 2.0 to 4.0, and 4.0 to 8.0, respectively. As described above, the camera magnification indicated by the gradation may be different depending on the allocated sections. For example, if a user input moving by “d” in the nth section is received, it may be set to move along gradations by “m” (e.g., “n” denotes 1, 2, 3, 4, 5, etc., and “m” denotes 3, 5, 8, 10, 15, etc.). The length of the nth section is varied depending on the number “n”, and thus the number “m” may be set to increase in proportion to the length of the corresponding section. This may be set or customized when the electronic device is initially provided, when the software is updated, and/or by the user. 
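As a sketch of the per-section gradation layout described above, the following illustrates the “length per section” relation and the different camera magnification represented by one gradation in different sections; the gradation counts and the pixel interval are invented example values, not values from this description.

```python
# Illustrative sketch; gradation counts and pixel interval are invented example values.

GRADATION_INTERVAL_PX = 8            # same on-screen spacing for every gradation

SECTION_GRADATIONS = {               # gradations per allocated section (example values)
    "first": 10, "second": 20, "third": 20, "fourth": 20, "fifth": 20,
}

def section_length_px(section):
    """"Length per section" = "number of gradations" x "pixels of gradation interval"."""
    return SECTION_GRADATIONS[section] * GRADATION_INTERVAL_PX

def magnification_per_gradation(section, mag_range):
    """Camera magnification represented by one gradation; it differs per section."""
    low, high = mag_range
    return (high - low) / SECTION_GRADATIONS[section]


print(section_length_px("first"))                           # 80
# One gradation of the first section (0.25-0.5) covers a smaller magnification step
# than one gradation of the fifth section (4.0-8.0).
print(magnification_per_gradation("first", (0.25, 0.5)))    # 0.025
print(magnification_per_gradation("fifth", (4.0, 8.0)))     # 0.2
```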
The processor may receive a result of detecting the first or second user input as one of classified speeds such as slow, normal, fast, and very fast speeds. The memory may store related instructions. The above classified speeds of input are, however, exemplary only, and the number of classes may be set to more or fewer than four. In connection with steps308and309inFIG.3, the speeds of the first or second user input may be defined as a slow speed less than 100 dp/s, a normal speed between 100 dp/s and 200 dp/s, a fast speed between 200 dp/s and 400 dp/s, and a very fast speed of 400 dp/s or more. The dp/s is dots per inch (dpi) per second, which represents the dpi touched by the user on the display per second. A high dp/s may mean that the scroll speed is fast. This definition is, however, exemplary only and may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The instructions stored in the memory may cause the processor to calculate a scroll speed on the zoom bar UI for adjusting the camera magnification by using an acceleration factor of the section allocated to the camera magnification. The processor may use the acceleration factor designated for each user input speed in the section. In one embodiment, the acceleration factors of the first, second, and third sections may be set to 0.5 for a slow user input, 0.5 to 1.0 for a normal user input, 1.0 to 1.5 for a fast user input, and 2.0 for a very fast user input. The acceleration factors of the fourth and fifth sections may be set to 0.5 for a slow user input, 0.5 to 1.0 for a normal user input, 1.0 to 1.5 for a fast user input, and 3.0 for a very fast user input. These acceleration factors are, however, exemplary only and may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The scroll speed for adjusting the camera magnification (calculation of the scroll distance) may be calculated using “the scroll distance”=“the distance of a user input”דthe acceleration factor”. That is, the processor may calculate the scroll distance from both the received user input and the acceleration factor and then regard (or convert) the calculated scroll distance as (or into) the scroll speed because it is assumed that the calculated scroll distance is obtained by scrolling for the same period of time. In addition, “the distance of a user input” may be defined as a pinch in/out distance, a drag distance of swipe, a distance corresponding to a click speed or time of the volume control button, a distance corresponding to an input speed (or gesture speed) using the BLE pen, a distance corresponding to a long press time, and the like. This definition between the user input distance and the user input type may be implemented with a separate table and may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The instructions stored in the memory may cause the processor to adjust the camera magnification at step310inFIG.3by using the calculation result obtained through step309or312inFIG.3. In order to calculate the scroll speed on the zoom bar UI (or the navigating speed or the scroll distance), the processor may apply the acceleration factor designated for each section allocated to each camera magnification. 
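The speed classification and the scroll distance calculation described above may be sketched as follows. The thresholds and acceleration factors follow the example values in this description, while the single values chosen from the stated ranges (e.g., 1.0 for a normal input) are assumptions for illustration.

```python
# Illustrative sketch using the example dp/s thresholds and acceleration factors.

def classify_speed(dps):
    """Classify the first/second user input speed given in dp/s."""
    if dps < 100:
        return "slow"
    if dps < 200:
        return "normal"
    if dps < 400:
        return "fast"
    return "very_fast"

ACCELERATION = {
    # sections 1-3                                         sections 4-5
    "low":  {"slow": 0.5, "normal": 1.0, "fast": 1.5, "very_fast": 2.0},
    "high": {"slow": 0.5, "normal": 1.0, "fast": 1.5, "very_fast": 3.0},
}

def scroll_distance(input_distance, input_dps, section_index):
    """"Scroll distance" = "distance of the user input" x "acceleration factor"."""
    group = "low" if section_index <= 3 else "high"
    factor = ACCELERATION[group][classify_speed(input_dps)]
    return input_distance * factor


# A very fast swipe scrolls farther in the fifth section than in the second section.
print(scroll_distance(120, 450, 2))   # 240.0
print(scroll_distance(120, 450, 5))   # 360.0
```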
The acceleration factor for each section may be set differently depending on the speed of the first or second user input (e.g., slow, normal, fast, or very fast) stored in the memory. Calculating the scroll speed (or navigating speed) on the zoom bar UI may be the same as or similar to calculating the scroll distance (or navigating distance). This is because adjusting the camera magnification on the zoom bar UI is made by scrolling (or navigating) through the gradations on the zoom bar UI. In addition, a scrolling process on the zoom bar UI is performed through continuous actions along the sections allocated to the camera magnifications, but the scroll speed may be calculated as a discrete value. FIG.5Aillustrates a screen of receiving a first user input for displaying a zoom bar UI in an electronic device, according to an embodiment. The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive a first user input of selecting one of the plurality of cameras (e.g., the camera module180inFIG.1or the camera230inFIG.2) (or selecting a shooting mode). Referring toFIG.5A, a UI510containing three camera icons511,512, and513is displayed, and the first user input may select one of the camera icons511,512, and513. The number of camera icons contained in the UI510is exemplary only, and may be varied depending on the cameras equipped in the electronic device. In the UI510that contains three camera icons511,512, and513, the first user input may be an input of selecting the first camera icon511, selecting the second camera icon512, or selecting the third camera icon513. In response to the first user input, the processor may select one of three cameras equipped therein. As illustrated inFIG.5A, the screen displayed on the display (e.g., the display device160inFIG.1or the display240inFIG.2) may be a default screen offered when the camera application is executed. The default screen may contain, for example, a shooting button, a front/rear camera switching icon, a preview screen, a setting icon related to a shooting function, the UI510indicating the plurality of camera icons, and the like. The instructions stored in the memory may cause the processor to switch the default screen to a screen of displaying the zoom bar UI460in response to receiving, as the first user input, a touch-and-hold input of selecting one of the camera icons displayed on the default screen. In this case, an elapsed time of the touch-and-hold input for switching to the zoom bar UI display screen may be settable or changeable. FIG.5Billustrates a screen related to camera magnification adjustment including displaying a zoom bar UI in an electronic device, according to an embodiment. In response to receiving the first user input on the default screen as shown inFIG.5A, the processor may switch the default screen to a screen as shown inFIG.5B. The screen inFIG.5Bmay contain the zoom bar UI460, instead of the UI510inFIG.5A, and a related UI showing specific camera magnification indications (e.g., 0.5 magnification, 1.0 magnification, 2.0 magnification, 4.0 magnification, and 8.0 magnification). The processor may display a current camera magnification, identified at step303inFIG.3, above the zoom bar UI. Entry to theFIG.5Bscreen from theFIG.5Ascreen is to merely display the zoom bar UI460without accompanying camera magnification adjustment. Thus, zooming in/out does not occur yet. 
The instructions stored in the memory may cause the processor to return to the default screen ofFIG.5Awhen any user input is not received for a certain time (e.g., 2 seconds) in a state where the screen ofFIG.5Bis displayed. The above time for returning to the default screen ofFIG.5Amay restart whenever any user input is entered during the camera magnification adjustment. FIG.6Aillustrates a screen of receiving a first user input for displaying a zoom bar UI in an electronic device, according to an embodiment. The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive, as the first user input, a pinch-in or pinch-out input. Referring toFIG.6A, the first user input is the pinch-out input on the preview screen contained in the default screen displayed on the display (e.g., the display device160inFIG.1or the display240inFIG.2). The pinch-in input may also be received as the first user input. In response to receiving the pinch-in or pinch-out input on the default screen shown inFIG.6A, the processor of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may switch the default screen to a screen as shown inFIG.6B. If the first user input is the pinch-out input, the processor may perform zoom-in to adjust the camera magnification to a higher camera magnification than the current camera magnification identified at step303inFIG.3. In contrast, if the first user input is the pinch-in input, the processor may perform zoom-out. FIG.6Billustrates a screen related to camera magnification adjustment including displaying a zoom bar UI in an electronic device, according to an embodiment. In response to receiving the first user input on the default screen as shown inFIG.6A, the processor may switch the default screen to a screen as shown inFIG.6B. The screen inFIG.6Bmay contain the zoom bar UI460, instead of the UI510inFIG.6A, and a related UI showing specific camera magnification indications. Contrary to the first user input ofFIG.5A, the first user input (i.e., pinch-in/out input) ofFIG.6Aaccompanies camera magnification adjustment. Thus, the processor may perform zoom-in/out simultaneously with switching the screen in response to receiving the first user input. For example, if the current camera magnification identified at step303inFIG.3is 1.0, and if the first user input received inFIG.6Ais the pinch-out input, the processor may perform zoom-in at a scroll speed corresponding to the received input while simultaneously switching the screen fromFIG.6AtoFIG.6B. Instead of the pinch-in/out input, an input of clicking the volume button may be entered as the first user input. Specifically, an input of clicking the volume-down button may be set to replace the pinch-in input, and an input of clicking the volume-up button may be set to replace the pinch-out input. The above-described zoom-in/out process may be performed simultaneously even in case of the volume button input. In addition, an input using the BLE pen may also be used instead of the pinch-in/out input. For example, a gesture input of pressing the BLE pen on the screen and rotating the BLE pen clockwise may replace the pinch-out input. Similarly, a gesture input of pressing the BLE pen on the screen and rotating the BLE pen counter-clockwise may replace the pinch-in input. 
This gesture input using the BLE pen may be set, customized, or changed in a user setting, in the camera application, and/or by a manufacturer of the electronic device. The above-described zoom-in/out process may be performed simultaneously even in case of the gesture input using the BLE pen. The instructions stored in the memory may cause the processor to return to the default screen ofFIG.6Awhen any user input is not received for a certain time (e.g., 2 seconds) in a state where the screen ofFIG.6Bis displayed. The above time for returning to the default screen ofFIG.6Amay restart whenever any user input is entered during the camera magnification adjustment. FIG.7Aillustrates a swipe touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive, as the second user input, a swipe touch input on the screen ofFIG.5B or6Bdisplayed in response to receiving the first user input. In addition, the instructions may cause the processor to calculate a scroll speed corresponding to the received swipe touch input and thereby adjust the camera magnification. Referring toFIG.7A, the processor of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) receives the swipe touch input on the zoom bar UI460. This is, however, exemplary only. Alternatively, the processor may be set to adjust the camera magnification by receiving the swipe touch input on preview screen. The swipe touch input in the left direction may be set to the zoom-in process of adjusting the camera magnification to a higher magnification, and the swipe touch input in the right direction may be set to the zoom-out process of adjusting the camera magnification to a low magnification. This is, however, exemplary only. FIG.7Billustrates a result of zooming in response to a swipe touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. In response to receiving the swipe input as the second user input on theFIG.7Ascreen, the processor may display an adjusted camera magnification corresponding to the received swipe input as shown inFIG.7Band also display the preview screen corresponding to the adjusted camera magnification. This process of the processor may correspond to the above steps306,308,309, and310inFIG.3. The instructions stored in the memory may cause the processor to return to the default screen ofFIG.5A or6Awhen any user input is not received for a certain time (e.g., 2 seconds) in a state where the screen ofFIG.7A or7Bis displayed. The above time for returning to the default screen may restart whenever any user input is entered during the camera magnification adjustment. FIG.8Aillustrates a pinch-in touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive, as the second user input, a pinch-in input on the screen ofFIG.5B or6Bdisplayed in response to receiving the first user input. In addition, the instructions may cause the processor to calculate a scroll speed corresponding to the received pinch-in input and thereby adjust the camera magnification. 
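The return to the default screen after a period without input, described above, can be sketched as a simple inactivity timer that restarts whenever any user input is entered; the 2-second value follows the example in the description, and the class and method names are hypothetical.

import time

class InactivityTimer:
    # Tracks the time since the last user input while the zoom bar UI is shown.
    def __init__(self, timeout_seconds=2.0):
        self.timeout_seconds = timeout_seconds
        self.last_input_time = time.monotonic()

    def on_user_input(self):
        # Restart the timeout whenever any user input is entered during
        # the camera magnification adjustment.
        self.last_input_time = time.monotonic()

    def should_return_to_default_screen(self):
        # True once no input has arrived for the configured time.
        return time.monotonic() - self.last_input_time >= self.timeout_seconds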
In response to receiving the pinch-in input as the second user input as shown inFIG.8A, the processor of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may perform a zoom-out process of adjusting the camera magnification to a lower camera magnification. FIG.8Billustrates a result of zooming-out in response to a pinch-in touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. In response to receiving the pinch-in input as the second user input on theFIG.8Ascreen, the processor may display an adjusted camera magnification corresponding to the received pinch-in input as illustrated inFIG.8Band also display the preview screen corresponding to the adjusted camera magnification. This process of the processor may correspond to the above steps306,308,309, and310inFIG.3. The instructions stored in the memory may cause the processor to return to the default screen ofFIG.5A or6Awhen any user input is not received for a certain time (e.g., 2 seconds) in a state where the screen ofFIG.8A or8Bis displayed. The above time for returning to the default screen may restart whenever any user input is entered during the camera magnification adjustment. FIG.9Aillustrates a pinch-out touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive, as the second user input, a pinch-out input on the screen ofFIG.5B or6Bdisplayed in response to receiving the first user input. In addition, the instructions may cause the processor to calculate a scroll speed corresponding to the received pinch-out input and thereby adjust the camera magnification. In response to receiving the pinch-out input as the second user input as shown inFIG.9A, the processor of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may perform a zoom-in process of adjusting the camera magnification to a higher camera magnification. FIG.9Billustrates a result of zooming-in in response to a pinch-out touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. In response to receiving the pinch-out input as the second user input on theFIG.9Ascreen, the processor may display an adjusted camera magnification corresponding to the received pinch-out input as shown inFIG.9Band also display the preview screen corresponding to the adjusted camera magnification. This process of the processor may correspond to the above steps306,308,309, and310inFIG.3. As also described with reference toFIGS.6A and6B, an input of clicking the volume button may be utilized as the second user input as well as the first user input. Specifically, an input of clicking the volume-down button may be set to replace the pinch-in input, and an input of clicking the volume-up button may be set to replace the pinch-out input. As described above, displaying the zoom bar UI and zooming in/out may be performed simultaneously even in case of the volume button input. 
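The substitutions described above, in which a volume button click or a BLE pen gesture stands in for a pinch input, can be expressed as a small table that maps each input type to a zoom direction and to an equivalent drag distance used by the scroll-speed calculation; every identifier and conversion value below is an illustrative assumption rather than the table actually used by the electronic device.

ZOOM_IN, ZOOM_OUT = "zoom_in", "zoom_out"

# Each entry: zoom direction and a function converting the event into an
# equivalent drag distance so one scroll-speed calculation can be reused.
SECOND_INPUT_TABLE = {
    "swipe_left": (ZOOM_IN, lambda event: event["distance_px"]),
    "swipe_right": (ZOOM_OUT, lambda event: event["distance_px"]),
    "pinch_out": (ZOOM_IN, lambda event: event["spread_px"]),
    "pinch_in": (ZOOM_OUT, lambda event: event["spread_px"]),
    "volume_up_click": (ZOOM_IN, lambda event: 120.0 * event["clicks"]),
    "volume_down_click": (ZOOM_OUT, lambda event: 120.0 * event["clicks"]),
    "ble_pen_clockwise": (ZOOM_IN, lambda event: 1.5 * event["degrees"]),
    "ble_pen_counterclockwise": (ZOOM_OUT, lambda event: 1.5 * event["degrees"]),
}

def interpret_second_input(event):
    # Return the zoom direction and the equivalent drag distance for any
    # supported second user input, or None for an unknown input type.
    entry = SECOND_INPUT_TABLE.get(event["type"])
    if entry is None:
        return None
    direction, to_distance = entry
    return direction, to_distance(event)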
In response to receiving a user input of clicking the volume-up/down button, the processor may adjust the camera magnification through a calculation process of a zoom bar UI scroll speed as shown inFIG.3and also scroll along the zoom bar UI up to the camera magnification determined as the calculation result. A setting for converting a user input distance (i.e., a drag distance) corresponding to a user input type in order to calculate a scroll speed may be implemented with a separate table. In this embodiment, the speed of scrolling (or navigating) through the zoom bar UI in response to the user input of clicking the volume-up/down button may be determined through calculation in consideration of the acceleration factor for each section. Using the volume button click as the second user input may be interpreted as a desire to quickly adjust the camera magnification, so that it may be set to navigate through the zoom bar UI at the maximum speed for each allocated section. The calculation of the scroll distance (or speed) in case of the button click input may be performed through conversion into a distance corresponding to a button click speed by setting of software as described above with reference toFIG.4, which may be changed and/or customized. As also described with reference toFIGS.6A and6B, an input using the BLE pen may be utilized as the second user input as well as the first user input. For example, a gesture input of pressing the BLE pen on the screen and rotating the BLE pen clockwise may replace the pinch-out input, and a gesture input of pressing the BLE pen on the screen and rotating the BLE pen counter-clockwise may replace the pinch-in input. As described above, displaying the zoom bar UI and zooming in/out may be performed simultaneously even in case of the gesture input using the BLE pen. In response to receiving a user input using the BLE pen (e.g., one of various gesture inputs in a state where a pen button is pressed), the processor may adjust the camera magnification through a calculation process of a zoom bar UI scroll speed as shown inFIG.3and also scroll along the zoom bar UI up to the camera magnification determined as the calculation result. A setting for converting a user input distance (i.e., a drag distance) corresponding to a user input type in order to calculate a scroll speed may be implemented with a separate table. The speed of scrolling (or navigating) through the zoom bar UI in response to the user input using the BLE pen may be determined through calculation in consideration of the acceleration factor for each section. The calculation of the scroll distance (or speed) in case of the BLE pen input may be performed through conversion into a distance corresponding to a button click speed by setting of software as described above with reference toFIG.4, which may be changed and/or customized. The instructions stored in the memory may cause the processor to return to the default screen ofFIG.5A or6Awhen any user input is not received for a certain time (e.g., 2 seconds) in a state where the screen ofFIG.9A or9Bis displayed. The above time for returning to the default screen may restart whenever any user input is entered during the camera magnification adjustment. FIG.10Aillustrates a long press touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. 
The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive, as the second user input, a long press touch input on the screen ofFIG.5B or6Bdisplayed in response to receiving the first user input. In addition, the instructions may cause the processor to calculate a scroll speed corresponding to the received long press touch input and thereby adjust the camera magnification. In response to receiving the long press touch input as the second user input as shown inFIG.10A, the processor of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may adjust the camera magnification. The instructions stored in the memory may distinguish the long press touch input from other types of the second user input when the processor adjusts the camera magnification. For example, the instructions may be set to recognize a press touch input of a predetermined time (e.g., 1 second) or more as the long press touch input. This is, however, exemplary only. Referring toFIG.10A, the processor receives the long press touch input on the zoom bar UI460. This is, however, exemplary only. Alternatively, the processor may be set to adjust the camera magnification by receiving the long press touch input on preview screen displayed on the display. In this case, the long press touch input on a right region of the display may be set to increase the camera magnification, and the long press touch input on a left region of the display may be set to reduce the camera magnification. As illustrated inFIG.10A, the long press touch input on the right region may be set to the zoom-in process of adjusting the camera magnification to a higher magnification. Similarly, although not shown, the long press touch input on the left direction may be set to the zoom-out process of adjusting the camera magnification to a low magnification. This is, however, exemplary only. In response to receiving the long press touch input, the processor may adjust the camera magnification through a calculation process of a zoom bar UI scroll speed as shown inFIG.3and also scroll along the zoom bar UI up to the camera magnification determined as the calculation result. A setting for converting a user input distance (i.e., a drag distance) corresponding to a user input type in order to calculate a scroll speed may be implemented with a separate table. The speed of scrolling (or navigating) through the zoom bar UI in response to the long press touch input may be determined through calculation in consideration of the acceleration factor for each section. Using the long press touch input as the second user input may be interpreted as a desire to quickly adjust the camera magnification, so that it may be set to navigate through the zoom bar UI at the maximum speed for each allocated section. The calculation of the scroll distance (or speed) in case of the long press touch input may be performed through conversion into a distance corresponding to a long press time by setting of software as described above with reference toFIG.4, which may be changed and/or customized. FIG.10Billustrates a result of zooming in response to a long press touch input in a method for adjusting a camera magnification of an electronic device, according to an embodiment. 
In response to receiving the long press touch input as the second user input on theFIG.10Ascreen, the processor may display an adjusted camera magnification corresponding to the received long press touch input as shown inFIG.10Band also display the preview screen corresponding to the adjusted camera magnification. This process of the processor may correspond to the above steps306,308,309, and310inFIG.3. As described above,FIGS.4and7A to10Billustrate various methods of adjusting the camera magnification of the electronic device in response to receiving the first and/or second user input(s). The camera magnification adjustment in case of receiving the second user input of selecting a specific camera magnification in the UI containing specific magnification indications as shown inFIGS.5B and6Bmay also be performed through the same and/or similar process as the above embodiments. In response to receiving a user input of selecting a specific camera magnification, the processor may adjust the camera magnification in accordance with the selected camera magnification and also scroll along the zoom bar UI up to the selected camera magnification. The speed of scrolling (or navigating) through the zoom bar UI up to the selected camera magnification may be determined through calculation in consideration of the acceleration factor for each section. Selecting the specific camera magnification as the second user input may be interpreted as a desire to quickly adjust the camera magnification, so that it may be set to navigate through the zoom bar UI at the maximum speed for each allocated section. The instructions stored in the memory may cause the processor to return to the default screen ofFIG.5A or6Awhen any user input is not received for a certain time (e.g., 2 seconds) in a state where the screen ofFIG.10A or10Bis displayed. The above time for returning to the default screen may restart whenever any user input is entered during the camera magnification adjustment. FIG.11Aillustrates a screen of receiving a first user input for displaying a zoom bar UI in a landscape mode of an electronic device, according to an embodiment. FIG.11Billustrates a screen related to camera magnification adjustment including displaying a zoom bar UI in a landscape mode of an electronic device, according to an embodiment. The electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may have two display modes, that is, a portrait mode as shown inFIG.5Aand a landscape mode as shown inFIG.11A. When the camera (e.g., the camera module180inFIG.1or the camera inFIG.2) is activated in the landscape mode, that is, when the camera application is executed, a default screen is displayed as if the electronic device is rotated by 90 degrees as shown inFIG.11A. A method of adjusting the camera magnification in the landscape mode may be same as that described with reference toFIGS.5A to10Bexcepting that the electronic device is rotated by 90 degrees. FIG.12is a flow diagram illustrating a process of highlighting a specific magnification display in camera magnification adjustment of an electronic device, according to an embodiment. In this process, respective operations corresponding to respective blocks depicted in the flow diagram may be performed sequentially, but this is not necessary. For example, the order of such operations may be changed at least in part, and at least two operations may be performed in parallel or concurrently. In addition, at least one operation may be omitted if necessary. 
The instructions stored in the memory (e.g., the memory130inFIG.1or the memory220inFIG.2) may cause the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) to receive a user input (e.g., a first user input and/or a second user input) for adjusting a magnification of a camera (e.g., the camera module180inFIG.1or the camera230inFIG.2) and navigate through the zoom bar UI460at a scroll speed calculated at step1210. The instructions stored in the memory cause the processor to identify, at step1220, the camera magnification adjusted at step1210(e.g., adjusted to the camera magnification 4.0). The instructions stored in the memory cause the processor to, when one of the specific camera magnifications (e.g., 1.0, 2.0, 4.0, etc.), separately indicated, is identical with the camera magnification (e.g., 4.0) identified at step1220, highlight the identical camera magnification at step1230. FIG.13Aillustrates a process of highlighting a specific magnification display in a method for adjusting a camera magnification of an electronic device, according to an embodiment. FIG.13Billustrates a process of highlighting a specific magnification display in a method for adjusting a camera magnification of an electronic device, according to an embodiment. FIG.13Cillustrates a process of highlighting a specific magnification display in a method for adjusting a camera magnification of an electronic device, according to an embodiment. FIG.13Dillustrates a process of highlighting a specific magnification display in a method for adjusting a camera magnification of an electronic device, according to an embodiment. Referring toFIGS.13A to13D, the processor (e.g., the processor120inFIG.1or the processor210inFIG.2) may perform instructions configured to receive a first user input and/or a second user input and then adjust a magnification of a camera (e.g., the camera module180inFIG.1or the camera230inFIG.2).FIGS.13A to13Dillustrate a process of camera magnification adjustment (zoom-in or enlargement) using a swipe touch input as the second user input. Referring toFIG.13A, the processor of the electronic device (e.g., the electronic device101inFIG.1or the electronic device200inFIG.2) may identify a camera magnification of 1.0 on the zoom bar UI, and thereby highlight a specific camera magnification of 1.0 among separately indicated specific camera magnifications. The swipe touch input illustrated inFIGS.13A to13Dis merely one example of user inputs. Referring toFIG.13B, the processor may identify a camera magnification of 1.6 on the zoom bar UI, determine that the identified camera magnification does not match the separately indicated specific camera magnifications, and thereby display no highlight. A preview screen ofFIG.13Bis further zoomed-in (or enlarged) by the camera magnification adjustment than a preview screen ofFIG.13A. Referring toFIG.13C, the processor may identify a camera magnification of 2.0 on the zoom bar UI, and thereby highlight a specific camera magnification of 2.0 among the separately indicated specific camera magnifications. A preview screen ofFIG.13Cis further zoomed-in (or enlarged) by the camera magnification adjustment than the preview screen ofFIG.13B. Referring toFIG.13D, the processor may identify a camera magnification of 3.7 on the zoom bar UI, determine that the identified camera magnification does not match the separately indicated specific camera magnifications, and thereby display no highlight.
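A minimal sketch of the highlight decision in steps1210to1230described above: the adjusted magnification is compared against the separately indicated magnifications, and a highlight is shown only on a match. The tolerance used to compare floating-point values is an assumption, and the indicated magnifications follow the examples in the description.

INDICATED_MAGNIFICATIONS = [0.5, 1.0, 2.0, 4.0, 8.0]

def magnification_to_highlight(adjusted_magnification, tolerance=0.05):
    # Return the indicated magnification to highlight, or None when the
    # adjusted value (e.g., 1.6 or 3.7) matches none of them.
    for indicated in INDICATED_MAGNIFICATIONS:
        if abs(adjusted_magnification - indicated) <= tolerance:
            return indicated
    return None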
A preview screen ofFIG.13Dis further zoomed-in (or enlarged) by the camera magnification adjustment than the preview screen ofFIG.13C. AlthoughFIGS.13A to13Dillustrate an enlargement process (or zoom-in process) by the camera magnification adjustment, a reduction process (or zoom-out process) may be performed in a similar manner. In addition, the above-described steps1210to1230inFIG.12may also be applied to a case of receiving a touch input, as the second user input, for selecting a specific camera magnification, and in this case, it is possible to directly display the screen such asFIG.13A or13Cwhile navigating through the zoom bar UI at a maximum speed. While the disclosure has been particularly shown and described with reference to certain embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
73,617
11943531
DESCRIPTION OF THE EMBODIMENTS Exemplary embodiments of the present disclosure will be described below with reference to the drawings. <External Views of Digital Camera100> FIGS.1A and1Bare diagrams illustrating external views of a digital camera100(imaging apparatus) that is an example of an apparatus (electronic apparatus) to which an exemplary embodiment of the present disclosure can be applied.FIG.1Ais a front perspective view of the digital camera100.FIG.1Bis a rear perspective view of the digital camera100. A display unit28is a display unit on the rear of the digital camera100, and displays images and various types of information. A touch panel70acan detect a touch operation performed on the display surface (touch operation surface) of the display unit28. An external-viewfinder display unit43is a display unit on the top of the digital camera100, and displays various setting values of the digital camera100, including a shutter speed and an aperture. A shutter button61is an operation member for issuing an imaging instruction. A mode selection switch60is an operation member for switching between various modes. Terminal covers40are covers for protecting connectors (not illustrated) for connection cables and the like that connect the digital camera100to external devices. A main electronic dial71is a rotary operation member. Setting values, such as the shutter speed and the aperture, can be changed by rotating the main electronic dial71. A power switch72is an operation member for switching the power of the digital camera100on and off. A sub electronic dial73is a rotary operation member. A selection frame (cursor) can be moved and images can be fast-forwarded by rotating the sub electronic dial73. A four-way directional pad74is configured in such a way that top, bottom, left, and right portions can be separately pressed, and the digital camera100can perform processing corresponding to a pressed portion of the four-way directional pad74. A set button75is a push button and mainly used to determine a selection item. A moving image button76is used to issue instructions to start and stop moving image capturing (recording). An automatic exposure (AE) lock button77is a push button. An exposure state can be fixed by pressing the AE lock button77in an imaging standby state. A zoom button78is an operation button for switching on and off a zoom mode during a live view (LV) display in an imaging mode. An LV image can be zoomed in and out by operating the main electronic dial71with the zoom mode on. In a playback mode, the zoom button78functions as an operation button for magnifying a playback image or increasing the magnification ratio. A playback button79is an operation button for switching between an imaging mode and the playback mode. In a case where the playback button79is pressed during the imaging mode, the imaging mode transitions to the playback mode, where the latest image among images recorded on a recording medium200(to be described below) can be displayed on the display unit28. A menu button81is a push button to be used for an instruction operation to display a menu screen. In a case where the menu button81is pressed, a menu screen on which various settings can be performed is displayed on the display unit28. The user can intuitively perform various settings by using the menu screen displayed on the display unit28, the four-way directional pad74, and the set button75. 
A touch bar82(multifunction bar, or M-Fn bar) is a line-shaped touch operation member (touch line sensor) capable of receiving a touch operation. The touch bar82is disposed at a position where a touch operation can be performed (the touch bar82can be touched) with a right thumb while a grip portion90is gripped with the right hand (gripped with the right little finger, ring finger, and middle finger) in such a manner that the shutter button61can be pressed with the right index finger. In other words, the touch bar82is disposed at a position where the user in a state of putting an eye on an eyepiece unit16and looking through a viewfinder to be ready for pressing the shutter button61anytime (in an imaging attitude) can operate the touch bar82. The touch bar82is a receiving unit capable of receiving a tap operation (operation of a touch and releasing the touch within a predetermined period without a move) and left and right slide operations (operations of a touch and then moving the touch position without releasing the touch) on the touch bar82. The touch bar82is an operation member different from the touch panel70aand does not have a display function. A communication terminal10is a communication terminal for the digital camera100to communicate with a lens unit150(to be described below) which is a detachable unit. The eyepiece unit16is an eyepiece unit of an eyepiece viewfinder17(look-through viewfinder). The user can visually observe a video image displayed on an electronic viewfinder (EVF)29inside the eyepiece viewfinder17via the eyepiece unit16. An eye proximity detection unit57is an eye approach detection sensor that detects whether the user (photographer) is putting an eye on the eyepiece unit16. A lid202is a lid for a slot accommodating the recording medium200(to be described below). The grip portion90is a holding portion having a shape easy to be gripped with a right hand when a user holds the digital camera100in position. The shutter button61and the main electronic dial71are disposed at positions where the shutter button61and the main electronic dial71can be operated with the right index finger while the digital camera100is held with the grip portion90gripped with the right little finger, ring finger, and middle finger. The sub electronic dial73and the touch bar82are disposed at positions where the sub electronic dial73and the touch bar82can be operated with the right thumb in the same state. A thumb rest portion91(thumb standby position) is a grip member at a position on the rear of the digital camera100where the user can easily rest the thumb of the right hand gripping the grip portion90when not operating any of the operation members. The thumb rest portion91is made of a rubber member for an improved hold (gripping feel). <Configuration Block Diagram of Digital Camera100> FIG.2is a block diagram illustrating a configuration example of the digital camera100. A lens unit150is a lens unit including an interchangeable imaging lens. A lens103typically includes a plurality of lenses, but is represented by a single lens inFIG.2for the sake of simplicity. A communication terminal6is a communication terminal for the lens unit150to communicate with the digital camera100. The communication terminal10is a communication terminal for the digital camera100to communicate with the lens unit150. The lens unit150communicates with a system control unit50via the communication terminals6and10. 
The lens unit150controls an aperture stop1via an aperture drive circuit2by using a lens system control circuit4inside the lens unit150. The lens system control circuit4adjusts the focus of the lens unit150by moving a position of the lens103via an automatic focus (AF) drive circuit3. A shutter101is a focal plane shutter that can freely control an exposure time of an imaging unit22based on control by the system control unit50. The imaging unit22is an image sensor including a charge-coupled device (CCD) sensor or complementary metal-oxide-semiconductor (CMOS) sensor that converts an optical image into an electrical signal. The imaging unit22may include an imaging plane phase difference sensor that outputs defocus amount information to the system control unit50. An analog-to-digital (A/D) converter23converts an analog signal output from the imaging unit22into a digital signal. An image processing unit24performs predetermined processing (such as pixel interpolation, reduction and other resize processing, and color conversion processing) on data from the A/D converter23or data from a memory control unit15. The image processing unit24performs predetermined calculation processing using captured image data. The system control unit50performs exposure control and distance measurement control based on the calculation result obtained by the image processing unit24. Accordingly, through-the-lens (TTL) AF processing, AE processing, electronic flash (EF) (preliminary flash emission) processing are performed. The image processing unit24further performs predetermined calculation processing using the captured image data, and performs TTL automatic white balance (AWB) processing based on the calculation result obtained. The data output from the A/D converter23is written to a memory32via the image processing unit24and the memory control unit15. Alternatively, the data output from the A/D converter23is written to the memory32not via the image processing unit24but via the memory control unit15. The memory32stores image data that has been obtained by the imaging unit22and then digitally converted by the A/D converter23, and image data to be displayed on the display unit28and the EVF29. The memory32has a storage capacity sufficient to store a predetermined number of still images or a predetermined duration of moving image and sound. The memory32also serves as an image display memory (video memory). A digital-to-analog (D/A) converter19converts image display data stored in the memory32into an analog signal, and supplies the analog signal to the display unit28and the EVF29. The display image data written to the memory32is thus displayed on the display unit28and the EVF29via the D/A converter19. Each of the display unit28and the EVF29is a display, such as a liquid crystal display (LCD) and an organic electroluminescence (EL) display, and provides a display based on the analog signal from the D/A converter19. An LV display can be provided by converting a digital signal A/D-converted by the A/D converter23and stored in the memory32into an analog signal by the D/A converter19, and successively transferring the analog signal to the display unit28or the EVF29and displaying the analog signal. The image displayed by the LV display will hereinafter be referred to as an LV image. The system control unit50is a control unit including at least one processor and/or at least one circuit, and controls the entire digital camera100. The system control unit50is a processor as well as a circuit. 
The system control unit50implements various types of processing according to the present exemplary embodiment to be described below by executing programs recorded in a nonvolatile memory56. The system control unit50also performs display control by controlling the memory32, the D/A converter19, the display unit28, and the EVF29. A system memory52is a random access memory (RAM), for example. The system control unit50loads operating constants of the system control unit50, variables, and programs read from the nonvolatile memory56into the system memory52. The nonvolatile memory56is an electrically erasable and recordable memory. Examples include an electrically erasable programmable read-only memory (EEPROM). The nonvolatile memory56records operating constants of the system control unit50and programs. The programs here refer to ones for performing various flowcharts to be described below in the present exemplary embodiment. A system timer53is a clocking unit that measures time to be used for various types of control and the time of a built-in clock. A communication unit54includes a communication interface, and transmits and receives video signals and audio signals to and from an external device connected wirelessly or by a cable. The communication unit54can also connect to a wireless local area network (LAN) and the Internet. The communication unit54can also communicate with an external device using Bluetooth® and Bluetooth® Low Energy. The communication unit54can transmit images captured by the imaging unit22(including an LV image) and images recorded on the recording medium200, and receive image data and other various types of information from an external device. An orientation detection unit55detects orientation of the digital camera100with respect to the direction of gravity. Whether an image captured by the imaging unit22is one captured with the digital camera100held landscape or with the digital camera100held portrait can be determined based on orientation detected by the orientation detection unit55. The system control unit50can add direction information corresponding to the orientation detected by the orientation detection unit55to the image file of the image captured by the imaging unit22, or rotate the image and record the rotated image. An acceleration sensor or a gyro sensor can be used as the orientation detection unit55. Motion of the digital camera100, including pan, tilt, lift, and whether at rest or not, can also be detected by using the acceleration sensor or gyro sensor that is the orientation detection unit55. The eye proximity detection unit57is an eye proximity detection sensor that detects an approach (eye approach) and a separation (eye separation) of an eye (object) to/from the eyepiece unit16of the eyepiece viewfinder (hereinafter, referred to simply as a “viewfinder”)17(proximity detection). The system control unit50switches the display unit28and the EVF29between display (display state) and non-display (hidden state) based on a state detected by the eye proximity detection unit57. More specifically, in a case where the digital camera100is at least in an imaging standby state, and a display destination switching setting is set at automatic switching, the system control unit50turns on display on the display unit28as the display destination and turns off display on the EVF29during eye separation. The system control unit50turns on display on the EVF29as the display destination and turns off display on the display unit28during eye approach. 
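As a rough sketch of the automatic display destination switching described above, the following example keeps an eye approach state and selects the display destination from it; the distance thresholds and their hysteresis are illustrative assumptions rather than values of the eye proximity detection unit57.

APPROACH_THRESHOLD_MM = 30.0     # closer than this: eye approach is detected
SEPARATION_THRESHOLD_MM = 50.0   # farther than this: eye separation is detected

class EyeProximityState:
    def __init__(self):
        self.eye_approached = False

    def update(self, object_distance_mm):
        # Hysteresis between the two thresholds keeps the state stable while
        # the object distance fluctuates around a single boundary.
        if not self.eye_approached and object_distance_mm <= APPROACH_THRESHOLD_MM:
            self.eye_approached = True
        elif self.eye_approached and object_distance_mm >= SEPARATION_THRESHOLD_MM:
            self.eye_approached = False
        return self.eye_approached

def display_destination(eye_approached):
    # EVF29 while the eye is close to the eyepiece, display unit28 otherwise
    # (automatic switching in the imaging standby state).
    return "EVF29" if eye_approached else "display_unit28"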
Examples of the eye proximity detection unit57include an infrared proximity sensor. The eye proximity detection unit57can detect an approach of an object to the eyepiece unit16of the eyepiece viewfinder17including the built-in EVF29. In a case where an object is in the proximity of the eye proximity detection unit57, infrared rays projected from a light projection part (not illustrated) of the eye proximity detection unit57are reflected on the object and the reflected light is received by a light receiving part (not illustrated) of the eye proximity detection unit57. How close an object is to the eyepiece unit16(eye approach distance) can also be determined based on the amount of infrared rays received. In such a manner, the eye proximity detection unit57performs eye approach detection to detect the approaching distance of the object to the eyepiece unit16. In a case where an object approaching the eyepiece unit16from an eye separation state (non-approach state) into within a predetermined distance is detected, an eye approach is detected. In a case where an object having been detected in an eye approach state (approach state) is separated from the eye proximity detection unit57by a predetermined distance or more, an eye separation is detected. A threshold for detecting an eye approach and a threshold for detecting an eye separation may have hysteresis to be different from each other. After an eye approach is detected, the eye approach state continues until an eye separation is detected. After an eye separation is detected, the eye separation state continues until an eye approach is detected. The infrared proximity sensor is just an example, and other sensors capable of detecting a state that can be regarded as an eye approach may be employed for the eye proximity detection unit57. The external-viewfinder display unit43displays various setting values of the digital camera100, including a shutter speed and an aperture, via an external-viewfinder display unit drive circuit44. A power supply control unit80includes a battery detection circuit, a direct-current-to-direct-current (DC-DC) converter, and a switch circuit for switching blocks to be energized. The power supply control unit80detects presence or absence of a battery mounted, a type of battery, and a remaining battery level. The power supply control unit80also controls the DC-DC converter based on the detection results and instructions from the system control unit50, and supplies predetermined voltages to various units, including the recording medium200, for predetermined periods. A power supply unit30includes a primary battery, such as an alkali battery and a lithium battery, a secondary battery, such as a nickel-cadmium (NiCd) battery, a nickel metal hydride (NiMH) battery, and a lithium-ion (Li) battery, and/or an alternating current (AC) adaptor. A recording medium interface (I/F)18is an I/F with which to connect to the recording medium200, such as a memory card and a hard disk. The recording medium200is one intended to record captured images, and includes a semiconductor memory or a magnetic disk. An operation unit70is an input unit for receiving operations from a user (user operations). The operation unit70is used by a user to input various operation instructions to the system control unit50. As illustrated inFIG.2, the operation unit70includes the shutter button61, the mode selection switch60, the power switch72, the touch panel70a, and other operation members70b. 
The other operation members70binclude the main electronic dial71, the sub electronic dial73, the four-way directional pad74, the set button75, the moving image button76, the AE lock button77, the zoom button78, the playback button79, the menu button81, and the touch bar82. The shutter button61includes a first shutter switch62and a second shutter switch64. The first shutter switch62turns on to generate a first shutter switch signal SW1when the shutter button61is operated halfway, i.e., half-pressed (imaging preparation instruction). The system control unit50starts imaging preparation operations, such as the AF processing, the AE processing, the AWB processing, and the EF (preliminary flash emission) processing, in response to the first shutter switch signal SW1. The second shutter switch64turns on to generate a second shutter switch signal SW2when the shutter button61is completely operated, i.e., fully pressed (imaging instruction). In response to the second shutter switch signal SW2, the system control unit50starts a series of image processing operations from reading signals from the imaging unit22to writing a captured image to the recording medium200as an image file. The mode selection switch60switches an operation mode of the system control unit50to any one of a still image capturing mode, a moving image capturing mode, and a playback mode. The still image capturing mode includes such modes as an automatic imaging mode, an automatic scene determination mode, a manual mode, an aperture priority mode (Av mode), a shutter speed priority mode (Tv mode), and a program AE mode (P mode). The still image capturing mode further includes various scene modes each of which is for an imaging scene-specific imaging setting, and a custom mode. The user can directly switch to one of the modes by using the mode selection switch60. Alternatively, the user can first switch to an imaging mode list screen by using the mode selection switch60, and then select and switch to one of a plurality of displayed modes by using other operation members. The operation modes may similarly include a plurality of moving image capturing modes. The touch panel70ais a touch sensor for detecting various touch operations performed on the display surface of the display unit28(operation surface of the touch panel70a). The touch panel70aand the display unit28may be integrally configured. For example, the touch panel70ais configured to have a light transparency not interfering with display on the display unit28, and attached onto a display surface of the display unit28. Input coordinates of the touch panel70aare associated with display coordinates on the display surface of the display unit28. A graphical user interface (GUI) that enables the user to perform operations as if directly operating a screen displayed on the display unit28can thus be provided.
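The two-stage behavior of the shutter button61described above can be sketched as follows; the CameraStub class and its method names are hypothetical placeholders for the operations that the system control unit50actually performs.

class CameraStub:
    # Hypothetical stand-in for the imaging preparation and capture operations.
    def run_af(self): print("AF processing")
    def run_ae(self): print("AE processing")
    def run_awb(self): print("AWB processing")
    def run_preliminary_flash(self): print("EF (preliminary flash emission) processing")
    def capture_and_record(self): print("read the imaging unit and write the image file")

def on_shutter_button(half_pressed, fully_pressed, camera):
    if fully_pressed:          # second shutter switch signal SW2 (full press)
        camera.capture_and_record()
    elif half_pressed:         # first shutter switch signal SW1 (half press)
        camera.run_af()
        camera.run_ae()
        camera.run_awb()
        camera.run_preliminary_flash()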
The system control unit50can detect the following operations or states of the touch panel70a:
That a finger or a pen not touching the touch panel70anewly touches the touch panel70a, i.e., a start of a touch (hereinafter, referred to as a touch-down);
A state where the touch panel70ais touched with a finger or a pen (hereinafter, referred to as a touch-on);
That a finger or a pen touching the touch panel70amoves (hereinafter, referred to as a touch-move);
That a finger or a pen touching the touch panel70ais released from the touch panel70a, i.e., an end of a touch (hereinafter, referred to as a touch-up); and
A state where nothing is touching the touch panel70a(hereinafter referred to as a touch-off).
In a case where a touch-down is detected, a touch-on is also detected at the same time. After a touch-down, a touch-on typically continues to be detected unless a touch-up is detected. In a case where a touch-move is detected, a touch-on is also detected at the same time. In a case where a touch-on is detected and the touch position does not move, a touch-move is not detected. A touch-off is detected after all fingers and pens touching are detected to be touched up. The system control unit50is notified of such operations and states and position coordinates of a touching finger or a touching pen on the touch panel70avia an internal bus. The system control unit50then determines what operation (touch operation) is performed on the touch panel70abased on the notified information. In a case where a touch-move is performed, the system control unit50can also determine a moving direction of the finger or pen moving over the touch panel70ain terms of vertical and horizontal components on the touch panel70aseparately based on a change in position coordinates. In a case where a touch-move is detected for a predetermined distance or more, the system control unit50determines that a slide operation is performed. An operation of quickly moving a finger touching the touch panel70afor some distance and releasing the finger immediately after the moving is called a flick. In other words, a flick is an operation of quickly sweeping the surface of the touch panel70awith a finger or a pen as if flicking. In a case where a touch-move is detected for a predetermined distance or more at a predetermined speed or higher and a touch-up is detected immediately after the touch-move, the system control unit50can determine that a flick is performed (a flick is performed after a slide operation). A touch operation of simultaneously touching (multi-touching) a plurality of points (for example, two points) and bringing the touch positions close to each other is called a pinch-in. A touch operation of separating the touch positions from each other is called a pinch-out. A pinch-out and a pinch-in are referred to collectively as pinch operations (or simply pinches). The touch panel70amay be any one of various types of touch panels, including resistive, capacitive, elastic wave, infrared, electromagnetic induction, image recognition, and optical-sensor touch panels. Possible detection methods include one for detecting a touch based on contact with the touch panel and one for detecting a touch based on approach of a finger or pen to the touch panel, either of which may be employed. FIG.3is a conceptual diagram illustrating an image processing system including the digital camera100.FIG.3illustrates a relationship between the digital camera100and peripheral devices, such as a cloud server.
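Before turning to the image processing system ofFIG.3, the slide and flick determination described above can be sketched with explicit thresholds; the distance and speed values are illustrative assumptions, since the description only refers to predetermined values.

SLIDE_DISTANCE_PX = 16.0          # assumed minimum touch-move distance for a slide
FLICK_SPEED_PX_PER_S = 800.0      # assumed minimum touch-move speed for a flick

def classify_touch_move(move_distance_px, move_speed_px_per_s, touch_up_immediately_after):
    # Distinguish a slide from a flick based on the move distance, the move
    # speed, and whether a touch-up followed the move immediately.
    if move_distance_px < SLIDE_DISTANCE_PX:
        return "none"
    if move_speed_px_per_s >= FLICK_SPEED_PX_PER_S and touch_up_immediately_after:
        return "flick"   # a flick is determined after a slide operation
    return "slide"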
A cloud storage301is a storage server on the cloud for storing images. The digital camera100and the cloud storage301are linked (paired) with each other in advance, whereby captured images300can be transmitted to the cloud storage301based on a transmission instruction issued from the digital camera100. The link between the digital camera100and the cloud storage301has been established by using information (user account information) with which the user can be identified, such as a user identifier (ID). A smartphone302is a mobile device (such as a mobile phone terminal and a tablet terminal) for browsing images. In the present exemplary embodiment, the smartphone302is a smartphone. The images stored in the cloud storage301can be browsed using the smartphone302. A development server303is an image processing server (content processing server) that performs image processing, such as raw development, and is provided on the cloud. The development server303receives a development instruction from the smartphone302and performs development processing on images (contents) stored in the cloud storage301. The development server303on the cloud has higher throughput than personal computers (PCs), and is capable of performing more sophisticated types of image processing than development applications on PCs. The latest types of image processing can be performed by using the development server303even in a case where a terminal, such as the smartphone302, is not updated with programs for performing latest sophisticated image processing. Specifically, in a case where a development instruction is issued, the cloud storage301transmits raw images stored therein to the development server303, and the development server303performs development processing, such as sophisticated noise reduction using deep learning. The digital camera100is unable to perform such sophisticated types of development processing (image processing). After the development processing, the development server303transmits Joint Photographic Experts Group (JPEG) (or High Efficiency Image File (HEIF)) images that are finished developed images to the cloud storage301. The development processing performed using the development server303will hereinafter be referred to as cloud development processing. In the present exemplary embodiment, a method for browsing images using the smartphone302will be described as a method for checking a result obtained by the sophisticated development processing. After the digital camera100, the cloud storage301, and the development server303are linked (associated) with each other using identification information, such as a user ID, development instructions can be issued from the digital camera100to the development server303. Specifically, the user adjusts development processing parameters, i.e., image processing parameters, about color tones, such as white balance and brightness, of a raw image stored in the digital camera100. The adjusted development processing parameters are recorded in the raw image file. The digital camera100then transmits the raw image file to the cloud storage301along with an instruction for the cloud development processing. The raw image file is transferred to the development server303via the cloud storage301. The development server303performs the development processing based on the development processing parameters (image processing parameters) adjusted by the digital camera100, and further performs the foregoing sophisticated image processing at the same time.
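The round trip described above, from parameter adjustment on the camera to the developed JPEG or HEIF returned to the cloud storage301, is sketched below; the objects, their methods, and the stub class are assumptions standing in for the actual devices and servers rather than an existing API.

class StorageOrServerStub:
    # Minimal in-memory stand-in so the sketch can be executed end to end.
    def __init__(self): self.items = []
    def store(self, item): self.items.append(item)
    def receive(self, item): self.items.append(item)
    def develop(self, raw_file): return {"format": "JPEG", "source": raw_file}

def request_cloud_development(raw_file, parameters, cloud_storage, development_server):
    # The adjusted development processing parameters are recorded in the raw file.
    raw_file["development_parameters"] = parameters
    # The camera transmits the raw file to the cloud storage with a development instruction.
    cloud_storage.store(raw_file)
    # The raw file is transferred to the development server via the cloud storage.
    development_server.receive(raw_file)
    # The server develops with the recorded parameters plus its own sophisticated processing.
    developed_image = development_server.develop(raw_file)
    # The finished JPEG (or HEIF) image is returned to the cloud storage.
    cloud_storage.store(developed_image)
    return developed_image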
Next, the cloud development processing and a UI of the digital camera100for issuing a development instruction to the development server303will be described in detail. Before the detailed description, a supplementary description of the development server303will be given, including two preconditions in the present exemplary embodiment:
Precondition (1): The development server303is a dedicated server on the cloud, and incurs server maintenance costs and server communication costs.
Precondition (2): A plurality of servers is run in parallel as the development server303.
There are two reasons why precondition (2) is applied to the development server303according to the present exemplary embodiment. First, the plurality of servers can perform parallel processing for development, whereby the processing is accelerated. Suppose, for example, that the development processing of a raw image is performed by four servers. In such a case, since the raw image can be divided into four areas and the four servers can perform the development processing at the same time, the processing time can be reduced to approximately one quarter of that in the case where the development processing is performed by a single server. Even in a case where development instructions are simultaneously issued by four users, the development instructions from the users can be distributed to and processed by the respective servers. This reduces processing wait time compared to the case with a single server. Secondly, the load per server is reduced. For example, if one of the four servers fails and becomes unusable, the other three can continue processing, and the service does not need to be stopped. Even in the environment where a plurality of servers is used to perform parallel processing, it is desirable to take into account the possibility of several thousand users simultaneously issuing development instructions. The server maintenance costs and communication costs discussed in precondition (1) are also taken into account, and thus the service provider may desirably impose restrictions (an upper limit) on the use per user. For example, restrictions can be imposed in such a manner that the number of images developable per user per month is limited up to M (M is a numeral). For the above-described purpose, accounts are created to manage the use of the development server303so that the development server303can be used only by users subscribed to a cloud development processing service, such as an image processing service or a content processing service. In the present exemplary embodiment, a user setting management unit304performs user management. The user setting management unit304may be implemented by the cloud storage301or by a management server different from the cloud storage301. With the foregoing user management, the cloud service including cloud development processing can be provided either on a free basis or on a paid basis. On a free basis, the cloud service is provided to a user on condition that the development server303can be used up to M images per month, by simply creating a user account (hereinafter, referred to simply as an account). On a paid basis, the cloud service is provided to a user on condition that the development server303can be used up to M images per month for fees like a subscription service. Information about the upper limit of use can be inquired and obtained by the digital camera100or other external devices linked with the cloud storage301.
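The per-user restriction described above, limiting the number of developable images to M per month, might be enforced with a check like the following; the assumed value of M and the in-memory usage store are placeholders standing in for the user setting management unit304.

MONTHLY_LIMIT_M = 50   # assumed value of M, the developable images per user per month

usage_by_account = {}  # account identifier -> images developed in the current month

def can_develop(account_id, requested_images=1):
    # True while the account still has developable images left this month.
    used = usage_by_account.get(account_id, 0)
    return used + requested_images <= MONTHLY_LIMIT_M

def record_development(account_id, developed_images):
    usage_by_account[account_id] = usage_by_account.get(account_id, 0) + developed_images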
The digital camera100may include a UI for creating an account and registering for subscription. Other devices such as the smartphone302may include the UI. Based on the above described case, the cloud development processing and the UI of the digital camera100for issuing a development instruction to the development server303will be described in detail. FIGS.4to8A and8Bare flowcharts illustrating processing where the digital camera100(hereinafter, the digital camera100may be referred to simply as a camera) issues instructions for the cloud development processing. The flowcharts ofFIGS.4to8A and8Bare all implemented by loading programs recorded in the nonvolatile memory56into the system memory52and executing the programs by the system control unit50. The system control unit50performs communication control using the communication unit54and display control on the display unit28by executing the programs.FIGS.9A to9F to12A to12Cillustrate display examples of screens on the camera100.FIG.13is a structure diagram of a raw image file. The following description will be given with reference to these diagrams. FIG.4illustrates a procedure related to playback menu processing that is performed by the camera100. The processing illustrated inFIG.4is started when the digital camera100is activated and set to the playback mode and the menu button81is pressed in a state where a playback image is displayed (in a state where a playback screen is displayed). In step S401, the system control unit50displays a playback menu screen on the display unit28of the digital camera100.FIG.9Aillustrates a display example of the playback menu screen. The playback menu screen displays a plurality of menu items. For example, the menu items including “trimming”, “raw development” for issuing an instruction for development in the camera main body, “HEIF to JPEG conversion”, and “cloud raw development” are displayed. The user can select any one of the plurality of displayed menu items by moving a cursor to the menu item using the operation unit70. The user can determine selection of (hereinafter, referred to as select and determine) a menu item selected by the cursor by pressing the set button75. In step S402, the system control unit50determines whether a menu item911for cloud raw development is selected and determined from among the menu items displayed on the playback menu screen. In a case where the menu item911for cloud raw development is selected and determined (YES in step S402), the processing proceeds to step S403. In a case where the menu item911for cloud raw development is not selected or determined (NO in step S402), the processing proceeds to step S406. In step S403, the system control unit50determines whether the digital camera100has been linked or paired with the cloud storage301. In a case where the digital camera100has been linked with the cloud storage301(YES in step S403), the processing proceeds to step S404. In a case where the digital camera100has not been linked with the cloud storage301(NO in step S403), the processing proceeds to step S405. The link with the cloud storage301can be established in advance by using a pairing setting item included in a setting menu screen of the digital camera100. In pairing setting, individual identification information about the digital camera100and account information are associated with each other to establish pairing by connecting to the cloud storage301, entering the account information from the digital camera100, and logging in to the cloud storage301. 
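A minimal sketch of how the pairing state used by the determination in step S403might be recorded at pairing time and later read back without a server connection; the dictionary stands in for the nonvolatile memory56, and the key and function names are hypothetical.

nonvolatile_memory = {}   # stand-in for the camera's nonvolatile memory56

def record_pairing(account_info):
    # Called once pairing with the cloud storage301has been established.
    nonvolatile_memory["account_info"] = account_info
    nonvolatile_memory["pairing_established"] = True

def is_linked_with_cloud_storage():
    # Step S403: decided from the recorded information alone, i.e., offline.
    return nonvolatile_memory.get("pairing_established", False)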
In a case where the pairing is established, the individual identification information about the digital camera100is associated with the account information and recorded in the cloud storage301or the user setting management unit304. In response to the pairing being established, the account information and information indicating the establishment of the pairing are also recorded in the nonvolatile memory56of the digital camera100. In step S403, the system control unit50refers to the nonvolatile memory56, and in a case where information indicating the establishment of the pairing is recorded, the system control unit50determines that the digital camera100has been linked (paired). In a case where information indicating the establishment of the pairing is not recorded, the system control unit50determines that the digital camera100has not been linked. In other words, the determination of step S403can be performed while the device is offline, without connecting to the cloud storage301. In step S404, the system control unit50performs cloud development menu processing. The cloud development menu processing will be described below with reference toFIGS.5A and5B. In step S405, the system control unit50displays an error screen on the display unit28. Examples of a content of the error screen displayed here include "Not linked" and "Please pair your camera with the server". In step S406, the system control unit50determines whether a menu item other than the menu item911for the cloud raw development (another menu item) is selected and determined from among the plurality of menu items displayed on the playback menu screen. In a case where a menu item other than the menu item911for the cloud raw development (another menu item) is selected and determined (YES in step S406), the processing proceeds to step S407. In a case where a menu item is not selected or determined (NO in step S406), the processing proceeds to step S408. In step S407, the system control unit50performs processing corresponding to the menu item selected and determined in step S406. For example, in a case where the menu item for trimming is selected and determined, the system control unit50displays a screen for trimming an image. In step S408, the system control unit50determines whether an instruction to end the playback menu screen is issued. In a case where the end instruction is issued (YES in step S408), the playback menu processing ends. The system control unit50switches the playback menu screen to an image playback screen in the playback mode. In a case where the end instruction is not issued (NO in step S408), the processing returns to step S402. FIGS.5A and5Bare a detailed flowchart of the cloud development menu processing in step S404ofFIG.4described above. In step S501, the system control unit50reads information registered in a transmission reservation list recorded on the recording medium200. Specifically, the system control unit50reads the number of images registered in the transmission reservation list from the recording medium200. In step S502, the system control unit50displays a cloud development menu on the display unit28.FIG.9Billustrates a display example of the cloud development menu. The display contents of the cloud development menu will now be described. A specified image number display field921displays the number of images reserved and registered to be subjected to the cloud development processing by the development server303. Here, the number of images registered in the transmission reservation list read in step S501is displayed.
The specified image number display field921displays “0” in a case where this screen is displayed for the first time without a transmission reservation for the cloud development processing, like immediately after purchase of the digital camera100and immediately after full initialization. A developable image number display field922displays the remaining number of images on which the cloud development processing can be performed in a present account currently logged in. As described above, an upper limit of M images per user per month is imposed on the number of images that can be instructed to be developed by the development server303. The upper limit number is thus also displayed on the UI of the camera100. When this screen is displayed for the first time, the developable image number display field922provides a display like “?”, instead of displaying “0”, from which a user can identify the number of images as being unknown or yet to be obtained. Such display is performed since the exact number of images developable in the present account is unknown because the connection processing with the cloud storage301is not performed before transition from the playback screen to this screen for the purpose of reducing power consumption. Not displaying a number in the developable image number display field922until the digital camera100connects to the cloud storage301and obtains information from the user setting management unit304can prevent a user's misunderstanding. An update button923is a GUI button, a display item, or an operation icon for updating the number of developable images displayed in the developable image number display field922. Based on a selecting and determining operation or a touch operation on the update button923, the system control unit50performs connection processing with the cloud storage301and displays the number of developable images obtained from the user setting management unit304in the developable image number display field922. An add button924is a GUI button, a display item, or an operation icon for issuing instructions to select an image or images to be subjected to the cloud development processing by the development server303. Based on a selecting and determining operation or a touch operation on the add button924, the screen of the cloud development menu transitions to a screen for adding an image or images to be developed. A check/cancel button925is a GUI button, a display item, or an operation icon for checking an image(s) reserved to be transmitted for the cloud development processing by operating the add button924, or cancelling a transmission reservation of an image(s) in the transmission reservation list. A transmission button926is a GUI button, a display item, or an operation icon for transmitting the image(s) registered in the transmission reservation list. A return button927is a GUI button for returning from the cloud development menu ofFIG.9Bto the playback menu screen ofFIG.9A. The user can also return to the playback menu screen by pressing the menu button81instead of directly operating the return button927. In step S503, the system control unit50determines whether the update button923is selected and determined. In a case where the update button923is selected and determined (YES in step S503), the processing proceeds to step S504. In a case where the update button923is not selected or determined (NO in step S503), the processing proceeds to step S512. 
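The state of the cloud development menu ofFIG.9Bdescribed above can be modeled, as a rough illustration only, by the following sketch; the class and field names are assumptions for the sketch, and None standing for the yet-unobtained count is merely one possible representation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CloudDevMenuState:
    reserved_count: int                      # specified image number display field 921
    developable_count: Optional[int] = None  # field 922; None means not yet obtained

    def developable_label(self) -> str:
        # Display "?" instead of a misleading "0" while the count is unknown.
        return "?" if self.developable_count is None else str(self.developable_count)

# Example: first display after reading the transmission reservation list in step S501.
state = CloudDevMenuState(reserved_count=0)
assert state.developable_label() == "?"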
In step S504, the system control unit50performs connection processing with a server to obtain information about the upper limit of the number of images (limited number of images) up to which the development processing (cloud development processing) of the development server303can be used. The server here refers to at least one of the cloud storage301, the user setting management unit304, and the development server303. The communication unit54can connect to the server using a wired connection, such as Ethernet, or a wireless connection, such as Wi-Fi. The connection processing establishes a communication session. Once the connection processing is completed (connection is established), the digital camera100enters an “online state”. In step S505, the system control unit50obtains information about a subscribing/unsubscribing state of the present account to the cloud development processing service and, in a case where the present account is in the subscribing state, the system control unit50also obtains the number of developable images from the server (the user setting management unit304in particular) via the communication unit54. The system control unit50stores the obtained information in the nonvolatile memory56. In other words, in a case of the subscribing state, the nonvolatile memory56stores the number of developable images, and in a case of the unsubscribing state, the nonvolatile memory56stores information indicating non-subscription. In step S506, the system control unit50determines the state of subscription to the cloud development processing service based on the information obtained in step S505. In a case where the cloud development processing service is subscribed (YES in step S506), the processing proceeds to step S507. In a case where the cloud development processing service is not subscribed (NO in step S506), the processing proceeds to step S509. In step S507, the system control unit50performs processing for disconnecting from the server. In a case where the connection is disconnected, the digital camera100enters an “offline state”. In step S508, the system control unit50displays the cloud development menu on the display unit28again. In this process, the developable image number display field922is updated to display the numerical value of the number of developable images obtained in step S505.FIG.9Cillustrates a display example after the update.FIG.9Cillustrates a case where the information obtained in step S505indicates that the number of developable images is 50, and the display is thus updated with that value. Specifically, the developable image number display field922displays 50, which is the updated value. In this process, the date and time of acquisition of the information about the subscribing state to the cloud development processing service may be stored in the nonvolatile memory56, and the date and time when the number of images displayed in the developable image number display field922was last acquired may be displayed each time the cloud development menu is displayed. This enables the user to determine whether the information is up to date or old. In step S509, the system control unit50displays, on a message screen, a message that the user has not yet subscribed to the cloud development processing service.FIG.9Dillustrates a display example of the message about the non-subscription. In a case where the user checks the message and presses an OK button941, the processing proceeds to step S510.
In step S510, like the foregoing step S507, the system control unit50performs the processing for disconnecting from the server. In step S511, the system control unit50displays the cloud development menu on the display unit28again. Since the information obtained in step S505indicates the unsubscribing state, the developable image number display field922displays “?” which is the same display as in the developable image number display field922ofFIG.9B. In step S512, the system control unit50determines whether the add button924is selected and determined. In a case where the add button924is selected and determined (YES in step S512), the processing proceeds to step S513. In a case where the add button924is not selected or determined (NO in step S512), the processing proceeds to step S514. In step S513, the system control unit50performs addition processing for adding a raw image(s) to request the cloud development processing, i.e., adding a raw image(s) to the transmission reservation list stored in the recording medium200. Details of the addition processing will be described below with reference toFIGS.6A and6B. In step S514, the system control unit50determines whether the check/cancel button925is selected and determined. In a case where the check/cancel button925is selected and determined (YES in step S514), the processing proceeds to step S515. In a case where the check/cancel button925is not selected or determined (NO in step S514), the processing proceeds to step S516. In step S515, the system control unit50performs check/cancellation processing for checking/cancelling the raw image(s) to be subjected to the cloud development processing, i.e., raw image(s) of which an image ID or IDs is/are recorded in the transmission reservation list. Details of the check/cancellation processing will be described below with reference toFIG.7. In step S516, the system control unit50determines whether the transmission button926is selected and determined. In a case where the transmission button926is selected and determined (YES in step S516), the processing proceeds to step S517. In a case where the transmission button926is not selected or determined (NO in step S516), the processing proceeds to step S519. In step S517, the system control unit50determines whether the number of images specified to be subjected to the cloud development processing, i.e., the number of raw images registered in the transmission reservation list stored in the recording medium200, is 0. In a case where the number of images specified is 0 (YES in step S517), the processing proceeds to step S519. In a case where the number of images specified is not 0 (NO in step S517), the processing proceeds to step S518. In step S518, the system control unit50performs transmission processing for transmitting the raw image(s) registered in the transmission reservation list stored in the recording medium200to the cloud storage301. Details of the transmission processing will be described below with reference toFIGS.8A and8B. In step S519, the system control unit50determines whether an instruction to end the display of the cloud development menu is issued, which is, for example, whether the return button927is selected and determined, or whether the menu button81is pressed. In a case where the end instruction is issued (YES in step S519), the cloud development menu processing ends. In a case where the end instruction is not issued (NO in step S519), the processing returns to step S503. 
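The update-button handling of steps S503 to S511 described above can be summarized, as an illustrative sketch only, as follows; connect_to_server, fetch_account_info, and the other callables are assumed helpers, not interfaces defined by the embodiment, and the dictionary keys are likewise assumptions.

def on_update_button(connect_to_server, disconnect, fetch_account_info, redraw_menu, show_message):
    session = connect_to_server()                 # step S504: enter the online state
    try:
        info = fetch_account_info(session)        # step S505: subscription state (and count)
        if info["subscribed"]:                    # step S506: YES
            developable = info["developable_count"]
        else:                                     # step S506: NO
            show_message("You have not subscribed to the cloud development service")  # step S509
            developable = None                    # keep the "?" display
    finally:
        disconnect(session)                       # steps S507 / S510: back to the offline state
    redraw_menu(developable_count=developable)    # steps S508 / S511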
FIGS.6A and6Bare a detailed flowchart of the addition processing in step S513ofFIG.5Bdescribed above. In step S601, the system control unit50displays a development type selection screen1on the display unit28. The development type selection screen1is a menu screen on which a user can select whether processing is to be performed with or without relighting. In a case where relighting is selected to be performed, relighting parameters become adjustable as development parameters to be adjusted, and the development server303is instructed to perform image processing including relighting. In a case where relighting is selected to not be performed, relighting parameters are not provided as development parameters to be adjusted, and the development server303is instructed to perform image processing without relighting. Relighting refers to processing for corrections to brighten human faces, shaded portions of faces in particular, in an image, by adjusting parameters such as an angle of application of virtual light to the shaded portions of the faces and the strength thereof. Different parameters are set image by image since shadings by relighting vary from one object to another in an image. The processing load of relighting is high. In the present exemplary embodiment, in a case where relighting is selected to be performed, only one image is thus allowed to be specified at a time. In a case where relighting is selected to not be performed, a plurality of images can be collectively specified. In step S602, the system control unit50determines whether relighting is selected to be performed on development type selection screen1. In a case where relighting is selected to not be performed (NO in step S602), the processing proceeds to step S603. In a case where relighting is selected to be performed (YES in step S602), the processing proceeds to step S606. In step S603, the system control unit50displays a selection screen where a plurality of images can be selected, which is for example, a list view screen listing thumbnail images of each of the images, on the display unit28. A display image(s) used in this screen is a Design rule for Camera File system (DCF) thumbnail portion of DisplayImageData1308in a raw image file illustrated inFIG.13which is read by the system control unit50for use in the display. Images already selected to be developed, i.e., images added to the transmission reservation list are not displayed as selection candidates on the list view screen. In step S604, the system control unit50accepts an image selection operation. Specifically, the system control unit50performs processing for putting a checkmark on an image to be developed on the foregoing list view screen based on user operations. In the example case of the present exemplary embodiment, checkmarks are put on a plurality of images. In step S605, the system control unit50determines whether an instruction to end the image selection operation, i.e., a selection completion instruction is issued. In a case where an instruction to end the image selection operation (selection completion instruction) is issued (YES in step S605), the processing proceeds to step S609. In a case where an instruction to end the image selection operation is not issued (NO in step S605), the processing returns to step S604. In step S606, the system control unit50displays a selection screen where only one image can be selected, which is for example, a screen displaying a reproduced image, on the display unit28. 
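The selection-mode branch of steps S601 to S606 described above can be illustrated by the following sketch; with_relighting, pick_one, pick_many, and the other names are assumptions for the sketch rather than elements of the embodiment.

def select_images_for_cloud_development(with_relighting, reserved_ids, all_ids, pick_one, pick_many):
    # Images already added to the transmission reservation list are not candidates.
    candidates = [image_id for image_id in all_ids if image_id not in reserved_ids]
    if with_relighting:
        # Relighting parameters differ image by image and the processing load is
        # high, so only one image is allowed to be specified at a time (step S606).
        return [pick_one(candidates)]
    # Without relighting, a plurality of images can be selected collectively (steps S603 and S604).
    return pick_many(candidates)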
The display image used in this screen is a medium-sized image of DisplayImageData1308in the raw image file illustrated inFIG.13, which is read by the system control unit50for use in the display. Images already selected to be developed, which are images added to the transmission reservation list, are not displayed as selection candidates. In step S607, the system control unit50receives an image selection operation. In a case where the user selects the image displayed on the display unit28, the user issues a selection completion instruction for the image. In a case where the user wants to select a different image from the one displayed on the display unit28, the user performs an image switch operation to switch the image displayed on the display unit28to another selection candidate. The user can select an image from among the selection candidates by repeating switching until an image to be selected is displayed. In step S608, the system control unit50determines whether an instruction to end the image selection operation (selection completion instruction) is issued. In a case where the end instruction (selection completion instruction) is issued (YES in step S608), the processing proceeds to step S609. In a case where the end instruction is not issued (NO in step S608), the processing returns to step S607. In step S609, the system control unit50displays a development type selection screen2on the display unit28. The development type selection screen2is a menu on which, for the image(s) selected in step S604or S607, a user selects either the settings used in image capturing or user-set parameters as the development processing parameters that the camera100specifies for the development server303. In a case where “imaging settings”, which is an option to use the settings used in image capturing, is selected, the imaging parameters (development parameters) that have been used during the image capturing and recorded in the raw image file(s) are specified. In a case where user-set parameters are to be used, there is a plurality of options for specifying a developed file format: “fine development into JPEG” and “fine development into HEIF”. In a case where “fine development into JPEG” is selected, a preview image is displayed and the user can freely adjust and specify development parameters usable for JPEG development. Images developed by the cloud development processing using these parameters are in a JPEG format. In a case where “fine development into HEIF” is selected, a preview image is displayed and the user can freely adjust and specify development parameters usable for HEIF development. Images developed by the cloud development processing using these parameters are in a HEIF format. In step S610, the system control unit50determines whether “imaging settings” is selected and determined on development type selection screen2. In a case where “imaging settings” is selected and determined (YES in step S610), the processing proceeds to step S611. In a case where “imaging settings” is not selected or determined (NO in step S610), the processing proceeds to step S612. In step S611, the system control unit50temporarily develops the image(s) using the imaging settings and provides a preview display on the display unit28. Specifically, the system control unit50reads RecParameter1305illustrated inFIG.13from the selected raw image file(s), and performs development processing on ImageData1309, which is the raw data (undeveloped image(s)).
The system control unit50displays the image(s) resulting from the development processing on the display unit28as a preview image(s). The user can check the visual impression of an outcome of the development on the display unit28before submission to the development server303. The result of the development by the development server303will not be exactly the same as the preview display, because of additional sophisticated image processing. The processing proceeds to step S622. In step S612, the system control unit50determines whether “fine development into JPEG” is selected and determined on development type selection screen2. In a case where “fine development into JPEG” is selected and determined (YES in step S612), the processing proceeds to step S613. In a case where “fine development into JPEG” is not selected or determined (NO in step S612), the processing proceeds to step S617. In step S613, the system control unit50displays a development parameter setting screen for generating a JPEG file on the display unit28. FIG.10illustrates a display example of the development parameter setting screen. The development parameter setting screen displays a preview image1010and a plurality of display items (icons) corresponding to a respective plurality of types of development parameters that can be adjusted by the user. For example, brightness can be changed by selecting an icon1011, white balance can be changed by selecting an icon1012, and color space parameters can be changed by selecting an icon1013. The icons corresponding to the respective types of development parameters indicate setting values currently set as those types of development parameters. The user can select one of the plurality of icons corresponding to the types of development parameters, for example, by moving a selection cursor using the four-way directional pad74, and adjust the selected type of parameter by adjustment operations, for example, by operating the main electronic dial71. When the development parameter setting screen is displayed for the first time, the development processing using the settings used in image capturing is performed, like step S611, to provide an initial preview display. In a case where a plurality of raw image files is selected in the foregoing step S605, the raw image file of the earliest capturing date and time among the selected raw image files is developed using the settings used in the image capturing and displayed on the display unit28. In step S614, the system control unit50determines whether an operation to adjust any one of the adjustable types of development parameters (parameter change operation) is performed on the development parameter setting screen. In a case where a parameter change operation is performed (YES in step S614), the processing proceeds to step S615. In a case where a parameter change operation is not performed (NO in step S614), the processing proceeds to step S616. In step S615, the system control unit50changes the development parameter based on the parameter change operation. In the present exemplary embodiment, the changed development parameter is recorded into the system memory52in step S615. However, like step S624to be described below, a recipe of the raw image file may be updated with the changed development parameter. The system control unit50also performs development processing for a preview display based on the changed development parameter.
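The adjust-and-preview handling of steps S614 and S615 described above can be sketched as follows; develop and show_preview are assumed callables standing for in-camera development of ImageData1309and the display on the display unit28, and the dictionary-based recipe is an assumption made only for the sketch.

def adjust_parameter(recipe, raw_data, key, delta, develop, show_preview):
    # Step S615: update the working development parameter held in memory ...
    updated = dict(recipe)
    updated[key] = updated.get(key, 0) + delta      # e.g. {"brightness": +1}
    # ... and redevelop the undeveloped image so the outcome can be checked.
    preview = develop(raw_data, updated)
    show_preview(preview)
    return updated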
For example, in a case where the icon1011is selected and a parameter change operation for increasing brightness by one level is performed, the system control unit50performs development processing on ImageData1309that is the raw data (undeveloped image) in such a manner that the brightness increases by one level. The system control unit50then updates the preview image1010with the image developed by this development processing. The visual impression of an outcome can thus also be checked in step S615before submission to the development server303. In step S616, the system control unit50determines whether an instruction (save instruction) to save the development parameter(s) adjusted (changed) on the development parameter setting screen is issued. In a case where the save instruction is issued (for example, a save button1014displayed on the development parameter setting screen is selected and determined) (YES in step S616), the processing proceeds to step S622. In a case where the save instruction is not issued (NO in step S616), the processing returns to step S614. In step S617, the system control unit50determines whether “fine development into HEIF” is selected and determined on development type selection screen2. In a case where “fine development into HEIF” is selected and determined (YES in step S617), the processing proceeds to step S618. In a case where “fine development into HEIF” is not selected or determined (NO in step S617), the processing proceeds to step S622. In step S618, the system control unit50displays a development parameter setting screen for generating a HEIF file on the display unit28. The development parameter setting screen for generating a HEIF file is similar to the development parameter setting screen illustrated inFIG.10. Examples of the changeable development parameters include brightness and white balance. Changes to some types of parameters may be restricted. For example, unlike the generation of a JPEG file, color space parameters may be fixed to predetermined values. JPEG development parameters and HEIF development parameters that are changeable (adjustable) may include at least one different type of development parameter. The processing of steps S619to S621is similar to that of the foregoing steps S614to S616. A description thereof will thus be omitted. In step S622, the system control unit50displays a message for confirming whether to save the current content (selected image(s) and adjusted development parameter(s)), an OK button for determining to save the current content, and a cancel button for cancelling the adjustments on the display unit28. In step S623, the system control unit50determines whether the OK button for determining to save the current content is selected and determined on the message display screen displayed in step S622. In a case where the OK button is selected and determined (YES in step S623), the processing proceeds to step S624. In a case where the cancel button is selected and determined (NO in step S623), the processing returns to step S609. In step S624, the system control unit50performs processing for saving the current content (selected image(s) and adjusted parameter(s)). Specifically, the system control unit50records an image ID(s) (such as a file name(s) and a unique ID(s)) indicating the selected raw image file(s) into the transmission reservation list stored in the recording medium200as information indicating the image(s) reserved to be transmitted. 
The system control unit50also overwrites a RecipeData1306section of the selected raw image file(s) recorded on the recording medium200with the set of adjusted development parameters as information to be used when the development server303performs the cloud development processing. The RecipeData1306section will be described below with reference toFIG.13. For example, in a case where 15 images are selected and determined in step S605, brightness is increased by one level in step S615, and the OK button is selected and determined in step S623, the system control unit50overwrites and saves the RecipeData1306section in each of the selected 15 raw image files with information “brightness: +1” in succession. The system control unit50then saves the image IDs of the 15 images into the transmission reservation list. In step S624, the RecipeData1306section of the selected raw image file(s) is overwritten. This can prevent a raw image file1300from changing in data size. Meanwhile, DisplayImageData1308is not updated and is maintained the same as before the reflection of the set of adjusted development parameters. The reasons are as follows. DisplayImageData1308is unable to be restored once it is overwritten and saved based on the image developed using the set of adjusted development parameters. Saving the image developed using the set of adjusted development parameters into a different area of the raw image file1300increases the data size of the raw image file1300. In addition, since the sophisticated types of development processing available in the cloud development processing are unable to be reproduced in the camera, the set of adjusted development parameters would not be reflected in a strict sense anyway. In step S625, the system control unit50updates the specified image number display field921based on the transmission reservation list stored in step S624, and displays the cloud development menu on the display unit28. For example, in a case where 15 raw image files are registered in the transmission reservation list, the specified image number display field921is updated to display “15”. FIG.7is a detailed flowchart of the check/cancellation processing in step S515ofFIG.5Bdescribed above. In step S701, the system control unit50lists, on the display unit28, images that are registered as reserved to be transmitted in the transmission reservation list recorded on the recording medium200. FIG.11Aillustrates a display example of the list view of the reserved images displayed in step S701. For example, in a case where 15 images are registered as reserved images in the transmission reservation list, the system control unit50reads a thumbnail image for the display from DisplayImageData1308of each of the 15 reserved raw image files, and lists the thumbnail images. The thumbnail images displayed in the list are not ones obtained by developing ImageData1309using the development parameters adjusted when the images are added to the transmission reservation list, and thus the development parameters specified by the user are not reflected. The list view screen displays a plurality of thumbnail images as well as a cursor1111for image selection, an operation guide1112, a button1113, and a button1114. The operation guide1112is an operation guide notifying that an instruction to provide a preview display using the adjustment parameters (development parameters) saved when the images are added to the transmission reservation list can be issued by pressing an INFO button included in the operation unit70.
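The saving of step S624 described above, in which the transmission reservation list is updated and the RecipeData1306section is overwritten in place so that the raw image file1300does not change in size, can be sketched as follows. The offset, the fixed section size, and open_raw_file are assumptions made only for the sketch; the actual file layout is the one described with reference toFIG.13.

RECIPE_OFFSET = 0x200   # hypothetical location of the RecipeData 1306 section
RECIPE_SIZE = 256       # hypothetical fixed size of the section

def save_reservation(image_ids, recipe_bytes, reservation_list, open_raw_file):
    assert len(recipe_bytes) <= RECIPE_SIZE
    padded = recipe_bytes.ljust(RECIPE_SIZE, b"\x00")
    for image_id in image_ids:
        with open_raw_file(image_id, mode="r+b") as f:
            f.seek(RECIPE_OFFSET)
            f.write(padded)                    # overwrite RecipeData 1306 in place
        if image_id not in reservation_list:
            reservation_list.append(image_id)  # reserve the image for transmission
    return reservation_list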
The button1113is a GUI button for receiving an instruction to perform cancellation processing on the transmission reservation list. The button1113also serves as an operation guide notifying that the instruction to perform the cancellation processing on the transmission reservation list can also be issued by pressing the set button75. The button1114is a GUI button for receiving an instruction to return to the previous screen. The button1114also serves as an operation guide notifying that the instruction to return to the previous screen can also be issued by pressing the menu button81. In step S702, the system control unit50performs processing for selecting an image. In the processing, the cursor1111is initially located at the top left image on the list view screen, i.e., the oldest image. In step S703, the system control unit50determines whether an operation to switch to a single display or a multi display (list display) is performed on the list view screen. In a case where the switch operation is performed (YES in step S703), the processing proceeds to step S704. In a case where the switch operation is not performed (NO in step S703), the processing proceeds to step S705. In step S704, the system control unit50performs processing for switching to a single display or a multi display. In a case where the display before the switching is a list display and an operation to switch to a single display (pinch-out or the pressing of the zoom button78) is performed, the system control unit50switches to a single display of a raw image where the cursor1111is located before the switching. In the single display, the system control unit50reads a medium-sized image in DisplayImageData1308of the raw image file to be displayed, and displays the medium-sized image on the display unit28. In a case where the display before the switching is a single display and an operation to switch to a multi display (pinch-in or the pressing of the AE lock button77) is performed, the system control unit50switches to a list display and displays the cursor1111on the thumbnail image corresponding to an image single-displayed before the switching. In the list display, the system control unit50reads a thumbnail image in DisplayImageData1308of each raw image file, and displays the thumbnail images. In step S705, the system control unit50determines whether an operation to switch the selected image is performed. This operation refers to an operation for moving the cursor1111up, down, to the left, or to the right in the list display state, and an image fast-forward operation in the single display state. Any of these operations can be performed by operating the four-way directional pad74. In a case where an operation to switch the selected image is performed (YES in step S705), the processing proceeds to step S706. In a case where an operation to switch the selected image is not performed (NO in step S705), the processing proceeds to step S707. In step S706, the system control unit50performs switch processing. This processing refers to processing for moving the cursor1111in a direction of an operation in the list display state, and processing for switching to the previous or next image in the single display state. Thus, the reserved images registered in the transmission reservation list can be selected in succession. In step S707, the system control unit50determines whether a preview instruction operation (for example, the pressing of the INFO button indicated by the operation guide1112) is performed. 
In a case where a preview instruction operation is performed (YES in step S707), the processing proceeds to step S708. In a case where a preview instruction operation is not performed (NO in step S707), the processing proceeds to step S711. In step S708, the system control unit50develops the selected image (the image inside the cursor1111in the list display state, or the image displayed on the display unit28in the single display state) and singly displays the developed image. Specifically, the system control unit50reads development parameter information from RecipeData1306where the development parameters adjusted when the image is added to the transmission reservation list are recorded, instead of RecParameter1305of the selected raw image file. The system control unit50then performs development processing on ImageData1309that is the raw data (undeveloped image) using the development parameters, and displays the resulting image on the display unit28as a preview image (provides a preview display). FIG.11Billustrates a display example of the preview display in step S708. A preview image1120is an image of which development parameters adjusted when the image is added to the transmission reservation list are reflected, in other words, an image developed using the development parameters. Strictly speaking, the preview image1120is not exactly the same as the image to be developed by the cloud raw development using the development parameters adjusted when the image is added to the transmission reservation list, since the cloud raw development includes sophisticated development processing. Information1121which is indicated in a section surrounded by a dotted line is about a group of development parameters obtained from RecipeData1306of the displayed raw image file, and is displayed along with the preview image1120. Among the development parameters, types of development parameters changed from the image settings are displayed in a different display form, for example, different color, from that of the other types of development parameters, whereby the changes are identifiable. In the illustrated example, only information1122indicating a brightness development parameter is displayed to be distinguishable from other pieces of information. This notifies that only the brightness has been changed from the image settings. A button1123that is a GUI button for receiving an instruction to end the preview state and return to the previous screen is also displayed. The button1123also serves as a guide notifying that the menu button81can also be pressed to return to the previous screen. In step S709, the system control unit50determines whether an operation to issue a return instruction which is selection and determination of the button1123or the pressing of the menu button81is performed. In a case where an operation to issue a return instruction is performed (YES in step S709), the processing proceeds to step S710. In a case where an operation to issue a return instruction is not performed (NO in step S709), the processing returns to step S709. In step S710, the system control unit50restores the display state prior to the reflection of the development parameters. In other words, the system control unit50restores the display state prior to the receipt of the preview instruction in step S707. 
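The preview of step S708 described above, in which the image is developed with the parameters of RecipeData1306rather than RecParameter1305and the changed types of parameters are made identifiable, can be sketched as follows; the dictionary representation of the raw image file and the callables develop and show are assumptions for the sketch only.

def preview_reserved_image(raw_file, develop, show):
    recipe = raw_file["RecipeData"]           # parameters adjusted at reservation time
    shot_params = raw_file["RecParameter"]    # settings used in image capturing
    image = develop(raw_file["ImageData"], recipe)
    changed = {key for key, value in recipe.items() if shot_params.get(key) != value}
    show(image, parameters=recipe, highlight=changed)   # e.g. only brightness is highlighted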
In step S711, the system control unit50determines whether a cancellation operation to issue an instruction to perform cancellation processing, which is selection and determination of the button1113or the pressing of the set button75, is performed. In a case where a cancellation operation is performed (YES in step S711), the processing proceeds to step S712. In a case where a cancellation operation is not performed (NO in step S711), the processing proceeds to step S713. In step S712, the system control unit50cancels the reservation of the currently selected image, which is the image on which the cursor1111is located in the multi display, or the image displayed in the single display, in the transmission reservation list. The image ID of the selected image is thereby deleted from the transmission reservation list recorded on the recording medium200, or information notifying that the selected image is not reserved (reservation-cancelled) is recorded. The information in RecipeData1306in the raw image file of the selected image is also deleted. The development parameters adjusted by the user when the image is added to the transmission reservation list are thereby discarded. More than one image may be selected in a collective manner. In a case of displaying the cloud development menu next time after the cancellation processing, the system control unit50subtracts the number of cancelled images from the foregoing number of images displayed in the specified image number display field921and displays the result. In step S713, the system control unit50determines whether a return operation, which is selection and determination of the button1114or the pressing of the menu button81, is performed. In a case where a return operation is performed (YES in step S713), the check/cancellation processing ofFIG.7ends, and the screen returns to the cloud development menu. In a case where a return operation is not performed (NO in step S713), the processing returns to step S701. In a case where the cancellation processing is performed in step S712and the screen returns to the cloud development menu, the system control unit50updates the number of images displayed in the specified image number display field921. FIGS.8A and8Bare a detailed flowchart of the transmission processing in step S518ofFIG.5Bdescribed above. In step S800, the system control unit50displays a pre-transmission confirmation message (guidance) on the display unit28before connecting to the server. FIG.12Aillustrates a display example of the pre-transmission confirmation message displayed in step S800. The displayed screen displays a confirmation message1211, a transmission button1212for issuing an instruction to determine transmission after viewing the confirmation message1211, and a cancel button1213for issuing an instruction to cancel transmission after viewing the confirmation message1211. The confirmation message1211includes a message that when the transmission button1212is selected and determined, the image(s) registered in the transmission reservation list is/are transmitted to the cloud storage301. The confirmation message1211includes a message1211anotifying that the number of developable images, which is the number displayed in the developable image number display field922, will decrease by the number of transmitted images upon completion of the transmission.
In other words, the system control unit50notifies the user that the remaining number of times of use of the cloud processing up to a limited number of times M will decrease in response to transmission of information for issuing an instruction for the cloud development processing, regardless of the actual remaining number of times up to the limited number of times M managed by the development server303. The message1211acorresponds to the part underlined with the dotted line (the underline itself is not actually displayed). In the illustrated example, “NUMBER OF DEVELOPABLE IMAGES WILL DECREASE UPON DELIVERY OF IMAGE(S) TO CLOUD” is displayed. The message1211ais not limited thereto, and messages notifying that the number of developable images is limited account by account and that the number of developable images is provided on a charging basis may be displayed. Examples of the message include the following messages: “The number of developable images will decrease upon delivery of the image(s) to the server. The number of developable images will not increase until next month”, “The number of developable images purchased will decrease upon delivery of the image(s) to the server”, and “The number of developable images will decrease upon delivery of the image(s) to the server. The number of developable images can be increased by purchasing”. Such a guidance display can inform the user that the number of developable images decreases when the cloud development processing is performed, before transmission of image(s). This enables the user to use the cloud development processing in a well-planned manner and avoid a risk of inconvenience. For example, the guidance display can prevent the occurrence of a situation where the number of images developable by the cloud development processing is first used up by images of low priority and the cloud development processing becomes unavailable for images of high priority to be subsequently transmitted. The confirmation message1211includes a message1211bnotifying that a result of the cloud development processing by the development server303should be checked using a device different from the digital camera100, like the smartphone302; in other words, a different device is recommended for browsing the result. The message1211bcorresponds to the part underlined with the solid line (the underline itself is not actually displayed). In the illustrated example, “CHECK DEVELOPED IMAGE(S) ON CLOUD” is displayed. The message1211bis not limited thereto, and a message notifying that the developed image(s) can be checked by connecting to the cloud storage301using a device different from the digital camera100may be displayed. A message notifying that the developed image(s) can be checked by connecting to the cloud storage301from a device different from the digital camera100, using the user account with which the digital camera100is associated in the cloud storage301, may be displayed. Devices different from the digital camera100include at least one of a smartphone, a PC, and a tablet terminal. A message notifying that developed image(s) processed by the cloud development processing (image-processed image(s)) is/are unable to be immediately checked on the display unit28of the digital camera100may be displayed. Examples of the messages include the following messages: “To check developed image(s), log in to your account subscribed to the service using your smartphone, PC, or tablet terminal” and “To check the developed image(s), connect to the cloud service using a device other than this camera”.
A message notifying that a developed image(s) is/are unable to be immediately checked may be added as follows: “The development processing may take a long time. Please check later” and “Check image(s) after receipt of the development processing completion notification”. In the present exemplary embodiment, the digital camera100does not receive a developed image(s) processed by the cloud development processing by the development server303and recorded in the cloud storage301. However, the message1211binforms the user that a processing result is to be stored in a location other than the camera100from which the instruction has been transmitted. This enables the user to check a developed image(s) using another device without confusion. In step S801, the system control unit50determines whether an operation to determine transmission, which is selection and determination of the transmission button1212or the pressing of the set button75, is performed after the confirmation message ofFIG.12Ais displayed. In a case where the operation to determine the transmission is performed (YES in step S801), the processing proceeds to step S802. On the other hand, in a case where a cancel operation, which is selection and determination of the cancel button1213or the pressing of the menu button81, is performed (NO in step S801), the processing proceeds to step S828. In step S802, the system control unit50initializes an image number unknown flag stored in the system memory52to 0. The initialized image number unknown flag=0 indicates that a correct number of developable images has been obtained. In a case where the image number unknown flag=1, it indicates the possibility that the correct number of developable images may not have been obtained because a processing result is not available from the server. In step S803, like step S504, the system control unit50performs connection processing with the server, which is at least one of the cloud storage301, the user setting management unit304, and the development server303. With the connection processing completed (connection established), the digital camera100enters the “online state”. In step S804, like the foregoing step S505, the system control unit50obtains information about the subscribing/unsubscribing state of a present account currently logged in to the cloud development processing service and, in a case where the present account is in the subscribing state, the system control unit50also obtains the number of developable images from the server, which is the user setting management unit304in particular. In this step S804of the transmission processing, the system control unit50performs again the information acquisition which has been performed in the display processing of the cloud development menu. The reason is that the number of images developable by the cloud development processing for the same user ID (account) can change due to factors other than the development instruction from the digital camera100. Examples of the factors for the change include the following: The number of developable images can be decreased by a development request issued to the development server303from a different device, other than the digital camera100, linked with the user ID (account). In the case of a subscription service where the number of developable images is set on a monthly basis, the number of developable images can be increased next month. The cloud development processing service can be terminated (unsubscribed). The link setting between the digital camera100and the cloud storage301can be reset.
Since the situation can be changed due to such factors, the system control unit50obtains the latest information before transmission. This can reduce occurrences of errors after transmission is started. In step S805, the system control unit50performs a similar determination to that in the foregoing step S506. In a case where the cloud development processing service is subscribed (YES in step S805), the processing proceeds to step S807. In a case where the cloud development processing service is not subscribed (NO in step S805), the processing proceeds to step S806. In step S806, like the foregoing step S509, the system control unit50displays a message screen notifying that the user has not subscribed to the cloud development processing service on the display unit28. The processing then proceeds to step S823. In step S807, like step S403, the system control unit50determines whether the digital camera100has been linked (paired) with the cloud storage301. In a case where the digital camera100has been linked with the cloud storage301(YES in step S807), the processing proceeds to step S809. In a case where the digital camera100has not been linked with the cloud storage301, i.e., the link is cancelled (NO in step S807), the processing proceeds to step S808. In step S808, the system control unit50displays an error message that the link is cancelled on the display unit28. The processing then returns to step S401. In step S809, the system control unit50determines whether the number of developable images is insufficient for the number of images to be transmitted, which is the number of images registered in the transmission reservation list. In a case where the number of developable images is insufficient (YES in step S809), the processing proceeds to step S810. In a case where the number of developable images is not insufficient (NO in step S809), the processing proceeds to step S811. In step S810, the system control unit50displays a message (guidance) notifying that the number of developable images is insufficient on the display unit28.FIG.12Billustrates a display example of the message notifying that the number of developable images is insufficient, displayed in step S810. A confirmation message1221notifies the user that the number of images to be transmitted exceeds the number of images capable of being processed by the cloud development processing in the development server303. The confirmation message1221may include content for prompting the user to reduce the images to be transmitted, or content for prompting the user to increase the number of developable images if the cloud development processing service is a paid service. The system control unit50also displays an OK button1222for proceeding to the next screen. The processing then proceeds to step S823. In step S811, the system control unit50displays a progress screen notifying that transmission is in progress on the display unit28. For example, the progress screen provides a progress display providing, for example, the number of images transmitted/the total number of images to be transmitted, or a progress display providing transmitted percentages in the transmission. In step S812, the system control unit50initializes N, which is a variable indicating the number of images transmitted after the determination of the transmission, to 1. The system control unit50stores N=1 into the system memory52. In step S813, the system control unit50determines whether the number of developable images is greater than 0. 
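The pre-transmission checks of steps S805 to S812 described above can be sketched as follows; the tuple-style result and all names are assumptions made only for the sketch and do not define the embodiment.

def prepare_transmission(subscribed, paired, developable_count, reservation_list):
    # Returns (ok, detail) reflecting the branches of steps S805 to S812.
    if not subscribed:                                  # step S805: NO -> step S806
        return False, "not subscribed to the cloud development processing service"
    if not paired:                                      # step S807: NO -> step S808
        return False, "link with the cloud storage has been cancelled"
    if developable_count < len(reservation_list):       # step S809: insufficient -> step S810
        return False, "number of developable images is insufficient"
    return True, {"sent": 0, "total": len(reservation_list), "N": 1}   # steps S811 and S812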
Specifically, the system control unit50obtains the number of images developable by the cloud development processing in the present account from the user setting management unit304again, and determines whether the number of developable images is greater than 0. In a case where the number of developable images is greater than 0 (YES in step S813), the processing proceeds to step S814. In a case where the number of developable images is 0 (NO in step S813), the processing proceeds to step S822. In step S813, the system control unit50obtains the number of developable images from the user setting management unit304again, and performs the determination using the number. The system control unit50stores the number of developable images obtained from the user setting management unit304into the system memory52. In step S814, the system control unit50determines whether the Nth image in the transmission reservation list has already been transmitted to the cloud storage301. This determination is performed in the following manner. The system control unit50initially transmits an image ID, which is a unique ID uniquely assigned to each image, of the Nth image in the transmission reservation list from the digital camera100to the cloud storage301, and inquires of the cloud storage301whether a main body of the image has been transmitted. The cloud storage301in response searches to determine whether an image matching the transmitted image ID is stored. In a case where a matching image is found, the cloud storage301transmits a notification to the digital camera100that the image has already been transmitted to the cloud storage301. In a case where no matching image is found, the cloud storage301transmits a notification to the digital camera100that the image has not been transmitted to the cloud storage301. In a case where the notification that the image has been transmitted is received as a response to the inquiry to the cloud storage301, the system control unit50determines that the image has been transmitted. In a case where the notification that the image has not been transmitted is received as a response to the inquiry to the cloud storage301, the system control unit50determines that the image has not been transmitted. In a case where the image has been transmitted (YES in step S814), the processing proceeds to step S815. In a case where the image has not been transmitted (NO in step S814), the processing proceeds to step S816. Such a determination can prevent the same image from being transmitted to the cloud storage301a plurality of times, and reduce consumption of the communication time, communication power, and communication capacity due to a plurality of times of transmission. For example, the digital camera100has a function of automatically transmitting a captured image to the cloud storage301after imaging for archival purposes, aside from transmitting an image for the purpose of using the cloud development processing service. If such a function is used, the same raw image file can already be stored in the cloud storage301. The determination of step S814can prevent the same file from being redundantly transmitted even in a case where a plurality of functions related to image transmission is utilized.
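The duplicate check of step S814 described above, in which only the image ID is sent and the cloud storage301answers whether the image body is already stored, can be sketched as follows; query_storage and the message format are assumptions for the sketch and do not represent an actual protocol of the embodiment.

def already_uploaded(image_id, query_storage):
    # The camera transmits only the unique image ID and inquires whether a
    # matching image is stored; a positive reply means the main body of the
    # image need not be transmitted again.
    reply = query_storage({"type": "exists", "image_id": image_id})
    return bool(reply.get("exists", False))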
In step S815, the system control unit50performs processing for transmitting the image ID of the Nth image in the transmission reservation list and the development parameter information and additional information included in the raw image file of the image having the image ID as information for issuing an instruction for the cloud development processing (development request). Specifically, the system control unit50extracts RecipeData1306and OtherMetaData1307from the raw image file captured by the imaging unit22and recorded on the recording medium200, links RecipeData1306and OtherMetaData1307with the image ID, and transmits the resultant from the communication unit54to the cloud storage301. In step S815, at least ImageData1309in the raw image file, which is the raw data (an image before raw development), is not transmitted. Redundant data transmission is thereby prevented. In step S816, the system control unit50transmits an entire raw image file1300of the Nth image in the transmission reservation list from the communication unit54to the cloud storage301as information for issuing an instruction for the cloud development processing (development request). The raw image file1300to be transmitted is the one captured by the imaging unit22and recorded on the recording medium200. In view of the consistency of the transmission processing sequence, RecipeData1306and OtherMetaData1307may be extracted from the raw image file1300as in step S815, and transmitted along with the entire raw image file1300at the same time. In step S817, the system control unit50determines whether a normal response notification is obtained from the server within a predetermined time (for example, 200 msec or so) from the completion of the transmission in step S815or S816. The normal response notification refers to a notification that the development request is received or a notification that the development request is rejected. In a case where such a normal response notification is obtained within the predetermined time (YES in step S817), the processing proceeds to step S818. In a case where a normal response notification is not obtained within the predetermined time (NO in step S817), the processing proceeds to step S824. Examples of the case where a normal response notification is not obtained include the following cases: The predetermined time elapses from the completion of the transmission in step S815or S816without a response from the server. A response is transmitted from the server before the lapse of the predetermined time from the completion of the transmission in step S815or S816, but the response is not a normal response notification. Examples of an abnormal notification include reception of a notification, such as one indicating that the digital camera100is linked (paired) with a server, that is not supposed to be normally received at this timing, and reception of noise with unknown notification content. Instead of the determination criterion of within a predetermined time, whether a normal response notification is received may be determined based on a determination criterion of within a predetermined number of periods (for example, ten periods) in a case where a communication confirmation signal is transmitted and received to/from the server at a predetermined period of, for example, 20 msec. In other words, the determination of step S817concerns whether a condition indicating stable communication is satisfied.
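Steps S815 to S817 described above can be summarized by the following sketch: for an image already stored in the cloud storage301, only the image ID, RecipeData1306, and OtherMetaData1307are transmitted, otherwise the entire raw image file is transmitted, and the result depends on whether a normal response notification arrives within the predetermined time. The callables send and wait_for_response, the dictionary payloads, and the 0.2-second timeout are assumptions made only for the sketch.

def send_development_request(image_id, raw_file, already_on_server, send, wait_for_response):
    if already_on_server:                                # step S815: metadata only
        payload = {"image_id": image_id,
                   "recipe": raw_file["RecipeData"],
                   "meta": raw_file["OtherMetaData"]}
    else:                                                # step S816: entire raw image file
        payload = {"image_id": image_id, "file": raw_file}
    send(payload)
    reply = wait_for_response(timeout=0.2)               # step S817: normal response in time?
    if reply is None or reply.get("type") not in ("accepted", "rejected"):
        return "communication_error"                     # -> steps S824 to S827
    return reply["type"]                                 # "accepted" -> step S819, "rejected" -> step S813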
Conversely, the determination of step S817is a determination of whether a condition indicating a lack of stable communication between the communication unit54and the server is satisfied. In other words, the determination of step S817can be a determination of whether a condition that no normal response notification is received from the server within the predetermined time from the completion of the transmission in step S815or S816is satisfied. In step S817, in a case where the condition that no normal response notification is received from the server within the predetermined time from the completion of the transmission in step S815or S816is satisfied, the processing proceeds to step S824. In a case where the condition that no normal response notification is received from the server within the predetermined time is not satisfied, the processing proceeds to step S818. In step S818, the system control unit50determines whether the notification received in step S817is of receipt of a development request. In a case where the notification is of receipt of a development request (YES in step S818), the processing proceeds to step S819. In a case where the notification is of a failure of receipt of a development request (NO in step S818), the processing proceeds to step S813. Even in a case where the notification of a failure of receipt of a development request is received as a normal response within the predetermined time, the digital camera100is unable to determine why the development request has failed. Thus, in step S813, the system control unit50inquires again whether the number of developable images is greater than 0. In a case where the number of developable images is 0 (NO in step S813), the processing proceeds to step S822to abort transmission. In a case where the number of developable images is not 0 (YES in step S813), the processing proceeds to step S814and the system control unit50attempts to transmit the same image again. In step S819, the system control unit50performs processing for deleting the image ID of the Nth image in the transmission reservation list and the development parameter information and other additional information in the raw image file. Specifically, the system control unit50initializes RecipeData1306and OtherMetaData1307in the raw image file. In step S820, the system control unit50determines whether there is an image yet to be transmitted in the transmission reservation list. Specifically, the system control unit50determines whether the total number of images Nmax in the transmission reservation list, which is the same as the number of images displayed in the specified image number display field921, satisfies a relationship of N<Nmax. In a case where there is an image yet to be transmitted, i.e., the relationship of N<Nmax is satisfied (YES in step S820), the processing proceeds to step S821. In a case where there is no image to be transmitted, i.e., the relationship of N<Nmax is not satisfied, which is a state where all the images registered in the transmission reservation list have been transmitted (NO in step S820), the processing proceeds to step S823. In such a case, the system control unit50may display a message indicating completion of transmission on the display unit28before the transmission processing ends. In step S829to be described below, the number of developable images minus the number of transmitted images is displayed in the developable image number display field922displayed on the cloud development menu.
In step S821, the system control unit50increments the number of transmitted images N. The processing proceeds to step S813, and the system control unit50performs processing related to transmission of the next image. In step S822, the system control unit50displays a message on the display unit28that the image transmission is aborted because the number of developable images is exceeded by the next image transmission. In a case where this message is displayed at N=1, the cloud development processing by the development server303is started by another device linked with the same user ID at almost the same timing as the operation to determine the transmission. In a case where this message is displayed at N=2 or more, the cloud development processing by the development server303is started by another device linked with the same user ID at any timing between the operation to determine the transmission and the completion of the transmission. In step S823, the system control unit50performs processing for disconnecting from the server. Once the connection is disconnected, the digital camera100enters the “offline state”. In step S824, the system control unit50obtains the image file of the Nth image in the transmission reservation list from the recording medium200. In step S825, the system control unit50displays a notification message about a communication error on the display unit28.FIG.12Cillustrates a display example of the notification message here. A notification screen1230displays a message1231, an image1233, and an OK button1232. The message1231includes a description for prompting the user to check an execution status of the cloud development processing by the development server303. In the present exemplary embodiment, the description corresponds to the text “CHECK EXECUTION STATUS OF CLOUD DEVELOPMENT PROCESSING OF IMAGE ABOVE” inFIG.12C. The message1231also includes a description for prompting the user to remove specification of the image from the transmission reservation list in a case where the cloud development processing by the development server303is confirmed to have been executed on the image. In the present exemplary embodiment, the description corresponds to the text “IF IMAGE HAS BEEN DEVELOPED, EXCLUDE IMAGE FROM SPECIFIED IMAGES” inFIG.12C. The OK button1232is an operation icon for the user having understood the notification description to perform a confirmation and issue an instruction to proceed to the next screen. The image1233is a thumbnail of a display image based on DisplayImageData1308of an image file of the Nth image obtained in step S824. In step S826, the system control unit50determines whether an operation to press the OK button1232, which is an operation for selection and determination, is performed. In a case where the operation to press the OK button1232is performed (YES in step S826), the processing proceeds to step S827. In a case where the operation to press the OK button1232is not performed (NO in step S826), the processing returns to step S826, and the notification message continues to be displayed. In step S827, the system control unit50sets the image number unknown flag stored in the system memory52to 1. The image number unknown flag=1 indicates a state where the correct number of developable images managed by the server may not have been obtained because a normal response is not obtained from the server. In step S828, the system control unit50determines whether the image number unknown flag is 1. 
In a case where the image number unknown flag is 1 (YES in step S828), the processing proceeds to step S830. In a case where the image number unknown flag is 0 (NO in step S828), the processing proceeds to step S829. In step S829, the system control unit50displays the cloud development menu described with reference toFIG.9C. The specified image number display field921and the developable image number display field922are updated and displayed based on the situation immediately before the disconnection in step S823. In other words, in a case where the image(s) is/are transmitted to the server, the developable image number display field922displays the numerical value of the number of images obtained by subtracting the number of images transmitted to the server based on the processing instruction from the latest number of developable images which is obtained before the transmission from the server and stored in the system memory52. The transmission processing ends. In step S830, the system control unit50displays the cloud development menu described with reference toFIG.9B. The developable image number display field922displays the symbol “?”. Such a non-numerical display can prompt the user to recognize that a correct numerical value may not have been obtained as the number of developable images. By the transmission processing described above, the symbol “?” different from a numerical value indicating the number of developable images is displayed in step S830based on the fact that the condition indicating a lack of stable communication with the server is satisfied after image transmission to the server. This display “?” (question mark) indicates a possibility of inconsistency with the correct number of developable images managed by the server, and will hereinafter be referred to as an inconsistency notification. The inconsistency notification can avoid the user's misunderstanding caused by display of an erroneous numerical value when there can be a discrepancy in the number of developable images between the server and the digital camera100. The display form of the inconsistency notification is not limited to “?” described above. Other display forms may be employed for the inconsistency notification in the developable image number display field922as long as the display is different from the numerical value indicating the remaining number of developable images displayed, which is “50” displayed in the developable image number display field922inFIG.9C, in the case where the image number unknown flag is not set. For example, as illustrated in the developable image number display field922inFIG.9E, a numerical value different from that displayed in the case where the image number unknown flag is not set in at least any one of color, pattern, thickness, font type, and background color may be displayed as the inconsistency notification. As illustrated in the developable image number display field922inFIG.9F, a display item930serving as an inconsistency notification may be displayed in addition to the numerical display. Moreover, the numerical value that is usually constantly displayed may be blinked as an inconsistency notification. In other words, the numerical value may be displayed in a different blinking state from usual. The inconsistency notification may be displayed in such a manner that association with the developable image number display field922is recognizable, instead of being displayed within an area of the developable image number display field922. 
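As a minimal sketch of the display decision described above, assuming a hypothetical helper name and treating "?" as the chosen inconsistency notification, the developable-image-number field could be driven as follows:

```python
def developable_count_display(image_number_unknown: bool,
                              stored_count: int,
                              transmitted: int) -> str:
    """Return the string to show in the developable-image-number field 922.

    When the image number unknown flag is set (no normal response was obtained
    from the server), an inconsistency notification such as "?" is shown instead
    of a numerical value that might be wrong. Otherwise the locally tracked
    count, minus the images already transmitted, is shown as usual.
    """
    if image_number_unknown:
        return "?"  # inconsistency notification; other display forms are possible
    return str(stored_count - transmitted)
```

The same decision could equally select any of the other display forms mentioned above, such as a changed color or font, a blinking value, or an added display item 930, rather than the literal "?" string used in this sketch.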
For example, the inconsistency notification may be displayed in a second display area different from the developable image number display field922along with a balloon, an arrow, or text designating the developable image number display field922. In a case of performing the determination in the foregoing step S817, the system control unit50may determine whether a condition different from the foregoing is satisfied as the condition indicating a lack of stable communication with the server, which is at least one of the cloud storage301, the user setting management unit304, and the development server303. For example, the system control unit50may determine whether any one of the following conditions is satisfied:A cable is detected to come off in a situation where a communication apparatus capable of communicating with the server and the digital camera100are connected by the cable and the server and the digital camera100are communicating with each other via the cable;The reception field intensity (unit: dBm) of the communication unit54falls below a predetermined value (for example, −80 dBm) for a predetermined time. In other words, the field intensity has dropped;No response to a beacon (communication confirmation signal) regularly transmitted from the communication unit54to a relay apparatus, such as an access point (AP), a router, and a proxy server interposed in the communication with the server, is obtained for a predetermined time;A notification, such as “the destination host is not reachable”, “the destination host is unknown”, and “disconnected” is received from the relay apparatus in response to a beacon (communication confirmation signal) regularly transmitted from the communication unit54to the relay apparatus. In other words, a notification of unavailability of communication with the server is received from the relay apparatus;A notification that “the server is busy” is received from the server;A disconnection notification is received from the server;The system control unit50gets no response from the communication unit54for a certain time; andThe communication unit54is out of order. In a case where any of the foregoing conditions is true, i.e., stable communication is no longer being performed, the processing proceeds to step S823. In a case where all the foregoing conditions are false, i.e., no event indicating absence of stable communication has occurred, the processing proceeds to step S818. In a case where any of the foregoing conditions indicating the absence of stable communication with the server is satisfied, the digital camera100is unable to determine whether the development processing responding to the processing instruction transmitted in step S815or S816immediately before (the immediately previous instruction) is executed by the server. The system control unit50is therefore unable to determine which state is correct between a state where the remaining number of images is the number obtained by subtracting the number of images developed in response to the immediate previous instruction from the number of developable images stored in the system memory52or a state where the number of developable images is the number of developable images stored in the system memory52which is the number not subtracted. Moreover, the number of developable images may have further decreased due to a development processing instruction issued from a different terminal apparatus associated with the same user account in a period when the communication with the server is unstable. 
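A minimal sketch of how such a set of conditions could be evaluated follows; each condition is assumed to be exposed as a hypothetical zero-argument predicate (for example, a cable-disconnection check, a reception-field-intensity check against the −80 dBm example, a beacon-timeout check, or a server-busy check), and the function simply reports whether any one of them holds.

```python
from typing import Callable, Iterable


def communication_unstable(conditions: Iterable[Callable[[], bool]]) -> bool:
    """Return True if any condition indicating a lack of stable communication holds.

    Each element of `conditions` is a predicate corresponding to one of the
    checks described above (cable disconnected, field intensity below the
    threshold for a predetermined time, unanswered beacon, relay or server
    error notification, unresponsive or failed communication unit, and so on).
    """
    return any(check() for check in conditions)
```

If this evaluates to True, stable communication is no longer being performed and the flow takes the error-handling branch; if every predicate is False, the flow continues to the normal branch as described above.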
In such a situation where it is unknown whether the remaining number of images is properly managed, the foregoing inconsistency notification is displayed to reduce a possibility of the user's misunderstanding about the remaining number of images available for the processing. In a case where the communication with the server is stabilized and the notification of the remaining number of developable images is successfully obtained from the server again after the inconsistency notification, the system control unit50ends displaying the inconsistency notification and displays, as illustrated inFIG.9C, the remaining number of developable images in the developable image number display field922as usual. In the foregoing example, the server, which is at least one of the cloud storage301, the user setting management unit304, and the development server303, is described to be instructed to perform raw image development processing. However, this is not restrictive. In situations where the number of times of processing by a content processing server is limited, an electronic apparatus that can transmit a processing instruction for contents other than images can exercise control similar to that in the foregoing exemplary embodiment. For example, a description will be provided with a case where a content processing server can process audio data in response to each instruction issued from a plurality of apparatuses linked with the same account. Further, the number of instructions for the processing on audio data (audio processing) in the same account is limited up to 30 per month. In this case, in a case where a condition indicating absence of stable communication with the content processing server is satisfied, the electronic apparatus capable of issuing an instruction for the audio processing displays an inconsistency notification related to a display area for displaying the number of instructions. In this configuration, the user's misunderstanding caused by the display of an erroneous numerical value can be avoided in a case where there can be a discrepancy in the possible number of times of processing between the content processing server and the electronic device transmitting processing instructions. Similarly, inconsistency notifications can also be displayed about processing on various types of contents, including documents, charts, and files. FIG.13illustrates a structure of a raw image file (still image data) recorded on the recording medium200. Next, the structure of a raw image file will be described in detail. A file format of a raw image file is an International Organization for Standardization (ISO) base media file format defined in ISO/International Electrotechnical Commission (IEC) 14496-12. This file format has a tree structure and includes nodes called boxes. Each box can include a plurality of boxes as child elements. A raw image file1300includes a box ftyp1301for describing a file type at the beginning, and a box moov1302that contains all metadata and a box mdat1303that is a media data main body of a track. The box moov1302includes a box uuid1304including MetaData as its child element. MetaData describes metadata on the image. 
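As an illustration of the tree structure just described, here is a minimal sketch that models the boxes of the raw image file 1300 as nested nodes. The box names follow the description (ftyp 1301, moov 1302 with uuid 1304 holding MetaData, and mdat 1303 holding ImageData 1309 and DisplayImageData 1308); the four-character codes used for the inner boxes and the payload encodings are placeholders, not the actual binary layout of the format.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Box:
    """A node of an ISO base media file format style tree (ISO/IEC 14496-12)."""
    box_type: str                      # four-character box type, e.g. "ftyp", "moov", "mdat"
    payload: Optional[bytes] = None
    children: List["Box"] = field(default_factory=list)


def build_raw_image_file(recipe_data: bytes, other_metadata: bytes,
                         image_data: bytes, display_image_data: bytes) -> Box:
    """Assemble a tree mirroring the structure described for raw image file 1300."""
    metadata = Box("uuid", children=[              # box uuid 1304 holding MetaData
        Box("recp", payload=recipe_data),          # RecipeData 1306 (development parameter group)
        Box("othr", payload=other_metadata),       # OtherMetaData 1307 (e.g. face detection info)
    ])
    return Box("file", children=[
        Box("ftyp", payload=b"placeholder"),       # box ftyp 1301: file type description
        Box("moov", children=[metadata]),          # box moov 1302: all metadata
        Box("mdat", children=[                     # box mdat 1303: media data main body
            Box("img ", payload=image_data),       # ImageData 1309: raw data
            Box("disp", payload=display_image_data),  # DisplayImageData 1308: MPF display images
        ]),
    ])
```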
Examples of the information included in MetaData are as follows: generation date and time information about the image; RecParameter 1305, which is setting information during imaging; RecipeData 1306, which is information (a development parameter group) to be used by the development server 303 in performing the cloud development processing; and OtherMetaData 1307, which is other imaging information. For example, OtherMetaData 1307 may include detection information about human faces, eyes, and noses to be used by the development server 303 in performing the cloud development processing including relighting correction processing. The detection information can basically be obtained during imaging, but can also be detected in performing the development processing for a preview in the foregoing steps S615 and S620. In a case where detection information is detected in performing the development processing for a preview, the detection information is stored into OtherMetaData 1307 during the preview. The box mdat 1303 includes, as its child elements, ImageData 1309, which is the raw data itself of the captured still image, and DisplayImageData 1308, which is a display image. DisplayImageData 1308, being a display image, is smaller than ImageData 1309 in size. DisplayImageData 1308 is recorded in the Multi-Picture Format (MPF) and includes a medium-sized image and a DCF thumbnail image that is smaller than the medium-sized image and used in a list view. The file format described in the present exemplary embodiment is an example, and other boxes may be included as appropriate. The system configuration and the development processing procedures for providing the cloud development processing service according to the present exemplary embodiment have been described above. Next, supplementary descriptions of the characteristic configurations and processes of the system will be given. The characteristic configurations and processes are basically independent of each other, and even in a case where the characteristic configurations are provided individually, the respective effects can be obtained. In the foregoing exemplary embodiment, the digital camera 100 does not receive the developed images processed by the cloud development processing in the development server 303, and the result of the cloud development processing cannot be checked using the digital camera 100. The reason for such a configuration is that if the developed images processed by the cloud development processing were received by the digital camera 100 and the processing result were checked using the digital camera 100, there would be the following disadvantages. A certain time period is consumed before the image processing result can be checked using the digital camera 100, namely the time for the development server 303 to apply the image processing plus the time for the digital camera 100 to receive the processed images from the cloud storage 301. If the user is unaware of the time to be consumed before checking, the user waits for a long time in vain and new imaging opportunities can be missed during the waiting time. If communication is disconnected in the middle, it can take a longer time before the image processing result can be checked, or the image processing result can even become unavailable for checking. In the foregoing exemplary embodiment, the digital camera 100 that is the sender of the images issues an image processing instruction to the server, and the result of the image processing is checked by accessing the server using a device different from the image sender.
This can reduce the wait time of the image sender and reduce the risk of communication disconnection. Moreover, displaying the message 1211b prevents the user of the image sender from getting confused about how to check the image processing result, and the user can check the image processing result without confusion. As a modification, the digital camera 100 may receive the developed images processed by the cloud development processing by the development server 303 so that the result of the cloud development processing can be checked using the digital camera 100. In this case, the developed images are received by the communication unit 54. However, in this case, it takes a certain time period before the image processing result can be checked using the digital camera 100 for the foregoing reasons. The system control unit 50 may therefore be configured to display a guidance for prompting the user to check the image processing result on the display unit 28 some time after the image transmission, instead of or together with the message 1211b. This configuration provides the advantage that the user is prevented from worrying, since the user is notified that the processing result can be checked using the digital camera 100 even in a case where the processing result is returned to the sender with a delay because of the communication conditions, for example. The display of the guidance can also prevent the user from being unaware that it takes a long time before the processing result can be checked using the digital camera 100, and thus from waiting in vain, missing imaging opportunities, or delaying other operations. The camera 100 reduces power consumption in communication processing by adaptively disconnecting from the development server 303 based on the processing content. Referring to FIG. 5A, the camera 100 establishes a communication connection with the development server 303, obtains the service subscription state, and then temporarily disconnects the communication as illustrated in steps S507 and S510. The camera 100 then enters the "offline state" while the cloud development menu is displayed. The reason for this configuration is that the communication connection with the server does not need to be maintained during the processing for adding images to be developed in step S512, before the execution of the processing for transmitting the image(s) to the development server 303 in step S516 and the subsequent steps. This can prevent the battery from being wasted by useless communication. In displaying the cloud development menu in steps S508 and S511, the date and time of acquisition of the displayed information may also be displayed. This enables the user to identify whether the displayed information is the latest. The camera 100 further performs control to reduce useless communication even after the transmission processing for transmitting the image(s) to the server is started. Referring to FIG. 8B, in step S813, the camera 100 updates the number of developable images managed by the camera 100 before transmitting each image. The camera 100 then determines whether the image can be transmitted based on the updated number of developable images. In other words, in a case where the number of developable images before transmission is 0, the camera 100 aborts the transmission. This can prevent an erroneous attempt to transmit an undevelopable image. Battery exhaustion due to useless image transmission can thus be avoided. The camera 100 inquires of the development server 303 about the number of developable images in the determination processing of step S813.
Instead, the camera 100 may employ the following method. In a case where a plurality of images is selected as development targets, the camera 100 transmits the images one by one in succession (step S816). Each time the transmission of an image is completed, the camera 100 may receive a notification of completion of the transmission and the number of developable images updated by the transmission. The number of developable images can thus be efficiently obtained by using the communication of the completion notification of the image transmission. In step S814, whether the Nth image in the transmission reservation list has been transmitted is determined in every transmission of one image. However, the system control unit 50 may perform such determination for all the images in the transmission reservation list in a collective manner before starting the processing for transmitting the images in succession. This can further simplify the processing. In the processing of steps S815 and S816, the system control unit 50 may store a transmission history for subsequent use based on transmission of the raw image file or the development parameter information. Specifically, based on the transmission history, the system control unit 50 may provide an already-transmitted image with an icon indicating an already-transmitted status on an information display when browsing images using the camera 100. This can prevent a situation where development requests for the same image are repeatedly issued, which would reduce the number of developable images in vain. The foregoing various controls described as being performed by the system control unit 50 may be performed by a single piece of hardware. A plurality of pieces of hardware, for example, a plurality of processors or circuits, may perform processing in a distributed manner to control the entire apparatus. While the exemplary embodiment of the present disclosure has been described in detail, the present disclosure is not limited to the specific exemplary embodiment, and various other exemplary embodiments not departing from the gist of the present disclosure are also covered by the present disclosure. Moreover, such exemplary embodiments are just some examples of embodiments of the present disclosure, and the exemplary embodiments can be combined as appropriate. The foregoing exemplary embodiment has been described by using a case where the exemplary embodiment is applied to the digital camera 100 as an example. However, such an example is not restrictive. Specifically, the exemplary embodiment is applicable to a device or electronic apparatus that can communicate with the apparatuses on the network such as the cloud storage 301, the development server 303, and the user setting management unit 304. More specifically, the present exemplary embodiment can be applied to a PC, a personal digital assistant (PDA), a mobile phone terminal, a portable image viewer, a printer apparatus including a display, a digital photo frame, a music player, a game machine, and an electronic book reader. The present exemplary embodiment is not limited to an imaging apparatus main body and is also applicable to a control apparatus that communicates with an imaging apparatus (including a network camera) via wired or wireless communication and remotely controls the imaging apparatus. Examples of the apparatus for remotely controlling an imaging apparatus include apparatuses such as a smartphone, a tablet PC, and a desktop PC.
The control apparatus can remotely control the imaging apparatus by notifying the imaging apparatus of commands to perform various operations and perform settings based on operations performed on the control apparatus and processing performed by the control apparatus. An LV image captured by the imaging apparatus can be received via wired or wireless communication and displayed on the control apparatus. Other Embodiments Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)), a flash memory device, a memory card, and the like. While the present disclosure includes exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2020-218853, filed Dec. 28, 2020, which is hereby incorporated by reference herein in its entirety.
123,680
11943532
DETAILED DESCRIPTION Turning to the drawings, wherein like reference numerals refer to like elements, techniques of the present disclosure are illustrated as being implemented in a suitable environment. The following description is based on embodiments of the claims and should not be taken as limiting the claims with regard to alternative embodiments that are not explicitly described herein. The inventors believe that photographers would like, in addition to getting the best possible photographs, more than one picture to capture the moment and, in some cases, a few seconds of video associated with a still picture. This latter should be accomplished without the photographer having to spend the time to switch between still-capture mode and video-capture mode. Aspects of the presently disclosed techniques provide a "best" picture taken within a few seconds of the moment when a capture command is received (e.g., when the "shutter" button is pressed). Also, several seconds of video are captured around the same time and are made available to the photographer. More specifically, in some embodiments, several still images are automatically (that is, without the user's input) captured. These images are compared to find a "best" image that is presented to the photographer for consideration. Video is also captured automatically and analyzed to see if there is an action scene or other motion content around the time of the capture command. If the analysis reveals anything interesting, then the video clip is presented to the photographer. The video clip may be cropped to match the still-capture scene and to remove transitory parts. In further embodiments, better low-light images are provided by enhancing exposure control. Higher-precision horizon detection may be provided based on motion analysis. For a more detailed analysis, turn first to FIG. 1A. In this example environment 100, a photographer 102 (also sometimes called the "user" in this discussion) wields his camera 104 to take a still image of the "scene" 106. In this example, the photographer 102 wants to take a snapshot that captures his friend 108. The view that the photographer 102 actually sees is depicted as 110, expanded in the bottom half of FIG. 1A. Specifically, when the photographer 102 pushes a "capture" button (also called the "shutter" for historical reasons), the camera 104 captures an image and displays that captured image in the viewfinder display 112. So far, this should be very familiar to anyone who has ever taken a picture with a smartphone or with a camera that has a large viewfinder display 112. In the example of FIG. 1A, however, the camera 104 also displays a "notification icon" 114 to the photographer 102. While the detailed functioning supporting this icon 114 is discussed at length below, in short, this icon 114 tells the photographer 102 that the camera 104 believes that it has either captured a "better" still image than the one displayed in the viewfinder display 112 or that it has captured a video that may be of interest to the photographer 102. FIG. 1B introduces a network 116 (e.g., the Internet) and a remote server 118. The discussion below shows how these can be used to expand upon the sample situation of FIG. 1A. FIG. 1B also visually makes the point that the "camera" 104 need not actually be a dedicated camera: It could be any image-capture device including a video camera, a tablet computer, a smartphone, and the like.
For clarity's sake, the present discussion continues to call the image-capture device104a “camera.” FIG.2presents methods for specific techniques that enhance still-image capture. In step200, the camera104captures a number of still images. Consider, for example, the photographer102putting the camera104into “viewfinder” mode. In this mode, the camera's viewfinder112displays the image “seen” by the camera104. The photographer102may explicitly command the camera104to enter this mode, or the camera104can automatically enter this mode when it determines that this mode is desired (e.g., by monitoring the camera's current position and observing the behavior of the photographer102). In any case, the camera104automatically (that is, while still in viewfinder mode and not in response to an explicit command from the photographer102) captures a number of still images, e.g., five per second over a period of a couple of seconds. These captured still images are stored by the camera104. In taking so many images, memory storage often becomes an issue. In some embodiments, the images are stored in a circular buffer (optional step202) holding, say, ten seconds of still images. Because the capacity of the circular buffer is finite, the buffer may be continuously refreshed with the latest image replacing the earliest one in the buffer. Thus, the buffer stores a number of captured still images ranging in time from the newest image back to the oldest, the number of images in the buffer depending upon the size of the buffer. In some embodiments, the selection process (see the discussion of step208below) is performed continuously on the set of images contained in the circular buffer. Images that are not very good (as judged by the techniques discussed below) are discarded, further freeing up space in the circular buffer and leaving only the “best” images captured over the past, say, three seconds. Even in this case, the metadata associated with discarded images are kept for evaluation. Note that the capture rate of images in step200may be configurable by the photographer102or may depend upon an analysis of the photographer's previous behavior or even upon an analysis of the captured images themselves. If, for example, a comparison of one image to another indicates a significant amount of movement in the captured scene, then maybe the camera104is focused on a sporting event, and it should increase its capture rate. The capture rate could also depend upon the resources available to the camera104. Thus, if the camera's battery is running low, then it may reduce the capture rate to conserve energy. In extreme cases, the technique of automatic capture can be turned off when resources are scarce. At step204(generally while the camera104continues to automatically capture still images), the photographer102gives a capture command to the camera104. As mentioned above, this can result from the photographer102pressing a shutter button on the camera104. (In general, the capture command can be a command to capture one still image or a command to capture a video.) (For purposes of the present discussion, when the camera104receives the capture command, it exits the viewfinder mode temporarily and enters the “capture” mode. Once the requested still image (or video as discussed below) is captured, the camera104generally re-enters viewfinder mode and continues to automatically capture images per step200.) 
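As a minimal sketch of the circular buffer of step 202 described above, the following uses a fixed-capacity deque so that the newest automatically captured image silently replaces the oldest; the capture rate and buffer duration are the example values from the text (five frames per second, ten seconds), not prescribed parameters.

```python
from collections import deque


class StillImageRingBuffer:
    """Fixed-capacity buffer for automatically captured still images (sketch only)."""

    def __init__(self, capture_rate_fps: int = 5, seconds: int = 10):
        # deque with maxlen implements the circular behaviour: appending to a
        # full buffer drops the oldest entry, mirroring the continuous refresh
        # described above.
        self.frames = deque(maxlen=capture_rate_fps * seconds)

    def add(self, image, metadata):
        """Store an image together with the metadata used later for selection."""
        self.frames.append((image, metadata))
```

In such a design, the selection process described below can run continuously over the buffer contents, and discarded images free space while their metadata are retained separately for evaluation.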
Unlike in the technique of step 200, traditional cameras stay in the viewfinder mode without capturing images until they receive a capture command. They then capture the current image and store it. A camera 104 acting according to the present techniques, however, is already capturing and storing images (steps 200 and 202) even while it is still in the viewfinder mode. One way of thinking about the present techniques is to consider the capture command of step 204 not to be a command at all but rather to be an indication given by the photographer 102 to the camera 104 that the photographer 102 is interested in something that he is seeing in the viewfinder display 112. The camera 104 then acts accordingly (that is, it acts according to the remainder of the flowchart of FIG. 2). Step 206 is discussed below in conjunction with the discussion of step 214. In step 208, the camera 104 reviews the images it has captured (which may include images captured shortly before or shortly after the capture command is received) and selects a "best" one (or a "best" several in some embodiments). (In some embodiments, this selection process is performed on partially processed, or "raw," images.) Many different factors can be reviewed during this analysis. As mentioned above, the capture command can be considered to be an indication that the photographer 102 is interested in what he sees. Thus, a very short time interval between the capture command and the time that a particular image was captured means that that particular image is likely to be of something that the photographer 102 wants to record, and, thus, this time interval is a factor in determining which image is "best." Various embodiments use various sets of information in deciding which of the captured images is "best." In addition to temporal proximity to the photographer's capture command, some embodiments use motion-sensor data (from an accelerometer, gyroscope, orientation, or GPS receiver on the camera 104) (e.g., was the camera 104 moving when this image was captured?), face-detection information (face detection, position, smile and blink detection) (i.e., easy-to-detect faces often make for good snapshots), pixel-frame statistics (e.g., statistics of luminance: gradient mean, image-to-image difference), activity detection, data from other sensors on the camera 104, and scene analysis. Further information, sometimes available, can include a stated preference of the photographer 102, past behavior of the photographer 102 (e.g., this photographer 102 tends to keep pictures with prominent facial images), and a privacy setting (e.g., do not keep pictures with a prominent face of a person who is not in a list of contacts for the camera 104). Also often available are camera 104 metadata and camera-status information. All such data can be produced in the camera 104 and stored as metadata associated with the captured images. These metadata may also include reduced-resolution versions of the captured images which can be used for motion detection within the captured scene. Motion detection provides information which is used for "best" picture selection (and analysis of captured video, see discussion below), as well as other features which improve the image-capture experience. The statistics and motion-detection results can also be used by an exposure procedure to improve captured-image quality in low light by, for example, changing exposure parameters and flash lighting.
When there is motion in low light and strobe lighting is available from the camera 104, the strobe may be controlled such that multiple images can be captured with correct exposures and then analyzed to select the best exposure. However the "best" captured image is selected, that best image is presented to the photographer 102 in step 210. There are several possible ways of doing this. Many embodiments are intended to be completely "transparent" from the photographer's perspective, that is, the photographer 102 simply "snaps" the shutter and is presented with the selected best image, whether or not that is actually the image captured at the time of the shutter command. Consider again the situation of FIG. 1A. When the photographer 102 presses the shutter button (step 204), the viewfinder display 112 is as shown in FIG. 1A. Clearly, the photographer 102 wants a picture of the face of his friend 108. The system can review the captured images from, say, a second before to a second after the capture command is received, analyze them, and then select the best one. Here, that would be an image that is in focus, in which the friend 108 is looking at the camera 104, has her eyes open, etc. That best image is presented to the photographer 102 when he presses the shutter button even if the image captured at the exact time of the shutter press is not as good. A slightly more complicated user interface presents the photographer 102 with the image captured when the shutter command was received (as is traditional) and then, if that image is not the best available, presents the photographer 102 with an indication (114 in FIG. 1A) that a "better" image is available for the photographer's consideration. Again, considering the situation of FIG. 1A, maybe his friend 108 blinks at the time of the capture command. That "blinking" image is presented to the photographer 102, but the indication 114 is lit to show that other, possibly better, images are available for the photographer's review. Other variations on the user interface are possible. The choice of which to use in a given situation can be based on settings made by the photographer 102, on an analysis of the photographer's past behavior (e.g., is he a "snapshot tourist," or does he act more like an experienced photographer?), and on analysis of the captured scene. In optional step 212, the selected image is further processed, if necessary, and copied to a more permanent storage area. In some embodiments, the metadata associated with the captured images (possibly including what the photographer 102 eventually does with the images) are sent (step 214) to a remote server device (118 of FIG. 1B). The work of the remote server 118 is discussed in greater detail below with reference to FIG. 5, but briefly, the remote server 118 analyzes the information, potentially from multiple image-capture devices 104, looking for trends and for "best practices." It then encapsulates what it has learned and sends recommendations to cameras 104 (step 206). The cameras 104 are free to use these recommendations when they select images in step 208. FIG. 3 presents other methods for enhancing image capture, this time for video images. The method of FIG. 3 can be performed separately from, or in conjunction with, the methods of FIG. 2. In step 300, the camera 104 captures video while the camera 104 is in viewfinder mode (that is, as described above, while the camera 104 has not received an explicit command to capture video).
As with still-image capture, parameters of the video capture can be altered to reflect the resources (e.g., battery, memory storage) available on the camera104. In some embodiments, the captured video is, at this point, simply a time sequence of “raw,” unprocessed images. (These raw images can be further processed as necessary later: See the discussion of step312below.) The storage issues mentioned above for still images are exacerbated for video, so, again, a circular buffer is recommended for storing the video as it is captured (step302). The latest video images (also called “frames”) replace the oldest ones so that at any time, the circular buffer has, for example, the last twenty seconds of captured video. Optionally, a capture command is received in step304. As discussed above, this is not treated as an actual command, but rather as an indication given by the photographer102to the camera104that the photographer102is interested in something that he is seeing in the viewfinder display112. Whether a capture command has been received or not, the captured video is continuously analyzed (step308) to see if it is “interesting.” While the photographer102can indicate his interest by pressing the shutter, other information can be used in addition to (or instead of) that, such as activity detection, intra-frame and inter-frame motion, and face detection. For example, a sudden surge of activity combined with a clearly recognizable face may indicate an interesting situation. As with still-image capture, photographer102preferences, past behavior, and privacy settings can also be used in a machine-learning sense to know what this photographer102finds interesting. If a segment (also called a “clip”) of captured video has been found to be potentially interesting (e.g., if an “interest score” for a video clip is above a set threshold), then the photographer102is notified of this in step308. The photographer102may then review the indicated video clip to see if he too finds it to be of interest. If so, then the video clip is further processed as necessary (e.g., by applying video-compression techniques) and copied into longer-term storage (step312). As a refinement, the limits of the interesting video clip can be determined using the same analysis techniques described above along with applying motion-sensor data. For example, the starting point of the clip can be set shortly before something interesting begins to occur. Also, as with the still-image embodiments, metadata can be sent to the remote server118(step314). Recommendations and refined operational parameters, based on analysis performed by the remote server118, can be received (step306) and used in the analysis of step308. Note that from the description above, in some embodiments and in some situations, the camera104captures and presents video without ever leaving the viewfinder mode. That is, the camera104views the scene, delimits video clips of interest, and notifies the photographer102of these video clips without ever receiving any explicit command to do so. In other embodiments, these video-capture and analysis techniques can be explicitly invoked or disabled by the photographer102. As mentioned above in the introduction to the discussion ofFIG.3, the still-image capture-enhancement techniques ofFIG.2can be combined with the video-image capture-enhancement techniques ofFIG.3.FIG.4presents such a combination with some interesting refinements. Consider once again the scenario ofFIG.1A. 
The camera104is in viewfinder mode, capturing both still images (step400, as per step200ofFIG.2) and video (step408, as in step300ofFIG.3). In the proper circumstances, the system presents both the best captured still image (step406) and interesting video (step410) for the photographer's consideration (possibly using the time of the capture command of step402to select and analyze the captured images and frames). Even though still images and video frames can be captured at the same time, the refinement ofFIG.4applies image-stabilization techniques to the captured video but not to the captured still images (step412). This provides both better video and better stills than would any known “compromise” system that does the same processing for both stills and video. In another refinement, the selection of the best still image (step406) can depend, in part, on the analysis of the video (step410) and vice versa. Consider a high-motion sports scene. The most important scenes may be best determined from analyzing the video because that will best show the action. From this, the time of the most interesting moment is determined. That determination may alter the selection process of the best still image. Thus, a still image taken at the moment when a player kicks the winning goal may be selected as the best image, even though other factors may have to be compromised (e.g. the player's face is not clearly visible in that image). Going in the other direction, a video clip may be determined to be interesting simply because it contains an excellent view of a person's face even though that person is not doing anything extraordinary during the video. Specifically, all of the metadata used in still-image selection can be used in combination with all of the metadata used in video analysis and delimitation. The combined metadata set can then be used to both select the best still image and to determine whether or not a video clip is interesting. The methods ofFIG.4can also include refinements in the use of the remote server118(steps404and414). These refinements are discussed below in reference toFIG.5. Methods of operation of the remote server118are illustrated inFIG.5. As discussed above, the server118receives metadata associated with still-image selection (step500; see also step214ofFIG.2and step414ofFIG.4). The same server118may also receive metadata associated with analyzing videos to see if they are interesting (step504; see also step314ofFIG.3and step414ofFIG.4). The server118can analyze these two data sets separately (step508) and provide still-image selection recommendations (step510) and video-analysis recommendations (step510) to various image-capture devices104. In some embodiments, however, the remote server118can do more. First, in addition to analyzing metadata, it can further analyze the data themselves (that is, the actual captured still images and video) if that content is made available to it by the image-capture devices104(steps502and506). With the metadata and the captured content, the server118can perform the same kind of selection and analysis performed locally by the image-capture devices104themselves (see step208ofFIG.2; steps308and310ofFIG.3; and steps406and410ofFIG.4). Rather than simply providing a means for second-guessing the local devices104, the server118can compare its own selections and interest scores against those locally generated and thus refine its own techniques to better match those in the general population of image-capture devices104. 
Further, the image-capture device104can tell the remote server118just what the photographer102did with the selected still images and the video clips thought to be interesting (steps502and506). Again, the server118can use this to further improve its recommendation models. If, for example, photographers102very often discard those still images selected as best by the techniques described above, then it is clear that those techniques may need to be improved. The server118may be able to compare an image actually kept by the photographer102against the image selected by the system and, by analyzing over a large population set, learn better how to select the “best” image. Going still further, the remote server118can analyze the still-image-selection metadata (and, if available, the still images themselves and the photographer's ultimate disposition of the still images) together with the video-analysis metadata (and, if available, the video clips themselves and the photographer's ultimate disposition of the captured video). This is similar to the cross-pollination concept discussed above with respect toFIG.4: That is, by combining the analysis of still images and video, the server118can further improve its recommendations for both selecting still images and for analyzing video clips. The particular methodologies usable here are well known from the arts of pattern analysis and machine learning. In sum, if the remote server118is given access to information about the selections and analyses of multiple image-capture devices104, then from working with that information, the server118can provide better recommendations, either generically or tailored to particular photographers102and situations. FIG.6presents methods for a user interface applicable to the presently discussed techniques. Much of the user-interface functionality has already been discussed above, so only a few points are discussed in any detail here. In step600, the camera104optionally enters the viewfinder mode wherein the camera104displays what it sees in the viewfinder display112. As mentioned above with reference toFIG.2, the photographer102may explicitly command the camera104to enter this mode, or the camera104can automatically enter this mode when it determines that this mode is desired. In a first embodiment of step602, the photographer102presses the shutter button (that is, submits an image-capture command to the camera104), the camera104momentarily enters the image-capture mode, displays a captured image in the viewfinder display112, and then re-enters viewfinder mode. In a second embodiment, the photographer puts the camera104into another mode (e.g., a “gallery” mode) where it displays already captured images, including images automatically captured. As discussed above, the displayed image can either be one captured directly in response to an image-capture command or could be a “better” image as selected by the techniques discussed above. If there is a captured image that is better than the one displayed, then the photographer102is notified of this (step604). The notification can be visual (e.g., by the icon114ofFIG.1A), aural, or even haptic. In some cases, the notification is a small version of the better image itself. If the photographer102clicks on the small version, then the full image is presented in the viewfinder display112for his consideration. 
While the camera104is in gallery mode, the photographer102can be notified of which images are “better” by highlighting them in some way, for example by surrounding them with a distinctive border or showing them first. Meanwhile, a different user notification can be posted if the techniques above capture a video clip deemed to be interesting. Again, several types of notification are possible, including a small still from the video (or even a presentation of the video itself). Other user interfaces are possible. While the techniques described above for selecting a still image and for analyzing a video clip are quite sophisticated, they allow for a very simple user interface, in some cases an interface completely transparent to the photographer102(e.g., just show the best captured still image when the photographer102presses the shutter button). More sophisticated user interfaces are appropriate for more sophisticated photographers102. FIG.7presents a refinement that can be used with any of the techniques described above. A first image (a still or a frame of a video) is captured in step700. Optionally, additional images are captured in step702. In step704, the first image is analyzed (e.g., looking for horizontal or vertical lines). Also, motion-sensor data from the camera104are analyzed to try to determine the horizon in the first image. Once the horizon has been detected, it can be used as input when selecting other images captured close in time to the first image. For example, the detected horizon can tell how level the camera104was held when an image was captured, and that can be a factor in determining whether that image is better than another. Also, the detected horizon can be used when post-processing images to rotate them into level or to otherwise adjust them for involuntary rotation. FIG.8shows the major components of a representative camera104or server118. The camera104could be, for example, a smartphone, tablet, personal computer, electronic book, or dedicated camera. The server118could be a personal computer, a compute server, or a coordinated group of compute servers. The central processing unit (“CPU”)800of the camera104or server118includes one or more processors (i.e., any of microprocessors, controllers, and the like) or a processor and memory system which processes computer-executable instructions to control the operation of the device104,118. In particular, the CPU800supports aspects of the present disclosure as illustrated inFIGS.1through7, discussed above. The device104,118can be implemented with a combination of software, hardware, firmware, and fixed-logic circuitry implemented in connection with processing and control circuits, generally identified at802. Although not shown, the device104,118can include a system bus or data-transfer system that couples the various components within the device104,118. A system bus can include any combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and a processor or local bus that utilizes any of a variety of bus architectures. The camera104or server118also includes one or more memory devices804that enable data storage (including the circular buffers described in reference toFIGS.2through4), examples of which include random-access memory, non-volatile memory (e.g., read-only memory, flash memory, erasable programmable read-only memory, and electrically erasable programmable read-only memory), and a disk storage device. 
A disk storage device may be implemented as any type of magnetic or optical storage device, such as a hard disk drive, a recordable or rewriteable disc, any type of a digital versatile disc, and the like. The device104,118may also include a mass-storage media device. The memory system804provides data-storage mechanisms to store device data812, other types of information and data, and various device applications810. An operating system806can be maintained as software instructions within the memory804and executed by the CPU800. The device applications810may also include a device manager, such as any form of a control application or software application. The utilities808may include a signal-processing and control module, code that is native to a particular component of the camera104or server118, a hardware-abstraction layer for a particular component, and so on. The camera104or server118can also include an audio-processing system814that processes audio data and controls an audio system816(which may include, for example, speakers). A visual-processing system818processes graphics commands and visual data and controls a display system820that can include, for example, a display screen112. The audio system816and the display system820may include any devices that process, display, or otherwise render audio, video, display, or image data. Display data and audio signals can be communicated to an audio component or to a display component via a radio-frequency link, S-video link, High-Definition Multimedia Interface, composite-video link, component-video link, Digital Video Interface, analog audio connection, or other similar communication link, represented by the media-data ports822. In some implementations, the audio system816and the display system820are components external to the device104,118. Alternatively (e.g., in a cellular telephone), these systems816,820are integrated components of the device104,118. The camera104or server118can include a communications interface which includes communication transceivers824that enable wired or wireless communication. Example transceivers824include Wireless Personal Area Network radios compliant with various Institute of Electrical and Electronics Engineers (“IEEE”) 802.15 standards, Wireless Local Area Network radios compliant with any of the various IEEE 802.11 standards, Wireless Wide Area Network cellular radios compliant with 3rd Generation Partnership Project standards, Wireless Metropolitan Area Network radios compliant with various IEEE 802.16 standards, and wired Local Area Network Ethernet transceivers. The camera104or server118may also include one or more data-input ports826via which any type of data, media content, or inputs can be received, such as user-selectable inputs (e.g., from a keyboard, from a touch-sensitive input screen, or from another user-input device), messages, music, television content, recorded video content, and any other type of audio, video, or image data received from any content or data source. The data-input ports826may include Universal Serial Bus ports, coaxial-cable ports, and other serial or parallel connectors (including internal connectors) for flash memory, storage disks, and the like. These data-input ports826may be used to couple the device104,118to components, peripherals, or accessories such as microphones and cameras. Finally, the camera104or server118may include any number of “other sensors”828. These sensors828can include, for example, accelerometers, a GPS receiver, compass, magnetic-field sensor, and the like. 
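As an illustration of how motion-sensor data such as the accelerometer readings just listed can support the horizon-detection refinement of FIG. 7 described earlier, here is a minimal sketch that estimates the camera's roll angle from the measured gravity components; the axis convention and the assumption that the camera is held roughly upright are simplifications for illustration only.

```python
import math


def roll_angle_from_accelerometer(ax: float, ay: float) -> float:
    """Estimate the camera's roll (tilt of the horizon) in degrees.

    With the camera roughly upright, the direction of the gravity vector
    projected onto the sensor's x-y plane indicates how far the captured
    horizon deviates from level. The result can be used as one factor when
    scoring images for levelness, or to rotate an image back to level in
    post-processing, as described with reference to FIG. 7.
    """
    return math.degrees(math.atan2(ax, ay))
```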
The remainder of this discussion presents details of choices and procedures that can be used in certain implementations. Although quite specific, these details are given so that the reader can more fully understand the broad concepts discussed above. These implementation choices are not intended to limit the scope of the claimed invention in any way. Many techniques can be used to evaluate still images in order to select the "best" one (step208ofFIG.2). For images that contain faces, one embodiment calculates an image score based on sharpness and exposure and calculates a separate score for facial features. First, facial-recognition techniques are applied to the captured images to see if many of them contain faces. If so, then the scene being captured is evaluated as a "face" scene. If the scene is not a face scene, then the sharpness/exposure score is used by itself to select the best image. For a face scene, on the other hand, if the images available for evaluation (that is, the set of all captured images that are reasonably close in time to the capture command) have very similar sharpness/exposure scores (e.g., the scores are equal within a similarity threshold which can be specific to the hardware used), then the best image is selected based purely on the face score. For a face scene in which the set of images has significant differences in sharpness/exposure scores, the best image is the one that has the highest combination score based on both the sharpness/exposure score and the face score. The combination score may be a sum or weighted sum of the two scores:
picturescore(i) = mFEscore(i) + totalfaces(i)
The sharpness/exposure score can be calculated using the mean of the Sobel gradient measure for all pixels in the image and the mean pixel difference between the image being analyzed and the immediately preceding image. Luminance-only data are used in these calculations. The frame-gradient metric and frame-difference metric are calculated as:
mSobel = \frac{1}{WH} \sum_{i=1}^{W} \sum_{j=1}^{H} 0.5 \cdot \left( \lvert Sobel_x(i,j) \rvert + \lvert Sobel_y(i,j) \rvert \right)
mDiff = \frac{1}{WH} \sum_{i=1}^{W} \sum_{j=1}^{H} \lvert Y_t(i,j) - Y_{t-1}(i,j) \rvert
where:
W = image width;
H = image height;
Sobel_x = the result of convolution of the image with the Sobel Gx operator:
Gx = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}
Sobel_y = the result of convolution of the image with the Sobel Gy operator:
Gy = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}
The sharpness/exposure score is calculated for each image (i) in the circular image buffer of N images around the capture moment using the Sobel value and its minimum:
mFEscore(i) = (mSobel(i) - \min_N(mSobel) + 1) \cdot \left( 1 - \frac{mDiff(i)}{200} \right)
The mFEscore is set to 0 for any image if the mean of all pixel values in the image is not within a normal exposure range or if the focus state indicates that the image is out-of-focus. The sharpness/exposure score values for the set of available images are then normalized to a range of, say, 0 to 100 to be used in conjunction with face scores when a face scene is detected. The face score is calculated for the images when at least one face is detected. For each face, the score consists of a weighted sum of a detected-smile score, an open-eyes score, and a face-orientation score. For example:
Smile: Values range from 1 to 100, with large values for a wide smile and small values for no smile.
Eyes Open: Values range from 1 to 100, with small values for wide-open eyes and large values for closed eyes (e.g., a blink). Values are provided for each eye separately. A separate blink detector may also be used.
Face Orientation (Gaze): An angle from 0 for a frontal look to +/−45 for a sideways look.
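One way to realize the sharpness/exposure computation above is sketched below in Python using NumPy and OpenCV. The buffer handling, the exposure limits, and the final 0-to-100 normalization are assumptions introduced for the example (the focus-state check is omitted), so this is an illustrative sketch rather than a reference implementation.

```python
import cv2
import numpy as np

def sharpness_exposure_scores(luma_frames, exposure_lo=16, exposure_hi=240):
    """Compute mFEscore for each luminance frame in a capture buffer.

    luma_frames: list of 2-D uint8 arrays (luminance only), ordered in time.
    Returns an array of scores normalized to the range 0..100.
    """
    m_sobel, m_diff = [], []
    for idx, y in enumerate(luma_frames):
        gx = cv2.Sobel(y, cv2.CV_64F, 1, 0, ksize=3)   # convolution with Gx
        gy = cv2.Sobel(y, cv2.CV_64F, 0, 1, ksize=3)   # convolution with Gy
        m_sobel.append(np.mean(0.5 * (np.abs(gx) + np.abs(gy))))
        # First frame has no predecessor; treat it as unchanged.
        prev = luma_frames[idx - 1] if idx > 0 else y
        m_diff.append(np.mean(np.abs(y.astype(np.int16) - prev.astype(np.int16))))

    m_sobel = np.array(m_sobel)
    m_diff = np.array(m_diff)
    scores = (m_sobel - m_sobel.min() + 1.0) * (1.0 - m_diff / 200.0)

    # Zero the score for badly exposed frames (assumed exposure limits).
    means = np.array([f.mean() for f in luma_frames])
    scores[(means < exposure_lo) | (means > exposure_hi)] = 0.0

    # Normalize to 0..100 so the result can be combined with face scores.
    if scores.max() > 0:
        scores = 100.0 * scores / scores.max()
    return scores
```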
The procedure uses face-detection-engine values and creates normalized scores for each of the face parameters as follows:
Smile Score: Use the smile value from the engine; then normalize to a 1 to 100 range for the set of N available images as follows:
smile(i) = \frac{smile(i) - \min_N(smile)}{\max_N(smile) - \min_N(smile)}
Eyes-Open Score: Detect the presence of a blink or half-opened eyes using the blink detector and a change-of-eyes parameter between consecutive frames; score 0 for images when a blink or half-open eye is detected. For the rest of the images, a score is calculated using the average of the values for both eyes and normalizing to the range in a manner similar to that described for a smile. The maximum score is obtained when the eyes are widest open over the N images in the analysis.
Face-Orientation Score (Gaze): Use a maximum score for a frontal gaze and reduce the score when the face is looking sideways.
For each face in the image, a face score is calculated as a weighted sum:
facescore = \alpha \cdot smile + \beta \cdot eyes + \pi \cdot gaze
If there is more than one face in an image, then an average or weighted average of all face scores can be used to calculate the total face score for that image. The weights used to calculate the total face score could correlate to the face size, such that larger faces have higher score contributions to the total face score. In another embodiment, weights correlate with face priority determined through position or by some face-recognition engine. For an image (i) with M faces, the total face score then may be calculated as:
totalfaces(i) = \frac{\sum_{j=1}^{M} w_j \cdot facescore(j)}{\sum_{j=1}^{M} w_j}
As discussed above, the face score can then be combined (as appropriate) with the sharpness/exposure score, and the image with the highest score is selected as the "best" image. As a refinement, in some embodiments, the selected image is then compared against the "captured" image (that is, the image captured closest in time to the time of the capture command). If these images are too similar, then only the captured image is presented to the user. This consideration is generally applicable because studies have shown that photographers do not prefer the selected "best" image when its differences from the captured image are quite small. As with selecting a "best" image, many techniques can be applied to determine whether or not a captured video is "interesting." Generally, the video-analysis procedure runs in real time, constantly marking video frames as interesting or not. Also, the video analysis determines where the interesting video clip begins and ends. Some metrics useful in video analysis include region of interest, motion vectors ("MVs"), device motion, face information, and frame statistics. These metrics are calculated per frame and associated with the frame. In some embodiments, a device-motion detection procedure combines data from a gyroscope, accelerometer, and magnetometer to calculate device movement and device position, possibly using a complementary filter or Kalman filter. The results are categorized as follows:
NO_MOTION means that the device is either not moving or is experiencing only a small level of handshake;
INTENTIONAL_MOTION means that the device has been intentionally moved (e.g., the photographer is panning); and
UNINTENTIONAL_MOTION means that the device has experienced large motion that was not intended as input to the image-capture system (e.g., the device was dropped, pulled out of a pocket, etc.).
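The per-face weighted sum and the size-weighted total face score described above can be illustrated with the short Python sketch below. The weight values, the 1-to-100 scaling, and the use of face area as the per-face weight are assumptions made for the example; they stand in for the α, β, and π weights and the w_j factors in the formulas above.

```python
import numpy as np

def normalize_1_100(values):
    """Map raw engine values onto a 1..100 range across the N buffered images."""
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    if span == 0:
        return np.full_like(values, 100.0)
    return 1.0 + 99.0 * (values - values.min()) / span

def face_score(smile, eyes, gaze, w_smile=0.5, w_eyes=0.3, w_gaze=0.2):
    """Weighted sum of the normalized smile, eyes-open, and gaze scores for one face."""
    return w_smile * smile + w_eyes * eyes + w_gaze * gaze

def total_face_score(face_scores, face_areas):
    """Size-weighted average of the per-face scores in one image."""
    w = np.asarray(face_areas, dtype=float)
    s = np.asarray(face_scores, dtype=float)
    return float((w * s).sum() / w.sum())

def picture_score(mfe_score, total_faces):
    """Combination score used to rank images in a face scene."""
    return mfe_score + total_faces
```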
By comparing consecutive values of the calculated position, the device's motion in three spatial axes is characterized:
if (delta position of all 3 axes < NO_MOVEMENT_THRESHOLD)
    device motion state = NO_MOTION
if (delta position of one axis < INTENTIONAL_MOTION_THRESHOLD && delta position of other two axes < NO_MOVEMENT_THRESHOLD && occurs over a sequence of frames)
    device motion state = INTENTIONAL_MOTION
if (delta position of any axis > UNINTENTIONAL_MOTION_THRESHOLD)
    device motion state = UNINTENTIONAL_MOTION
The device-motion state is then stored in association with the image. Motion estimation finds movement within a frame (intra-frame) as opposed to finding movement between frames (inter-frame). A block-based motion-estimation scheme uses a sum of absolute differences ("SAD") as the primary cost metric. Other embodiments may use object tracking. Generic motion-estimation equations include:
SAD(i,j) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \lvert s(x,y,l) - s(x+i,\, y+j,\, k-l) \rvert, \quad 0 \le x, y \le N-1
[v_x, v_y] = \arg\min_{i,j} \, SAD(i,j)
where:
s(x, y, l) is a function specifying pixel location;
(l) = candidate frame;
(k) = reference frame; and
(v_x, v_y) is the motion-vector displacement with respect to (i, j).
The motion-estimation procedure compares each N×N candidate block against a reference frame in the past and calculates the pixel displacement of the candidate block. At each displacement position, SAD is calculated. The position that produces the minimum SAD value represents the position with the lowest distortion (based on the SAD cost metric). Once the raw vectors are calculated for each N×N block, the vectors are filtered to obtain the intra-frame motion. In one exemplary method:
Motion is estimated with predicted motion vectors;
The median filter is applied to the motion vectors;
Motion vectors are additionally filtered for the following reasons:
    ||MV|| > a static-motion threshold; or
    ||MV|| > a dynamic-motion threshold; or
    Collocated zero SAD > mean zero SAD (of all blocks); or
    Block SAD < a large-SAD threshold; or
    Luma variance > a low-block-activity threshold;
Create a mask region (e.g., inscribe a maximal regular diamond in the rectangular frame and then inscribe a maximal regular rectangle (the "inner rectangle") in the diamond); and
Calculate:
    Diamond_Count = num(MV in the diamond region) / num(MV in the frame); and
    Inner_Rectangle_Count = num(MV in the inner rectangle) / num(MV in the diamond region).
Each frame of video is characterized as "interesting" (or not) based on metrics such as internal movement in the frame, luma-exposure values, device motion, Sobel-gradient scores, and face motion. These metrics are weighted to account for the priority of each metric.
Internal Frame Motion: Calculated from the Diamond_Count and Inner_Rectangle_Count ratios;
Luma Exposure: Calculated from pixel data and weighted less for over- or under-exposed images;
Sobel-Gradient Scores: Calculated from pixel data and weighted less for Sobel scores that are far from the temporal average of Sobel scores for each frame;
Device Motion: Uses device-motion states and weighted less for UNINTENTIONAL_MOTION; and
Face Motion: Motion vectors are calculated from detected positions for each face and weighted less for larger motion vectors for each face.
Putting these together:
motion_frame_score = \sum_{i=0}^{N} w(i) \cdot metric(i)
If the motion frame score exceeds a threshold, then the frame is included in a "sequence calculation." This sequence calculation sums up the number of frames that have interesting information and compares that to a sequence-score threshold.
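The per-frame weighting and sequence check just described might be structured as in the following Python sketch. The metric names, weights, and thresholds are placeholders chosen for illustration; only the overall structure, a weighted per-frame score followed by a count of interesting frames compared against a sequence threshold, follows the description above.

```python
def motion_frame_score(metrics, weights):
    """Weighted sum of per-frame metrics, each assumed already normalized to 0..1."""
    return sum(weights[name] * value for name, value in metrics.items())

def interesting_clip(frame_metrics, weights,
                     frame_threshold=0.5, sequence_threshold=30):
    """Return True if enough frames in the buffer score as 'interesting'.

    frame_metrics: list of dicts, one per frame, e.g.
        {"internal_motion": 0.8, "luma_exposure": 0.9,
         "sobel": 0.7, "device_motion": 1.0, "face_motion": 0.6}
    """
    interesting = [motion_frame_score(m, weights) > frame_threshold
                   for m in frame_metrics]
    return sum(interesting) > sequence_threshold
```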
If the sequence-score is greater than the threshold, then the scene is marked as an interesting video clip and is permanently stored (step312ofFIG.3). Before a video clip is stored, the start and stop points are calculated. Based on device motion, the first level of delimiters is applied. The procedure finds the segment in the video where the device was marked as NO_MOTION and marks the start and stop points. As a secondary check, the procedure also examines intra-frame motion in each frame and marks those sub-segments within the segment that have no camera motion to indicate when interesting motion occurred in the video. The first frame with interesting intra-frame motion is the new start of the video clip, and the last frame after capture in the video with interesting motion ends the video clip. In some embodiments, the clip is extended to capture a small amount of time before and after the interesting section. Horizon detection (seeFIG.7and accompanying text) processes image frames and sensor data to find the frame with the most level horizon. If none of the images contain a 0 degree (within a threshold) horizon line, then the image is rotated and cropped to create an image with a level horizon. Vertical lines can be used in detecting the horizon as well. In some embodiments, the following procedure is performed continuously, as each frame arrives. For each image:
Associate an angle position from the motion sensors with the image;
Apply a Gaussian blur filter followed by an edge-detection filter on the image (e.g., use a Canny detection filter);
Apply image processing to find lines in the image (e.g., use a Hough Line Transform). For each line found:
    Calculate the angle of the line with reference to the 0 degree orientation of the device (i.e., horizontal); and
    Keep lines that are:
        Within some angle threshold; and
        Within some length threshold;
Find the longest line (called the "maximal line"), the start and end positions of the maximal line, and the angle of the maximal line. (It is useful to store line information in polar and Cartesian coordinates and in a linear equation.)
At this point in the procedure, each image contains metadata corresponding to the following parameters: the length of the maximal line, the maximal line's angle with respect to the horizon, the linear equation of the maximal line, and the device orientation (i.e., the angle of the device with respect to the image plane derived from the motion sensors). For each series of images, remove from consideration those images where the absolute difference (device orientation angle minus angle of the maximal line) is greater than a threshold. This allows physical motion-sensor information to be used in conjunction with pixel information to determine the angle. Find the "region of interest" for each image. To do this, extend the maximal line in the image to two boundaries of the image. The region of interest is the smallest rectangle bounded on the right and left sides of the image that contains the maximal line. Next find the "reference region" by finding the area of greatest overlap among the regions of interest of the relevant images. This helps verify that each maximal line is actually the same horizon line but captured at different angles in the different images. Remove from consideration any images whose maximal lines fall outside of the reference region. Finally, for the relevant images, select that image whose maximal line in the reference region has an angle closest to the 0 degree orientation (that is, closest to horizontal).
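The per-image blur, edge-detection, and Hough-line step of the horizon procedure can be sketched as follows, assuming OpenCV and NumPy are available. The blur kernel, Canny thresholds, Hough parameters, and the angle and length thresholds are illustrative values, not values specified in the description.

```python
import cv2
import numpy as np

def maximal_line(gray, device_angle_deg,
                 angle_thresh=30.0, min_length=80, max_gap=10):
    """Return (length, angle, endpoints) of the longest near-horizontal line,
    or None if no suitable line is found.

    gray: 2-D uint8 luminance image.
    device_angle_deg: device orientation from the motion sensors.
    """
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=min_length, maxLineGap=max_gap)
    if lines is None:
        return None

    best = None
    for x1, y1, x2, y2 in lines[:, 0, :]:
        length = float(np.hypot(x2 - x1, y2 - y1))
        # Angle of the line relative to the horizontal, folded into [-90, 90] degrees.
        angle = float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle > 90.0:
            angle -= 180.0
        elif angle < -90.0:
            angle += 180.0
        # Keep only lines close enough to horizontal and long enough.
        if abs(angle) <= angle_thresh and length >= min_length:
            if best is None or length > best[0]:
                best = (length, angle, (x1, y1, x2, y2))

    # Discard the candidate if it disagrees too much with the sensor angle.
    if best is not None and abs(device_angle_deg - best[1]) > angle_thresh:
        return None
    return best
```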
Use that as the detected horizon. If necessary, that is, if the angle of the selected image is greater than some threshold, then rotate the image using the calculated angle and crop and upscale the image. In view of the many possible embodiments to which the principles of the present discussion may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the claims. Therefore, the techniques as described herein contemplate all such embodiments as may come within the scope of the following claims and equivalents thereof.
45,961
11943533
DETAILED DESCRIPTION An image capture apparatus or device may be designed to be operated using one or more lenses, wherein one or more of the lenses may be attachable, so that the respective lens is operable for image capture, and detachable, so that the image capture apparatus or device operates to capture one or more images using a lens other than the disconnected lens. The current operative lens may receive and direct light to the image sensor of the image capture apparatus. The portion or portions of the image sensor to which the received light is directed may depend on the operative lens. For example, a primary lens may be attached, or otherwise operatively connected, to the image capture apparatus, and the primary lens may direct light to the image sensor, including the respective portions of the image sensor. In another example, an alternate lens, which may be a fisheye lens, may be attached, or otherwise operatively connected, to the image capture apparatus, and the alternate lens may direct light to a circle at the center of the image sensor, such that relatively little, or no, light is directed to corner portions of the image sensor. The image capture apparatus may include one or more settings, or configurations, such as image capture settings, image processing settings, or both, that are used to control the capture, processing, or both, of images by the image capture apparatus. The settings may include one or more settings, or configurations, associated with one or more of the lenses. For example, image capture and processing settings may include a current lens mode, or a currently configured lens mode, which may have a first value associated with using the primary lens and a second value associated with using the alternate lens. Other lenses and settings may be used. In some implementations, the currently configured lens mode may indicate a mode, such as primary lens mode or alternate lens mode, that corresponds with the current operative lens, such that image capture and processing is performed efficiently and accurately. In some implementations, the currently configured lens mode may indicate a mode, such as primary lens mode or alternate lens mode, that conflicts with the current operative lens, such that image capture, processing, or both, such as for stabilization, auto-exposure, auto-white balance, or other image processing, are performed inefficiently, inaccurately, or both. Although described herein as inaccurate, in some implementations, a mismatch between lens and lens mode, such as using the alternate lens and the primary lens mode, may be used to obtain a distorted image. For example, the image data corresponding to one or more of the corner portions of an image captured using the primary lens may include values corresponding to receiving, or detecting, substantial amounts of light, and image processing in accordance with the primary image mode may utilize the image data corresponding to the corner portions of the image, corresponding to the corner portions of the image sensor, such that the image processing is efficient and accurate, whereas image processing of the image in accordance with the alternate image mode may omit utilizing the image data corresponding to the corner portions of the image, corresponding to the corner portions of the image sensor, such that the image processing is inefficient, inaccurate, or both. 
In another example, the image data corresponding to one or more of the corner portions of an image captured using the alternate lens, wherein the alternate lens is a fisheye lens, may include values corresponding to receiving, or detecting, substantially little, or no, light, and image processing in accordance with the primary image mode may utilize the image data corresponding to the corner portions of the image, corresponding to the corner portions of the image sensor, which may have values substantially close to black or zero light, such that the image processing is inefficient, inaccurate, or both, whereas image processing of the image in accordance with the alternate image mode may omit utilizing the image data corresponding to the corner portions of the image, corresponding to the corner portions of the image sensor, such that the image processing is efficient and accurate. In some implementations, express data indicating the connected, or attached, operative lens may be unavailable or inaccessible to the image capture apparatus, or a component thereof. For example, the current lens mode of the image capture apparatus may be a primary lens mode, a user of the image capture apparatus may operatively connect an alternate lens, such as a fisheye lens, and the user may omit setting, or configuring, the current lens mode to the alternate lens mode, such that the configured lens mode and the operatively connected lens are inconsistent, incompatible, or mismatched. In another example, the current lens mode of the image capture apparatus may be the alternate lens mode, a user of the image capture apparatus may operatively connect the primary lens, and the user may omit setting, or configuring, the current lens mode to the primary lens mode, such that the configured lens mode and the operatively connected lens are inconsistent, incompatible, or mismatched. To improve the accuracy, efficiency, or both, of the image capture apparatus, lens mode auto-detection may be performed. In the absence of data expressly indicating a mismatch between the lens and the lens mode, lens mode auto-detection may detect a mismatch between the lens and the lens mode using image analysis and may automatically output data, such as a user interface notification, indicating that the currently configured lens mode may be inconsistent with the currently operative lens. In some implementations, lens mode auto-detection may include automatically adjusting image processing to improve image quality for images captured by a lens that is inconsistent with the currently configured lens mode. FIGS.1A-Bare isometric views of an example of an image capture apparatus100. The image capture apparatus100includes a body102, an image capture device104, an indicator106, a display108, a mode button110, a shutter button112, a door114, a hinge mechanism116, a latch mechanism118, a seal120, a battery interface122, a data interface124, a battery receptacle126, microphones128,130,132, a speaker136, an interconnect mechanism138, and a display140. Although not expressly shown inFIG.1, the image capture apparatus100includes internal electronics, such as imaging electronics, power electronics, and the like, internal to the body102for capturing images and performing other functions of the image capture apparatus100. An example showing internal electronics is shown inFIG.3. 
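As a rough illustration of the kind of image analysis that lens mode auto-detection could apply, the Python sketch below compares corner luminance against center luminance to guess whether a fisheye-style (alternate) lens is mounted and flags a conflict with the configured mode. The patch size, the darkness ratio, and the mode names are assumptions made for this example, not values taken from this description.

```python
import numpy as np

PRIMARY, ALTERNATE = "primary", "alternate"   # placeholder names for the two lens modes

def guess_operative_lens(luma, patch=64, dark_ratio=0.15):
    """Guess which lens produced an image from its corner/center brightness.

    luma: 2-D array of luminance values.
    A fisheye-style (alternate) lens illuminates a central circle and leaves the
    sensor corners nearly black, so very dark corners suggest the alternate lens.
    """
    h, w = luma.shape
    corners = np.concatenate([
        luma[:patch, :patch].ravel(),        # top-left
        luma[:patch, -patch:].ravel(),       # top-right
        luma[-patch:, :patch].ravel(),       # bottom-left
        luma[-patch:, -patch:].ravel(),      # bottom-right
    ])
    center = luma[h // 2 - patch:h // 2 + patch, w // 2 - patch:w // 2 + patch]
    corner_mean = float(corners.mean())
    center_mean = float(center.mean()) + 1e-6
    return ALTERNATE if corner_mean / center_mean < dark_ratio else PRIMARY

def lens_mode_mismatch(luma, configured_mode):
    """Return True if the configured lens mode conflicts with the guessed lens."""
    return guess_operative_lens(luma) != configured_mode
```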
The arrangement of the components of the image capture apparatus100shown inFIGS.1A-Bis an example; other arrangements of elements may be used, except as is described herein or as is otherwise clear from context. The body102of the image capture apparatus100may be made of a rigid material such as plastic, aluminum, steel, or fiberglass. Other materials may be used. As shown inFIG.1A, the image capture apparatus100includes the image capture device104structured on a front surface of, and within, the body102. The image capture device104includes a lens. The lens of the image capture device104receives light incident upon the lens of the image capture device104and directs the received light onto an image sensor of the image capture device104internal to the body102. The image capture apparatus100may capture one or more images, such as a sequence of images, such as video. The image capture apparatus100may store the captured images and video for subsequent display, playback, or transfer to an external device. Although one image capture device104is shown inFIG.1A, the image capture apparatus100may include multiple image capture devices, which may be structured on respective surfaces of the body102. As shown inFIG.1A, the image capture apparatus100includes the indicator106structured on the front surface of the body102. The indicator106may output, or emit, visible light, such as to indicate a status of the image capture apparatus100. For example, the indicator106may be a light-emitting diode (LED). Although one indicator106is shown inFIG.1A, the image capture apparatus100may include multiple indicators structured on respective surfaces of the body102. As shown inFIG.1A, the image capture apparatus100includes the display108structured on the front surface of the body102. The display108outputs, such as presents or displays, such as by emitting visible light, information, such as to show image information such as image previews, live video capture, or status information such as battery life, camera mode, elapsed time, and the like. In some implementations, the display108may be an interactive display, which may receive, detect, or capture input, such as user input representing user interaction with the image capture apparatus100. Although one display108is shown inFIG.1A, the image capture apparatus100may include multiple displays, which may be structured on respective surfaces of the body102. In some implementations, the display108may be omitted or combined with another component of the image capture apparatus100. As shown inFIG.1B, the image capture apparatus100includes the mode button110structured on a side surface of the body102. Although described as a button, the mode button110may be another type of input device, such as a switch, a toggle, a slider, or a dial. Although one mode button110is shown inFIG.1B, the image capture apparatus100may include multiple mode, or configuration, buttons structured on respective surfaces of the body102. In some implementations, the mode button110may be omitted or combined with another component of the image capture apparatus100. For example, the display108may be an interactive, such as touchscreen, display, and the mode button110may be physically omitted and functionally combined with the display108. As shown inFIG.1A, the image capture apparatus100includes the shutter button112structured on a top surface of the body102. Although described as a button, the shutter button112may be another type of input device, such as a switch, a toggle, a slider, or a dial. 
Although one shutter button112is shown inFIG.1A, the image capture apparatus100may include multiple shutter buttons structured on respective surfaces of the body102. In some implementations, the shutter button112may be omitted or combined with another component of the image capture apparatus100. The mode button110, the shutter button112, or both, obtain input data, such as user input data in accordance with user interaction with the image capture apparatus100. For example, the mode button110, the shutter button112, or both, may be used to turn the image capture apparatus100on and off, scroll through modes and settings, and select modes and change settings. As shown inFIG.1A, the image capture apparatus100includes the door114coupled to the body102, such as using the hinge mechanism116. The door114may be secured to the body102using the latch mechanism118that releasably engages the body102at a position generally opposite the hinge mechanism116. As shown inFIG.1A, the door114includes the seal120and the battery interface122. Although one door114is shown inFIG.1A, the image capture apparatus100may include multiple doors respectively forming respective surfaces of the body102, or portions thereof. Although not shown inFIGS.1A-B, the door114may be removed from the body102by releasing the latch mechanism118from the body102and decoupling the hinge mechanism116from the body102. InFIG.1A, the door114is shown in an open position such that the data interface124is accessible for communicating with external devices and the battery receptacle126is accessible for placement or replacement of a battery (not shown). InFIG.1B, the door114is shown in a closed position. In implementations in which the door114is in the closed position, the seal120engages a flange (not shown) to provide an environmental seal. In implementations in which the door114is in the closed position, the battery interface122engages the battery to secure the battery in the battery receptacle126. As shown inFIG.1A, the image capture apparatus100includes the battery receptacle126structured to form a portion of an interior surface of the body102. The battery receptacle126includes operative connections (not shown) for power transfer between the battery and the image capture apparatus100. In some implementations, the battery receptacle126may be omitted. Although one battery receptacle126is shown inFIG.1A, the image capture apparatus100may include multiple battery receptacles. As shown inFIG.1A, the image capture apparatus100includes a first microphone128structured on a front surface of the body102. As shown inFIG.1A, the image capture apparatus100includes a second microphone130structured on a top surface of the body102. As shown inFIG.1B, the image capture apparatus100includes the drain microphone132structured on a side surface of the body102. The drain microphone132is a microphone located behind a drain cover, including a drainage channel134for draining liquid from audio components of the image capture apparatus100, including the drain microphone132. The image capture apparatus100may include other microphones (not shown) on other surfaces of the body102. The microphones128,130,132receive and record audio, such as in conjunction with capturing video or separate from capturing video. In some implementations, one or more of the microphones128,130,132may be omitted or combined with other components of the image capture apparatus100. As shown inFIG.1B, the image capture apparatus100includes the speaker136structured on a bottom surface of the body102. 
The speaker136outputs or presents audio, such as by playing back recorded audio or emitting sounds associated with notifications. Although one speaker136is shown inFIG.1B, the image capture apparatus100may include multiple speakers structured on respective surfaces of the body102. As shown inFIG.1B, the image capture apparatus100includes the interconnect mechanism138structured on a bottom surface of the body102. The interconnect mechanism138removably connects the image capture apparatus100to an external structure, such as a handle grip, another mount, or a securing device. As shown inFIG.1B, the interconnect mechanism138includes folding protrusions configured to move between a nested or collapsed position as shown inFIG.1Band an extended or open position (not shown inFIG.1B). The folding protrusions of the interconnect mechanism138shown in the collapsed position inFIG.1Bmay be similar to the folding protrusions of the interconnect mechanism214shown in the extended or open position inFIGS.2A-2B, except as is described herein or as is otherwise clear from context. The folding protrusions of the interconnect mechanism138in the extended or open position may be coupled to reciprocal protrusions of other devices such as handle grips, mounts, clips, or like devices. Although one interconnect mechanism138is shown inFIG.1B, the image capture apparatus100may include multiple interconnect mechanisms structured on, or forming a portion of, respective surfaces of the body102. In some implementations, the interconnect mechanism138may be omitted. As shown inFIG.1B, the image capture apparatus100includes the display140structured on, and forming a portion of, a rear surface of the body102. The display140outputs, such as presents or displays, such as by emitting visible light, data, such as to show image information such as image previews, live video capture, or status information such as battery life, camera mode, elapsed time, and the like. In some implementations, the display140may be an interactive display, which may receive, detect, or capture input, such as user input representing user interaction with the image capture apparatus100. Although one display140is shown inFIG.1B, the image capture apparatus100may include multiple displays structured on respective surfaces of the body102. In some implementations, the display140may be omitted or combined with another component of the image capture apparatus100. The image capture apparatus100may include features or components other than those described herein, such as other buttons or interface features. In some implementations, interchangeable lenses, cold shoes, and hot shoes, or a combination thereof, may be coupled to or combined with the image capture apparatus100. Although not shown inFIGS.1A-1B, the image capture apparatus100may communicate with an external device, such as an external user interface device (not shown), via a wired or wireless computing communication link, such as via the data interface124. The computing communication link may be a direct computing communication link or an indirect computing communication link, such as a link including another device or a network, such as the Internet. The image capture apparatus100may transmit images to the external device via the computing communication link. The external device may store, process, display, or a combination thereof, the images. 
The external user interface device may be a computing device, such as a smartphone, a tablet computer, a phablet, a smart watch, a portable computer, a personal computing device, or another device or combination of devices configured to receive user input, communicate information with the image capture apparatus100via the computing communication link, or receive user input and communicate information with the image capture apparatus100via the computing communication link. The external user interface device may implement or execute one or more applications to manage or control the image capture apparatus100. For example, the external user interface device may include an application for controlling camera configuration, video acquisition, video display, or any other configurable or controllable aspect of the image capture apparatus100. In some implementations, the external user interface device may generate and share, such as via a cloud-based or social media service, one or more images or video clips. In some implementations, the external user interface device may display unprocessed or minimally processed images or video captured by the image capture apparatus100contemporaneously with capturing the images or video by the image capture apparatus100, such as for shot framing or live preview. FIGS.2A-2Billustrate another example of an image capture apparatus200. The image capture apparatus200is similar to the image capture apparatus100shown inFIGS.1A-B, except as is described herein or as is otherwise clear from context. The image capture apparatus200includes a body202, a first image capture device204, a second image capture device206, indicators208, a mode button210, a shutter button212, an interconnect mechanism214, a drainage channel216, audio components218,220,222, a display224, and a door226including a release mechanism228. The arrangement of the components of the image capture apparatus200shown inFIGS.2A-2Bis an example; other arrangements of elements may be used, except as is described herein or as is otherwise clear from context. The body202of the image capture apparatus200may be similar to the body102shown inFIGS.1A-1B, except as is described herein or as is otherwise clear from context. As shown inFIG.2A, the image capture apparatus200includes the first image capture device204structured on a front surface of the body202. The first image capture device204includes a first lens. The first image capture device204may be similar to the image capture device104shown inFIG.1A, except as is described herein or as is otherwise clear from context. As shown inFIG.2B, the image capture apparatus200includes the second image capture device206structured on a rear surface of the body202. The second image capture device206includes a second lens. The second image capture device206may be similar to the image capture device104shown inFIG.1A, except as is described herein or as is otherwise clear from context. The image capture devices204,206are disposed on opposing surfaces of the body202, for example, in a back-to-back configuration, Janus configuration, or offset Janus configuration. Although two image capture devices204,206are shown inFIGS.2A-2B, the image capture apparatus200may include other image capture devices structured on respective surfaces of the body202. As shown inFIG.2A, the image capture apparatus200includes the indicators208structured on a top surface of the body202. 
The indicators208may be similar to the indicator106shown inFIG.1A, except as is described herein or as is otherwise clear from context. For example, one of the indicators208may indicate a status of the first image capture device204and another one of the indicators208may indicate a status of the second image capture device206. Although two indicators208are shown inFIGS.2A-2B, the image capture apparatus200may include other indicators structured on respective surfaces of the body202. As shown inFIGS.2A-B, the image capture apparatus200includes input mechanisms including a mode button210, structured on a side surface of the body202, and a shutter button212, structured on a top surface of the body202. The mode button210may be similar to the mode button110shown inFIG.1B, except as is described herein or as is otherwise clear from context. The shutter button212may be similar to the shutter button112shown inFIG.1A, except as is described herein or as is otherwise clear from context. The image capture apparatus200includes internal electronics (not expressly shown), such as imaging electronics, power electronics, and the like, internal to the body202for capturing images and performing other functions of the image capture apparatus200. An example showing internal electronics is shown inFIG.3. As shown inFIGS.2A-2B, the image capture apparatus200includes the interconnect mechanism214structured on a bottom surface of the body202. The interconnect mechanism214may be similar to the interconnect mechanism138shown inFIG.1B, except as is described herein or as is otherwise clear from context. For example, the interconnect mechanism138shown inFIG.1Bis shown in the nested or collapsed position and the interconnect mechanism214shown inFIGS.2A-2Bis shown in an extended or open position. As shown inFIG.2A, the image capture apparatus200includes the drainage channel216for draining liquid from audio components of the image capture apparatus200. As shown inFIGS.2A-2B, the image capture apparatus200includes the audio components218,220,222, respectively structured on respective surfaces of the body202. The audio components218,220,222may be similar to the microphones128,130,132and the speaker136shown inFIGS.1A-1B, except as is described herein or as is otherwise clear from context. One or more of the audio components218,220,222may be, or may include, audio sensors, such as microphones, to receive and record audio signals, such as voice commands or other audio, in conjunction with capturing images or video. One or more of the audio components218,220,222may be, or may include, an audio presentation component that may present, or play, audio, such as to provide notifications or alerts. As shown inFIG.2A, a first audio component218is located on a front surface of the body202. As shown inFIG.2B, a second audio component220is located on a side surface of the body202, and a third audio component222is located on a back surface of the body202. Other numbers and configurations for the audio components may be used. As shown inFIG.2A, the image capture apparatus200includes the display224structured on a front surface of the body202. The display224may be similar to the displays108,140shown inFIGS.1A-1B, except as is described herein or as is otherwise clear from context. The display224may include an I/O interface. The display224may receive touch inputs. The display224may display image information during video capture. 
The display224may provide status information to a user, such as status information indicating battery power level, memory card capacity, time elapsed for a recorded video, etc. Although one display224is shown inFIG.2A, the image capture apparatus200may include multiple displays structured on respective surfaces of the body202. In some implementations, the display224may be omitted or combined with another component of the image capture apparatus200. As shown inFIG.2A, the image capture apparatus200includes the door226structured on, or forming a portion of, the side surface of the body202. The door226may be similar to the door114shown inFIG.1A, except as is described herein or as is otherwise clear from context. For example, the door226shown inFIG.2Aincludes a release mechanism228. The release mechanism228may include a latch, a button, or another mechanism configured to receive a user input that allows the door226to change position. The release mechanism228may be used to open the door226for a user to access a battery, a battery receptacle, an I/O interface, a memory card interface, etc. (not shown). In some embodiments, the image capture apparatus200may include features or components other than those described herein, some features or components described herein may be omitted, or some features or components described herein may be combined. For example, the image capture apparatus200may include additional interfaces or different interface features, interchangeable lenses, cold shoes, or hot shoes. FIG.2Cis a top view of the image capture apparatus200ofFIGS.2A-2B. For simplicity, some features or components of the image capture apparatus200shown inFIGS.2A-2Bare omitted fromFIG.2C. As shown inFIG.2C, the first image capture device204includes a first lens230and the second image capture device206includes a second lens232. The image capture apparatus200captures spherical images. For example, the first image capture device204may capture a first image, such as a first hemispheric, or hyper-hemispherical, image, the second image capture device206may capture a second image, such as a second hemispheric, or hyper-hemispherical, image, and the image capture apparatus200may generate a spherical image incorporating or combining the first image and the second image, which may be captured concurrently, or substantially concurrently. The first image capture device204defines a first field-of-view240wherein the first lens230of the first image capture device204receives light. The first lens230directs the received light corresponding to the first field-of-view240onto a first image sensor242of the first image capture device204. For example, the first image capture device204may include a first lens barrel (not expressly shown), extending from the first lens230to the first image sensor242. The second image capture device206defines a second field-of-view244wherein the second lens232receives light. The second lens232directs the received light corresponding to the second field-of-view244onto a second image sensor246of the second image capture device206. For example, the second image capture device206may include a second lens barrel (not expressly shown), extending from the second lens232to the second image sensor246. A boundary248of the first field-of-view240is shown using broken directional lines. A boundary250of the second field-of-view244is shown using broken directional lines. 
As shown, the image capture devices204,206are arranged in a back-to-back (Janus) configuration such that the lenses230,232face in generally opposite directions, such that the image capture apparatus200may capture spherical images. The first image sensor242captures a first hyper-hemispherical image plane from light entering the first lens230. The second image sensor246captures a second hyper-hemispherical image plane from light entering the second lens232. As shown inFIG.2C, the fields-of-view240,244partially overlap such that the combination of the fields-of-view240,244form a spherical field-of-view, except that one or more uncaptured areas252,254may be outside of the fields-of-view240,244of the lenses230,232. Light emanating from or passing through the uncaptured areas252,254, which may be proximal to the image capture apparatus200, may be obscured from the lenses230,232and the corresponding image sensors242,246, such that content corresponding to the uncaptured areas252,254may be omitted from images captured by the image capture apparatus200. In some implementations, the image capture devices204,206, or the lenses230,232thereof, may be configured to minimize the uncaptured areas252,254. Examples of points of transition, or overlap points, from the uncaptured areas252,254to the overlapping portions of the fields-of-view240,244are shown at256,258. Images contemporaneously captured by the respective image sensors242,246may be combined to form a combined image, such as a spherical image. Generating a combined image may include correlating the overlapping regions captured by the respective image sensors242,246, aligning the captured fields-of-view240,244, and stitching the images together to form a cohesive combined image. Stitching the images together may include correlating the overlap points256,258with respective locations in corresponding images captured by the image sensors242,246. Although a planar view of the fields-of-view240,244is shown inFIG.2C, the fields-of-view240,244are hyper-hemispherical. A change in the alignment, such as position, tilt, or a combination thereof, of the image capture devices204,206, such as of the lenses230,232, the image sensors242,246, or both, may change the relative positions of the respective fields-of-view240,244, may change the locations of the overlap points256,258, such as with respect to images captured by the image sensors242,246, and may change the uncaptured areas252,254, which may include changing the uncaptured areas252,254unequally. Incomplete or inaccurate information indicating the alignment of the image capture devices204,206, such as the locations of the overlap points256,258, may decrease the accuracy, efficiency, or both of generating a combined image. In some implementations, the image capture apparatus200may maintain information indicating the location and orientation of the image capture devices204,206, such as of the lenses230,232, the image sensors242,246, or both, such that the fields-of-view240,244, the overlap points256,258, or both may be accurately determined, which may improve the accuracy, efficiency, or both of generating a combined image. The lenses230,232may be aligned along an axis (not shown), laterally offset from each other, off-center from a central axis of the image capture apparatus200, or laterally offset and off-center from the central axis. 
As compared to image capture devices with back-to-back lenses, such as lenses aligned along the same axis, image capture devices including laterally offset lenses may include substantially reduced thickness relative to the lengths of the lens barrels securing the lenses. For example, the overall thickness of the image capture apparatus200may be close to the length of a single lens barrel as opposed to twice the length of a single lens barrel as in a back-to-back lens configuration. Reducing the lateral distance between the lenses230,232may improve the overlap in the fields-of-view240,244, such as by reducing the uncaptured areas252,254. Images or frames captured by the image capture devices204,206may be combined, merged, or stitched together to produce a combined image, such as a spherical or panoramic image, which may be an equirectangular planar image. In some implementations, generating a combined image may include use of techniques such as noise reduction, tone mapping, white balancing, or other image correction. In some implementations, pixels along a stitch boundary, which may correspond with the overlap points256,258, may be matched accurately to minimize boundary discontinuities. FIG.3is a block diagram of electronic components in an image capture apparatus300. The image capture apparatus300may be a single-lens image capture device, a multi-lens image capture device, or variations thereof, including an image capture apparatus with multiple capabilities such as the use of interchangeable integrated sensor lens assemblies. Components, such as electronic components, of the image capture apparatus100shown inFIGS.1A-B, or the image capture apparatus200shown inFIGS.2A-C, may be implemented as shown inFIG.3, except as is described herein or as is otherwise clear from context. The image capture apparatus300includes a body302. The body302may be similar to the body102shown inFIGS.1A-1B, or the body202shown inFIGS.2A-B, except as is described herein or as is otherwise clear from context. The body302includes electronic components such as capture components310, processing components320, data interface components330, spatial sensors340, power components350, user interface components360, and a bus370. The capture components310include an image sensor312for capturing images. Although one image sensor312is shown inFIG.3, the capture components310may include multiple image sensors. The image sensor312may be similar to the image sensors242,246shown inFIG.2C, except as is described herein or as is otherwise clear from context. The image sensor312may be, for example, a charge-coupled device (CCD) sensor, an active pixel sensor (APS), a complementary metal-oxide-semiconductor (CMOS) sensor, or an N-type metal-oxide-semiconductor (NMOS) sensor. The image sensor312detects light, such as within a defined spectrum, such as the visible light spectrum or the infrared spectrum, incident through a corresponding lens such as the lens230with respect to the image sensor242as shown inFIG.2Cor the lens232with respect to the image sensor246as shown inFIG.2C. The image sensor312captures detected light as image data and conveys the captured image data as electrical signals (image signals or image data) to the other components of the image capture apparatus300, such as to the processing components320, such as via the bus370. The capture components310include a microphone314for capturing audio. Although one microphone314is shown inFIG.3, the capture components310may include multiple microphones. 
The microphone314detects and captures, or records, sound, such as sound waves incident upon the microphone314. The microphone314may detect, capture, or record sound in conjunction with capturing images by the image sensor312. The microphone314may detect sound to receive audible commands to control the image capture apparatus300. The microphone314may be similar to the microphones128,130,132shown inFIGS.1A-1Bor the audio components218,220,222shown inFIGS.2A-2B, except as is described herein or as is otherwise clear from context. The processing components320perform image signal processing, such as filtering, tone mapping, or stitching, to generate, or obtain, processed images, or processed image data, based on image data obtained from the image sensor312. The processing components320may include one or more processors having single or multiple processing cores. In some implementations, the processing components320may include, or may be, an application specific integrated circuit (ASIC) or a digital signal processor (DSP). For example, the processing components320may include a custom image signal processor. The processing components320convey data, such as processed image data, to other components of the image capture apparatus300via the bus370. In some implementations, the processing components320may include an encoder, such as an image or video encoder that may encode, decode, or both, the image data, such as for compression coding, transcoding, or a combination thereof. Although not shown expressly inFIG.3, the processing components320may include memory, such as a random-access memory (RAM) device, which may be non-transitory computer-readable memory. The memory of the processing components320may include executable instructions and data that can be accessed by the processing components320. The data interface components330communicate with other, such as external, electronic devices, such as a remote control, a smartphone, a tablet computer, a laptop computer, a desktop computer, or an external computer storage device. For example, the data interface components330may receive commands to operate the image capture apparatus300. In another example, the data interface components330may transmit image data to transfer the image data to other electronic devices. The data interface components330may be configured for wired communication, wireless communication, or both. As shown, the data interface components330include an I/O interface332, a wireless data interface334, and a storage interface336. In some implementations, one or more of the I/O interface332, the wireless data interface334, or the storage interface336may be omitted or combined. The I/O interface332may send, receive, or both, wired electronic communications signals. For example, the I/O interface332may be a universal serial bus (USB) interface, such as a USB type-C interface, a high-definition multimedia interface (HDMI), a FireWire interface, a digital video interface link, a display port interface link, a Video Electronics Standards Association (VESA) digital display interface link, an Ethernet link, or a Thunderbolt link. Although one I/O interface332is shown inFIG.3, the data interface components330include multiple I/O interfaces. The I/O interface332may be similar to the data interface124shown inFIG.1A, except as is described herein or as is otherwise clear from context. The wireless data interface334may send, receive, or both, wireless electronic communications signals. 
The wireless data interface334may be a Bluetooth interface, a ZigBee interface, a Wi-Fi interface, an infrared link, a cellular link, a near field communications (NFC) link, or an Advanced Network Technology interoperability (ANT+) link. Although one wireless data interface334is shown inFIG.3, the data interface components330include multiple wireless data interfaces. The wireless data interface334may be similar to the data interface124shown inFIG.1A, except as is described herein or as is otherwise clear from context. The storage interface336may include a memory card connector, such as a memory card receptacle, configured to receive and operatively couple to a removable storage device, such as a memory card, and to transfer, such as read, write, or both, data between the image capture apparatus300and the memory card, such as for storing images, recorded audio, or both captured by the image capture apparatus300on the memory card. Although one storage interface336is shown inFIG.3, the data interface components330include multiple storage interfaces. The storage interface336may be similar to the data interface124shown inFIG.1A, except as is described herein or as is otherwise clear from context. The spatial, or spatiotemporal, sensors340detect the spatial position, movement, or both, of the image capture apparatus300. As shown inFIG.3, the spatial sensors340include a position sensor342, an accelerometer344, and a gyroscope346. The position sensor342, which may be a global positioning system (GPS) sensor, may determine a geospatial position of the image capture apparatus300, which may include obtaining, such as by receiving, temporal data, such as via a GPS signal. The accelerometer344, which may be a three-axis accelerometer, may measure linear motion, linear acceleration, or both of the image capture apparatus300. The gyroscope346, which may be a three-axis gyroscope, may measure rotational motion, such as a rate of rotation, of the image capture apparatus300. In some implementations, the spatial sensors340may include other types of spatial sensors. In some implementations, one or more of the position sensor342, the accelerometer344, and the gyroscope346may be omitted or combined. The power components350distribute electrical power to the components of the image capture apparatus300for operating the image capture apparatus300. As shown inFIG.3, the power components350include a battery interface352, a battery354, and an external power interface356(ext. interface). The battery interface352(bat. interface) operatively couples to the battery354, such as via conductive contacts to transfer power from the battery354to the other electronic components of the image capture apparatus300. The battery interface352may be similar to the battery receptacle126shown inFIG.1A, except as is described herein or as is otherwise clear from context. The external power interface356obtains or receives power from an external source, such as a wall plug or external battery, and distributes the power to the components of the image capture apparatus300, which may include distributing power to the battery354via battery interface352to charge the battery354. Although one battery interface352, one battery354, and one external power interface356are shown inFIG.3, any number of battery interfaces, batteries, and external power interfaces may be used. In some implementations, one or more of the battery interface352, the battery354, and the external power interface356may be omitted or combined. 
For example, in some implementations, the external interface356and the I/O interface332may be combined. The user interface components360receive input, such as user input, from a user of the image capture apparatus300, output, such as display or present, information to a user, or both receive input and output information, such as in accordance with user interaction with the image capture apparatus300. As shown inFIG.3, the user interface components360include visual output components362to visually communicate information, such as to present captured images. As shown, the visual output components362include an indicator362.2and a display362.4. The indicator362.2may be similar to the indicator106shown inFIG.1Aor the indicators208shown inFIG.2A, except as is described herein or as is otherwise clear from context. The display362.4may be similar to the display108shown inFIG.1A, the display140shown inFIG.1B, or the display224shown inFIG.2A, except as is described herein or as is otherwise clear from context. Although the visual output components362are shown inFIG.3as including one indicator362.2, the visual output components362may include multiple indicators. Although the visual output components362are shown inFIG.3as including one display362.4, the visual output components362may include multiple displays. In some implementations, one or more of the indicator362.2or the display362.4may be omitted or combined. As shown inFIG.3, the user interface components360include a speaker364. The speaker364may be similar to the speaker136shown inFIG.1Bor the audio components218,220,222shown inFIGS.2A-B, except as is described herein or as is otherwise clear from context. Although one speaker364is shown inFIG.3, the user interface components360may include multiple speakers. In some implementations, the speaker364may be omitted or combined with another component of the image capture apparatus300, such as the microphone314. As shown inFIG.3, the user interface components360include a physical input interface366. The physical input interface366may be similar to the shutter button112shown inFIG.1A, the mode button110shown inFIG.1B, the shutter button212shown inFIG.2A, or the mode button210shown inFIG.2B, except as is described herein or as is otherwise clear from context. Although one physical input interface366is shown inFIG.3, the user interface components360may include multiple physical input interfaces. In some implementations, the physical input interface366may be omitted or combined with another component of the image capture apparatus300. The physical input interface366may be, for example, a button, a toggle, a switch, a dial, or a slider. As shown inFIG.3, the user interface components360include a broken line border box labeled “other”, to indicate that components of the image capture apparatus300other than the components expressly shown as included in the user interface components360may be user interface components. For example, the microphone314may receive, or capture, and process audio signals to obtain input data, such as user input data corresponding to voice commands. In another example, the image sensor312may receive, or capture, and process image data to obtain input data, such as user input data corresponding to visible gesture commands. In another example, one or more of the spatial sensors340, such as a combination of the accelerometer344and the gyroscope346, may receive, or capture, and process motion data to obtain input data, such as user input data corresponding to motion gesture commands. 
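As an illustration of how motion data from the spatial sensors might be turned into a user input event, the sketch below flags a simple shake gesture from accelerometer samples. The sample window, the gravity constant, and the thresholds are assumptions for the example; the apparatus's actual gesture recognition is not specified here.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2, assumed accelerometer units

def detect_shake(accel_samples, deviation_threshold=9.81, min_spikes=3):
    """Detect a shake gesture in a short window of 3-axis accelerometer samples.

    accel_samples: N x 3 array of (x, y, z) accelerations in m/s^2.
    A shake is flagged when the acceleration magnitude departs from gravity
    sharply enough, several times within the window.
    """
    magnitudes = np.linalg.norm(np.asarray(accel_samples, dtype=float), axis=1)
    spikes = np.abs(magnitudes - GRAVITY) > deviation_threshold
    return int(spikes.sum()) >= min_spikes
```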
FIG.4is a block diagram of an example of an image processing pipeline400. The image processing pipeline400, or a portion thereof, is implemented in an image capture apparatus, such as the image capture apparatus100shown inFIGS.1A-1B, the image capture apparatus200shown inFIGS.2A-2C, the image capture apparatus300shown inFIG.3, or another image capture apparatus. In some implementations, the image processing pipeline400may be implemented in a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a combination of a digital signal processor and an application-specific integrated circuit. One or more components of the pipeline400may be implemented in hardware, software, or a combination of hardware and software. As shown inFIG.4, the image processing pipeline400includes an image sensor410, an image signal processor (ISP)420, and an encoder430. The encoder430is shown with a broken line border to indicate that the encoder may be omitted, or absent, from the image processing pipeline400. In some implementations, the encoder430may be included in another device. In implementations that include the encoder430, the image processing pipeline400may be an image processing and coding pipeline. The image processing pipeline400may include components other than the components shown inFIG.4. The image sensor410receives input440, such as photons incident on the image sensor410. The image sensor410captures image data (source image data). Capturing source image data includes measuring or sensing the input440, which may include counting, or otherwise measuring, photons incident on the image sensor410, such as for a defined temporal duration or period (exposure). Capturing source image data includes converting the analog input440to a digital source image signal in a defined format, which may be referred to herein as “a raw image signal.” For example, the raw image signal may be in a format such as RGB format, which may represent individual pixels using a combination of values or components, such as a red component (R), a green component (G), and a blue component (B). In another example, the raw image signal may be in a Bayer format, wherein a respective pixel may be one of a combination of adjacent pixels, such as a combination of four adjacent pixels, of a Bayer pattern. Although one image sensor410is shown inFIG.4, the image processing pipeline400may include two or more image sensors. In some implementations, an image, or frame, such as an image, or frame, included in the source image signal, may be one of a sequence or series of images or frames of a video, such as a sequence, or series, of frames captured at a rate, or frame rate, which may be a number or cardinality of frames captured per defined temporal period, such as twenty-four, thirty, sixty, or one-hundred twenty frames per second. The image sensor410obtains image acquisition configuration data450. The image acquisition configuration data450may include image cropping parameters, binning/skipping parameters, pixel rate parameters, bitrate parameters, resolution parameters, framerate parameters, or other image acquisition configuration data or combinations of image acquisition configuration data. Obtaining the image acquisition configuration data450may include receiving the image acquisition configuration data450from a source other than a component of the image processing pipeline400. 
For example, the image acquisition configuration data450, or a portion thereof, may be received from another component, such as a user interface component, of the image capture apparatus implementing the image processing pipeline400, such as one or more of the user interface components360shown inFIG.3. The image sensor410obtains, outputs, or both, the source image data in accordance with the image acquisition configuration data450. For example, the image sensor410may obtain the image acquisition configuration data450prior to capturing the source image. The image sensor410receives, or otherwise obtains or accesses, adaptive acquisition control data460, such as auto exposure (AE) data, auto white balance (AWB) data, global tone mapping (GTM) data, Auto Color Lens Shading (ACLS) data, color correction data, or other adaptive acquisition control data or combination of adaptive acquisition control data. For example, the image sensor410receives the adaptive acquisition control data460from the image signal processor420. The image sensor410obtains, outputs, or both, the source image data in accordance with the adaptive acquisition control data460. The image sensor410controls, such as configures, sets, or modifies, one or more image acquisition parameters or settings, or otherwise controls the operation of the image sensor410, in accordance with the image acquisition configuration data450and the adaptive acquisition control data460. For example, the image sensor410may capture a first source image using, or in accordance with, the image acquisition configuration data450, and in the absence of adaptive acquisition control data460or using defined values for the adaptive acquisition control data460, output the first source image to the image signal processor420, obtain adaptive acquisition control data460generated using the first source image data from the image signal processor420, and capture a second source image using, or in accordance with, the image acquisition configuration data450and the adaptive acquisition control data460generated using the first source image. The image sensor410outputs source image data, which may include the source image signal, image acquisition data, or a combination thereof, to the image signal processor420. The image signal processor420receives, or otherwise accesses or obtains, the source image data from the image sensor410. The image signal processor420processes the source image data to obtain input image data. In some implementations, the image signal processor420converts the raw image signal (RGB data) to another format, such as a format expressing individual pixels using a combination of values or components, such as a luminance, or luma, value (Y), a blue chrominance, or chroma, value (U or Cb), and a red chroma value (V or Cr), such as the YUV or YCbCr formats. Processing the source image data includes generating the adaptive acquisition control data460. The adaptive acquisition control data460includes data for controlling the acquisition of one or more images by the image sensor410. The image signal processor420includes components not expressly shown inFIG.4for obtaining and processing the source image data. 
For example, the image signal processor420may include one or more sensor input (SEN) components (not shown), one or more sensor readout (SRO) components (not shown), one or more image data compression components, one or more image data decompression components, one or more internal memory, or data storage, components, one or more Bayer-to-Bayer (B2B) components, one or more local motion estimation (LME) components, one or more local motion compensation (LMC) components, one or more global motion compensation (GMC) components, one or more Bayer-to-RGB (B2R) components, one or more image processing units (IPU), one or more high dynamic range (HDR) components, one or more three-dimensional noise reduction (3DNR) components, one or more sharpening components, one or more raw-to-YUV (R2Y) components, one or more Chroma Noise Reduction (CNR) components, one or more local tone mapping (LTM) components, one or more YUV-to-YUV (Y2Y) components, one or more warp and blend components, one or more stitching cost components, one or more scaler components, or a configuration controller. The image signal processor420, or respective components thereof, may be implemented in hardware, software, or a combination of hardware and software. Although one image signal processor420is shown inFIG.4, the image processing pipeline400may include multiple image signal processors. In implementations that include multiple image signal processors, the functionality of the image signal processor420may be divided or distributed among the image signal processors. In some implementations, the image signal processor420may implement or include multiple parallel, or partially parallel paths for image processing. For example, for high dynamic range image processing based on two source images, the image signal processor420may implement a first image processing path for a first source image and a second image processing path for a second source image, wherein the image processing paths may include components that are shared among the paths, such as memory components, and may include components that are separately included in each path, such as a first sensor readout component in the first image processing path and a second sensor readout component in the second image processing path, such that image processing by the respective paths may be performed in parallel, or partially in parallel. The image signal processor420, or one or more components thereof, such as the sensor input components, may perform black-point removal for the image data. In some implementations, the image sensor410may compress the source image data, or a portion thereof, and the image signal processor420, or one or more components thereof, such as one or more of the sensor input components or one or more of the image data decompression components, may decompress the compressed source image data to obtain the source image data. The image signal processor420, or one or more components thereof, such as the sensor readout components, may perform dead pixel correction for the image data. The sensor readout component may perform scaling for the image data. The sensor readout component may obtain, such as generate or determine, adaptive acquisition control data, such as auto exposure data, auto white balance data, global tone mapping data, Auto Color Lens Shading data, or other adaptive acquisition control data, based on the source image data. 
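The feedback between the image sensor410and the image signal processor420described above may be illustrated, for purposes of explanation only, by the following minimal Python sketch. The capture_frame callable, the derive_adaptive_control function, and the mid-grey exposure target are hypothetical placeholders rather than elements of the image processing pipeline400; the sketch only shows that adaptive acquisition control data derived from one frame is fed back to the sensor for the next frame.

import numpy as np

def derive_adaptive_control(source_image):
    # Simplified stand-in for the image signal processor statistics path:
    # derive an auto-exposure gain from the mean level of the captured frame.
    mean_level = float(np.mean(source_image))
    target_level = 0.18 * 255.0                      # assumed mid-grey target
    gain = target_level / max(mean_level, 1.0)
    return {"exposure_gain": float(np.clip(gain, 0.25, 4.0))}

def capture_sequence(capture_frame, acquisition_config, num_frames):
    # capture_frame stands in for the image sensor: it accepts the static
    # acquisition configuration and the most recent adaptive acquisition
    # control data and returns a frame as a numpy array.
    adaptive_control = {"exposure_gain": 1.0}        # defined values for the first frame
    frames = []
    for _ in range(num_frames):
        frame = capture_frame(acquisition_config, adaptive_control)
        adaptive_control = derive_adaptive_control(frame)   # fed back to the sensor
        frames.append(frame)
    return frames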
The image signal processor420, or one or more components thereof, such as the image data compression components, may obtain the image data, or a portion thereof, such as from another component of the image signal processor420, compress the image data, and output the compressed image data, such as to another component of the image signal processor420, such as to a memory component of the image signal processor420. The image signal processor420, or one or more components thereof, such as the image data decompression, or uncompression, components (UCX), may read, receive, or otherwise access, compressed image data and may decompress, or uncompress, the compressed image data to obtain image data. In some implementations, other components of the image signal processor420may request, such as send a request message or signal, the image data from an uncompression component, and, in response to the request, the uncompression component may obtain corresponding compressed image data, uncompress the compressed image data to obtain the requested image data, and output, such as send or otherwise make available, the requested image data to the component that requested the image data. The image signal processor420may include multiple uncompression components, which may be respectively optimized for uncompression with respect to one or more defined image data formats. The image signal processor420, or one or more components thereof, may include internal memory, or data storage, components. The memory components store image data, such as compressed image data, internally within the image signal processor420and are accessible to the image signal processor420, or to components of the image signal processor420. In some implementations, a memory component may be accessible, such as write accessible, to a defined component of the image signal processor420, such as an image data compression component, and the memory component may be accessible, such as read accessible, to another defined component of the image signal processor420, such as an uncompression component of the image signal processor420. The image signal processor420, or one or more components thereof, such as the Bayer-to-Bayer components, may process image data, such as to transform or convert the image data from a first Bayer format, such as a signed 15-bit Bayer format, to a second Bayer format, such as an unsigned 14-bit Bayer format. The Bayer-to-Bayer components may obtain, such as generate or determine, high dynamic range Tone Control data based on the current image data. Although not expressly shown inFIG.4, in some implementations, a respective Bayer-to-Bayer component may include one or more sub-components. For example, the Bayer-to-Bayer component may include one or more gain components. In another example, the Bayer-to-Bayer component may include one or more offset map components, which may respectively apply respective offset maps to the image data. The respective offset maps may have a configurable size, which may have a maximum size, such as 129×129. The respective offset maps may have a non-uniform grid. Applying the offset map may include saturation management, which may preserve saturated areas on respective images based on R, G, and B values. The values of the offset map may be modified per-frame and double buffering may be used for the map values. 
A respective offset map component may, such as prior to Bayer noise removal (denoising), compensate for non-uniform blackpoint removal, such as due to non-uniform thermal heating of the sensor or image capture device. A respective offset map component may, such as subsequent to Bayer noise removal, compensate for flare, such as flare on hemispherical lenses, and/or may perform local contrast enhancement, such as dehazing or local tone mapping. In another example, the Bayer-to-Bayer component may include a Bayer Noise Reduction (Bayer NR) component, which may convert image data, such as from a first format, such as a signed 15-bit Bayer format, to a second format, such as an unsigned 14-bit Bayer format. In another example, the Bayer-to-Bayer component may include one or more lens shading (FSHD) components, which may, respectively, perform lens shading correction, such as luminance lens shading correction, color lens shading correction, or both. In some implementations, a respective lens shading component may perform exposure compensation between two or more sensors of a multi-sensor image capture apparatus, such as between two hemispherical lenses. In some implementations, a respective lens shading component may apply map-based gains, radial model gain, or a combination, such as a multiplicative combination, thereof. In some implementations, a respective lens shading component may perform saturation management, which may preserve saturated areas on respective images. Map and lookup table values for a respective lens shading component may be configured or modified on a per-frame basis and double buffering may be used. In another example, the Bayer-to-Bayer component may include a PZSFT component. In another example, the Bayer-to-Bayer component may include a half-RGB (½ RGB) component. In another example, the Bayer-to-Bayer component may include a color correction (CC) component, which may obtain subsampled data for local tone mapping, which may be used, for example, for applying an unsharp mask. In another example, the Bayer-to-Bayer component may include a Tone Control (TC) component, which may obtain subsampled data for local tone mapping, which may be used, for example, for applying an unsharp mask. In another example, the Bayer-to-Bayer component may include a Gamma (GM) component, which may apply a lookup-table independently per channel for color rendering (gamma curve application). Using a lookup-table, which may be an array, may reduce resource utilization, such as processor utilization, using an array indexing operation rather than more complex computation. The gamma component may obtain subsampled data for local tone mapping, which may be used, for example, for applying an unsharp mask. In another example, the Bayer-to-Bayer component may include an RGB binning (RGB BIN) component, which may include a configurable binning factor, such as a binning factor configurable in the range from four to sixteen, such as four, eight, or sixteen. One or more sub-components of the Bayer-to-Bayer component, such as the RGB Binning component and the half-RGB component, may operate in parallel. The RGB binning component may output image data, such as to an external memory, which may include compressing the image data. The output of the RGB binning component may be a binned image, which may include low-resolution image data or low-resolution image map data. The output of the RGB binning component may be used to extract statistics for combining images, such as combining hemispherical images. 
The output of the RGB binning component may be used to estimate flare on one or more lenses, such as hemispherical lenses. The RGB binning component may obtain G channel values for the binned image by averaging Gr channel values and Gb channel values. The RGB binning component may obtain one or more portions of or values for the binned image by averaging pixel values in spatial areas identified based on the binning factor. In another example, the Bayer-to-Bayer component may include, such as for spherical image processing, an RGB-to-YUV component, which may obtain tone mapping statistics, such as histogram data and thumbnail data, using a weight map, which may weight respective regions of interest prior to statistics aggregation. The image signal processor420, or one or more components thereof, such as the local motion estimation components, may generate local motion estimation data for use in image signal processing and encoding, such as in correcting distortion, stitching, and/or motion compensation. For example, the local motion estimation components may partition an image into blocks, arbitrarily shaped patches, individual pixels, or a combination thereof. The local motion estimation components may compare pixel values between frames, such as successive images, to determine displacement, or movement, between frames, which may be expressed as motion vectors (local motion vectors). The image signal processor420, or one or more components thereof, such as the local motion compensation components, may obtain local motion data, such as local motion vectors, and may spatially apply the local motion data to an image to obtain a local motion compensated image or frame and may output the local motion compensated image or frame to one or more other components of the image signal processor420. The image signal processor420, or one or more components thereof, such as the global motion compensation components, may receive, or otherwise access, global motion data, such as global motion data from a gyroscopic unit of the image capture apparatus, such as the gyroscope346shown inFIG.3, corresponding to the current frame. The global motion compensation component may apply the global motion data to a current image to obtain a global motion compensated image, which the global motion compensation component may output, or otherwise make available, to one or more other components of the image signal processor420. The image signal processor420, or one or more components thereof, such as the Bayer-to-RGB components, may convert the image data from Bayer format to an RGB format. The Bayer-to-RGB components may implement white balancing and demosaicing. The Bayer-to-RGB components respectively output, or otherwise make available, RGB format image data to one or more other components of the image signal processor420. The image signal processor420, or one or more components thereof, such as the image processing units, may perform warping, image registration, electronic image stabilization, motion detection, object detection, or the like. The image processing units respectively output, or otherwise make available, processed, or partially processed, image data to one or more other components of the image signal processor420. 
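As an illustration of the averaging performed by the RGB binning component described above, the following minimal Python sketch, provided for explanation only, averages pixel values over spatial areas identified by a binning factor and averages Gr and Gb values to obtain a G value; the numpy-based formulation and the default binning factor of eight are assumptions rather than requirements of the component.

import numpy as np

def bin_channel(channel, factor=8):
    # Average factor-by-factor pixel areas of a single color channel to
    # obtain a low-resolution binned image.
    height, width = channel.shape
    height = (height // factor) * factor             # drop any partial rows
    width = (width // factor) * factor               # drop any partial columns
    blocks = channel[:height, :width].reshape(
        height // factor, factor, width // factor, factor)
    return blocks.mean(axis=(1, 3))

def binned_g(gr_mean, gb_mean):
    # G channel value for the binned image obtained by averaging the Gr
    # channel value and the Gb channel value, as described above.
    return (gr_mean + gb_mean) / 2.0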
The image signal processor420, or one or more components thereof, such as the high dynamic range components, may, respectively, generate high dynamic range images based on the current input image, the corresponding local motion compensated frame, the corresponding global motion compensated frame, or a combination thereof. The high dynamic range components respectively output, or otherwise make available, high dynamic range images to one or more other components of the image signal processor420. The high dynamic range components of the image signal processor420may, respectively, include one or more high dynamic range core components, one or more tone control (TC) components, or one or more high dynamic range core components and one or more tone control components. For example, the image signal processor420may include a high dynamic range component that includes a high dynamic range core component and a tone control component. The high dynamic range core component may obtain, or generate, combined image data, such as a high dynamic range image, by merging, fusing, or combining the image data, such as unsigned 14-bit RGB format image data, for multiple, such as two, images (HDR fusion) to obtain, and output, the high dynamic range image, such as in an unsigned 23-bit RGB format (full dynamic data). The high dynamic range core component may output the combined image data to the Tone Control component, or to other components of the image signal processor420. The Tone Control component may compress the combined image data, such as from the unsigned 23-bit RGB format data to an unsigned 17-bit RGB format (enhanced dynamic data). The image signal processor420, or one or more components thereof, such as the three-dimensional noise reduction components, reduce image noise for a frame based on one or more previously processed frames and output, or otherwise make available, noise reduced images to one or more other components of the image signal processor420. In some implementations, the three-dimensional noise reduction component may be omitted or may be replaced by one or more lower-dimensional noise reduction components, such as by a spatial noise reduction component. The three-dimensional noise reduction components of the image signal processor420may, respectively, include one or more temporal noise reduction (TNR) components, one or more raw-to-raw (R2R) components, or one or more temporal noise reduction components and one or more raw-to-raw components. For example, the image signal processor420may include a three-dimensional noise reduction component that includes a temporal noise reduction component and a raw-to-raw component. The image signal processor420, or one or more components thereof, such as the sharpening components, obtains sharpened image data based on the image data, such as based on noise reduced image data, which may recover image detail, such as detail reduced by temporal denoising or warping. The sharpening components respectively output, or otherwise make available, sharpened image data to one or more other components of the image signal processor420. The image signal processor420, or one or more components thereof, such as the raw-to-YUV components, may transform, or convert, image data, such as from the raw image format to another image format, such as the YUV format, which includes a combination of a luminance (Y) component and two chrominance (UV) components. The raw-to-YUV components may, respectively, demosaic, color process, or both, images. 
Although not expressly shown inFIG.4, in some implementations, a respective raw-to-YUV component may include one or more sub-components. For example, the raw-to-YUV component may include a white balance (WB) component, which performs white balance correction on the image data. In another example, a respective raw-to-YUV component may include one or more color correction components (CC0, CC1), which may implement linear color rendering, which may include applying a 3×3 color matrix. For example, the raw-to-YUV component may include a first color correction component (CC0) and a second color correction component (CC1). In another example, a respective raw-to-YUV component may include a three-dimensional lookup table component, such as subsequent to a first color correction component. Although not expressly shown inFIG.4, in some implementations, a respective raw-to-YUV component may include a Multi-Axis Color Correction (MCC) component, such as subsequent to a three-dimensional lookup table component, which may implement non-linear color rendering, such as in Hue, Saturation, Value (HSV) space. In another example, a respective raw-to-YUV component may include a blackpoint RGB removal (BPRGB) component, which may process image data, such as low intensity values, such as values within a defined intensity threshold, such as less than or equal to, 28, to obtain histogram data wherein values exceeding a defined intensity threshold may be omitted, or excluded, from the histogram data processing. In another example, a respective raw-to-YUV component may include a Multiple Tone Control (Multi-TC) component, which may convert image data, such as unsigned 17-bit RGB image data, to another format, such as unsigned 14-bit RGB image data. The Multiple Tone Control component may apply dynamic tone mapping to the Y channel (luminance) data, which may be based on, for example, image capture conditions, such as light conditions or scene conditions. The tone mapping may include local tone mapping, global tone mapping, or a combination thereof. In another example, a respective raw-to-YUV component may include a Gamma (GM) component, which may convert image data, such as unsigned 14-bit RGB image data, to another format, such as unsigned 10-bit RGB image data. The Gamma component may apply a lookup-table independently per channel for color rendering (gamma curve application). Using a lookup-table, which may be an array, may reduce resource utilization, such as processor utilization, using an array indexing operation rather than more complex computation. In another example, a respective raw-to-YUV component may include a three-dimensional lookup table (3DLUT) component, which may include, or may be, a three-dimensional lookup table, which may map RGB input values to RGB output values through a non-linear function for non-linear color rendering. In another example, a respective raw-to-YUV component may include a Multi-Axis Color Correction (MCC) component, which may implement non-linear color rendering. For example, the multi-axis color correction component may perform color non-linear rendering, such as in Hue, Saturation, Value (HSV) space. The image signal processor420, or one or more components thereof, such as the Chroma Noise Reduction (CNR) components, may perform chroma denoising, luma denoising, or both. 
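The per-channel lookup-table application described for the Gamma components may be illustrated by the following minimal Python sketch, provided for explanation only. The specific gamma exponent and the conversion from unsigned 14-bit values to unsigned 10-bit values are example assumptions; the sketch only shows that color rendering reduces to an array indexing operation once the curve has been precomputed.

import numpy as np

def build_gamma_lut(in_bits=14, out_bits=10, gamma=1.0 / 2.2):
    # Precompute the gamma curve once as an array so that applying it is a
    # lookup rather than a per-pixel power computation.
    in_levels = 1 << in_bits
    out_max = (1 << out_bits) - 1
    x = np.arange(in_levels) / float(in_levels - 1)
    return np.round((x ** gamma) * out_max).astype(np.uint16)

def apply_gamma(rgb, lut):
    # Apply the lookup table independently per channel via array indexing;
    # rgb is an integer array of shape (height, width, 3) whose values index
    # into lut.
    return lut[rgb]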
The image signal processor420, or one or more components thereof, such as the local tone mapping components, may perform multi-scale local tone mapping using a single pass approach or a multi-pass approach on a frame at different scales. The local tone mapping components may, respectively, enhance detail and may omit introducing artifacts. For example, the Local Tone Mapping components may, respectively, apply tone mapping, which may be similar to applying an unsharp-mask. Processing an image by the local tone mapping components may include obtaining, processing, such as in response to gamma correction, tone control, or both, and using a low-resolution map for local tone mapping. The image signal processor420, or one or more components thereof, such as the YUV-to-YUV (Y2Y) components, may perform local tone mapping of YUV images. In some implementations, the YUV-to-YUV components may include multi-scale local tone mapping using a single pass approach or a multi-pass approach on a frame at different scales. The image signal processor420, or one or more components thereof, such as the warp and blend components, may warp images, blend images, or both. In some implementations, the warp and blend components may warp a corona around the equator of a respective frame to a rectangle. For example, the warp and blend components may warp a corona around the equator of a respective frame to a rectangle based on the corresponding low-resolution frame. The warp and blend components may, respectively, apply one or more transformations to the frames, such as to correct for distortions at image edges, which may be subject to a close to identity constraint. The image signal processor420, or one or more components thereof, such as the stitching cost components, may generate a stitching cost map, which may be represented as a rectangle having disparity (x) and longitude (y) based on a warping. Respective values of the stitching cost map may be a cost function of a disparity (x) value for a corresponding longitude. Stitching cost maps may be generated for various scales, longitudes, and disparities. The image signal processor420, or one or more components thereof, such as the scaler components, may scale images, such as in patches, or blocks, of pixels, such as 16×16 blocks, 8×8 blocks, or patches or blocks of any other size or combination of sizes. The image signal processor420, or one or more components thereof, such as the configuration controller, may control the operation of the image signal processor420, or the components thereof. The image signal processor420outputs processed image data, such as by storing the processed image data in a memory of the image capture apparatus, such as external to the image signal processor420, or by sending, or otherwise making available, the processed image data to another component of the image processing pipeline400, such as the encoder430, or to another component of the image capture apparatus. The encoder430encodes or compresses the output of the image signal processor420. In some implementations, the encoder430implements one or more encoding standards, which may include motion estimation. The encoder430outputs the encoded processed image to an output470. In an embodiment that does not include the encoder430, the image signal processor420outputs the processed image to the output470. 
The output470may include, for example, a display, such as a display of the image capture apparatus, such as one or more of the displays108,140shown inFIG.1, the display224shown inFIG.2, or the display362.4shown inFIG.3, to a storage device, or both. The output470is a signal, such as to an external device. FIG.5is a flow diagram of an example of lens mode configuration500using lens mode auto-detection. The methods and techniques of lens mode configuration500described herein, or aspects thereof, may be implemented by an image capture apparatus, or one or more components thereof, such as the image capture apparatus100shown inFIGS.1A-1B, the image capture apparatus200shown inFIGS.2A-2C, or the image capture apparatus300shown inFIG.3. The methods and techniques of lens mode configuration500described herein, or aspects thereof, may be implemented by an image capture device, such as the image capture device104shown inFIGS.1A-1B, one or more of the image capture devices204,206shown inFIGS.2A-2C, an image capture device of the image capture apparatus300shown inFIG.3. The methods and techniques of lens mode configuration500described herein, or aspects thereof, may be implemented by an image processing pipeline, or one or more components thereof, such as the image processing pipeline400shown inFIG.4. Lens mode configuration500includes obtaining predicate lens mode data at510, obtaining probable lens data at520, obtaining a lens mode score at530, determining whether the lens mode score is greater than a threshold at540, and outputting lens mode configuration data at550. Predicate lens mode data is obtained at510. The predicate lens mode data may be obtained by reading, or otherwise accessing, previously stored data from a memory, of the image capture apparatus. The predicate lens mode data may include a confidence index, which may be a predicate lens mode score (changeModet-1) indicating a lens mode score output, such as stored in the memory of the image capture apparatus, by a previous, such as immediately previous, iteration, or performance, of lens mode configuration500. In some implementations, the previously stored predicate lens mode data may be manually configured, stored, or both, data. In some implementations, the respective predicate lens mode scores generated, or output, by a defined cardinality, or number, (N), such as thirty (30), of previous iterations of lens mode configuration500may be stored in the memory of the image capture apparatus, or may be otherwise available to the current iteration or performance of lens mode configuration500, and one or more of the previously stored respective predicate lens mode scores may be used, such as aggregated or summarized, to obtain the confidence index. The defined cardinality, or number, (N) of previous predicate lens mode scores used may be configurable. Probable lens data is obtained at520. The probable lens data may include a value (AltLensOn), such as a Boolean value, indicating a prediction, estimate, or calculation, as to whether the alternate lens is in use, or is on, such as operatively connected to, the image capture apparatus. 
For example, the value (AltLensOn) may be a Boolean value corresponding to truth, such as one (1), indicating a determination, prediction, or estimation, that the alternate lens is in use on the image capture apparatus, such that contemporaneously captured images captured by the image capture apparatus are obtained using the alternate lens, or the value (AltLensOn) may be a Boolean value corresponding to falsehood, such as zero (0), indicating a determination, prediction, or estimation, that a lens other than the alternate lens, such as the primary lens, is in use on the image capture apparatus, such that contemporaneously captured images captured by the image capture apparatus are obtained using a lens other than the alternate lens, such as the primary lens. An example of obtaining the probable lens data is shown inFIG.6. In some implementations, the probable lens data may include multiple values, respectively indicating a determination, prediction, or estimation, whether a respective lens is in use. In some implementations, the probable lens data may include an integer or floating-point value, wherein respective values thereof indicate a determination, prediction, or estimation that a corresponding lens is in use. A lens mode score (changeModet) is obtained at530. The lens mode score (changeModet) may be a smoothed score indicating whether the current lens mode is consistent with the current lens. For example, the current lens may be the primary lens, the current lens mode may be the primary lens mode, and the lens mode score (changeModet) may be zero (0), indicating that the current lens mode is consistent with the current lens. In another example, the current lens may be the alternate lens, the current lens mode may be the alternate lens mode, and the lens mode score (changeModet) may be zero (0), indicating that the current lens mode is consistent with the current lens. In another example, the current lens may be the primary lens, the current lens mode may be the alternate lens mode, and the lens mode score (changeModet) may be one (1), indicating that the current lens mode is inconsistent with the current lens. In another example, the current lens may be the alternate lens, the current lens mode may be the primary lens mode, and the lens mode score (changeModet) may be one (1), indicating that the current lens mode is inconsistent with the current lens. The lens mode score (changeModet) may be a value, such as a Boolean value, or a floating-point value in the range from zero (0) to one (1), inclusive. Obtaining the lens mode score (changeModet) may include obtaining a lens mode error value (errMode). The lens mode error value (errMode) may be a Boolean value, such as one (1) indicating that the currently configured lens mode is determined to be inconsistent, or mismatched, with the prediction of the current operative lens or zero (0) indicating that the currently configured lens mode is determined to be consistent, or matched, with the prediction of the current operative lens. 
The lens mode error value (errMode) may be obtained, determined, or calculated, using the probable lens data (AltLensOn) obtained at520, a primary lens mode value (primaryMode) indicating whether the currently configured lens mode is the primary lens mode, which may be a Boolean value, such as one (1) indicating that the currently configured lens mode is the primary lens mode or zero (0) indicating that the currently configured lens mode is other than the primary lens mode, and an alternate lens mode value (altMode) indicating whether the currently configured lens mode is the alternate lens mode, which may be a Boolean value, such as one (1) indicating that the currently configured lens mode is the alternate lens mode or zero (0) indicating that the currently configured lens mode is other than the alternate lens mode, such as by obtaining a result of a logical disjunction (“OR”, “∥”), which may be a Boolean operation, of a result of a logical conjunction (“AND”, “&&”), which may be a Boolean operation, of the probable lens data and the primary lens mode value, and a result of a logical conjunction (“AND”, “&&”), which may be a Boolean operation, of a negative (!) of the probable lens data and the alternate lens mode value, which may be expressed as the following: errMode=(AltLensOn && primaryMode)∥(! AltLensOn && altMode).   [Equation 1] For example, the currently configured lens mode may be the primary lens mode, such that the primary lens mode value (primaryMode) is one (1 or TRUE) and the alternate lens mode value (altMode) is zero (0 or FALSE), and the probable lens data (AltLensOn) may have a value, such as one (1), representing truth, and indicating a determination, prediction, or estimation that the alternate lens is in use. A result (primary result) of the logical, or Boolean, conjunction (“AND” or “&&”) of the probable lens data (AltLensOn) and the primary lens mode value (primaryMode) is a Boolean value, such as one (1), indicating truth. A result (alternate result) of the logical, or Boolean, conjunction of the negative (!) of the probable lens data (AltLensOn), which is a Boolean value, such as zero (0), and the alternate lens mode value (altMode), is a Boolean value, such as zero (0), indicating falsehood. The logical, or Boolean, disjunction (“OR” or “∥”) of the primary result (AltLensOn && primaryMode) and the alternate result (! AltLensOn && altMode) may be a value, such as a Boolean value, such as one (1), indicating truth, indicating that there is an error, or mismatch, between the predicted lens and the currently configured lens mode. In another example, the currently configured lens mode may be the alternate lens mode, such that the primary lens mode value (primaryMode) is zero (0 or FALSE) and the alternate lens mode value (altMode) is one (1 or TRUE), and the probable lens data (AltLensOn) may have a value, such as one (1), representing truth, and indicating a determination, prediction, or estimation that the alternate lens is in use. A result (primary result) of the logical, or Boolean, conjunction (“AND” or “&&”) of the probable lens data (AltLensOn) and the primary lens mode value (primaryMode) is a Boolean value, such as zero (0), indicating falsehood. A result (alternate result) of the logical, or Boolean, conjunction of the negative (!) of the probable lens data (AltLensOn), which is a Boolean value, such as zero (0), and the alternate lens mode value (altMode), is a Boolean value, such as zero (0), indicating falsehood. 
The logical, or Boolean, disjunction (“OR” or “∥”) of the primary result (AltLensOn && primaryMode) and the alternate result (! AltLensOn && altMode) may be a value, such as a Boolean value, such as zero (0), indicating falsehood, indicating that there is an absence of an error, or mismatch, between the predicted lens and the currently configured lens mode. In another example, the currently configured lens mode may be the primary lens mode, such that the primary lens mode value (primaryMode) is one (1 or TRUE) and the alternate lens mode value (altMode) is zero (0 or FALSE), and the probable lens data (AltLensOn) may have a value, such as zero (0), representing falsehood, and indicating a determination, prediction, or estimation that the primary lens is in use. A result (primary result) of the logical, or Boolean, conjunction (“AND” or “&&”) of the probable lens data (AltLensOn) and the primary lens mode value (primaryMode) is a Boolean value, such as zero (0), indicating falsehood. A result (alternate result) of the logical, or Boolean, conjunction of the negative (!) of the probable lens data (AltLensOn), which is a Boolean value, such as zero (0), and the alternate lens mode value (altMode), is a Boolean value, such as zero (0), indicating falsehood. The logical, or Boolean, disjunction (“OR” or “∥”) of the primary result (AltLensOn && primaryMode) and the alternate result (! AltLensOn && altMode) may be a value, such as a Boolean value, such as zero (0), indicating falsehood, indicating that there is an absence of an error, or mismatch, between the predicted lens and the currently configured lens mode. In another example, the currently configured lens mode may be the alternate lens mode, such that the primary lens mode value (primaryMode) is zero (0 or FALSE) and the alternate lens mode value (altMode) is one (1 or TRUE), and the probable lens data (AltLensOn) may have a value, such as zero (0), representing falsehood, and indicating a determination, prediction, or estimation that the primary lens is in use. A result (primary result) of the logical, or Boolean, conjunction (“AND” or “&&”) of the probable lens data (AltLensOn) and the primary lens mode value (primaryMode) is a Boolean value, such as zero (0), indicating falsehood. A result (alternate result) of the logical, or Boolean, conjunction of the negative (!) of the probable lens data (AltLensOn), which is a Boolean value, such as zero (0), and the alternate lens mode value (altMode), is a Boolean value, such as one (1), indicating truth. The logical, or Boolean, disjunction (“OR” or “∥”) of the primary result (AltLensOn && primaryMode) and the alternate result (! AltLensOn && altMode) may be a value, such as a Boolean value, such as one (1), indicating truth, indicating that there is an error, or mismatch, between the predicted lens and the currently configured lens mode. The lens mode score (changeModet) may be determined using the lens mode error value (errMode), a defined modifier value (α), such as 0.95, and the predicate lens mode score (changeModet-1), wherein the subscript (t) indicates the current temporal location and the subscript (t−1) indicates the prior temporal location corresponding to the predicate lens mode score, and which may be expressed as the following: changeModet=α*changeModet-1+(1−α)*errMode.   [Equation 2] Whether the lens mode score (changeModet) is greater than a lens mode change threshold is determined at540. 
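The lens mode error value of Equation 1, and the four example cases described above, may be summarized by the following minimal Python sketch, provided for explanation only; the function and parameter names are illustrative rather than limiting.

def lens_mode_error(alt_lens_on, primary_mode, alt_mode):
    # Equation 1: the error is true when the predicted lens and the
    # currently configured lens mode are mismatched.
    return (alt_lens_on and primary_mode) or ((not alt_lens_on) and alt_mode)

# The four example cases described above:
#   alt_lens_on  primary_mode  alt_mode  ->  errMode
#   True         True          False     ->  True   (alternate lens, primary mode)
#   True         False         True      ->  False  (alternate lens, alternate mode)
#   False        True          False     ->  False  (primary lens, primary mode)
#   False        False         True      ->  True   (primary lens, alternate mode)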
Determining whether the lens mode score (changeModet) is greater than the lens mode change threshold (thresholdChangeMode) may include determining that the lens mode score (changeModet) is greater than the lens mode change threshold (thresholdChangeMode), which may be expressed as the following: changeModet>thresholdChangeMode. Determining whether the lens mode score (changeModet) is greater than the lens mode change threshold (thresholdChangeMode) may include determining that the lens mode score (changeModet) is less than or equal to the lens mode change threshold (thresholdChangeMode), which may be expressed as the following: changeModet≤thresholdChangeMode. The determination whether the lens mode score (changeModet) is greater than the lens mode change threshold (thresholdChangeMode) may include determining that the lens mode score (changeModet) is less than or equal to the lens mode change threshold (thresholdChangeMode) at540and outputting the lens mode configuration data at550may be omitted. The determination whether the lens mode score (changeModet) is greater than the lens mode change threshold (thresholdChangeMode) may include determining that the lens mode score (changeModet) is greater than the lens mode change threshold (thresholdChangeMode) at540and lens mode configuration data may be output at550. In some implementations, outputting the lens mode configuration data may include obtaining data indicating a target lens mode (target lens mode data), which differs from the current lens mode. For example, the current lens mode may be the primary lens mode, corresponding to the primary lens, and the target lens mode data may indicate that the alternate lens mode, such as the fisheye lens mode, is the target lens mode. In some implementations, outputting the lens mode configuration data may include outputting lens mode configuration user interface data indicating a request, or suggestion, to change, modify, or configure the current lens mode to the target lens mode, such as for presentation to a user of the image capture apparatus. Outputting the lens mode configuration data may include storing, or configuring, the target lens mode as the current lens mode of the image capture apparatus. In some implementations, storing, or configuring, the target lens mode as the current lens mode of the image capture apparatus may be performed in response to obtaining data, such as in response to obtaining [not expressly shown] user input responsive to presenting the lens mode configuration user interface data and indicating approval of the request, or an instruction to change, modify, or configure the current lens mode to the target lens mode. In some implementations, user input data may be obtained [not expressly shown] responsive to presenting the lens mode configuration user interface data and indicating denial of the request, or an instruction to omit changing, modifying, or configuring the current lens mode to the target lens mode, such that the current lens mode is retained, and image processing may be adjusted in accordance with the lens and lens mode mismatch to maximize the quality of the captured image or images. For example, changing image processing wherein the lens mode is retained may omit, or exclude, changing cropping and may include changing auto-exposure processing parameters, white balance processing parameters, contrast management processing parameters, and the like, or a combination thereof. 
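The smoothing of Equation 2 and the threshold comparison at540may be illustrated by the following minimal Python sketch, provided for explanation only. The defined modifier value of 0.95 is the example value given above, whereas the lens mode change threshold value shown here is an assumed placeholder, as no specific value is given above.

ALPHA = 0.95                    # defined modifier value (α) from Equation 2
THRESHOLD_CHANGE_MODE = 0.5     # assumed example value for thresholdChangeMode

def update_lens_mode_score(predicate_score, err_mode, alpha=ALPHA):
    # Equation 2: smooth the per-iteration error value into the lens mode score.
    return alpha * predicate_score + (1.0 - alpha) * float(err_mode)

def lens_mode_change_detected(change_mode_score, threshold=THRESHOLD_CHANGE_MODE):
    # Step 540: lens mode configuration data is output at 550 only when the
    # smoothed lens mode score exceeds the lens mode change threshold.
    return change_mode_score > threshold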
Lens mode configuration500may be performed periodically, in response to detecting an event, or both. For example, an iteration of lens mode configuration500may be performed in accordance with capturing an image, which may be an image captured automatically, such as a preview image, which may correspond with a defined periodicity, such as ten (10) times per second at thirty (30) frames per second. Prior to the performance of an iteration of lens mode configuration500, the image capture apparatus may be in one of four states, such as a first state wherein the primary lens is operatively coupled to the image capture apparatus and the image capture apparatus is configured to use the primary lens mode, a second state wherein the alternate lens is operatively coupled to the image capture apparatus and the image capture apparatus is configured to use the alternate lens mode, a third state wherein the primary lens is operatively coupled to the image capture apparatus and the image capture apparatus is configured to use the alternate lens mode, or a fourth state wherein the alternate lens is operatively coupled to the image capture apparatus and the image capture apparatus is configured to use the primary lens mode. Lens mode configuration500may detect the current state of the image capture apparatus, such as the third state or the fourth state, and may minimize false positive determinations, wherein a false positive includes incorrectly identifying the currently configured lens mode as mismatched with the currently operative lens, wherein the image capture apparatus is in the first state or the second state, and a false negative includes incorrectly omitting identifying the currently configured lens mode as mismatched with the currently operative lens, wherein the image capture apparatus is in the third state or the fourth state. FIG.6is a flow diagram of an example of lens mode auto-detection600. The methods and techniques of lens mode auto-detection600described herein, or aspects thereof, may be implemented by an image capture apparatus, or one or more components thereof, such as the image capture apparatus100shown inFIGS.1A-1B, the image capture apparatus200shown inFIGS.2A-2C, or the image capture apparatus300shown inFIG.3. The methods and techniques of lens mode auto-detection600described herein, or aspects thereof, may be implemented by an image capture device, such as the image capture device104shown inFIGS.1A-1B, one or more of the image capture devices204,206shown inFIGS.2A-2C, an image capture device of the image capture apparatus300shown inFIG.3. The methods and techniques of lens mode auto-detection600described herein, or aspects thereof, may be implemented by an image processing pipeline, or one or more components thereof, such as the image processing pipeline400shown inFIG.4. Lens mode auto-detection600may be similar to obtaining probable lens data as shown at520inFIG.5, except as is described herein or as is otherwise clear from context. Lens mode auto-detection600includes obtaining a current image at610, obtaining lens mode detection metrics at620, and generating probable lens data (AltLensOn) at630. The current image is obtained at610. Obtaining the current image includes capturing an image (current input image or captured image) by the image capture apparatus using a currently operative lens and a currently configured lens mode. The image sensor may be rectangular. 
The lens, or the light directed by the lens, may form a circle, or ellipse, with respect to the plane of the image sensor. The image capture apparatus may be configured such that, using the primary lens, a portion, or portions, of the image sensor that captures little, or no, light is minimized or eliminated. For example, the image capture apparatus may be configured such that the light that is directed by the primary lens forms a circle, or ellipse, at the plane corresponding to the image sensor that includes the rectangular image sensor. An example representing light directed by the primary lens forming a circle, or ellipse, at the plane corresponding to the image sensor, such that the rectangular image sensor is included in the circle, is shown at710inFIG.7. The image capture apparatus may be configured such that, using the alternate, or fisheye, lens, the portion, or portions, of the image sensor that captures little, or no, light is relatively large compared to using the primary lens. For example, the image capture apparatus may be configured such that the light that is directed by the alternate, or fisheye, lens forms a circle, or ellipse, at the plane corresponding to the image sensor that is included, or substantially included, such as horizontally included, in the rectangular area of the image sensor, such that an image captured using the alternate lens includes a content portion, corresponding to captured, measured, or detected light, and one or more non-content portions, corresponding to little, or no, captured, measured, or detected light. An example showing a representation of an image captured using the alternate, or fisheye, lens is shown at720inFIG.7. Obtaining the current image may include obtaining a reduced (spatially reduced), or thumbnail, image, such as a 64×48 pixel RGB thumbnail image corresponding to the captured image, which may be, for example, a 3648×2736 pixel image, and using the thumbnail, or reduced, image as the current image for lens mode auto-detection600. Other images, such as the captured image, may be used. The thumbnail, or reduced, image may be a cropped image, such that a spatial portion of the captured image, such as proximate to one or more of the edges of the captured image, is cropped, or omitted, from the thumbnail, or reduced, image. For example, the captured image may be cropped in accordance with the smallest rectangle, or square, that includes the circular, or elliptical, image content portion, horizontally, vertically, or both. Other image reduction, such as subsampling, may be used. The portion of the captured image that is cropped to obtain the thumbnail image may correspond with the currently configured lens mode. For example, in the alternate lens mode, the relatively large non-content portions, such as the portions shown at736,746inFIG.7, are cropped, and in the primary lens mode, relatively little, or none, of the captured image is cropped. In some implementations, in the primary lens mode, cropping may be omitted. One or more lens mode detection metrics are obtained at620by analyzing the current image. For example, the image capture apparatus may obtain, or determine, a corners mean metric at622, a corners standard deviation metric at624, a center mean metric at626, or a combination thereof. Other metrics may be used. Obtaining the lens mode detection metrics may include obtaining a corners mean at622, such as using the green color channel of the reduced, or thumbnail, image as the current image. 
Obtaining the corners mean may include obtaining a respective corner mean, such as a mean of the green color channel, values (corner mean values) for respective square portions of the current image, corresponding to the respective corners of the current image, such as a mean of a square portion of the current image at the top-left corner of the current image (first corner mean value), a mean of a square portion of the current image at the top-right corner of the current image (second corner mean value), a mean of a square portion of the current image at the bottom-left corner of the current image (third corner mean value), and a mean of a square portion of the current image at the bottom-right corner of the current image (fourth corner mean value). For example, the respective squares may be 4×4 pixels. Other size and shape portions may be used. Examples showing images with the square portions in the corners indicated are shown at730and740inFIG.7. In some implementations, one or more of the corners of the current image may include pixel values corresponding to non-content image data, such as lens flare image data. The corners that include non-content image data, such as lens flare image data, may have relatively high corner mean values as compared to corners from which non-content image data, such as lens flare image data, is absent. Some of the corner mean values, such as the two smallest corner mean values, may be used and some of the corner mean values, such as the two highest corner mean values may be omitted, or excluded, from further use. The corner mean values that are used, such as the two lowest corner mean values, are averaged to obtain, or determine, an average corner mean value as the corners mean value (cornersMean). In some implementations, obtaining the lens mode detection metrics may include obtaining a corners standard deviation (cornersStd) at624, such as using the green color channel of the reduced, or thumbnail, image and using the square portion of the corners used to obtain the corners mean value (cornersMean) at622, which may be the two corner portions respectively having the lowest (minimal magnitude) corner mean values. Obtaining the corners standard deviation (cornersStd), or variance, may include obtaining a respective corner standard deviation, such as a standard deviation of the green color channel values, for the respective square portions of the current image. The corner standard deviation values that are used are averaged to obtain, or determine, an average corner standard deviation value as the corners standard deviation value (cornersStd). In some implementations, information from the current image, other than from the corners of the current image, may be used to generate, determine, or otherwise obtain, the probable lens data (AltLensOn). For example, the portions of the current image other than the corner portions may be relatively bright (high luminance), and the corner portions may be relatively dark (low luminance), which may indicate a high probability that the current image corresponds to an image captured using the alternate, or fisheye, lens. In another example, the portions of the current image other than the corner portions may be relatively dark (low luminance), and the corner portions may be relatively dark (low luminance), which may indicate a relatively low probability that the current image corresponds to an image captured using the alternate, or fisheye, lens. 
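The corners mean (622) and corners standard deviation (624) metrics described above may be illustrated by the following minimal Python sketch, provided for explanation only, which operates on the green color channel of the thumbnail image. The 4×4 corner squares and the exclusion of the two highest corner mean values follow the description above, while the numpy-based formulation is an assumption.

import numpy as np

def corner_metrics(green, patch=4):
    # green: two-dimensional green color channel of the thumbnail image.
    corners = [
        green[:patch, :patch],      # top-left square portion
        green[:patch, -patch:],     # top-right square portion
        green[-patch:, :patch],     # bottom-left square portion
        green[-patch:, -patch:],    # bottom-right square portion
    ]
    means = np.array([portion.mean() for portion in corners])
    stds = np.array([portion.std() for portion in corners])
    # Keep the two corners with the smallest means; the two highest means
    # may correspond to non-content image data such as lens flare.
    keep = np.argsort(means)[:2]
    corners_mean = float(means[keep].mean())
    corners_std = float(stds[keep].mean())
    return corners_mean, corners_std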
Obtaining the lens mode detection metrics may include obtaining a center mean at 626, such as using the green color channel of the reduced, or thumbnail, image as the current image. The center mean value (centerMean) may be a mean of the pixel values, such as the green color channel values, of a rectangular portion (center portion) of the current image, at the center of the current image, within the elliptical image content portion. The center portion omits or excludes the corner portions. Obtaining the lens mode detection metrics may include obtaining a ratio (ratio) of the corners mean value (cornersMean) obtained at 622 to the center mean value (centerMean) obtained at 626. Other portions of the image data may be used to determine, calculate, or otherwise obtain, the center mean value (centerMean). For example, a mean of a relatively small portion, such as a 4×4 pixel square, at the center of the current image may be used as the center mean value (centerMean). In another example, a mean determined along a curve corresponding to the edge, or border, of the elliptical, or circular, image content portion may be used as the center mean value (centerMean). In some implementations, respective mean and standard deviation values may be determined, generated, or otherwise obtained, for the green color channel, the red color channel, and the blue color channel, respectively. Probable lens data is generated, or otherwise obtained, at 630. In some implementations, the probable lens data (AltLensOn) may be generated based on the average corners mean value (cornersMean), such as 0.3 percent of the image dynamic, and a corresponding defined average corners mean threshold (threshMean), such as based on a less than determination, which may be a Boolean operation, whether the average corners mean value (cornersMean) is less than the corresponding defined average corners mean threshold (threshMean), which may be expressed as the following: AltLensOn=cornersMean<threshMean.   [Equation 3] In some implementations, the probable lens data (AltLensOn) may be generated based on the average corners mean value (cornersMean), the corresponding defined average corners mean threshold (threshMean), the average corners standard deviation value (cornersStd), such as 0.2 percent, and a corresponding defined average corners standard deviation threshold (threshStd), such as based on a logical conjunction (“AND”, “&&”), which may be a Boolean operation, of a less than determination (first less than determination), which may be a Boolean operation, whether the average corners mean value (cornersMean) is less than the corresponding defined average corners mean threshold (threshMean), and a less than determination (second less than determination), which may be a Boolean operation, whether the average corners standard deviation value (cornersStd) is less than the corresponding defined average corners standard deviation threshold (threshStd), which may be more robust than generating the probable lens data (AltLensOn) as shown in Equation 3, and which may be expressed as the following: AltLensOn=cornersMean<threshMean && cornersStd<threshStd.   [Equation 4] In some implementations, the probable lens data (AltLensOn) may be generated based on the average corners mean value (cornersMean), the corresponding defined average corners mean threshold (threshMean), the average corners standard deviation value (cornersStd), a corresponding defined average corners standard deviation threshold (threshStd), the ratio (ratio) of the corners mean value (cornersMean) obtained at 622 and the center mean value (centerMean) obtained at 626, and a corresponding defined center mean threshold (threshRatio), such as based on a logical disjunction (“OR”, “||”), which may be a Boolean operation, of a result of a logical conjunction (“AND”, “&&”), which may be a Boolean operation, of a less than determination, which may be a Boolean operation, whether the average corners mean value (cornersMean) is less than the corresponding defined average corners mean threshold (threshMean), and a less than determination (third less than determination), which may be a Boolean operation, whether the average corners standard deviation value (cornersStd) is less than the corresponding defined average corners standard deviation threshold (threshStd), and a less than determination, which may be a Boolean operation, whether the ratio (ratio) is less than the corresponding defined center mean threshold (threshRatio), such as 0.2, which may be more robust than generating the probable lens data (AltLensOn) as shown in Equation 4, and which may be expressed as the following: AltLensOn=(cornersMean<threshMean && cornersStd<threshStd) || ratio<threshRatio.   [Equation 5] A value of the probable lens data (AltLensOn) corresponding to truth, such as one (1), indicates a determination, prediction, or estimation, that the current image was generated from an image captured using the alternate lens. A value of the probable lens data (AltLensOn) corresponding to falsehood, such as zero (0), indicates a determination, prediction, or estimation, that the current image was generated from an image captured using the primary lens. In some implementations, one or more of the thresholds described herein, such as the defined average corners mean threshold (threshMean), may be defined in accordance with the currently operative lens mode. For example, the currently operative lens mode may be the primary lens mode and a relatively high value of the respective thresholds may be used such that the probability of detecting the alternate lens is relatively low, or the currently operative lens mode may be the alternate lens mode and a relatively low value of the respective thresholds may be used such that the probability of detecting the alternate lens is relatively high.
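The threshold tests of Equations 3 to 5 can be sketched as follows, reusing the corner_metrics sketch above and assuming NumPy; the helper names, the small center patch, and the idea that the thresholds are supplied by the caller (for example, scaled from the image dynamic range and the operative lens mode) are assumptions for illustration.

```python
import numpy as np

def probable_lens_data(green, thresh_mean, thresh_std, thresh_ratio=0.2, patch=4):
    """Sketch of Equations 3-5: estimate whether the alternate (fisheye) lens
    is probably mounted, from the thumbnail green channel."""
    corners_mean, corners_std = corner_metrics(green, patch)  # sketch above
    h, w = green.shape
    # Center mean over a small region at the image center (one of the
    # described options; a larger center rectangle could be used instead).
    cy, cx = h // 2, w // 2
    center_mean = float(green[cy - patch // 2:cy + patch // 2,
                              cx - patch // 2:cx + patch // 2].mean())
    ratio = corners_mean / max(center_mean, 1e-6)

    alt_eq3 = corners_mean < thresh_mean                      # Equation 3
    alt_eq4 = alt_eq3 and corners_std < thresh_std            # Equation 4
    alt_eq5 = alt_eq4 or ratio < thresh_ratio                 # Equation 5
    return alt_eq5
```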
FIG. 7 shows examples of representations of images for lens mode auto-detection. A representation of an image captured using the primary lens mode is shown at 710. The example shown at 710 includes a rectangular image content portion 712, shown with a stippled background to indicate image content, corresponding to the image content captured by the image sensor. The rectangular image content portion 712 is shown within a circle 714, with a white background, representing light directed by the operative, primary, lens outside the image sensor. For example, the rectangular image content portion 712 may correspond to a 3648×2736 pixel image. A representation of an image captured using the alternate, or fisheye, lens is shown at 720.
In the image representation shown at720, an image content portion722is shown with a stippled background to indicate the portion of the image that includes image content, corresponding to substantial measured light, and other portions of the image724are shown with a cross-hatched background to indicate the portions of the captured image that omit image content and are substantially black, corresponding with little to no light measured or detected by the image sensor. For example, the image representation shown at720may correspond to a 3648×2736 pixel image. A representation of a reduced, or thumbnail, image generated from an image captured using the alternate, or fisheye, lens in the alternate lens mode is shown at730. The thumbnail image representation at730includes an image content portion732, shown with a stippled background to indicate the portion of the image that includes image content, corresponding to substantial measured light, and other portions of the image734are shown with a cross-hatched background to indicate the portions of the captured image that omit image content and are substantially black, corresponding with little to no light measured or detected by the image sensor. Although described as substantially black and omitting image content, the other portions of the image734may include non-content image data, such as pixel values corresponding to image capture artifacts, such as lens flare. Square portions736, such as 4×4 pixel squares, are shown at the respective corners, which may be used for determining corner means, corner standard deviations, or both, as described herein. A rectangular center portion738is shown with a dark-stippled background, which may be used to determine the center means as described herein. A representation of a reduced, or thumbnail, image generated from an image captured using the primary lens in the alternate lens mode is shown at740. The thumbnail image representation at740includes an image content portion742, shown with a stippled background to indicate the portion of the image that includes image content, corresponding to substantial measured light. The thumbnail image representation at740includes non-content portions744of the image, shown with a cross-hatched background to indicate the portions of the captured image that omit image content and are substantially black, corresponding with little to no light measured or detected by the image sensor. Although described as substantially black and omitting image content, the other portions of the image may include non-content image data, such as pixel values corresponding to image capture artifacts, such as lens flare. Square corner portions746, such as 4×4 pixel squares, are shown at the respective corners, which may be used for determining corner means, corner standard deviations, or both, as described herein. A rectangular, or square, center portion748, shown with a dark-stippled background, may be used to determine the center means as described herein. While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
112,759
11943534
DESCRIPTION OF EMBODIMENTS Embodiments will be described in detail below with reference to the drawings as appropriate. However, there is a case where description detailed more than necessary is omitted. For example, there is a case where detailed description of a well-known item or duplicate description of substantially the same configuration is omitted. This is to prevent the following description from being unnecessarily redundant and to facilitate understanding by a person skilled in the art. Note that the inventor(s) provide(s) the accompanying drawings and the following description for a person skilled in the art to fully understand the present disclosure, and the accompanying drawings and the description are not intended to limit the subject matters described in the claims. First Embodiment In a first embodiment, a digital camera that performs focusing operation and outputs focusing sound will be described as an example of an imaging device according to the present disclosure. The digital camera according to the present embodiment is a lens-integrated digital camera. 1-1. CONFIGURATION A configuration of the digital camera according to the first embodiment will be described with reference toFIG.1. FIG.1is a diagram illustrating a configuration of a digital camera100according to the present embodiment. The digital camera100according to the present embodiment includes an image sensor115, an image processing engine120, a display monitor130, and a controller135. Further, the digital camera100includes a buffer memory125, a card slot140, a flash memory145, an operation member150, a communication module155, and an annunciator165. Furthermore, the digital camera100includes, for example, an optical system110and a lens driver112. The optical system110includes a focus lens, a zoom lens, an optical image stabilization lens (OIS), an aperture, a shutter, and the like. The focus lens is a lens for changing a focus state of an object image formed on the image sensor115. The zoom lens is a lens for changing a magnification of an object image formed by an optical system. The focus lens or the like includes one or a plurality of lenses each. The lens driver112drives a focus lens and the like in the optical system110. The lens driver112includes a motor and causes the focus lens to move along an optical axis of the optical system110on the basis of control of the controller135. A configuration for driving a focus lens in the lens driver112can be implemented by a DC motor, a stepping motor, a servomotor, an ultrasonic motor or the like. The image sensor115captures an object image formed via the optical system110to generate imaging data. The imaging data is image data indicating the image captured by the image sensor115. The image sensor115generates image data of a new frame with a predetermined frame rate (e.g., 30 frames/second). The controller135controls timing of generating imaging data and operation of an electronic shutter in the image sensor115. As the image sensor115, various image sensors such as a CMOS image sensor, a CCD image sensor, or an NMOS image sensor can be used. The image sensor115performs imaging operation of a still image, imaging operation of a through image, or the like. The through image is mainly a moving image, and is displayed on the display monitor130in order for a user to determine composition to for taking a still image. Each of the through image and the still image is an example of a captured image according to the present embodiment. 
The image sensor115is an example of the image sensor according to the present embodiment. The image processing engine120performs various kinds of processing on the imaging data output from the image sensor115to generate image data, or performs various kinds of processing on the image data to generate an image to be displayed on the display monitor130. The various kinds of processing include, but are not limited to, white balance correction, gamma correction, YC conversion processing, electronic zoom processing, compression processing, decompression processing, and the like. The image processing engine120may include a hard-wired electronic circuit, or may include a microcomputer using a program, a processor, or the like. The display monitor130is an example of a display that displays various pieces of information. For example, the display monitor130displays an image (through image) indicated by the image data captured by the image sensor115and processed by the image processing engine120. The display monitor130displays a menu screen or the like for the user to perform various settings for the digital camera100. The display monitor130can include a liquid crystal display device or an organic EL device, for example. Note that, the digital camera100may include a viewfinder such as EVF while illustration is omitted inFIG.1. The operation member150is a general term for a hard key, such as an operation button or an operation lever, provided on an exterior of the digital camera100, and receives operation by the user. For example, the operation member150includes a release button, a mode dial, or a touch-sensitive panel. When the operation member150receives operation by the user, the operation member150transmits an operation signal corresponding to the user operation to the controller135. The controller135entirely controls overall operation of the digital camera100. The controller135includes a CPU or the like, and the CPU executes a program (software), by which a predetermined function is implemented. The controller135may include, instead of the CPU, a processor including a dedicated electronic circuit designed to implement a predetermined function. That is, the controller135can be implemented by various processors such as CPU, MPU, GPU, DSU, FPGA, or ASIC. The controller135may include one or more processors. The controller135may include one semiconductor chip along with the image processing engine120, or the like. The controller135is an example of a controller. The buffer memory125is a recording medium that functions as a work memory for the image processing engine120or the controller135. The buffer memory125is implemented by a dynamic random access memory (DRAM), or the like. The flash memory145is a non-volatile recording medium. While not illustrated, the controller135may have various kinds of internal memories, such as a ROM. Various programs executed by the controller135are stored in the ROM. Furthermore, the controller135may include a RAM that functions as a work area of the CPU. The card slot140is a means to insert a memory card142that is removable. The memory card142is electrically and mechanically connectable to the card slot140. The memory card142is an external memory including a recording device therein, such as a flash memory. The memory card142can store data such as image data generated by the image processing engine120. 
The communication module 155 is a communication module (circuit) that performs communication compliant with the communication standards IEEE 802.11, a Wi-Fi standard, or the like. The digital camera 100 can communicate with another apparatus via the communication module 155. The digital camera 100 may communicate directly with another apparatus via the communication module 155, or may communicate via an access point. The communication module 155 may be connectable to a communication network such as the Internet. The annunciator 165 is a module that converts sound data input from the controller 135 into sound and outputs the sound. The annunciator 165 includes a DA converter, a speaker, and the like. The DA converter converts a digital signal including sound data, which is input from the controller 135, into an analog signal. The speaker converts the analog signal input from the DA converter into sound and outputs the sound.
1-2. Operation
Operation of the digital camera 100 configured as the above will be described below. FIG. 2 is a flowchart illustrating operation of capturing a still image by the digital camera 100. Each processing according to the flowchart in FIG. 2 is executed by the controller 135 of the digital camera 100. This flowchart is performed according to user operation after the digital camera 100 is started, for example. At first, the controller 135 detects whether or not the release button on the operation member 150 is pressed halfway down (S101). The user can operate the release button when a desired object or the like is in the view, by looking through the viewfinder of the digital camera 100 or visually checking the display monitor 130, for example. When the user presses the release button halfway down (YES in S101), the controller 135 controls focusing operation in which the lens driver 112 drives the focus lens of the optical system 110 to focus on the object or the like (S102). In step S102, focusing may be performed on a predetermined area in the image as a focusing target, or image recognition of the object as the focusing target may be performed. When the focusing operation is completed, the controller 135 outputs, from the annunciator 165, focusing sound notifying the user of completion of the focusing (S103). As will be described later, the digital camera 100 according to the present embodiment outputs focusing sound that is comfortable for the user to hear. Furthermore, the controller 135 detects whether or not the release button is fully pressed down (S104). In a case where the release button is released from the half-press, the controller 135 detects the release of the half-press from the operation member 150, which allows a return to step S101. When the controller 135 detects that the release button is fully pressed (YES in S104), the controller 135 controls imaging operation by the image sensor 115, and records, in the memory card 142 or the like, the image data as a result of the imaging (S105). Then, the processing according to this flowchart ends. The focusing operation in step S102 may be one-shot AF or continuous AF. For example, in a case where the focusing operation is performed a plurality of times after the release button is pressed halfway down (YES in S101), the controller 135 may output focusing sound according to each focusing operation, or may restrict the output of the focusing sound as appropriate.
1-3. Focusing Sound
Focusing sound according to the present embodiment, which is output during operation of the digital camera 100 as described above (S103 in FIG. 2), will be described in detail below.
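The control flow of FIG. 2 (S101 to S105) can be summarized as a schematic sketch; the camera object and its methods here are hypothetical placeholders, not the actual firmware interface of the digital camera 100.

```python
def shooting_loop(camera):
    """Schematic control flow of FIG. 2 (S101-S105); all camera methods
    used here are assumed placeholders for illustration."""
    while True:
        if not camera.release_half_pressed():          # S101: wait for half-press
            continue
        camera.drive_focus_lens()                      # S102: focusing operation
        camera.output_focusing_sound()                 # S103: notify completion
        while camera.release_half_pressed():           # hold half-press
            if camera.release_fully_pressed():         # S104: full press?
                image = camera.capture_still()         # S105: imaging operation
                camera.record(image)                   # store to memory card
                return
        # half-press released: fall through and return to S101
```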
FIGS. 3A and 3B are diagrams for describing a frequency characteristic of focusing sound of the digital camera 100 according to the first embodiment. FIG. 3A exemplifies a frequency characteristic of focusing sound based on a single tone. FIG. 3B exemplifies a frequency characteristic of focusing sound according to the present embodiment. In FIGS. 3A and 3B, a horizontal axis indicates frequency, and a vertical axis indicates a sound output level (that is, corresponding to a sound pressure level). FIG. 4 is a diagram for describing a degree of consonance between two tones. The degree of consonance refers to, for example, a degree to which two tones sound beautifully in harmony. The degree of consonance as illustrated in FIG. 4 can be calculated by a prediction model such as the Kameoka model (refer to Non-Patent Document 2). On the horizontal axis in FIG. 4, an interval between two tones is represented by the number of semitones, and the frequency of the higher-pitched tone of the two tones is exemplified. In the drawing, a ratio a:b exemplifies a frequency ratio of two tones. Each graph G(n) in FIG. 4 indicates a degree of consonance between two tones for the number of overtones n (=1 to 6). For example, graph G(1) considers higher-pitched tones up to 880 Hz with respect to a fundamental tone of 440 Hz, where the fundamental tone is the lower tone of the two tones, and graph G(4) considers higher-pitched tones up to 3520 Hz with respect to a fundamental tone of 1760 Hz. In the example in FIG. 3A, the focusing sound includes a frequency component f2 of 8.80 kHz in addition to a frequency component f1 for which a sound output level peaks at 4.40 kHz. Even in a case where a single tone frequency is set for focusing sound, a frequency characteristic of the output focusing sound may include a frequency component of an overtone that is an integral multiple of the set frequency, depending on performance of the speaker or the like. According to FIG. 4, the degree of consonance in this case is as high as that in a case where the frequencies of the two tones are the same. As a result of intensive research by the present inventor, a problem has been revealed that focusing sound as illustrated in FIG. 3A has not only a high degree of consonance but also a high degree of straightness, which would give a user discomfort with an ear-piercing impact in audibility. Such discomfort occurs regardless of the presence or absence of the frequency component f2 of an overtone. It is considered that focusing sound is likely to give a strong impression to the user, since the user hears the focusing sound near the speaker while looking at the object through the viewfinder, for example. Meanwhile, in view of focusing sound as a sound effect for notifying the user that focus is achieved, it is considered that a strong impression of focusing sound is not necessarily an adverse effect. At present, each camera manufacturer uses, as focusing sound, a combined sound based on a single tone having its own selected frequency. It is difficult for a person skilled in the art to conceive an idea of drastically changing a frequency characteristic of focusing sound, due to the consideration that the tone color of focusing sound can have a brand value showing the individual manufacturer.
However, the present inventor has conducted intensive research on focusing sound from a viewpoint of the above problem, resulting in conceiving an idea that focusing sound is constituted from a frequency characteristic of a chord, such as a consonant chord without an overtone, instead of a frequency characteristic of a single tone. With such focusing sound, as illustrated inFIG.4, while a degree of consonance is lower than a degree of consonance of an overtone, straightness can be reduced, so that the audible impact sounding ear-piercing to the user can be reduced. By further intensive research, the present inventor has achieved an idea of causing the digital camera100according to the present embodiment to output focusing sound, as exemplified with the frequency characteristic inFIG.3B, for example. In the present embodiment, focusing sound of the digital camera100is set to a consonant chord of a perfect fifth. In the example inFIG.3B, focusing sound has a fundamental tone component f11 for which a sound output level peaks at frequency of 4.40 kHz, and a higher-pitched tone component f21 for which a sound output level peaks at frequency of 6.60 kHz. A frequency ratio of the fundamental tone component f11 to the higher-pitched tone component f21 is 2:3, thereby achieving a consonant chord of a perfect fifth. The fundamental tone component f11 is an example of a first sound component having peak frequency of the component f11 as first frequency. The higher-pitched tone component f21 is an example of a second sound component having peak frequency of the component f21 as second frequency. It can be seen fromFIG.4that a difference in a degree of consonance between a consonant chord and a dissonant chord is larger as the frequency is higher. The present inventor has also researched a possibility of giving the user uneasiness due to a too low degree of consonance. For example, the fundamental tone component f11 and the higher-pitched tone component f21 included in focusing sound may be a dissonant chord depending on a frequency ratio. According to focusing sound of a perfect fifth as an example, a next highest degree of consonance to a degree of consonance of an overtone inFIG.4can be obtained. Accordingly, the digital camera100according to the present embodiment can output focusing sound with which the user feels comfortable. It is considered that as the frequency of focusing sound is higher, the digital camera100gives the user more of a feeling of speedy focusing, that is, a feeling that focusing operation is performed at a high speed. Here, it is concerned that the higher frequency has the stronger straightness, which might cause picky impact in audibility. In contrast to this, the digital camera100according to the present embodiment can obtain focusing sound that relieves impact in audibility as well as giving the feeling of speedy focusing, by a frequency characteristic such as a perfect fifth using high frequency as inFIG.3Bor the like. For example, in the example inFIG.3B, an impression of recalling fineness of a machine may be obtained depending on how the higher-pitched tone sounds. FIGS.5A and5Bare diagrams for describing a waveform of focusing sound of the digital camera100according to the first embodiment. InFIGS.5A and5B, a vertical axis indicates a sound output level [dB], and a horizontal axis indicates time. FIG.5Ais a waveform chart exemplifying a part of a waveform of the focusing sound of the digital camera100according to the present embodiment. 
Amplitude of the focusing sound of the digital camera 100 periodically changes at a longer cycle than a cycle of each of the fundamental tone component f11 and the higher-pitched tone component f21, owing to superposition of the two components. The digital camera 100 according to the present embodiment outputs the focusing sound by emitting twice a sound wave having the frequency characteristic described above. According to this, given an example in which the focusing sound is expressed as “pi-pi” in onomatopoeia, it is possible to give the user an impression that the focusing operation is completed (hereinafter referred to as a “focusing stop feeling”) at the timing when the user hears the latter half “pi”. FIG. 5B exemplifies an overall waveform chart of the focusing sound according to the present embodiment. As illustrated in FIG. 5B, the focusing sound of the digital camera 100 includes a first sound wave W1 and a second sound wave W2 output after the first sound wave W1. The first sound wave W1 has amplitude A1 and the second sound wave W2 has amplitude A2. In the present embodiment, the amplitude A1 of the first sound wave W1 is set smaller than the amplitude A2 of the second sound wave W2, that is, the amplitude A2 is set larger than the amplitude A1. With this setting, sound heard in the latter half of the focusing sound can be emphasized, and thereby the focusing stop feeling can be improved. Furthermore, in the present embodiment, the focusing sound is set to fade out by adding a fade toward the end of the waveform of each of the sound waves W1 and W2. It is a concern that a chord constituting focusing sound might leave a feeling of uneasiness that a lingering tone is impure. In contrast to this, by applying the fade to the focusing sound, it is possible to resolve the above-described feeling of uneasiness, thereby achieving a crisp focusing sound. In the digital camera 100, sound data indicating a waveform of the focusing sound as described above is stored in the flash memory 145 or the like in advance. The controller 135 of the digital camera 100 controls the annunciator 165 by using the sound data when the focusing sound is output (S103 in FIG. 2). The annunciator 165 of the digital camera 100 outputs the first sound wave W1 for a time period T1, and outputs the second sound wave W2 for a time period T4 at an interval of a time period T2. The digital camera 100 outputs the first sound wave W1 by adding a fade with a constant gradient so that the amplitude becomes 0 after a lapse of a time period T3, for example. Similarly, the digital camera 100 outputs the second sound wave W2 by adding a fade with a gradient so that the amplitude becomes 0 after a lapse of a time period T5. In the present embodiment, the time periods T1, T2, and T4 are each 25 msec., and T3 and T5 are 37.5 msec. The amplitude A2 of the second sound wave is substantially 3 dB greater than the amplitude A1 of the first sound wave. The user can obtain a finely focused impression by listening to the second sound wave after the first sound wave. By setting the amplitude A2 of the second sound wave to be greater than the amplitude A1 of the first sound wave, the above impression can be made stronger.
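A minimal synthesis sketch of the focusing sound described above, assuming NumPy, a linear (constant-gradient) fade envelope, and the timing and level values given in the text; the function signature, parameter names, and envelope shape are assumptions, not the camera's actual sound data.

```python
import numpy as np

def focusing_sound(fs=48000, f_fund=4400.0, ratio=1.5, w_dur=0.025,
                   gap=0.025, fade=0.0375, step_db=3.0):
    """Two short bursts of a two-tone chord: fundamental f_fund plus a higher
    tone at f_fund*ratio (3/2 for a perfect fifth, 4/3 for a perfect fourth).
    Each burst is output for w_dur with a linear fade whose amplitude would
    reach zero at 'fade'; the second burst is about 3 dB louder."""
    def burst(amplitude):
        t = np.arange(int(fs * w_dur)) / fs
        chord = np.sin(2 * np.pi * f_fund * t) + np.sin(2 * np.pi * f_fund * ratio * t)
        envelope = np.maximum(0.0, 1.0 - t / fade)   # constant-gradient fade
        return amplitude * chord * envelope

    a1 = 1.0
    a2 = a1 * 10 ** (step_db / 20.0)                 # second wave ~3 dB louder
    silence = np.zeros(int(fs * gap))                # interval T2 between W1 and W2
    return np.concatenate([burst(a1), silence, burst(a2)])
```

Because the two sine components are simply summed, the resulting amplitude beats at a longer period than either component, matching the superposition behavior described for FIG. 5A.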
3. SUMMARY
As described above, the digital camera 100 according to the first embodiment includes the image sensor 115, the controller 135, and the annunciator 165. The image sensor 115 captures an object image entering via the optical system 110. The controller 135 controls focusing operation to focus the object image by the optical system 110. The annunciator 165 outputs focusing sound that has a predetermined frequency characteristic according to the focusing operation. The frequency characteristic of the focusing sound includes a fundamental tone component based on a first frequency, and a higher-pitched tone component based on a second frequency that is higher than the first frequency and lower than twice the first frequency. According to this, while having a degree of consonance lower than a degree of consonance of an overtone, the digital camera 100 can reduce straightness, and can relieve the audible impact that sounds ear-piercing to the user. The first and second frequencies are set to cause the focusing sound to be a consonant chord with the fundamental tone component and the higher-pitched tone component. According to this, the digital camera 100 can output focusing sound with which the user feels comfortable and has an impression of recalling the fineness of a machine, owing to the focusing sound constituting a consonant chord. The annunciator 165 outputs the focusing sound so as to fade the focusing sound out. According to this, the digital camera 100 can resolve a feeling of uneasiness that a lingering tone is impure, and can output a crisp focusing sound. The focusing sound includes the first sound wave W1, and the second sound wave W2 output at an interval of a time period after the first sound wave W1 is output, and the amplitude of the second sound wave is greater than the amplitude of the first sound wave. According to this, the digital camera 100 can give the user a focusing stop feeling.
OTHER EMBODIMENTS
As the above, an embodiment has been described as exemplification of the techniques disclosed in the present application. However, the techniques in the present disclosure are not limited to this, and can be applied to an embodiment to which a change, replacement, addition, omission, or the like, can be made as appropriate. Furthermore, it is also possible to combine the components described in the above embodiment to form a new embodiment. Therefore, other embodiments will be exemplified below. Although the digital camera 100 according to the first embodiment outputs focusing sound having the fundamental tone component f11 for which the sound output level peaks at a frequency of 4.40 kHz, and the higher-pitched tone component f21 for which a sound output level peaks at a frequency of 6.60 kHz, the present disclosure is not limited to this. FIG. 6A illustrates a modification of a frequency characteristic of focusing sound. FIG. 6B illustrates a waveform chart of focusing sound according to the modification. In the present embodiment, as illustrated in FIG. 6A, the digital camera 100 may output focusing sound that has a higher-pitched tone component f22 for which a sound output level peaks at a frequency of 5.86 kHz, instead of the higher-pitched tone component f21 for which a sound output level peaks at a frequency of 6.60 kHz. In this case, a frequency ratio of the fundamental tone component f11 to the higher-pitched tone component f22 is 3:4, thereby achieving a consonant chord of a perfect fourth. As illustrated in FIG. 6B, the focusing sound in this case is output with a periodicity of the waveform different from the periodicity in the case of a perfect fifth (FIG. 5A). Although the consonant chord of the focusing sound according to the first embodiment is a perfect-fifth sound, and the consonant chord of the focusing sound according to a second embodiment is a perfect-fourth sound, a consonant chord is not particularly limited to this.
A digital camera1according to the present embodiment includes an image sensor115, a controller135, and an annunciator165. The image sensor115captures an object image entering via the optical system110. The controller135controls focusing operation to focus the object image by the optical system110. The annunciator165outputs focusing sound according to the focusing operation. The focusing sound may constitute a consonant chord of at least one of a perfect fifth and a perfect fourth. In the above embodiment, the focusing sound includes a fundamental tone component of 4.40 kHz; however, the present disclosure is not limited to this. In the present embodiment, the focusing sound may include a fundamental tone component of another frequency. In the example inFIGS.5A and5B, the time period T2is 25 msec.; however, the present disclosure is not limited to this. From a viewpoint to allow a user to recognize each of a first sound wave and a second sound wave, a time period T2may be a time period of 10 msec. or longer. For example, the time period T2may be set between 10 msec. and 50 msec. Although the amplitude A2of the second sound wave according to the example inFIGS.5A and5Bis substantially 3 dB greater than the amplitude A1of the first sound wave, the present disclosure is not limited to this. From a viewpoint to allow the user to recognize that volume of the second sound wave is greater than volume of the first sound wave, the amplitude A2of the second sound wave may be at least 3 dB greater than the amplitude A1of the first sound wave. Although the annunciator165outputs a first sound wave W1by adding a fade with a constant gradient so that amplitude becomes 0 after a lapse of 37.5 msec.(T3) inFIG.5B, the present disclosure is not limited to this. For example, the annunciator165may output the first sound wave W1by adding a fade with a gradient so that a time period T3becomes shorter as frequency at which a sound output level peaks is lower. For example, as illustrated inFIG.7A, the time period T3is 50 msec. in a case where the first sound wave includes a fundamental tone component for which a sound output level peaks at 3.520 kHz, and a higher-pitched tone component for which a sound output level peaks at 5.280 kHz. Furthermore, as illustrated inFIG.7B, the time period T3is 25 msec. in a case where the first sound wave includes a fundamental tone component for which a sound output level peaks at 0.880 kHz, and a higher-pitched tone component for which a sound output level peaks at 1.320 kHz. A similar applies to a time period T5. The annunciator165may include, but not limited to, a DA converter. For example, the DA converter may be included in the controller135. The annunciator165may output each of the sound waves W1and W2by adding a fade with, but not limited to, a constant gradient. For example, the annunciator165may output each of the sound waves W1and W2by adding a fade with a changing gradient. In the above embodiment, the focusing sound constitutes a consonant chord with a fundamental tone component and a higher-pitched tone component; however, (peak) frequency of each component can be set within an allowable error range as appropriate. For example, the higher-pitched tone component may be set within a range from a quarter-tone lower interval to a quarter-tone higher interval with respect to frequency higher than frequency of the fundamental tone by a theoretical frequency ratio of a consonant chord. 
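As an illustrative calculation only, interpreting the quarter-tone allowance as a frequency factor of 2^(1/24) ≈ 1.029 (this interpretation, and the specific numbers, are assumptions rather than values stated in the text), the allowable band around the theoretical perfect-fifth partner of a 4.40 kHz fundamental would be roughly:

```python
# Quarter-tone tolerance around the theoretical 2:3 partner of a 4.40 kHz
# fundamental; the tolerance interpretation is an assumption for illustration.
f_fundamental = 4400.0
f_fifth = f_fundamental * 3 / 2            # 6600.0 Hz, theoretical perfect fifth
quarter_tone = 2 ** (1 / 24)               # ~1.0293 frequency factor
low, high = f_fifth / quarter_tone, f_fifth * quarter_tone
print(round(low), round(high))             # roughly 6412 to 6793 Hz
```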
The higher-pitched tone component may be set within a bandwidth of a half width around a peak of the output level of the sound that constitutes a consonant chord in FIG. 4. In the above embodiment, the focusing sound is constituted from the two sound waves W1 and W2; however, the focusing sound may be constituted from three or more sound waves. Although the focusing sound according to the present embodiment includes two sound components, which are a fundamental tone component and a higher-pitched tone component, three or more sound components may be included. In the present embodiment, the digital camera 100 is not limited to a lens-integrated digital camera, but may be, for example, a lens-interchangeable digital camera. As the above, the embodiments have been described as exemplification of the techniques in the present disclosure. To that end, the accompanying drawings and detailed description are provided. Therefore, among the components described in the accompanying drawings and the detailed description, not only a component essential for solving a problem but also a component not essential for solving the problem may be included in order to exemplify the above techniques. Therefore, it should not be immediately recognized that these non-essential components are essential based on a fact that the non-essential components are described in the accompanying drawings and the detailed description. Furthermore, because the above-described embodiments are for exemplifying the techniques in the present disclosure, various changes, replacements, additions, omissions, or the like, can be made within the scope of the claims or an equivalent scope.
INDUSTRIAL APPLICABILITY
The idea of the present disclosure can be applied to an electronic device (an imaging device such as a digital camera or a camcorder, a mobile phone, a smartphone, or the like) having an imaging function including a focusing function.
29,682
11943535
DESCRIPTION OF THE EMBODIMENTS Hereinafter, with reference to the accompanying drawings, favorable modes of the present invention will be described using embodiments. In each diagram, the same reference signs are applied to the same members or elements, and duplicate description will be omitted or simplified. In the embodiments, an example in which a digital still camera is applied as an image stabilization control device will be described. The image stabilization control device can be applied to a digital movie camera, a smartphone with a camera, a tablet computer with a camera, an in-vehicle camera, a drone camera, a camera mounted on a robot, an electronic device that has an imaging function of a network camera or the like, or the like. First Embodiment A configuration or the like of a camera according to a first embodiment will be described with reference toFIGS.1to11.FIG.1is a side view illustrating a camera according to the first embodiment of the present invention and illustrates a configuration of an image stabilization control device in a (digital still) camera11that includes a camera body11aand an interchangeable lens unit11bthat is detachably mounted on the camera body11a. A CPU12provided in the camera body11ainFIG.1controls an imaging operation or an image stabilization operation in the camera in response to an imaging instruction manipulation or the like from a photographer. The CPU12functions as a control unit that controls an operation of each unit of the entire device, including a blur correction operation based on a computer program stored in a memory serving as a storage medium. A subject light flux along an optical axis10is incident on an image sensor14serving as an imaging unit through an imaging optical system13provided in the interchangeable lens unit11b. The image sensor14is configured with a CMOS image sensor or the like and outputs an image signal in response to the input subject light flux. InFIG.1, the vertical direction ofFIG.1is referred to as a longitudinal direction of the camera. InFIG.1, reference numeral15pgdenotes a first angular velocimeter that serves as an angle blur detection unit and detects a blur angular velocity in a pitch direction indicated by an arrow15psapplied to the camera11. An angle blur signal from the first angular velocimeter15pgis input to the CPU12. Reference numeral16yadenotes a first accelerometer that serves as a parallel blur detection unit and detects a blur acceleration in the longitudinal direction of the camera indicated by an arrow16ysapplied to the camera11. An acceleration signal from the first accelerometer16yais input to the CPU12. In the present embodiment, an example in which the camera body11aincludes an angle blur detection unit (the first angular velocimeter15pg) and a parallel blur detection unit (the first accelerometer16ya) will be described. When signals indicating detection results can be acquired from the detection units, the camera body11amay not include the detection units. For example, the lens unit11bmay include one or both of the angle blur detection unit and the parallel blur detection unit and the camera body11amay acquire such information through communication with the lens unit. Reference numeral13cdenotes a blur correction lens that is driven in the direction of an arrow13yby a driving unit13band corrects an angle blur. Here, a blur correction control unit includes the blur correction lens13cand the driving unit13b. 
FIG.2is a functional block diagram illustrating main units of the image stabilization control device in the longitudinal direction of the camera inFIG.1. Some of the functional blocks illustrated inFIG.2are implemented by causing the CPU12serving as a computer included in an imaging device to execute a computer program stored in a memory serving as a storage medium (not illustrated). However, some or all of the functional blocks may be implemented by hardware. As the hardware, a dedicated circuit (ASIC), a processor (a reconfigurable processor or a DSP), or the like can be used. Each functional block illustrated inFIG.2may not be embedded in the same casing or the imaging device may be configured with separate devices connected via a signal path. The foregoing description inFIG.2similarly applies toFIGS.4,6,10,12,13,16,18, and19. A signal of the first angular velocimeter15pgis subjected to integration by an angular velocity integration unit12paand is converted into an angle blur indicated by an arrow15pofFIG.1. A signal of the angular velocity integration unit12pais input to an angle blur target value calculation unit12pband is gain-adjusted in accordance with characteristics or a focal distance of an imaging optical system. An angle blur correction target value gain-adjusted by the angle blur target value calculation unit12pbis input to the driving unit13b. The blur correction lens13cis driven in the direction of an arrow13yby the driving unit13band corrects a pitch angle blur ofFIG.1. After a signal of the first accelerometer16yais subjected to two-step integration by an acceleration integration unit12ycand is converted into a displacement amount, only a component of a desired frequency (for example, 1 Hz) is extracted by a displacement bandpass filter12yd. Similarly, from the angle blur which is an output of the above-described angular velocity integration unit12pa, only a component of a desired frequency (for example, 1 Hz) is also extracted by an angle bandpass filter12pe. Here, a bandpass of the angle bandpass filter12peis set to be substantially equal to a bandpass of the displacement bandpass filter12yd. A rotation radius calculation unit12yfcalculates a ratio between a displacement signal and an angle signal of the same frequency band respectively extracted by the displacement bandpass filter12ydand the angle bandpass filter12pe. Then, from this ratio, an angular velocimeter rotation radius17yfrom a rotational center17ycof a blur to the first accelerometer16yais obtained. That is, the rotation radius calculation unit calculate a rotation radius of an angle blur based on an output of the angle blur detection unit (also sometimes referred to as an angle blur signal) and an output of the parallel blur detection unit (also sometimes referred to as a parallel blur signal). Since a distance between the rotational center17ycand the first accelerometer16yais sufficiently large, the angular velocimeter rotation radius17yis approximately displayed as a distance in the optical axis direction between the rotational center17ycand the first accelerometer16ya. Subsequently, a preset rotation radius18yfrom the first accelerometer16yato a main point of the imaging optical system13is added to the angular velocimeter rotation radius17yto obtain an optical system rotation radius19ywhich is a rotation radius of the imaging optical system. 
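The rotation radius estimation just described (integrate the angular velocity to an angle, double-integrate the acceleration to a displacement, band-pass both around the same frequency, take their ratio, and add the preset offset to the optical system principal point) can be sketched as follows. This is a minimal illustration assuming discrete-time signals, a SciPy Butterworth band-pass around 1 Hz, and a least-squares ratio; the filter design, the ratio computation, and all names are assumptions, not the patent's exact implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def optical_system_rotation_radius(angular_velocity, acceleration, fs,
                                   band=(0.5, 2.0), preset_offset=0.0):
    """Estimate the optical system rotation radius (19y or 19x) from an
    angular velocity signal (15pg or 15yg) and an acceleration signal
    (16ya or 16xa) sampled at fs Hz."""
    dt = 1.0 / fs
    angle = np.cumsum(angular_velocity) * dt                     # angular velocity -> angle blur
    displacement = np.cumsum(np.cumsum(acceleration)) * dt * dt  # acceleration -> displacement
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    angle_bp = sosfilt(sos, angle)           # angle bandpass filter output
    disp_bp = sosfilt(sos, displacement)     # displacement bandpass filter output
    # Ratio of the same-frequency-band components (least-squares estimate):
    # radius from the rotational center to the accelerometer.
    radius_to_accel = float(np.dot(disp_bp, angle_bp) /
                            (np.dot(angle_bp, angle_bp) + 1e-12))
    # Add the preset radius from the accelerometer to the principal point.
    return radius_to_accel + preset_offset
```

The parallel blur used by the multiplication unit described next would then be the product of this radius and the integrated angle blur.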
Since a distance between the rotational center17ycand a main point of the imaging optical system13is sufficiently large, the optical system rotation radius19yis approximately displayed as a distance in the optical axis direction between the rotational center17ycand the imaging optical system13. The optical system rotation radius19youtput from the rotation radius calculation unit12yfis input to a multiplication unit12yhvia a rotation radius prediction unit12ygto be described below. The multiplication unit12yhobtains a product of the input optical system rotation radius19yand the angle blur input from the angular velocity integration unit12paand recalculates a parallel blur in the direction of the arrow16y. When the optical system rotation radius19yis temporarily ascertained in this manner, stabilization can be achieved with only the signal of the first angular velocimeter15pgwithout using the signal of the first accelerometer16ya, and thus a parallel blur in the direction of the arrow16ycan be detected. An output of the multiplication unit12yhis input to a parallel blur correction target value calculation unit12yiand is gain-adjusted in accordance with characteristics or an image magnification of the imaging optical system13. A parallel blur correction target value gain-adjusted by the parallel blur correction target value calculation unit12yiis input to the driving unit13band a parallel blur in the y direction (the longitudinal direction) ofFIG.1is corrected. FIG.3is a top view illustrating the camera according to the first embodiment. InFIG.3, the vertical direction ofFIG.3is referred to as a transverse direction of the camera. InFIG.3, reference numeral15ygdenotes a second angular velocimeter that serves as an angle blur detection unit and detects a blur angular velocity in a yaw direction indicated by an arrow15ysapplied to the camera11. A signal from a second angular velocimeter15ygis input to the CPU12. Reference numeral16xadenotes a second accelerometer that serves as a parallel blur detection unit and detects a blur acceleration in the transverse direction of the camera indicated by an arrow16xsapplied to the camera11. A signal from the second accelerometer16xais input to the CPU12. FIG.4is a functional block diagram illustrating the main units of the image stabilization control device in the transverse direction of the camera inFIG.3. The camera11is provided with all the configurations ofFIGS.4and2. A signal of the second angular velocimeter15ygis subjected to integration by an angular velocity integration unit12yaand is converted into an angle blur indicated by an arrow15yofFIG.3. A signal of the angular velocity integration unit12yais input to an angle blur correction target value calculation unit12yband is gain-adjusted in accordance with characteristics or a focal distance of the imaging optical system13. An angle blur correction target value gain-adjusted by the angle blur correction target value calculation unit12ybis input to the driving unit13b. The blur correction lens13cis driven in the direction of an arrow13xby the driving unit13band corrects a yaw angle blur. After a signal of the second accelerometer16xais subjected to two-step integration by an acceleration integration unit12xcand is converted into a displacement amount, only a component of a desired frequency (for example, 1 Hz) is extracted by a displacement bandpass filter12xd. 
Similarly, from a signal of the second angular velocimeter15ygconverted into the angle by the above-described angular velocity integration unit12ya, only a component of a desired frequency (for example, 1 Hz) is also extracted by an angle bandpass filter12ye. Here, a bandpass of the angle bandpass filter12yeis set to be substantially equal to a bandpass of the displacement bandpass filter12xd. A rotation radius calculation unit12xfobtains an angular velocimeter rotation radius17xfrom a rotational center17xcof a blur to the second accelerometer16xafrom a ratio between a displacement signal and an angle signal of the same frequency band respectively extracted by the displacement bandpass filter12xdand the angle bandpass filter12ye. Since a distance between the rotational center17xcand the second accelerometer16xais sufficiently large, inFIG.3, the angular velocimeter rotation radius17xis approximately displayed as a distance in the optical axis direction between the rotational center17xcand the second accelerometer16xa. Subsequently, a preset rotation radius18xfrom the second accelerometer16xato a main point of the imaging optical system13is added to the angular velocimeter rotation radius17xto obtain an optical system rotation radius19xwhich is a rotation radius of the imaging optical system. Since a distance between the rotational center17xcand a main point of the imaging optical system13is sufficiently large, the optical system rotation radius19xis approximately displayed as a distance in the optical axis direction between the rotational center17xcand the imaging optical system13inFIG.3. The optical system rotation radius19xoutput from the rotation radius calculation unit12xfis input to a multiplication unit12xhvia a rotation radius prediction unit12xgto be described below. The multiplication unit12xhobtains a product of the input optical system rotation radius19xand the angle signal input from the angular velocity integration unit12yaand recalculates a blur in the direction of the arrow16x(the transverse direction of the camera). When the optical system rotation radius19xis temporarily ascertained in this manner, stabilization can be achieved with only the signal of the second angular velocimeter15ygwithout using the signal of the second accelerometer16xa, and thus a parallel blur can be detected. A signal of the multiplication unit12xhis input to a parallel blur correction target value calculation unit12xiand is gain-adjusted in accordance with characteristics or an image magnification of the imaging optical system. A parallel blur correction target value gain-adjusted by the parallel blur correction target value calculation unit12xiis input to the driving unit13b. The blur correction lens13cis driven in the direction of the arrow13xby the driving unit13band corrects the parallel blur in the transverse direction of the camera in addition to the above-described yaw angle blur correction. FIG.5is a front view illustrating the camera according to the first embodiment of the present invention. InFIG.5, reference numeral15rgdenotes a third angular velocimeter that serves as an angle blur detection unit and detects a blur angular velocity in a direction indicated by an arrow15rsofFIG.5applied to the camera11. A signal of the third angular velocimeter15rgis input to the CPU12. FIG.6is a functional block diagram illustrating the main units of the image stabilization control device in an optical axis direction inFIG.5. The camera11is provided with all the configurations ofFIGS.6,4, and2. 
That is, the angle blur detection unit includes a plurality of angle blur detection sensors (first to third angular velocimeters or the like) detecting angle blurs at angles in a plurality of directions. The parallel blur detection unit also includes a plurality of parallel blur detection sensors (first and second accelerometers or the like) detecting parallel blurs in a plurality of directions. A signal of the third angular velocimeter 15rg is subjected to integration by an angular velocity integration unit 12ra and is converted into a roll angle blur around an imaging optical axis indicated by an arrow 15r of FIG. 5. A signal of the angular velocity integration unit 12ra is input to an angle blur correction target value calculation unit 12rb and is gain-adjusted. An angle blur correction target value gain-adjusted by the angle blur correction target value calculation unit 12rb is input to a driving unit 14b. The image sensor 14 is disposed, for example, above a rotator 14a forming a gear in its circumference. By causing the driving unit 14b to rotatably drive the rotator 14a in the direction of an arrow 14r, the image sensor 14 is rotated to correct a roll angle blur. Here, the image sensor 14, the rotator 14a, and the driving unit 14b form a blur correction control unit. As described above, in the present embodiment, the optical system rotation radii 19y and 19x and the first and second angular velocimeters 15pg and 15yg are used to calculate a parallel blur in the y direction (the longitudinal direction of the camera) and the x direction (the transverse direction of the camera) during an exposure period. Accordingly, even when noise occurs in an acceleration system due to vibration of focus driving at the time of exposure or vibration of shutter driving during exposure, there is no influence and parallel blur detection accuracy does not deteriorate. Here, if a rotation radius is merely fixed during the exposure to avoid the influence of noise of the accelerometer during exposure, there is a problem that the parallel blur detection accuracy deteriorates when the rotation radius actually changes during the exposure. The foregoing problem will be described with reference to FIGS. 7A to 9B. FIGS. 7A and 7B are diagrams illustrating examples of a change in a blur amount/correction amount in an image surface occurring due to a parallel blur. It is preferable to perform blur correction to match a waveform of an image surface blur amount. FIG. 7A is a diagram illustrating examples of an actual image surface blur amount waveform 71a and a correction amount waveform 71b. That is, reference numeral 71a denotes an image surface blur amount waveform indicating an actual blur amount of an image on an image surface due to a parallel blur applied to the camera 11, and reference numeral 71b denotes an exemplary correction amount waveform when the blur correction control unit is driven based on a parallel blur correction target value calculated by fixing a rotation radius during an exposure period 73. FIGS. 8A to 8C are diagrams illustrating a change in a rotation radius. FIG. 8A is a diagram illustrating a rotation radius when the rotation radius is fixed during the exposure period 73, and FIG. 8B is a diagram illustrating an exemplary change in the actual rotation radius. The image surface blur amount waveform 71a indicating the actual blur amount in FIG. 7A is obtained by a product of a rotation radius waveform 72a, which is the actual rotation radius illustrated in FIG. 8B, and an actual angle blur applied to the camera 11.
However, as illustrated inFIG.8A, an error between a rotation radius waveform72bwhen the rotation radius is fixed (not updated) during the exposure period73and the actual rotation radius waveform72aillustrated inFIG.8Boccurs in some cases. FIGS.9A and9Bare diagrams illustrating a difference between a waveform of an image surface blur amount and a waveform of a blur correction amount. During the exposure period73, a deviation occurs between the image surface blur amount waveform71aand the correction amount waveform71b, as illustrated inFIG.7A. As a result, as illustrated inFIG.9A, a blur remaining waveform (a correction error)74a, which is the difference between the two waveforms, occurs. Accordingly, in the present embodiment, the rotation radius prediction units12ygand12xgare provided to calculate a predicted rotation radius waveform72cillustrated inFIG.8Cby predicting a rotation radius during exposure. That is, the rotation radius prediction unit12xgor12ygpredicts a rotation radius during exposure based on a change history of the rotation radius output by the rotation radius calculation unit12xfor12yfduring a predetermined time (for example, 1 second) before the start of the exposure and calculates the predicted rotation radius waveform72c. That is, the rotation radius prediction unit predicts a change in a rotation radius based on an output of the rotation radius calculation unit and outputs a rotation radius prediction signal. FIG.7Bis a diagram illustrating a correction amount waveform71cwhen blur correction is performed for the exposure period73based on the predicted rotation radius waveform72c.FIG.8Cis a diagram illustrating a decrease in a difference between the predicted rotation radius waveform72cand the actual rotation radius waveform72a.FIG.9Bis a diagram illustrating a decrease in a correction error74bwhich is a difference between the image surface blur amount waveform71aand the correction amount waveform71cinFIG.7B. That is, the correction error74billustrated inFIG.9Bwhen the predicted rotation radius waveform72cis calculated and the blur correction is performed can be considerably reduced compared to the correction error74aillustrated inFIG.9A. Here, an example of a prediction method for a rotation radius using an adaptive filter will be described as a prediction method for a rotation radius in the rotation radius prediction unit12ygor12xg. FIG.10is a functional block diagram illustrating the rotation radius prediction unit according to the first embodiment. Reference numeral81denotes a prediction unit and reference numeral82denotes an adaptive unit. Reference numeral83denotes a subtractor, reference numeral84adenotes a prediction unit input switching unit, and reference numerals84a1and84a2denote contact points in the prediction unit input switching unit84a. Reference numeral84bdenotes an adaptive operation switch, reference numeral84cdenotes an output switching unit, and reference numerals84c1and84c2denote contact points in the output switching unit84c. Reference numerals84c,84c1, and84c2form a prediction switching unit84. Reference numeral85denotes an input terminal in the rotation radius prediction unit12xgor12yg, to which a signal from the rotation radius calculation unit12xfor12yfis input. Reference numeral86denotes an output terminal from the rotation radius prediction unit12xgor12ygand reference numeral87denotes a unit delay unit.
Reference numerals u(n), y(n), and e(n) respectively denote an input value from the rotation radius calculation unit12xfor12yf, a prediction value, and a prediction error. First, a case in which a signal is input from the rotation radius calculation unit12xfor12yfbefore the start of exposure will be described. In this case, the CPU12switches the prediction unit input switching unit84ato the side of the contact point84a1, turns the adaptive operation switch84bon, and switches the output switching unit84cto the side of the contact point84c1. In the present embodiment, this state is referred to as an adaptive operation or an adaptive operation state. In this case, an input value u(n−1) subjected to unit delay by the unit delay unit87is input to the prediction unit81via the prediction unit input switching unit84a. The prediction unit81outputs a current prediction value y(n) based on a previous input value. That is, in the n-th sample, the prediction unit81generates the current prediction value y(n) based on the previous input value u(n−1), which is earlier by one sample. The subtractor83calculates a difference e(n)=u(n)−y(n) (that is, a prediction error) between the input value u(n) and the prediction value y(n). The adaptive unit82updates the prediction unit81using the prediction error in accordance with a predetermined adaptive algorithm. Since the output switching unit84ccomes into contact with the side of the contact point84c1, the current input value u(n) input to the input terminal85is selected as an output signal and is output to the output terminal86via the output switching unit84c, as it is. In this way, the signal from the input terminal85is output to the output terminal86, as it is, until the exposure starts, and the adaptive unit82performs an adaptive operation of the prediction unit81. Next, an operation after the start of exposure will be described. In this case, the CPU12switches the prediction unit input switching unit84ato the side of the contact point84a2, turns the adaptive operation switch84boff, and switches the output switching unit84cto the side of the contact point84c2. Accordingly, the previous prediction value y(n−1) is fed back to the prediction unit81via the prediction unit input switching unit84a. The prediction unit81outputs the prediction value y(n) based on the previous prediction value input as the feedback. Since the adaptive operation switch84bis turned off, the operations of the adaptive unit82and the subtractor83stop. Then, since the output switching unit84ccomes into contact with the side of the contact point84c2, the prediction value y(n) is selected as an output signal and is output to the output terminal86. In the present embodiment, this state is referred to as a prediction operation or a prediction operation state. In this way, during the exposure, the prediction value generated by the prediction unit81is output to the output terminal86and a prediction operation is performed. In addition to the prediction scheme described with reference toFIG.10, any of various methods based on a rotation radius change history, such as linear prediction or a Kalman filter, can be used to predict a change in the rotation radius during the exposure. In this way, by predicting the rotation radius during the exposure, a change in a rotation radius of the predicted rotation radius waveform72cduring the exposure period73can be made closer to the actual rotation radius waveform72athan a change in the rotation radius of the rotation radius waveform72bduring the exposure period73inFIG.8A.
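The embodiment does not mandate a specific adaptive algorithm, so the following is only a sketch of one possible realization of the adaptive operation and the prediction operation ofFIG.10, written in Python as a normalized LMS linear predictor. The filter order, step size, and all class and variable names are assumptions; linear prediction or a Kalman filter could be substituted as noted above.

import numpy as np

class RadiusPredictor:
    # In the adaptive state the measured radius u(n) is passed through and the
    # coefficients are updated from the prediction error e(n) = u(n) - y(n).
    # In the prediction state the predictor runs on its own previous outputs.
    def __init__(self, order=4, mu=0.5, eps=1e-6):
        self.w = np.zeros(order)      # predictor coefficients (prediction unit 81)
        self.buf = np.zeros(order)    # most recent samples, newest first
        self.mu = mu                  # NLMS step size (adaptive unit 82)
        self.eps = eps
        self.predicting = False       # False: adaptive operation, True: prediction operation

    def start_prediction(self):
        self.predicting = True

    def stop_prediction(self):
        self.predicting = False

    def step(self, u=None):
        y = float(self.w @ self.buf)               # prediction from previous samples
        if self.predicting:
            self.buf = np.r_[y, self.buf[:-1]]     # feed the prediction back (contact 84a2)
            return y                               # output the predicted radius (contact 84c2)
        e = u - y                                  # prediction error e(n)
        norm = float(self.buf @ self.buf) + self.eps
        self.w += self.mu * e * self.buf / norm    # NLMS update of the prediction unit
        self.buf = np.r_[u, self.buf[:-1]]         # next step predicts from measured samples
        return u                                   # pass the measured radius through (contact 84c1)

In such a sketch, step(u) would be called with each newly calculated radius while the adaptive operation continues before exposure, start_prediction() would be called at the start of exposure, and step() would then be called without a measurement to obtain the predicted radius.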
As described above, the correction amount waveform71cinFIG.7Bis obtained by a product of a signal of the rotation radius prediction unit12xgor12ygand a signal of an angular velocity integration unit12xaor12ya. In the blur remaining waveform (correction error)74binFIG.9B, which is a difference between the image surface blur amount waveform71aand the correction amount waveform71c, the blur remainder can be made smaller than in the blur remaining waveform74ainFIG.9A. As illustrated inFIG.8B, the actual rotation radius waveform72ahas a lower frequency than the image surface blur amount waveform71a. Therefore, in the present embodiment, a correction waveform during exposure is not predicted from a history of the parallel blur waveform or the parallel blur correction amount, but the correction waveform is calculated based on a prediction result of the rotation radius waveform72b. Thus, there is an effect that the stable and highly accurate correction amount waveform71ccan be acquired. As described above, noise occurring in an accelerometer due to vibration of focus driving caused in an exposure operation and vibration of shutter driving during exposure may deteriorate detection of a parallel blur. To prevent this, in the present embodiment, a predicted rotation radius is used during the exposure. Further, in the present embodiment, when vibration does not occur even during the exposure period, the rotation radius obtained by calculation is used, and the rotation radius is predicted only while vibration occurs during the exposure period. At this time, in the present embodiment, when the output switching unit84cis switched to the side of the contact point84c1from a state in which the prediction value generated by the prediction unit81is output to the output terminal86and the predicted rotation radius is supplied, the rotation radius does not change considerably. That is, the rotation radius is smoothly connected, and thus stable blur correction can be performed. That is, when an exposure period75is longer than the exposure period73and a rotation radius obtained in real-time calculation is used after the exposure period73inFIG.8C, blur correction is performed so that a gap76does not occur at the end of the exposure period73in the predicted rotation radius waveform72c. Therefore, blur correction is continuously connected from the predicted waveform to the calculated waveform by using a dotted waveform77obtained by offsetting the value of the subsequently continuing calculated rotation radius in accordance with the value of the predicted rotation radius at the end of the exposure period73. When the waveform is switched, to make a smooth change from the predicted waveform to the calculated waveform, the waveform may be passed through a lowpass filter. FIG.11is a flowchart illustrating image stabilization control of the camera according to the first embodiment. The CPU12serving as a computer executes a computer program stored in a memory to perform an operation of each step of the flowchart ofFIG.11. The flow ofFIG.11starts when the camera11is powered on. In step S901, as described with reference toFIGS.2and4, the rotation radius calculation unit12xfor12yfcalculates a rotation radius of a parallel blur. Angle blur correction also starts.
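Before continuing with the flowchart ofFIG.11, the waveform handover described above can be sketched as follows (non-limiting; the lowpass coefficient and the names are assumptions for illustration): the subsequently continuing calculated rotation radius is offset so that it continues from the value the prediction ended on, and the joined signal may optionally be smoothed by a simple first-order lowpass filter.

import numpy as np

def connect_predicted_to_calculated(predicted_end_value, calculated_after, lpf_alpha=None):
    # Offset the calculated radius so it starts at the last predicted value (dotted waveform 77),
    # which removes the gap 76 at the end of the exposure period.
    calculated_after = np.asarray(calculated_after, dtype=float)
    offset = predicted_end_value - calculated_after[0]
    joined = calculated_after + offset
    if lpf_alpha is not None:                    # optional lowpass at the changeover
        out = np.empty_like(joined)
        state = float(predicted_end_value)
        for i, v in enumerate(joined):
            state += lpf_alpha * (v - state)     # y[n] = y[n-1] + a * (x[n] - y[n-1])
            out[i] = state
        return out
    return joined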
In step S902, the rotation radius prediction unit12xgor12ygstarts an adaptive operation to prepare for rotation radius prediction by using the adaptive unit82based on subsequently input rotation radius information. That is, in the adaptive operation, the prediction unit input switching unit84ainFIG.10is switched to the side of the contact point84a1, the adaptive operation switch84bis turned on, and the output switching unit84cis switched to the side of the contact point84c1. In step S903, it is determined whether an exposure manipulation is performed by the photographer. In the case of No, the process returns to step S901to continue the rotation radius calculation and the angle blur correction. By repeating a loop from steps S901to S904here, it is possible to improve adaptive accuracy in the rotation radius prediction unit12xgor12yg. When it is determined in step S903that the exposure starts, the process proceeds to step S904. In step S904, it is determined whether consecutive imaging is performed. When the consecutive imaging is performed, the process proceeds to step S908and the prediction of the rotation radius is not performed. This is because prediction accuracy of the rotation radius is gradually lowered since repeated imaging is performed for a long time at the time of consecutive imaging. When the consecutive imaging is not performed, the process proceeds to step S905. In step S905, the rotation radius prediction unit12xgor12ygstarts outputting the rotation radius predicted by switching from the adaptive operation using the adaptive unit82to the prediction operation. That is, in the prediction operation, the prediction unit input switching unit84ainFIG.10is switched to the side of the contact point84a2, the adaptive operation switch84bis turned off, and the output switching unit84cis switched to the side of the contact point84c2to start outputting the predicted rotation radius. In step S906, in addition to the angle blur correction, parallel blur correction based on the predicted rotation radius starts. In step S907, when the predicted rotation radius is considerably different from the rotation radius calculated in the adaptive operation before the exposure (for example, in the case of 1.5 times), it is determined that the prediction has failed, the process proceeds to step S908, the prediction of the rotation radius is stopped, and the rotation radius is fixed to a preset rotational center. In the case of No in step S907, the process proceeds to step S909. In step S909, a predetermined time is awaited. The predetermined time is a preset time (for example, 0.2 seconds) necessary until disturbance vibration caused in the exposure operation dies down. The prediction of the rotation radius continues during the predetermined time. When the predetermined time has passed, the process proceeds to step S910. Here, step S909functions as a determination unit that determines whether the disturbance vibration occurs during a period equal to or greater than a predetermined value based on whether the predetermined time has passed. In addition to the method of determining an elapsed time until disturbance vibration dies down as in step S909, the process may proceed to step S910when a vibration sensor is provided and disturbance vibration is reduced. In step S910, the prediction switching unit84switches the operation from the prediction operation to an adaptive operation of outputting an output of the rotation radius calculation unit12xfor12yfto the multiplication unit12xhor12yh.
That is, the prediction unit input switching unit84ainFIG.10is switched to the side of the contact point84a1, the adaptive operation switch84bis turned on, and the output switching unit84cis switched to the side of the contact point84c1. At this time, as described with reference toFIG.8C, a process of keeping continuity of the rotation radius is performed. That is, the blur correction control unit operates to decrease a difference between the correction of the parallel blur which is based on the rotation radius prediction signal and the output of the angle blur detection unit and the correction of the parallel blur which is based on the output of the rotation radius calculation unit and the output of the angle blur detection unit. In step S911, it is determined whether the exposure is completed. The process returns to step S904to continue the blur correction until the exposure is completed. When the exposure is completed, the process returns to step S901. In the present embodiment, the blur correction control unit13ais configured by operating the blur correction lens13cin the direction of the arrow13xor13y, but the blur correction control unit may be configured by operating the image sensor14in the direction of the arrow13xor13yusing the driving unit14b. Alternatively, the blur correction control unit may be configured by shifting the reading region of the image sensor14in the direction of the arrow13xor13y. Alternatively, the blur correction control unit may be configured by temporarily storing an image signal read from the image sensor14in a memory and shifting the reading region from the memory in the direction of the arrow13xor13y. In this way, the blur correction control unit includes a reading region changing unit that changes a reading region of an image acquired by the imaging unit. The blur correction control unit may use any method as long as the correction of a parallel blur is controlled based on a rotation radius prediction signal of the rotation radius prediction unit and an output of the angle blur detection unit. In this way, in the present embodiment, a parallel blur is corrected based on the output of the prediction unit81and the output of the angle blur detection unit for an exposure period in which disturbance vibration easily occurs, and when the disturbance vibration dies down, the parallel blur is corrected based on the output of the rotation radius calculation unit and the output of the angle blur detection unit. That is, for a period in which the disturbance vibration is equal to or greater than a predetermined value, the blur correction control unit corrects the parallel blur based on a rotation radius prediction signal and the output of the angle blur detection unit. Conversely, for a period in which the disturbance vibration is less than the predetermined value, the parallel blur is corrected based on the output of the rotation radius calculation unit and the output of the angle blur detection unit. In this way, in the present embodiment, highly accurate blur prediction can be achieved by predicting a parallel blur for the period in which disturbance vibration easily occurs using a change history of the rotational center. A deterioration in an image can be reduced by performing the blur correction with higher accuracy.
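The selection rule summarized above can be reduced, purely for illustration, to the following Python fragment; the scalar disturbance measure, the threshold, and the gain are assumed placeholders rather than quantities defined in the embodiment.

def parallel_blur_correction_amount(angle, radius_calc, radius_pred,
                                    disturbance_level, threshold, image_gain):
    # While disturbance vibration is at or above the threshold (for example during
    # shutter and focus drive in exposure), the predicted rotation radius is used;
    # otherwise the radius from the real-time calculation is used.
    radius = radius_pred if disturbance_level >= threshold else radius_calc
    return image_gain * radius * angle   # gain-adjusted parallel blur correction target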
Second Embodiment A configuration of a camera according to a second embodiment will be described with reference toFIGS.12to14.FIG.12is a functional block diagram illustrating main units of an image stabilization control device in a longitudinal direction of the camera according to the second embodiment of the present invention and is a diagram in which a part of the functional block diagram ofFIG.2is changed. That is, the functional block diagram ofFIG.12differs from the functional block diagram ofFIG.2in that a rotation radius is obtained at each frequency via a plurality of bandpass filters with different frequencies and an adaptive operation for each rotation radius is in progress in parallel. Further, the functional block diagram differs in that a parallel blur correction target value in the longitudinal direction is obtained based on one rotation radius at each frequency. Differences betweenFIGS.12and2will be described. An output signal of the first accelerometer16yais subjected to two-step integration by the acceleration integration unit12ycto be converted into a displacement amount. A first displacement bandpass filter12yd1extracts only a component of a first frequency (for example, 0.5 Hz) from an output of the acceleration integration unit12yc. Similarly, a first angle bandpass filter12pe1also extracts only a component of the first frequency (for example, 0.5 Hz) from a signal of the first angular velocimeter15pgconverted into an angle by the angular velocity integration unit12pa. A first rotation radius calculation unit12yf1calculates a first angular velocimeter rotation radius17y1in the longitudinal direction based on an output of the first displacement bandpass filter12yd1and an output of the first angle bandpass filter12pe1and further obtains an optical system rotation radius19y1in the longitudinal direction. A first gain rotation radius prediction unit12yg1predicts a first optical system rotation radius in the longitudinal direction during exposure based on a change history of the first optical system rotation radius in the longitudinal direction output by the first rotation radius calculation unit12yf1(for example, 2 seconds before). From an output signal of the acceleration integration unit12yc, only a component of a second frequency (for example, 2 Hz) different from the first frequency is extracted by a second displacement bandpass filter12yd2. Similarly, from an output signal of the angular velocity integration unit12pa, only a component of the second frequency (for example, 2 Hz) different from the first frequency is also extracted by a second angle bandpass filter12pe2. A second rotation radius calculation unit12yf2calculates a second angular velocimeter rotation radius17y2in the longitudinal direction based on an output of the second displacement bandpass filter12yd2and an output of the second angle bandpass filter12pe2and further obtains an optical system rotation radius19y2in the longitudinal direction. A second gain rotation radius prediction unit12yg2predicts a second optical system rotation radius in the longitudinal direction during exposure based on a change history of the second optical system rotation radius in the longitudinal direction output by the rotation radius calculation unit12yf2(for example, 0.5 seconds before). In this way, in the present embodiment, a plurality of bandpass filters extracting different frequency components from an output of the angle blur detection unit and an output of the parallel blur detection unit are provided. 
The plurality of rotation radius prediction units predict a rotation radius of an angle blur based on the plurality of bandpass filter signals. For the output signals of the first gain rotation radius prediction unit12yg1and the second gain rotation radius prediction unit12yg2, one optical system rotation radius is selected by the rotation radius selection unit12yjand is output to the multiplication unit12yh. A selection standard for the rotation radius selection unit12yjwill be described. A signal determination unit12ykcompares an output of the first displacement bandpass filter12yd1input to the first rotation radius calculation unit12yf1with an output of the second displacement bandpass filter12yd2input to the first rotation radius calculation unit12yf1and transmits a comparison result to the rotation radius selection unit12yj. Based on the comparison result, the rotation radius selection unit12yjselects an optical system rotation radius calculated based on the displacement bandpass filter outputting a relatively large signal. Specifically, when the output of the second displacement bandpass filter12yd2is greater than the output of the first displacement bandpass filter12yd1, it is determined that a parallel blur of 2 Hz is characteristic and a signal of the second gain rotation radius prediction unit12yg2is selected and transmitted to the multiplication unit12yh. Alternatively, the signal determination unit12ykcompares an output of the first angle bandpass filter12pe1input to the first rotation radius calculation unit12yf1with an output of the second angle bandpass filter12pe2and transmits a comparison result to the rotation radius selection unit12yj. The rotation radius selection unit12yjselects an optical system rotation radius calculated based on the angle bandpass filter outputting a relatively large signal based on the comparison result. Specifically, when the output of the second angle bandpass filter12pe2is greater than the output of the first angle bandpass filter12pe1, it is determined that the parallel blur of 2 Hz is characteristic and a signal of the second gain rotation radius prediction unit12yg2is selected and transmitted to the multiplication unit12yh. In this way, in the present embodiment, the rotation radius selection unit selects an output which is used to correct the parallel blur from the outputs of the plurality of rotation radius prediction units. The first gain rotation radius prediction unit12yg1and the second gain rotation radius prediction unit12yg2perform a prediction operation using the prediction method described with reference toFIG.10. The adaptive operation is simultaneously started from a time at which the camera is powered on. In the present embodiment, since the first gain rotation radius prediction unit12yg1and the second gain rotation radius prediction unit12yg2perform the adaptive operation which is calculation for prediction in parallel, measures can be taken instantaneously even when one prediction value is obtained from the rotation radius selection unit12yj. Here, the first gain rotation radius prediction unit12yg1multiplies the optical system rotation radius obtained in the prediction operation by gain 1. On the other hand, the second gain rotation radius prediction unit12yg2multiplies the optical system rotation radius obtained in the prediction operation by gain 0.7. 
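A non-limiting sketch of this band selection and the per-band output gains in Python is shown below; the RMS measure of "a relatively large signal" and the function names are assumptions, while the example gains of 1 and 0.7 follow the description above.

import numpy as np

def select_predicted_radius(bp_low_out, bp_high_out, radius_pred_low, radius_pred_high,
                            gain_low=1.0, gain_high=0.7):
    # Compare the strengths of the two bandpass outputs (signal determination unit) and
    # return the predicted optical system rotation radius of the dominant band, multiplied
    # by that band's output gain; the lower gain on the higher band reduces the influence
    # of disturbance noise.
    e_low = float(np.sqrt(np.mean(np.square(bp_low_out))))     # ~0.5 Hz component strength
    e_high = float(np.sqrt(np.mean(np.square(bp_high_out))))   # ~2 Hz component strength
    if e_high > e_low:
        return gain_high * radius_pred_high
    return gain_low * radius_pred_low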
That is, in the present embodiment, to avoid an influence of disturbance noise, the second gain rotation radius prediction unit12yg2sets a gain to be less than that of the first gain rotation radius prediction unit12yg1and reduces deterioration in the blur correction. This is because disturbance noise is easily mixed since extraction frequencies of the second angle bandpass filter12pe2and the second displacement bandpass filter12yd2are higher than extraction frequencies of the first angle bandpass filter12pe1and the first displacement bandpass filter12yd1. In this way, in the present embodiment, there are characteristics that the output gains of the plurality of rotation radius prediction units are different at each frequency. A signal of the angular velocity integration unit12pamultiplied by the multiplication unit12yhis input to the parallel blur correction target value calculation unit12yiand is gain-adjusted in accordance with characteristics of the imaging optical system an image magnification. The parallel blur correction target value gain-adjusted by the parallel blur correction target value calculation unit12yiis input to the driving unit13b. The blur correction lens13cis driven in the direction of the arrow13yby the driving unit13band the correction of the parallel blur in the longitudinal direction of the camera is performed in addition to the above-described correction of the angle blur. FIG.13is a functional block diagram illustrating main units of the image stabilization control device on an optical axis of the camera according to the second embodiment of the present invention and illustrates a modification example of the functional block diagram ofFIG.6corresponding to the front view of the camera ofFIG.5. That is, in the functional block diagram ofFIG.13, there are characteristics that a rotation radius is obtained at each frequency via the plurality of bandpass filters with different frequencies and an operation is in progress in parallel to an adaptive operation of each rotation radius. Further, there are characteristics that the parallel blur correction target value inFIG.5is obtained based on one rotation radius of each frequency. InFIG.13, an optical system rotation radius19rillustrated inFIG.5is calculated. The camera11has all the configurations ofFIGS.13and12. InFIG.13, an output signal of the second accelerometer16xais subjected to two-step integration by the acceleration integration unit12xcto be converted into a displacement amount. A first displacement bandpass filter12xd1extracts only a component of the first frequency (for example, 0.5 Hz) from an output of the acceleration integration unit12xc. Similarly, a signal of the third angular velocimeter15rgis converted into an angle by the angular velocity integration unit12raand only a component of the first frequency (for example, 0.5 Hz) is extracted by a first angle bandpass filter12re1. A first rotation radius calculation unit12xf1calculates an angular velocimeter rotation radius17r1based on an output of the first angle bandpass filter12re1and an output of the first displacement bandpass filter12xd1. Further, an optical system rotation radius19r1is obtained based on a distance18rbetween the third angular velocimeter15rgand the optical axis and an angular velocimeter rotation radius17r1. 
A first gain rotation radius prediction unit12xg1predicts an optical system rotation radius during exposure based on a change history of the optical system rotation radius output by the first rotation radius calculation unit12xf1(for example, 2 seconds before). From an output signal of the acceleration integration unit12xc, only a component of a second frequency (for example, 2 Hz) different from the first frequency is extracted by a second displacement bandpass filter12xd2. Similarly, from an output signal of the angular velocity integration unit12ra, only a component of the second frequency (for example, 2 Hz) different from the first frequency is extracted by a second angle bandpass filter12re2. A second rotation radius calculation unit12xf2calculates an angular velocimeter rotation radius17r2based on an output of the second displacement bandpass filter12xd2and an output of the second angle bandpass filter12re2. Further, an optical system rotation radius19r2is obtained based on a distance18rbetween the third angular velocimeter15rgand the optical axis and the angular velocimeter rotation radius17r2. A second gain rotation radius prediction unit12xg2predicts an optical system rotation radius during exposure based on a change history of the optical system rotation radius output by the second rotation radius calculation unit12xf2(for example, 0.5 seconds before). For the output signals of the first gain rotation radius prediction unit12xg1and the second gain rotation radius prediction unit12xg2, one rotation radius is selected by the rotation radius selection unit12xjand is output to the multiplication unit12xh. A signal determination unit12xkcompares an output of the first displacement bandpass filter12xd1input to the first rotation radius calculation unit12xf1with an output of the second displacement bandpass filter12xd2and transmits a comparison result to the rotation radius selection unit12xj. Based on the comparison result, the rotation radius selection unit12xjselects an optical system rotation radius calculated based on the displacement bandpass filter outputting a relatively large signal. Specifically, when the output of the second displacement bandpass filter12xd2is greater than the output of the first displacement bandpass filter12xd1, it is determined that a parallel blur of 2 Hz is characteristic and a signal of the second gain rotation radius prediction unit12xg2is selected and transmitted to the multiplication unit12xh. Alternatively, the signal determination unit12xkcompares an output of the first angle bandpass filter12re1input to the first rotation radius calculation unit12xf1with an output of the second angle bandpass filter12re2and transmits a comparison result to the rotation radius selection unit12xj. The rotation radius selection unit12xjselects an optical system rotation radius calculated based on the angle bandpass filter outputting a relatively large signal based on the comparison result. Specifically, when the output of the second angle bandpass filter12re2is greater than the output of the first angle bandpass filter12re1, it is determined that the parallel blur of 2 Hz is characteristic and a signal of the second gain rotation radius prediction unit12xg2is selected and transmitted to the multiplication unit12xh.
The first gain rotation radius prediction unit12xg1and the second gain rotation radius prediction unit12xg2perform a prediction operation using the prediction method described with reference toFIG.10. The adaptive operation is simultaneously started from a time at which the camera is powered on. In the present embodiment, since the first gain rotation radius prediction unit12xg1and the second gain rotation radius prediction unit12xg2perform the adaptive operation which is calculation for prediction in parallel, a measure can be taken even when one prediction value is obtained from the rotation radius selection unit12xj. Here, the first gain rotation radius prediction unit12xg1multiplies the rotation radius obtained in the prediction operation by, for example, gain 1. On the other hand, the second gain rotation radius prediction unit12xg2multiplies the rotation radius obtained in the prediction operation by, for example, gain 0.7. That is, in the present embodiment, to avoid an influence of disturbance noise, the second gain rotation radius prediction unit12xg2sets a gain to be less than that of the first gain rotation radius prediction unit12xg1and reduces deterioration in the blur correction. This is because disturbance noise is easily mixed since extraction frequencies of the second angle bandpass filter12re2and the second displacement bandpass filter12xd2are higher than extraction frequencies of the first angle bandpass filter12re1and the first displacement bandpass filter12xd1. The multiplication unit12xhmultiplies the optical system rotation radius19rand an integration angle of an output signal of the third angular velocimeter15rgoutput from the angular velocity integration unit12ra. An output of the multiplication unit12xhis input to the parallel blur correction target value calculation unit12xiand is gain-adjusted in accordance with characteristics or an image magnification of the imaging optical system. A parallel blur correction target value gain-adjusted by the parallel blur correction target value calculation unit12xiis input to the driving unit13b. The blur correction lens13cis driven in the direction of the arrow16xby the driving unit13band corrects the parallel blur in the transverse direction of the camera. In this way, in the first embodiment, the integration angle of the second angular velocimeter15yginFIG.3is multiplied by the optical system rotation radius19xto obtain the parallel blur16x. However, in a second embodiment, the third angular velocimeter15rgand the rotation radius19rare used. InFIG.13, the angle blur correction target value is calculated using the angle blur target value calculation unit12pbby integrating an output of the second angular velocimeter15ygin the angular velocity integration unit12ya. However, outputs of the first angular velocimeter15pgand a third angular velocimeter15rgmay be each integrated by the angular velocity integration unit, each angle blur correction target value may be calculated using the angle blur target value calculation unit, and the angle blur may be corrected in accordance with the angle blur correction target value. FIG.14is a flowchart illustrating image stabilization control of the camera according to the second embodiment. Since steps with the same reference numerals as those of the flowchart ofFIG.11are similar processes, description thereof will be omitted. The CPU12serving as a computer executes a computer program stored in a memory to perform an operation of each step of the flowchart ofFIG.14. 
In step S1201, the rotation radius selection unit12xjselects one of a predicted optical system rotation radius from the first gain rotation radius prediction unit12xg1and a predicted rotation radius from the second gain rotation radius prediction unit12xg2based on an output of the signal determination unit12xk. Similarly, the rotation radius selection unit12yjselects one of a predicted optical system rotation radius from the first gain rotation radius prediction unit12yg1and a predicted rotation radius from the second gain rotation radius prediction unit12yg2based on an output of the signal determination unit12yk. In step S1202, the first and second accelerometers16yaand16xadetect disturbance vibration applied to the camera. Until the disturbance vibration dies down, this step is repeated to continue the prediction operation. Third Embodiment A configuration of a camera according to a third embodiment will be described with reference toFIGS.15to19.FIG.15is a side view illustrating a camera according to a third embodiment of the present invention. A difference fromFIG.1is that a motion vector14yin the direction of the arrow14ydetected from the image sensor14instead of the first accelerometer16yais input to the CPU12. Although not illustrated, in the present embodiment, a motion vector detection unit that detects a motion vector of an image acquired from the image sensor14is included. The motion vector detection unit functions as a parallel blur detection unit. FIG.16is a functional block diagram illustrating main units of an image stabilization control device in a longitudinal direction of the camera inFIG.15. A difference fromFIG.2is that a displacement acquired from the motion vector14yis input to the displacement bandpass filter12yd. FIG.17is a front view illustrating the camera according to the third embodiment. A difference fromFIG.5is that a second motion vector14xin the direction of the arrow14xdetected from the image sensor14is input to the CPU12instead of using the second accelerometer16xa. FIG.18is a functional block diagram illustrating the main units of the image stabilization control device in an optical axis direction of the camera inFIG.17. A difference fromFIG.4is that a displacement acquired from the second motion vector14xin the x direction is input to the displacement bandpass filter12xdinFIG.17. As in the second embodiment, the rotation radius calculation unit12xfobtains the optical system rotation radius19rusing the third angular velocimeter15rg. In this way, an accelerometer is not used as the parallel blur detection unit, and an optical system rotation radius is obtained from a ratio between a motion vector obtained from the image sensor14and an integration value of an angular velocity obtained from an angular velocimeter which is an angle detection unit. The rotation radius prediction unit outputs a rotation radius before exposure in the adaptive operation and outputs a predicted rotation radius in the prediction operation at the time of exposure to correct a parallel blur. Further, in the third embodiment, a reliability determination unit that determines reliability of a predicted rotation radius is included. The blur correction control unit changes the predicted rotation radius of the rotation radius prediction unit based on determination of the reliability determination unit. FIG.19is a functional block diagram illustrating a reliability determination unit according to the third embodiment.
InFIG.19, a reliability determination unit1701determines reliability of a predicted rotation radius of the rotation radius prediction unit12xgor12ygbased on comparison of predicted rotation radii of blurs in a plurality of different directions. In the reliability determination unit1701, signals of the rotation radius prediction units12xgand12ygare input to a predicted value comparison unit1701a. The predicted value comparison unit1701acalculates a ratio between the optical system rotation radii which are predicted values of the rotation radius prediction units12xgand12yg. When a change in the ratio increases (when a derivative value of the ratio is greater than a predetermined threshold), it is determined that the reliability is lowered, and predicted value fixed signals are supplied to prediction switching units1702bxand1702by. That is, the reliability determination unit determines the reliability of the predicted rotation radius based on the change in the predicted rotation radii of the rotation radius prediction units. The prediction switching units1702bxand1702bysupply output signals from the rotation radius calculation units to the multiplication units12xhand12yhwhen the rotation radius prediction units12xgand12ygdo not output predicted values (when the adaptive operation is performed). When the rotation radius prediction units12xgand12ygoutput the predicted values (when the prediction operation is performed) and a predicted value fixed signal is not supplied from the predicted value comparison unit1701a, predicted values from the rotation radius prediction units12xgand12ygare output to the multiplication units12xhand12yh. Conversely, when the rotation radius prediction units12xgand12ygoutput the predicted values (when the prediction operation is performed) and a predicted value fixed signal is output from the predicted value comparison unit1701a, the predicted rotation radius at that time point is output as a fixed value to the multiplication units12xhand12yh. That is, the blur correction control unit fixes the predicted rotation radius of the rotation radius prediction unit to a predetermined value based on the reliability of the predicted rotation radius of the rotation radius prediction unit. In this way, the reliability determination unit1701determines the reliability of the optical system rotation radius predicted based on the change in the ratio between the two optical system rotation radii19xand19y. When a change in the rotation radius occurs, the change is substantially the same in any direction, and therefore a rotation radius with a small change in the ratio is used. InFIG.19, the reliability is determined based on the optical system rotation radii19xand19y, but the reliability may be determined based on comparison between the optical system rotation radii19xand19ror comparison between the optical system rotation radii19yand19r. Alternatively, when a change ratio of each of the optical system rotation radii19x,19y, and19rexceeds a predetermined threshold, it may be determined that the reliability is low and the predicted rotation radius may be switched to a fixed value. Each unit according to the foregoing embodiments may include a discrete electronic circuit, or some or all of the units may be configured with a processor such as an FPGA or a CPU, or with a computer program. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments.
The scope of the following claims is to be accorded the broadest interpretation to encompass all such modifications and equivalent structures and functions. In addition, as a part or the whole of the control according to the embodiments, a computer program realizing the function of the embodiments described above may be supplied to the image stabilization control device or the like through a network or various storage media. Then, a computer (or a CPU, an MPU, or the like) of the image stabilization control device or the like may be configured to read and execute the program. In such a case, the program and the storage medium storing the program configure the present invention. This application claims the benefit of Japanese Patent Application No. 2021-133021 filed on Aug. 17, 2021, which is hereby incorporated by reference herein in its entirety.
59,492
11943536
DESCRIPTION OF THE EMBODIMENTS Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings. Throughout the accompanying drawings, elements assigned the same reference numerals represent identical or similar elements. The technical scope of the present invention is determined by claims, and is not limited to the following individual exemplary embodiments. Not all of combinations of features described in the exemplary embodiments are indispensable to the present invention. Features described in separate exemplary embodiments can be suitably combined. In the following exemplary embodiments, a vibration applied to an imaging apparatus is referred to as a “shake”, and an influence on a captured image caused by the shake applied to the imaging apparatus is referred to as an “image blur”. FIG.1is a block diagram illustrating a configuration of an imaging apparatus100including an image blur correction apparatus. The imaging apparatus100is a lens-interchangeable digital camera capable of capturing still images and moving images. However, the first exemplary embodiment is not limited to application to the lens-interchangeable digital camera. The present exemplary embodiment is applicable to various types of imaging apparatuses. The imaging apparatus100is a system including an interchangeable lens100aand a camera body100b. The interchangeable lens100ais configured to be attachable to the camera body100b. The imaging apparatus100is used with the interchangeable lens100aattached to the camera body100b. A zoom unit101of the interchangeable lens100aincludes a zoom lens for changing magnification. A zoom drive control unit102drives and controls the zoom unit101. A diaphragm unit103has a function of a diaphragm. A diaphragm drive control unit104drives and controls the diaphragm unit103. A lens-type image blur correction unit105includes an image blur correction lens (hereinafter referred to as a “correction lens” or “Optical Image Stabilizer (OIS)”) such as a shift lens. The image blur correction lens is movable in a direction perpendicular to the optical axis of the imaging apparatus100. A lens-type image blur correction control unit106drives and controls the lens-type image blur correction unit105. A focus unit107includes a focusing lens that performs focus adjustment and forms a subject image. A focus drive control unit108drives and controls the focus unit107. A lens operation unit109is used by a user to operate the interchangeable lens100a. A lens shake detection unit110detects the amount of shake applied to (occurring in) the imaging apparatus100or the interchangeable lens100a, and outputs a detection signal to a lens system control unit111. The lens system control unit111including a central processing unit (CPU) totally controls the drive control units and the correction control units of the interchangeable lens100ato control the entire interchangeable lens100a. The lens system control unit111communicates with a camera communication control unit127in the camera body100bvia a lens communication control unit112. More specifically, in a state where the interchangeable lens100ais attached to and electrically connected with the camera body100b, the interchangeable lens100aand the camera body100bcommunicate with each other via the lens communication control unit112and the camera communication control unit127. The camera body100bwill be described below. The camera body100bincludes a shutter unit113. 
A shutter drive control unit114drives and controls the shutter unit113. An imaging unit115including an image sensor photoelectrically converts an optical image that has passed through each lens group into an electrical signal, and outputs the electrical signal. The image sensor of the imaging unit115is movable in a direction perpendicular to the optical axis of the imaging apparatus100. An imaging plane image blur correction unit117includes an imaging plane image blur correction unit (hereinafter referred to as an “imaging plane correction unit” or “In-Body Image Stabilization (IBIS)”) that moves the image sensor of the imaging unit115to correct an image blur. An imaging plane image blur correction control unit116drives and controls the imaging plane image blur correction unit117. An imaging signal processing unit118converts the electrical signal output from the imaging unit115into a video signal. A video signal processing unit119processes the video signal output from the imaging signal processing unit118depending on usage. For example, the video signal processing unit119changes a clipping position of the video signal based on a correction amount of an electronic image blur correction control unit125. The electronic image blur correction control unit125controls the image blur correction through image clipping. A display unit120displays an image based on a signal output from the video signal processing unit119as required. A recording unit121stores various pieces of data such as video information. A power source unit122supplies power to the entire imaging apparatus100depending on usage. A camera operation unit123, which is an operation unit used by the user to operate the camera body100b, outputs an operation signal to a camera system control unit126. A camera shake detection unit124detects the amount of shake applied to (occurring in) the imaging apparatus100or the camera body100b, and outputs a detection signal to the camera system control unit126. The camera system control unit126including a CPU totally controls the entire camera body100b. The camera system control unit126communicates with the lens communication control unit112in the interchangeable lens100avia the camera communication control unit127. More specifically, in a state where the interchangeable lens100ais attached to and electrically connected with the camera body100b, the interchangeable lens100aand the camera body100bcommunicate with each other via the lens communication control unit112and the camera communication control unit127. An overall operation of the imaging apparatus100will be described below. The lens operation unit109and the camera operation unit123each include an image blur correction switch that turns image blur correction on or off. When the user operates the image blur correction switch to turn the image blur correction on, the lens system control unit111and the camera system control unit126each instruct the lens-type image blur correction control unit106, the imaging plane image blur correction control unit116, and the electronic image blur correction control unit125to perform an image blur correction operation. Until an instruction for turning the image blur correction off is issued, each of the image blur correction control units controls the image blur correction. The camera operation unit123includes an image blur correction mode switch for selecting between a first mode and a second mode for image blur correction. 
The first mode is a mode in which image blur correction is performed using a combination of optical image blur correction and imaging plane image blur correction. The second mode is a mode in which image blur correction is performed using optical image blur correction, imaging plane image blur correction, and electronic image blur correction together. When the first mode is selected, performing image blur correction with a collaboration of the optical image blur correction and the imaging plane image blur correction enables implementing a wider correction angle, making it possible to correct a large shake. A reading position of the imaging unit115becomes constant, and image capturing with a wider angle is supported by expanding a reading range. When the second mode is selected, a clipping range of the video signal by the video signal processing unit119decreases, but a larger shake can be dealt with by changing the clipping position based on the image blur correction amount. The camera operation unit123includes a shutter release button configured so that a first switch (SW1) and a second switch (SW2) sequentially turn on depending on an amount of pressing. When the user presses the shutter release button halfway down, SW1turns on. When the user presses the shutter release button all the way down, SW2turns on. When SW1turns on, the focus drive control unit108drives the focus unit107to perform focus adjustment, and the diaphragm drive control unit104drives the diaphragm unit103to set a suitable exposure amount. When SW2turns on, image data obtained from an optical image exposed by the imaging unit115is stored in the recording unit121. The camera operation unit123includes a moving image recording switch. After the user presses the moving image recording switch, the imaging apparatus100starts moving image capturing. When the user presses the moving image recording switch again during recording, the recording operation ends. When the user operates the shutter release button during moving image capturing to turn SW1and SW2on, the imaging apparatus100performs processing for acquiring and recording a static image during moving image recording. The camera operation unit123includes a playback mode selection switch for selecting the playback mode. When the playback mode is selected by operating the playback mode selection switch, the imaging apparatus100stops the image blur correction operation. One of the lens-type image blur correction unit105and the imaging plane image blur correction unit117(illustrated inFIG.1) having a higher image blur correction performance functions as a first image blur correction unit (hereinafter referred to as a first optical image stabilization unit), and the other thereof functions as a second image blur correction unit (hereinafter referred to as a second optical image stabilization unit). In the present exemplary embodiment, examples of both of the units will be described below, and which of the two units is assumed will be suitably described in the descriptions. Image blur correction control performed by the lens system control unit111and the camera system control unit126will be described below with reference toFIG.2.FIG.2is a block diagram illustrating the image blur correction control performed by driving the lens-type image blur correction unit105and the imaging plane image blur correction unit117based on information about a shake applied to the imaging apparatus100. 
InFIG.2, the lens shake detection unit110includes an angular velocity sensor201and an analog-to-digital (A/D) converter202. The lens system control unit111implements an image blur correction amount calculation unit203, a lens storage unit911, and a first correction amount division unit921. The lens-type image blur correction control unit106includes a drive amount conversion unit207, a subtracter208, a control filter209, an OIS drive unit210, and a position sensor212. The camera shake detection unit124includes an angular velocity sensor901and an A/D converter902. The camera system control unit126implements a camera storage unit912, a correction division setting unit913, and a second correction amount division unit922. The imaging plane image blur correction control unit116includes a drive amount conversion unit213, a subtracter214, a control filter215, an IBIS drive unit216, and a position sensor218. In the present exemplary embodiment, the imaging apparatus100acquires the correction amount for image blur correction by using the angular velocity sensor201and then drives the lens-type image blur correction unit105. The imaging apparatus100also acquires the correction amount for image blur correction by using the angular velocity sensor901and then drives the imaging plane image blur correction unit117. The imaging apparatus100exchanges information required to determine a division state and information about the division state via the lens communication control unit112and the camera communication control unit127. The angular velocity sensor201detects the angular velocity of the shake applied to the imaging apparatus100and then outputs a voltage corresponding to the angular velocity. The output voltage of the angular velocity sensor201is converted into digital data by the A/D converter202(acquired as angular velocity data) and then supplied to the image blur correction amount calculation unit203. Likewise, the output voltage of the angular velocity sensor901is converted into digital data by the A/D converter902and then supplied to an image blur correction amount calculation unit903. A series of processes from the acquisition of angular velocity data to driving of the image blur correction units105and117is repetitively performed at sufficiently high speed intervals with respect to a shake frequency band of 1 to 20 Hz, for example, at intervals of 1,000 Hz. The image blur correction amount calculation unit203calculates the correction amount for correcting an image blur caused by a shake applied to the imaging apparatus100. Likewise, the image blur correction amount calculation unit903calculates the correction amount for correcting an image blur caused by a shake applied to the imaging apparatus100. The imaging apparatus100includes two different image blur correction units: the lens-type image blur correction unit105and the imaging plane image blur correction unit117. However, the correction amounts calculated by the image blur correction amount calculation units203and903are not the correction amounts for the two image blur correction units but the correction amounts for correcting the image blur of the entire imaging apparatus100. FIG.3is a block diagram illustrating details of the image blur correction amount calculation unit203. One of the image blur correction amount calculation units is described below since the image blur correction amount calculation unit903has the same structure. 
A high-pass filter (HPF)301is used to remove a direct current (DC) component and a low-frequency component of the angular velocity data detected by the A/D converter202. The angular velocity data through the HPF301is subjected to the first-order integration by an integrator303to be converted into angular displacement data. A cutoff frequency of the HPF301is determined by characteristics of the angular velocity sensor201. More specifically, when the angular velocity sensor201has a large drift (fluctuation at low-frequencies, also referred to as a random walk), the cutoff frequency is increased to sufficiently reduce noise. When the angular velocity sensor201has a small drift, a cutoff band is decreased to obtain data close to the complete integral. A smaller drift enables providing a higher image stabilization performance. To prevent saturation, the following integration operation is incomplete integration which is performed by using a generally-known first-order low-pass filter (LPF). The angular displacement data calculated by the integrator303is supplied to a framing control unit305and a limiter304. The limiter304limits the angular displacement data so that the lens-type image blur correction unit105and the imaging plane image blur correction unit117do not reach the end of a movable range. The angular displacement data limited by the limiter304is output to the first correction amount division unit921as the output of the image blur correction amount calculation unit203. The framing control unit305determines whether an operation intended by the user, such as panning or tilting, has been performed, and performs control to return the angular displacement data to the center. In other words, the framing control unit305removes a shake component intended by the user due to framing of the imaging apparatus100from the angular velocity detected by the angular velocity sensor201(the angular displacement data acquired by the A/D converter202). This enables correcting an image blur due to a camera shake while performing the framing intended by the user. More specifically, a predetermined threshold value is provided on the inner side of a control end of the angular displacement data provided on the limiter304. When the angular displacement data output from the integrator303exceeds the threshold value, panning is determined to have been performed. When panning is determined to have been performed, the framing control unit305limits the angular velocity data by increasing the cutoff frequency of the HPF301to remove as many lower frequency components as possible. Alternatively, the framing control unit305may subtract an offset from the angular velocity data input to the integrator303so that the output of the integrator303shifts back to the center. Alternatively, the framing control unit305may increase the cutoff frequency of the LPF calculation performed by the integrator303so that the output of the integrator303shifts back to the center. This enables controlling the lens-type image blur correction unit105and the imaging plane image blur correction unit117to remain within the movable range even if the shake intended by the user, such as panning or tilting, occurs. The above-described configuration of the image blur correction amount calculation unit203is also included in the image blur correction amount calculation unit903. 
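As an illustrative, non-limiting sketch of the path from angular velocity data to angular displacement data described above, the following Python fragment applies a first-order high-pass filter, performs incomplete (leaky) integration in place of complete integration, and limits the result; the cutoff frequencies, the sample rate, and the limit are assumed values and not those of the exemplary embodiment.

import numpy as np

def correction_amount(gyro_dps, fs=1000.0, f_hpf=0.1, f_leak=0.05, limit_deg=1.0):
    # HPF removes the DC component and drift of the angular velocity data; the leak
    # term makes the integration incomplete (equivalent to a first-order LPF) to
    # prevent saturation; the limiter keeps the result inside the movable range.
    dt = 1.0 / fs
    a_hpf = 1.0 / (1.0 + 2.0 * np.pi * f_hpf * dt)     # first-order HPF coefficient
    a_leak = 1.0 / (1.0 + 2.0 * np.pi * f_leak * dt)   # leak factor of the integrator
    hp = 0.0
    prev = 0.0
    angle = 0.0
    out = np.empty(len(gyro_dps))
    for i, w in enumerate(gyro_dps):
        hp = a_hpf * (hp + w - prev)                   # high-pass filtered angular velocity
        prev = w
        angle = a_leak * angle + hp * dt               # leaky (incomplete) integration
        out[i] = np.clip(angle, -limit_deg, limit_deg) # limiter near the movable-range end
    return out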
Referring back toFIG.2, the correction division setting unit913functions as a division ratio determination unit that determines a division ratio of the first and second optical image stabilization units based on information about the image stabilization performance and the movable range stored in the lens storage unit911and the camera storage unit912. The information stored in the lens storage unit911and the camera storage unit912will be described below. The information about the movable range refers to information about the operable range (** μm, ** pulses) of the lens-type image blur correction unit105and the imaging plane image blur correction unit117or information about the operable range converted into what is called an image blur amount of a captured image (** deg). The information about the image stabilization performance is determined by the transfer characteristics ranging from the angular velocity sensors201and901to the lens-type image blur correction unit105and the imaging plane image blur correction unit117. The information can be stored in the form of the transfer characteristics (frequency response) or stored as numeric values acquired by using a certain test method. The image stabilization performance of the interchangeable lens100arefers to the performance in correcting the shake acquired by the angular velocity sensor201by using the lens-type image blur correction unit105. The information is stored in the lens storage unit911. The image stabilization performance of the camera body100brefers to the performance in correcting the shake acquired by the angular velocity sensor901by using the imaging plane image blur correction unit117. The information is stored in the camera storage unit912. Division information determined by the correction division setting unit913is transmitted to the first correction amount division unit921and the second correction amount division unit922. Suitable processing is performed based on the outputs from the image blur correction amount calculation units203and903and the division information. Then, the image blur correction operation is performed. More specifically, the correction division setting unit913transmits the division information to the first correction amount division unit921and the second correction amount division unit922to set the division ratio, thus controlling the image blur correction operation of the lens-type image blur correction unit105and the imaging plane image blur correction unit117. These operations will be described in detail below with reference toFIGS.4A,4B,4C,5A,5B, and5C. In the entire imaging apparatus100, the correction division setting unit913, the first correction amount division unit921, and the second correction amount division unit922constitute a division control unit. One of the lens storage unit911and the camera storage unit912is a first storage unit, and the other thereof is a second storage unit. For example, the first storage unit is the storage unit associated with the image blur correction unit having the higher image stabilization performance. When the lens-type image blur correction unit105included in the interchangeable lens100ahas a higher image stabilization performance, the lens storage unit911serves as the first storage unit. 
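As a rough illustration of the information exchange described above, the stored characteristics and the selection of the "first" unit might be represented as follows. This is a sketch only: the field names and the use of a single scalar performance score are assumptions, since the specification allows the performance to be stored either as a frequency response or as test-derived numeric values.

```python
from dataclasses import dataclass

@dataclass
class StabilizerInfo:
    """Characteristics reported by one image blur correction unit."""
    name: str
    performance_score: float   # assumed scalar rating; higher means better stabilization
    range_deg: float           # image stabilization range converted to an image blur angle

def assign_first_and_second(lens_info, body_info):
    """The unit with the higher image stabilization performance becomes the
    'first optical image stabilization unit'; its storage is the 'first storage unit'."""
    if lens_info.performance_score >= body_info.performance_score:
        return lens_info, body_info
    return body_info, lens_info

# Example: a lens OIS rated better than the in-body unit.
first, second = assign_first_and_second(
    StabilizerInfo("lens OIS", performance_score=3.0, range_deg=0.3),
    StabilizerInfo("in-body IS", performance_score=2.0, range_deg=0.7))
```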
FIGS.4A,4B, and4Care block diagrams illustrating examples of configurations of the correction division setting unit913, the first correction amount division unit921, and the second correction amount division unit922included in the division control unit.FIG.4Aillustrates an example of dividing the image blur correction amount for the entire imaging apparatus100into the first and second correction amounts by using the gain.FIGS.4B and4Cillustrate examples of dividing the image blur correction amount for the entire imaging apparatus100by using filters. InFIG.4A, a multiplier401multiplies the image blur correction amount calculated by the image blur correction amount calculation unit203by a first magnification K1determined by the correction division setting unit913, and outputs a first correction amount. K1denotes the magnification that satisfies Formula 1: 0≤K1≤1  (Formula 1) The image blur correction amount multiplied by the first magnification K1by the multiplier401serves as the first correction amount that is the correction amount used when image blur correction is performed by the lens-type image blur correction unit105. Likewise, a multiplier402multiplies the image blur correction amount calculated by the image blur correction amount calculation unit903by a second magnification K2determined by the correction division setting unit913, and outputs a second correction amount. K2denotes the magnification that satisfies Formula 2: K1+K2=1  (Formula 2) The image blur correction amount multiplied by the second magnification K2by the multiplier402serves as the second correction amount that is the correction amount used when image blur correction is performed by the imaging plane image blur correction unit117. As expressed in Formula 2, the image blur correction amount is divided in such a way that the first and second correction amounts added together gives the image blur correction amount for the entire imaging apparatus100. While, inFIG.4A, the image blur correction amount is divided based on the predetermined ratio (K1:K2), the image blur correction amount may be divided based on a frequency band.FIGS.4B and4Cillustrate examples of a configuration of the correction division setting unit913when the image blur correction amount is divided based on the frequency band. InFIG.4B, HPFs403aand403bpass only high-frequency bands. The HPFs403aand403bhave the same characteristics. The HPF403apasses only high-frequency bands of the image blur correction amount calculated by the image blur correction amount calculation unit203, and calculates the image blur correction amount as the first correction amount. Likewise, the HPF403bpasses only high-frequency bands of the image blur correction amount calculated by the image blur correction amount calculation unit903. A subtracter404extracts the second correction amount (low-frequency components) by subtracting the amount (high-frequency components) calculated by the HPF403b. Generally, dominant shakes acting on the imaging apparatus100are low-frequency components. Thus, either the lens-type image blur correction unit105or the imaging plane image blur correction unit117having a higher image blur correction performance (hereinafter referred to as an image stabilization performance) is to be assigned to the low-frequency components. More specifically, in the example inFIG.4B, the imaging plane image blur correction unit117driven with the second correction amount is the first optical image stabilization unit having a relatively high image stabilization performance. 
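The two division schemes ofFIGS.4A and4Bmight be sketched as follows, assuming the image blur correction amount arrives as a stream of samples. A one-pole high-pass filter stands in for the HPFs403aand403b, the single input stands in for the two parallel calculation units, and the sample rate and cutoff values are illustrative only.

```python
import math

def divide_by_gain(total_correction, k1):
    """FIG.4A style: first = K1 * total, second = K2 * total, with K1 + K2 = 1."""
    assert 0.0 <= k1 <= 1.0
    first = k1 * total_correction
    second = (1.0 - k1) * total_correction
    return first, second

class FrequencyDivider:
    """FIG.4B style: the high-pass part becomes the first correction amount and the
    remainder (low-frequency part) becomes the second correction amount."""

    def __init__(self, cutoff_hz, sample_rate_hz=1000.0):
        dt = 1.0 / sample_rate_hz
        self._a = 1.0 / (1.0 + 2.0 * math.pi * cutoff_hz * dt)
        self._prev_in = 0.0
        self._prev_out = 0.0

    def divide(self, total_correction):
        first = self._a * (self._prev_out + total_correction - self._prev_in)  # high-pass part
        self._prev_in, self._prev_out = total_correction, first
        second = total_correction - first   # remainder, so first + second == total
        return first, second
```

In both sketches the two outputs add up to the input, mirroring the requirement that the divided amounts together reproduce the correction amount for the entire imaging apparatus100.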
If the lens-type image blur correction unit105has the higher performance, the configurations of the first correction amount division unit921and the second correction amount division unit922are to be exchanged. In the configuration as illustrated inFIG.4B, the correction amount is divided so that the first and second correction amounts added together gives the image blur correction amount for the entire imaging apparatus100. InFIG.4C, LPFs405aand405bpass only low-frequency bands. The LPFs405aand405bhave the same characteristics. The LPF405apasses only low-frequency bands of the image blur correction amount calculated by the image blur correction amount calculation unit203, and calculates the image blur correction amount as the first correction amount. Likewise, the LPF405bpasses only low-frequency bands of the image blur correction amount calculated by the image blur correction amount calculation unit903. A subtracter406extracts the second correction amount (high-frequency components) by subtracting the amount (low-frequency components) calculated by the LPF405b. More specifically, in the example inFIG.4C, the lens-type image blur correction unit105driven with the first correction amount is the first optical image stabilization unit having the relatively high image stabilization performance. As inFIG.4B, if the imaging plane image blur correction unit117has the higher performance, the configurations of the first correction amount division unit921and the second correction amount division unit922are to be exchanged. In the configuration as illustrated inFIG.4C, the correction amount is divided so that the first and second correction amounts added together gives the image blur correction amount for the entire imaging apparatus100. Operations of the division control unit will be described in more detail below. In the example inFIG.4A, as described above, the correction amount is divided at a suitable division ratio by using the multiplier401and the first magnification K1. A method for determining the first magnification K1by the correction division setting unit913will be described below. When the lens-type image blur correction unit105and the imaging plane image blur correction unit117have the same image stabilization performance, the correction amount may be divided based on a ratio between image blur correction possible amounts of the lens-type image blur correction unit105and the imaging plane image blur correction unit117on the imaging plane. More specifically, when the two units have the same correction possible amount, the first and second magnifications are equal (K1=K2=0.5). When one of the units has a larger correction amount, the first and second magnifications are determined based on the ratio. For example, when the lens-type image blur correction unit105has a larger correction amount, K1>0.5 and K2<0.5. With the above-described setting, sufficiently utilizing the correction possible amount enables preventing an image blur even if a large shake acts on the imaging apparatus100. The image blur correction possible amount on the imaging plane refers to the correction amount obtained by converting the movable ranges of the lens-type image blur correction unit105and the imaging plane image blur correction unit117into image blur amounts. 
More specifically, the image blur correction possible amount on the imaging plane refers to the image blur correction amount when each of the lens-type image blur correction unit105and the imaging plane image blur correction unit117moves to the end of the movable range. The image blur correction possible amount of the imaging plane image blur correction unit117on the imaging plane changes depending on a focal length of an imaging optical system. Thus, when an interchangeable lens having a variable focal length is attached, the ratio between the first and second magnifications (K1:K2) is determined taking into consideration not only the movable range but also the focal length. On the other hand, the image blur correction performances of the lens-type image blur correction unit105and the imaging plane image blur correction unit117are not generally equal. More specifically, the transfer characteristics of the system formed of the subtracter208, the control filter209, the OIS drive unit210, the lens-type image blur correction unit105, and the position sensor212(the frequency response of the lens-type image blur correction unit) do not coincide with the transfer characteristics of the system formed of the subtracter214, the control filter215, the IBIS drive unit216, the imaging plane image blur correction unit117, and the position sensor218(the frequency response of the imaging plane image blur correction unit). This is because the same correction performance cannot be implemented even if the control filters209and215are used because of differences in mass of a movable unit and type of an actuator. As described above, the performances of the angular velocity sensors201and901may not be equal. The image stabilization performances determined by the performance ranging from the blur detection to the position control for the correction unit is stored in the lens storage unit911and the camera storage unit912. In addition, information about the movable range of the lens-type image blur correction unit105is stored in the lens storage unit911, and information about the movable range of the imaging plane image blur correction unit117is stored in the camera storage unit912. As an example, a case where the image stabilization performance of the interchangeable lens100ais higher than that of the camera body100bis described. In this case, the lens-type image blur correction unit105corresponds to the first optical image stabilization unit, the imaging plane image blur correction unit117corresponds to the second optical image stabilization unit, the lens storage unit911corresponds to the first storage unit, and the camera storage unit912corresponds to the second storage unit. When only the movable range is focused, it is desirable that the image blur correction amount corrected for the entire imaging apparatus100(outputs of the image blur correction amount calculation units203and903) is divided based on the ratio between the movable ranges (more specifically, the image blur correction possible amounts on the imaging plane, hereinafter referred to as image stabilization ranges) converted into the image blur amounts. For example, when the division ratio of the image stabilization ranges of the lens-type image blur correction unit105and the imaging plane image blur correction unit117is 3:7, moving the two units based on the division ratio enables easy handling of a large shake. 
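The range-proportional division just described might be computed as in the following sketch. The conversion of a sensor-shift stroke into an image blur angle via arctan(shift / focal length) is an assumption used only for illustration; the actual conversion depends on the optical design, and all numeric values are placeholders.

```python
import math

def shift_to_blur_angle_deg(shift_mm, focal_length_mm):
    # Assumed conversion: a translation of `shift_mm` on the imaging plane corresponds
    # roughly to atan(shift / f) of camera rotation.
    return math.degrees(math.atan(shift_mm / focal_length_mm))

def first_division_ratio(lens_range_deg, ibis_shift_mm, focal_length_mm):
    """K1:K2 proportional to the image stabilization ranges on the imaging plane."""
    ibis_range_deg = shift_to_blur_angle_deg(ibis_shift_mm, focal_length_mm)
    total = lens_range_deg + ibis_range_deg
    k1 = lens_range_deg / total
    return k1, 1.0 - k1

# Example: the imaging plane unit's range, expressed as an angle, shrinks at long focal lengths.
print(first_division_ratio(lens_range_deg=0.3, ibis_shift_mm=1.0, focal_length_mm=50.0))
print(first_division_ratio(lens_range_deg=0.3, ibis_shift_mm=1.0, focal_length_mm=200.0))
```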
This division method based on the ratio of the image stabilization ranges corresponds to a first division ratio, which maximizes the image blur correction possible range (i.e., the image stabilization range). On the other hand, when the image stabilization range is ignored and only the image stabilization performance is considered, it is desirable to alternatively use the unit having a higher image stabilization performance (the frequency response is close to Gain=1, Phase=0 deg. against a shake acting on the imaging apparatus100). This division method corresponds to a second division ratio, which maximizes the image stabilization performance. In this case, what is noteworthy is that, when the first division ratio is selected and a small shake acts on the imaging apparatus100, a lower image stabilization performance is provided than when the second division ratio is selected. The degree of performance degradation is determined by the ratio between the image stabilization performances of the units. Conversely, when a large shake acts on the imaging apparatus100and the second division ratio is selected so that only the first optical image stabilization unit is used, the correction possible amount runs short, resulting in a degraded image stabilization performance. As described above, the first and second division ratios are each suitable when priority is given to the magnitude of the shake and to the image stabilization performance, respectively. From the viewpoint of the magnitude of the shake, the use of the second division ratio is desirable for a small shake, and the use of the first division ratio is desirable for a large shake. Meanwhile, it may be desirable to divide the image blur correction amount by using a third division ratio between the first and second division ratios depending on imaging conditions. For example, when the second division ratio is used with only the image stabilization performance considered, the image stabilization range slightly runs short. When the first division ratio is used with only the image stabilization range considered, the image stabilization range is sufficient, but the image stabilization performance is remarkably degraded. A method for determining the division ratio to be actually used to control the lens-type image blur correction unit105and the imaging plane image blur correction unit117(hereinafter referred to as a final division ratio) will be described below with reference toFIGS.5A,5B, and5C.FIGS.5A and5Billustrate the final division ratio when the image blur correction amount is divided based on the gain, and correspond to the division method inFIG.4A.FIG.5Cillustrates the final division ratio when the image blur correction amount is divided by using filters, and corresponds to the division methods inFIGS.4B and4C. FIG.5Aillustrates examples of K1and K2settings when the interchangeable lens100ahas a relatively high image stabilization performance and the image blur correction amount is divided based on a predetermined ratio. The value to the left of the slash (/) denotes K1, and the value to the right of the slash denotes K2.FIG.5Billustrates examples of K1and K2settings when the imaging plane image blur correction unit117has a relatively high image stabilization performance and the image blur correction amount is divided based on a predetermined ratio. The value to the left of the slash (/) denotes K1, and the value to the right of the slash denotes K2. InFIGS.5A,5B, and5C, a field indicated with "o" denotes the division ratio that maximizes the image stabilization range, i.e., the above-described first division ratio. 
A field indicated with “●” denotes the division ratio that maximizes the image stabilization performance, i.e., the above-described second division ratio. Other fields each denote a division ratio between the first and second division ratios, i.e., the above-described third division ratio. When a small image blur occurs during exposure, a wide image stabilization range is not required (a small image stabilization range is acceptable) although an image blur occurs if the image blur correction is not performed. Thus, it is desirable to alternatively operate one of the units having a higher image stabilization performance with the second division ratio set as the final division ratio. In the example inFIG.5A, when the exposure time is 1/60 [s] or less and the focal length is 70 mm or less, or when the exposure time is 1/60 to 1/15 [s] and the focal length is 24 mm or less, it is determined that an image blur occurring during exposure is small, and the first magnification K1=1 and the second magnification K2=0 are set. With these settings, the image blur correction operation is performed only by the interchangeable lens100ahaving a relatively high image stabilization performance, and a high image stabilization performance is obtained. On the other hand, when a large image blur occurs during exposure, the image stabilization range is assumed to run short with the second division ratio. In this case, it is assumed that the insufficient image stabilization range causes a larger image blur than the relatively degraded image stabilization performance does. Thus, it is desirable to set the first division ratio as the final division ratio, and operate the imaging apparatus100so that a wide image stabilization range is obtained. In the example inFIG.5A, when the exposure time is 4 [s] or more and the focal length is 70 mm or more, or when the exposure time is 1 to 4 [s] and the focal length is 200 mm or more, a large image blur is determined to occur during exposure, and the first magnification K1=0.3 and the second magnification K2=0.7 are set. The magnification ratio may be acquired by reading information indicating the image stabilization range stored in the lens storage unit911and the camera storage unit912, and then determined based on the information. This ensures a wide image stabilization range, achieving suitable image stabilization even if a large blur occurs. As other conditions, the third division ratio between the first and second division ratios is set as the final division ratio, with K1+K2=1 (Formula 2) satisfied. This setting ensures the image stabilization range corresponding to the image blur amount which may occur during exposure and, at the same time, implements control that utilizes the image stabilization performance. In the example inFIG.5A, the focal length and exposure time are described as examples of the imaging conditions. The correction division setting unit913functions as an imaging condition acquisition unit that acquires the imaging conditions and determines the final division ratio in the table based on these pieces of information. A larger image blur occurs with a longer focal length and a longer exposure time. Other dominant factors of the image blur amount include the imaging magnification of the imaging optical system, and a camera shake acted on the camera during the time period before the image capturing. A larger image forming magnification will cause a larger image blur. 
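Pulling the above conditions together, the FIG.5A-style selection might look like the following sketch. Only the corner values quoted in the description (K1=1/K2=0 for a small expected blur, K1=0.3/K2=0.7 for a large expected blur) and the exposure-time/focal-length boundaries come from the text; the intermediate value returned for other conditions is a placeholder assumption.

```python
def final_ratio_from_conditions(exposure_time_s, focal_length_mm):
    """Return (K1, K2) for the case where the interchangeable lens has the higher
    image stabilization performance (FIG.5A-style table)."""
    small_blur = ((exposure_time_s <= 1 / 60 and focal_length_mm <= 70) or
                  (exposure_time_s <= 1 / 15 and focal_length_mm <= 24))
    large_blur = ((exposure_time_s >= 4 and focal_length_mm >= 70) or
                  (exposure_time_s >= 1 and focal_length_mm >= 200))
    if small_blur:
        return 1.0, 0.0   # second division ratio: maximize the image stabilization performance
    if large_blur:
        return 0.3, 0.7   # first division ratio: maximize the image stabilization range
    return 0.6, 0.4       # third division ratio (placeholder intermediate value)

k1, k2 = final_ratio_from_conditions(exposure_time_s=1 / 30, focal_length_mm=135)
```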
The camera shake acting on the camera during the period before image capturing can be estimated through analysis of a shake acting on the imaging apparatus100prior to exposure and prediction of an image blur occurring during exposure. As a simple example, when the imaging apparatus100is installed on a tripod, a small image blur is predicted to occur during exposure because almost no shake is observed. When the imaging apparatus100is held by hand to capture an image, the magnitude of what is called a camera shake caused by the user is observed, and a suitable division ratio is selected based on the magnitude. FIG.5Bis a table illustrating examples of K1and K2settings when the camera body100bhas a relatively high image stabilization performance, and the image blur correction amount is divided at a predetermined ratio. As inFIG.5A, a field indicated with "o" denotes the division ratio that maximizes the image stabilization range, i.e., the above-described first division ratio. A field indicated with "●" denotes the division ratio that maximizes the image stabilization performance, i.e., the above-described second division ratio. Other fields each denote a division ratio between the first and second division ratios, i.e., the above-described third division ratio. As illustrated inFIG.5A, when a small image blur occurs during exposure, a small image stabilization range is acceptable. In this case, it is desirable to alternatively operate one of the units having a higher image stabilization performance (the camera body100bin this case) with the second division ratio set as the final division ratio. On the contrary, when a large image blur occurs during exposure, the relatively degraded image stabilization performance is acceptable. In this case, it is desirable to operate the imaging apparatus100so that a wide image stabilization range is obtained with the first division ratio set as the final division ratio. In the example inFIG.5B, the first magnification K1=0.6 and the second magnification K2=0.4 are set. As other conditions, the third division ratio between the first and second division ratios is set as the final division ratio, with K1+K2=1 (Formula 2) satisfied. This setting ensures the image stabilization range corresponding to the image blur amount which may occur during exposure and, at the same time, implements control that utilizes the image stabilization performance. FIG.5Cis a table illustrating an example using filters. The values in the table denote the cutoff frequencies of the LPF and HPF filters. A case is described where the configuration inFIG.4Band the table inFIG.5Care used in combination. The configuration inFIG.4Buses HPFs to determine the division ratio. As inFIG.5A, a field indicated with "o" denotes the division ratio that maximizes the image stabilization range, i.e., the above-described first division ratio. A field indicated with "●" denotes the division ratio that maximizes the image stabilization performance, i.e., the above-described second division ratio. Other fields each denote a division ratio between the first and second division ratios, i.e., the above-described third division ratio. When a small image blur occurs during exposure, a small image stabilization range is acceptable. In this case, it is desirable to alternatively operate one of the units having a higher stabilization performance with the second division ratio set as the final division ratio. 
In the example inFIG.5C, as in the table inFIG.5A, when the exposure time is 1/60 [s] or less and the focal length is 70 mm or less, or when the exposure time is 1/60 to 1/15 [s] and the focal length is 24 mm or less, a small image blur is determined to occur during exposure. With these imaging conditions, the cutoff frequency is set to 50 Hz. In almost all cases, a shake acting on the imaging apparatus100has a frequency lower than 50 Hz. Thus, the first correction amount inFIG.4Bbecomes almost zero, and the image stabilization is performed only with the second correction amount. In this way, the image stabilization is almost performed only by the camera body100bhaving a relatively high image stabilization performance, and thus a high image stabilization performance is achieved. On the contrary, when a large image blur occurs during exposure, the relatively degraded image stabilization performance is acceptable. In this case, it is desirable to operate the imaging apparatus100so that a wide image stabilization range is obtained with the first division ratio set as the final division ratio. In the example inFIG.5C, when the exposure time is 4 [s] or more and the focal length is 70 mm or more, or when the exposure time is 1 to 4 [s] and the focal length is 200 mm or more, a large image blur is determined to occur during exposure, and the cutoff frequency is set to 1 Hz. In this case, the first correction amount inFIG.4Bdeals with a shake of 1 Hz or higher, and the second correction amount deals with a shake of 1 Hz or lower. Generally, components of a shake acting on the imaging apparatus100when a person holds the imaging apparatus100are known. Thus, the ratio between the correction amounts can be adjusted through adjustment of the cutoff frequency. The cutoff frequency may be determined based on the information indicating the image stabilization range stored in the lens storage unit911and the camera storage unit912. This ensures a wide image stabilization range, and thus suitable image stabilization is achieved even if a large blur occurs. As other conditions, the third division ratio between the first and second division ratios is used in the operation, with K1+K2=1 (Formula 2) satisfied. This setting ensures the image stabilization range corresponding to the image blur amount which may occur during exposure and, at the same time, implements control that utilizes the image stabilization performance. Lastly, a case is described where the configuration inFIG.4Cand the table inFIG.5Care used in combination. A field indicated with “o” denotes the division ratio that maximizes the image stabilization range, i.e., the above-described first division ratio. A field indicated with “●” denotes the division ratio that maximizes the image stabilization performance, i.e., the above-described second division ratio. Other fields each denote a division ratio between the first and second division ratios, i.e., the above-described third division ratio. When a small image blur occurs during exposure, a small image stabilization range is acceptable. In this case, it is desirable to alternatively operate one of the units having a higher stabilization performance with the second division ratio set as the final division ratio. In the example inFIG.5C, the cutoff frequency is set to 50 Hz. In almost all cases, a shake acting on the imaging apparatus100has a frequency lower than 50 Hz. 
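For the filter-based division, the FIG.5C-style selection reduces to choosing a cutoff frequency from the imaging conditions, as in the following sketch. Only the 50 Hz and 1 Hz values and the exposure-time/focal-length boundaries come from the description; the 5 Hz intermediate value is a placeholder assumption.

```python
def cutoff_from_conditions(exposure_time_s, focal_length_mm):
    """Return the cutoff frequency [Hz] used to split the correction amount
    between the two units (FIG.5C-style table)."""
    small_blur = ((exposure_time_s <= 1 / 60 and focal_length_mm <= 70) or
                  (exposure_time_s <= 1 / 15 and focal_length_mm <= 24))
    large_blur = ((exposure_time_s >= 4 and focal_length_mm >= 70) or
                  (exposure_time_s >= 1 and focal_length_mm >= 200))
    if small_blur:
        return 50.0   # hand shake lies well below 50 Hz, so one unit handles almost everything
    if large_blur:
        return 1.0    # split around 1 Hz so that both units share the correction
    return 5.0        # intermediate (third) division, placeholder value
```

Such a cutoff could then parameterize a frequency splitter of the kind sketched earlier for FIG.4B, or the corresponding LPF-based splitter for FIG.4C.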
With a 50 Hz cutoff, the second correction amount inFIG.4Ctherefore becomes almost zero, and the image stabilization is performed only with the first correction amount. In this way, the image stabilization is performed almost entirely by the interchangeable lens100ahaving a relatively high image stabilization performance, and thus a high image stabilization performance is achieved. On the contrary, when a large image blur occurs during exposure, the relatively degraded image stabilization performance is acceptable. In this case, it is desirable to operate the imaging apparatus100so that a wide image stabilization range is obtained with the first division ratio set as the final division ratio. For a condition indicated with "o", the cutoff frequency is set to 1 Hz. In this case, the first correction amount inFIG.4Cdeals with a shake of 1 Hz or lower, and the second correction amount deals with a shake of 1 Hz or higher. As other conditions, the third division ratio between the first and second division ratios is used in the operation, with K1+K2=1 (Formula 2) satisfied. This setting ensures the image stabilization range corresponding to the image blur amount which may occur during exposure and, at the same time, implements control that utilizes the image stabilization performance. Effects of using the third division ratio as the final division ratio will be described below with reference toFIGS.6A,6B, and6C.FIGS.6A,6B, and6Care charts illustrating the relation between the amount of shake acting on the imaging apparatus100(hereinafter referred to as the shake amount) and the amount of image blur occurring in a captured image. The charts illustrate the image blur amount that has been unable to be corrected (hereinafter referred to as a remaining image blur amount) when the image blur correction is performed. The horizontal axes of the charts inFIGS.6A,6B, and6Cindicate the magnitude of a shake acting on the imaging apparatus100during exposure. The vertical axes thereof indicate the image blur amount in a captured image caused by a shake. Differences between the first and second optical image stabilization units, and the relation with the remaining image blur amount will be described below with reference toFIG.6A. A straight line1001indicates a case where the image stabilization is not performed. A straight line1002indicates a case where the image stabilization is performed only by the second optical image stabilization unit having a relatively low image stabilization performance. A straight line1003indicates a case where the image stabilization is performed only by the first optical image stabilization unit having a relatively high image stabilization performance. InFIG.6A, the gradient of the straight line1002is a half of that of the straight line1001. This means that the image blur amount can be halved in comparison with a case where the image stabilization is not performed (this state is referred to as level one in a camera shake correction effect according to the Camera & Imaging Products Association (CIPA) standards). The gradient of the straight line1003is ⅛ of that of the straight line1001. This means that the image blur amount can be reduced to ⅛ in comparison with a case where the image stabilization is not performed (this state is referred to as level three in the camera shake correction effect). InFIG.6A, when a shake amount1004acts on the imaging apparatus100during exposure, an image blur amount1001aoccurs if the image stabilization is not performed. 
Likewise, when the image stabilization is performed by the second optical image stabilization unit, the image blur correction is not completed, resulting in a remaining image blur amount1002a. When the image stabilization is performed by the first optical image stabilization unit, the image blur correction is not completed, resulting in a remaining image blur amount1003a. Obviously, the image blur amounts1002aand1003aare smaller than the image blur amount1001a. This means that the image blur amount in a case where the image stabilization is performed is reduced to a further extent than in a case where the image stabilization is not performed. This is an effect of the image stabilization. Further, the remaining image blur amount1003ais smaller than the remaining image blur amount1002a. This means that the image blur amount in a case where the first optical image stabilization unit having a relatively high performance is used is reduced to a further extent than in a case where the second optical image stabilization unit is used. This is a difference in the image stabilization performance. The influence of the image stabilization range of an image stabilization apparatus and a drive ratio will be described below with reference toFIG.6B. An image stabilization range1005of the first optical image stabilization unit having a relatively high image stabilization performance inFIG.6Bis smaller than an image stabilization range1006of the second optical image stabilization unit. A range that is a sum of image stabilization ranges of the first and second optical image stabilization units is represented by an image stabilization range1007. A polygonal line consisting of straight lines1010aand1010bindicate the relation between the shake amount and the image blur amount when only the first optical image stabilization unit is used. A polygonal line consisting of straight lines1020aand1020bindicate the relation between the shake amount and the image blur amount when only the second optical image stabilization unit is used. Broken lines1010cand1020cindicate the relation between the shake amount and the image blur amount when only the first optical image stabilization unit is used, and the relation between the shake amount and the image blur amount when only the second optical image stabilization unit is used, respectively, when the image stabilization range is not taken into consideration. A broken line1010chas the same gradient as the straight line1003inFIG.6A, and the broken line1020chas the same gradient as the straight line1002inFIG.6A. Actually, the states of the broken lines1010cand1020ccannot be implemented because of the insufficient image stabilization range. Points1011and1021indicate points where the image stabilization ranges of the first and second optical image stabilization units run out, respectively. A straight line1031is drawn from the point1011with the same gradient as the straight line1020a. A point1032is an intersecting point of the straight line1031and an end of the image stabilization range1007. A straight line1030connects the origin and the point1032. A point1033is the intersecting point of the straight line1030and the straight line1010b. In the descriptions inFIG.6A, the image stabilization ranges of the optical image stabilization units are ignored. However, actually, the image stabilization ranges are limited as illustrated inFIG.6B. 
For example, when a shake larger than the image stabilization range1005of the first optical image stabilization unit acts on the imaging apparatus100, the image stabilization cannot be performed only by the first optical image stabilization unit. Thus, the straight line1010ais broken at the point1011, and the gradient of the straight line1010bfor a larger shake is the same as the gradient of the straight line1001when the image stabilization is not performed. As a result, the image blur amount when only the first optical image stabilization unit is used is as represented by the straight lines1010aand1010b. Likewise, when only the second optical image stabilization unit is used, the straight line1020ais broken at the point1021, and the gradient of the straight line1020bfor a larger shake is the same as the gradient of the straight line1001. As a result, the image blur amount when only the second optical image stabilization unit is used is as represented by the straight lines1020aand1020b. As described above, the broken lines1010cand1020ccannot be implemented since these are out of the image stabilization range. In the example inFIG.6B, the division ratio of the image stabilization range1005of the first optical image stabilization unit to the image stabilization range1006of the second optical image stabilization unit is 0.33:0.67 (=1:2). This is equivalent to the first division ratio. The straight line1030indicates the relation between the shake amount and the image blur amount when the first and second optical image stabilization units are operated based on the division ratio. The second division ratio that is the ratio that maximizes the image stabilization performance when the image stabilization range is ignored is used in a case where the first optical image stabilization unit is alternatively used. In this case, the second division ratio is 1:0. The relation between the shake amount and the image blur amount in this case is represented by the straight lines1010aand1010b. InFIG.6B, a case where the shake amount (image stabilization range)1005(=the image stabilization range of the first optical image stabilization unit) acts on the imaging apparatus100will be considered below. In this case, an image blur amount1051when the first optical image stabilization unit is alternatively used based on the second division ratio is smaller than an image blur amount1052when the second optical image stabilization unit is alternatively used. Further, the image blur amount1051when the first optical image stabilization unit is alternatively used is smaller than an image blur amount1053with the first division ratio that maximizes the image stabilization range. This means that, when the shake amount1005acts on the imaging apparatus100, it is suitable to operate the imaging apparatus100based on the second division ratio (to enable maximization of the image stabilization performance). InFIG.6B, a case where the shake amount (image stabilization range)1007(=the image stabilization range of the first optical image stabilization unit+the image stabilization range of the second optical image stabilization unit) acts on the imaging apparatus100will be considered below. In this case, an image blur amount1042when the second optical image stabilization unit is alternatively used is smaller than an image blur amount1041when the first optical image stabilization unit is alternatively used with the second division ratio. 
Further, an image blur amount1043with the first division ratio that maximizes the image stabilization range is smaller than the image blur amount1042when the second optical image stabilization unit is alternatively used. This means that, when the shake amount1007acts on the imaging apparatus100, it is suitable to operate the imaging apparatus100based on the first division ratio (to enable maximizing the image stabilization performance). As described above with reference toFIG.6B, when the image stabilization range is limited, it is appropriate to provide a suitable division ratio depending on the magnitude of the shake acting on the imaging apparatus100. The straight lines1020aand1020bthat indicate the relation when only the second optical image stabilization unit having a relatively low performance is alternatively used constantly causes a larger image blur amount than the straight line1030indicating the first division ratio. Therefore, the straight lines1020aand1020bare not suitable as targets of operation assignment. Thus, to avoid complexity in the drawings, this option is omitted in illustration ofFIG.6C. The method discussed in Japanese Patent Application Laid-Open No. 2019-129373 will be described below with reference toFIG.6B. Japanese Patent Application Laid-Open No. 2019-129373 discloses a technique used when a plurality of blur correction units is provided. The technique mainly operates one of the image blur correction units having a higher performance, and operates the other image blur correction unit when the one of the units approaches the stroke end. When this operation is suitably implemented, the relation between the shake amount and the image blur amount traces the following path: origin-point1011-(straight line1031)-point1032. This method enables minimization of the image blur amount. On the other hand, the use of this method requires nonlinear processing in the vicinity of the point1011. If shakes complicatedly act on the imaging apparatus100across a boundary, the imaging apparatus100may be unable to suitably operate because of a communication delay, nonlinear processing such as initial movement and quick stop, and responses from the image blur correction units. It is also necessary to constantly monitor the approach to the end of the image stabilization range, and switch processing at a high speed. The processing requires a large amount of resources and thus cannot easily be implemented by general built-in devices. Thus, in the present exemplary embodiment, there is proposed a method for changing the final division ratio depending on the imaging conditions. The method enables reducing the remaining image blur amount in a simpler way than the method discussed in Japanese Patent Application Laid-Open No. 2019-129373 and to a further extent than the configuration using only the first and second division ratios. FIG.6Cillustrate an example of a case where the third division ratio is suitable. A straight line1110indicates a relation between the shake amount and the image blur amount when the first and second optical image stabilization units are operated based on the third division ratio. In the example inFIG.6C, the third division ratio of the first and second optical image stabilization units is 1:1. In this case, the gradient of the straight line1110is a value between the gradients of the straight lines1010aand1020aillustrated inFIG.6B. Since the first division ratio is 1:2, the gradient of the straight line1110is smaller than that of the straight line1030. 
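The behaviour plotted inFIGS.6B and6Ccan be reproduced with a small model, as in the sketch below. It is only a sketch under the assumptions used in the figures: the low-performance unit leaves 1/2 of its assigned share uncorrected, the high-performance unit leaves 1/8, any part of a unit's share that exceeds its image stabilization range passes through entirely uncorrected, and the range values 1.0 and 2.0 are placeholders reproducing the 1:2 ratio of the example.

```python
def remaining_blur(shake, ratio_first, ratio_second,
                   range_first=1.0, range_second=2.0,
                   residual_first=1 / 8, residual_second=1 / 2):
    """Remaining image blur for a given shake amount and division ratio.
    Each unit corrects its assigned share up to its own range; the excess passes through."""
    total = ratio_first + ratio_second
    share_first = shake * ratio_first / total
    share_second = shake * ratio_second / total

    def unit(share, rng, residual):
        corrected = min(share, rng)
        return corrected * residual + (share - corrected)  # residual blur + saturated excess

    return (unit(share_first, range_first, residual_first) +
            unit(share_second, range_second, residual_second))

# Shake equal to the combined range (point 1007 in FIG.6B): the first division ratio wins.
print(remaining_blur(3.0, 1, 0), remaining_blur(3.0, 1, 2), remaining_blur(3.0, 1, 1))
# A somewhat smaller shake (the situation of FIG.6C): the third division ratio wins.
print(remaining_blur(2.0, 1, 0), remaining_blur(2.0, 1, 2), remaining_blur(2.0, 1, 1))
```

With these illustrative numbers, a shake equal to the combined range favours the first division ratio, while a somewhat smaller shake favours the intermediate third division ratio, mirroring the comparison made inFIGS.6B and6C.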
An accurate gradient can be calculated from the image blur amount and the division ratio. For example, when the division ratio is 1:1, the gradient is ½×½+⅛×½= 5/16 based on the gradients ½ and ⅛. When the division ratio is 1:2, the gradient is ½×⅔+⅛×⅓=⅜= 6/16. When a shake1100acts on the imaging apparatus100, an image blur amount1104when the imaging apparatus100is operated based on the third division ratio is smaller than an image blur amount1101when the imaging apparatus100is operated based on the second division ratio and an image blur amount1103when the imaging apparatus100is operated based on the first division ratio. More specifically, the image stabilization performance when the imaging apparatus100is operated based on the third division ratio is higher than that when the imaging apparatus100is operated based on the first or second division ratio. This is an effect of using the third division ratio. When a plurality of image stabilization units is provided, a high performance can be obtained by suitably performing the image blur correction by a simple method. Referring back toFIG.2, the drive amount conversion unit207converts the correction amount (the correction amount and the correction angle on the imaging plane) output from the first correction amount division unit921into a moving amount for suitably performing the image blur correction by the lens-type image blur correction unit105, and outputs the moving amount as a target drive position. The position sensor212detects positional information for the lens-type image blur correction unit105. The subtracter208subtracts the positional information for the lens-type image blur correction unit105from the target drive position to obtain deviation data. The deviation data is input to the control filter209, subjected to various types of signal processing such as gain amplification and phase compensation, and then supplied to the OIS drive unit210. The OIS drive unit210drives the lens-type image blur correction unit105based on the output of the control filter209. Thus, the correction optical system moves in a direction perpendicular to the optical axis. Then, the positional information for the lens-type image blur correction unit105that has moved is detected by the position sensor212again. Then, the next deviation data is calculated. More specifically, a feedback loop is formed in which the lens-type image blur correction unit105is controlled so that a difference between the target drive position and the positional information is minimized. This control enables driving of the correction optical system to follow the target drive position. The drive amount conversion unit213converts the second correction amount output from the second correction amount division unit922into a moving amount for suitably performing the image blur correction via the imaging plane image blur correction unit117, and outputs the moving amount as the target drive position. The position sensor218detects the positional information for the imaging plane image blur correction unit117. The subtracter214subtracts the positional information for the imaging plane image blur correction unit117from the target drive position to obtain deviation data. The deviation data is input to the control filter215, subjected to various types of signal processing such as gain amplification and phase compensation, and then supplied to the IBIS drive unit216. The IBIS drive unit216drives the imaging plane image blur correction unit117based on the output of the control filter215. 
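The feedback loop formed by the subtracter, the control filter, the drive unit, and the position sensor can be sketched roughly as below. A simple proportional-integral filter stands in for the gain amplification and phase compensation of the control filters209and215; the gains and the crude first-order plant model are assumptions for illustration only, not the actual drive characteristics.

```python
class PositionFeedbackLoop:
    """One axis of the correction-unit position control: the drive signal is computed
    from the deviation between the target drive position and the sensed position."""

    def __init__(self, kp=0.8, ki=0.05):
        self.kp = kp          # proportional gain (stands in for gain amplification)
        self.ki = ki          # integral gain (stands in for phase compensation)
        self._integral = 0.0
        self.position = 0.0   # value that the position sensor would report

    def step(self, target_position):
        deviation = target_position - self.position            # subtracter 208 / 214
        self._integral += deviation
        drive = self.kp * deviation + self.ki * self._integral  # control filter 209 / 215
        # Very crude plant model: the drive unit moves the correction member toward the target.
        self.position += 0.5 * drive
        return self.position

loop = PositionFeedbackLoop()
for _ in range(20):
    loop.step(target_position=1.0)   # the sensed position converges toward the target drive position
```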
Through this feedback control, the imaging plane moves in a direction perpendicular to the optical axis. In this way, the lens-type image blur correction unit105and the imaging plane image blur correction unit117collaboratively operate to share the correction of the image blur corresponding to the shake acting on the entire imaging apparatus100. Such a collaborative operation enables expanding the image blur correction possible range. Even when only one of the correction units is alternatively used (a division ratio of 1:0), the operation is referred to as collaboration in a broad sense. As described above, in the first exemplary embodiment, when the imaging apparatus100includes a plurality of image stabilization units, the collaborative image blur correction can be performed at the division ratio determined depending on the imaging conditions. Accordingly, in the first exemplary embodiment, an image blur control apparatus capable of obtaining a high overall image stabilization performance can be provided. In addition, the collaborative image blur correction can be performed by using a division ratio (third division ratio) between the division ratio in consideration of the image stabilization range and the division ratio in consideration of the image stabilization performance. Accordingly, an image blur control apparatus capable of obtaining a high overall image stabilization performance can be provided. While, in the present exemplary embodiment, the correction division setting unit913is provided in the camera body100b, the correction division setting unit913may be provided in the interchangeable lens100a. In the present exemplary embodiment, the correction division setting unit913that acquires the final division ratio also functions as a unit for acquiring the first division ratio and as a unit for acquiring the second division ratio based on the information acquired from the lens storage unit911and the camera storage unit912; however, these units may be configured as separate blocks. A second exemplary embodiment will be described below. In the present exemplary embodiment, the basic configuration of the imaging apparatus100is similar to that according to the first exemplary embodiment (seeFIG.1). Differences from the first exemplary embodiment will be mainly described below. The first exemplary embodiment has been described above centering on a configuration in which the interchangeable lens100aacquires the image blur correction amount for the entire imaging apparatus100by using the angular velocity sensor201in the interchangeable lens100a, and the camera body100bacquires the image blur correction amount for the entire imaging apparatus100by using the angular velocity sensor901in the camera body100b. The first exemplary embodiment has been further described above centering on a configuration in which the acquired image blur correction amount for the entire imaging apparatus100is divided by the first correction amount division unit921and the second correction amount division unit922, and the lens-type image blur correction unit105and the imaging plane image blur correction unit117perform the image blur correction in a collaborative way. On the other hand, the second exemplary embodiment will be described below centering on a configuration in which the correction amount for driving each image blur correction unit is acquired by using the angular velocity sensor901included in the camera body100b. 
The camera body100bcontrols the image blur correction by the lens-type image blur correction unit105by transmitting the first correction amount to the interchangeable lens100a. The camera body100balso controls the imaging plane image blur correction unit117by using a second image blur correction amount calculated by the camera body100b. In this case, the first and second correction amounts add up to the output of the image blur correction amount calculation unit903(i.e., the division ratios sum to 1). FIG.7is a block diagram illustrating the image blur correction control according to the second exemplary embodiment. The configuration inFIG.7differs from the configuration inFIG.2in that the first correction amount division unit921is not provided but a correction amount division unit2000is provided. The configuration inFIG.7also differs from the configuration inFIG.2in that the angular velocity sensor201, the A/D converter202, and the image blur correction amount calculation unit203included in the interchangeable lens100aare not connected to other blocks because these units are not used to control the image blur correction operation. InFIG.7, the correction amount division unit2000is implemented by the camera system control unit126. The correction division setting unit913and the correction amount division unit2000serve as division control units according to the present exemplary embodiment. InFIG.7, the drive amount conversion unit207receives a drive amount (first correction amount) from the correction amount division unit2000via the camera communication control unit127and the lens communication control unit112, and operates the lens-type image blur correction unit105. FIGS.8A,8B, and8Care block diagrams illustrating examples of configurations of the correction division setting unit913and the correction amount division unit2000serving as the division control units. The correction amount division unit2000has functions of the first correction amount division unit921and the second correction amount division unit922according to the first exemplary embodiment, and divides the image blur correction amount for the entire imaging apparatus100into the first and second correction amounts.FIG.8Aillustrates an example of dividing the image blur correction amount by using the gain, andFIGS.8B and8Cillustrate examples of dividing the image blur correction amount by using filters. InFIG.8A, a multiplier2001multiplies the image blur correction amount calculated by the image blur correction amount calculation unit903by the first magnification K1determined by the correction division setting unit913, and outputs the first correction amount. In this case, the first magnification K1satisfies Formula 1 as in the first exemplary embodiment. The image blur correction amount multiplied by the first magnification K1by the multiplier2001serves as the first correction amount used by the lens-type image blur correction unit105to perform the image blur correction. A subtracter2002subtracts the amount calculated by the multiplier2001(first correction amount) from the correction amount for the entire imaging apparatus100calculated by the image blur correction amount calculation unit903to calculate the second correction amount. The image blur correction amount is divided in such a way that the first and second correction amounts added together give the image blur correction amount for the entire imaging apparatus100. 
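A minimal sketch of this body-side division is given below: the camera body computes the total correction amount from its own gyro, keeps the second correction amount for the imaging plane unit, and transmits the first correction amount to the lens. The `send_to_lens` callback is a placeholder standing in for the camera/lens communication control units and is not part of the described apparatus.

```python
def divide_in_body(total_correction, k1, send_to_lens):
    """FIG.8A-style division performed entirely in the camera body.
    Returns the second correction amount used by the imaging plane unit."""
    first = k1 * total_correction        # multiplier 2001
    second = total_correction - first    # subtracter 2002, so first + second == total
    send_to_lens(first)                  # transmitted via the communication control units
    return second

# Usage with a placeholder transport:
sent = []
second_amount = divide_in_body(total_correction=0.12, k1=0.4, send_to_lens=sent.append)
```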
As in the first exemplary embodiment, the first and second correction amounts may be acquired by using the first magnification K1and the second magnification K2(K2=1−K1). While, in the example illustrated inFIG.8A, the image blur correction amount is divided based on a predetermined ratio, the image blur correction amount may be divided based on a frequency band.FIGS.8B and8Cillustrate examples of configurations of the correction amount division unit2000when the image blur correction amount is divided based on a frequency band. InFIG.8B, an HPF2003passes only the high-frequency band. The HPF2003passes only the high-frequency band of the image blur correction amount calculated by the image blur correction amount calculation unit903, and acquires the band as the first correction amount. A subtracter2004subtracts the first correction amount (high-frequency components) acquired by the HPF2003from the correction amount for the entire imaging apparatus100to extract the second correction amount (low-frequency components). As in the first exemplary embodiment, either the lens-type image blur correction unit105or the imaging plane image blur correction unit117having a higher image blur correction performance is to be assigned to the low-frequency components. The configuration inFIG.8Cis similar toFIG.8Bexcept that an LPF2005is used as the filter. The second correction amount (high-frequency components) is extracted by subtracting the first correction amount (low-frequency components) acquired by the LPF2005from the correction amount for the entire imaging apparatus100. This configuration enables division of the correction amount for the entire imaging apparatus100into the first and second correction amounts based on the final division ratio, as in the first exemplary embodiment. This enables control of the image blur correction by the lens-type image blur correction unit105and the imaging plane image blur correction unit117based on the final division ratio. The method for acquiring the final division ratio is similar to that according to the first exemplary embodiment. More specifically, the first division ratio that maximizes the image stabilization range, the second division ratio that maximizes the image stabilization performance, or the third division ratio between the first and second division ratios is selected depending on the imaging conditions of the camera. As in the first exemplary embodiment, when a small image blur amount is predicted and the image stabilization performance is to be maximized, it is desirable to alternatively operate one of the units having a higher image stabilization performance by setting the second division ratio as the final division ratio. If the image stabilization performance is assumed to be identical, the units may be operated at 1:1. When a large image blur amount is predicted and the image stabilization range is to be maximized, the first division ratio is set as the final division ratio. The third division ratio between the first and second division ratios may be set as the final division ratio depending on the magnitude of the shake expected to act on the imaging apparatus100. As a result, when the imaging apparatus100includes a plurality of image stabilization units, the imaging apparatus100suitably performs the image blur correction by a simple method, making it possible to obtain a totally high image stabilization performance. 
While, in the present exemplary embodiment, the correction division setting unit913and the correction amount division unit2000are included in the camera body100b, these units may be included in the interchangeable lens100a. While, in the configuration according to the present exemplary embodiment (FIG.7), the angular velocity sensors201and901are included in the interchangeable lens100aand the camera body100b, respectively, the control system is configured by using only the angular velocity sensor901; however, the other angular velocity sensor201may be used instead. More desirably, which of the angular velocity sensors201and901is to be used may be selected based on information stored in the lens storage unit911and the camera storage unit912. More specifically, as described above in the first exemplary embodiment, an HPF301is included in each of the image blur correction amount calculation units203and903(seeFIG.3). The characteristics of the filter are determined by the performances of the angular velocity sensors201and901. Thus, information about the performances of the angular velocity sensors201and901, such as the cutoff frequency of the HPF301, is stored in the lens storage unit911and the camera storage unit912, respectively. With this configuration, the angular velocity sensor for which the cutoff frequency of the HPF301is lower is assumed to have the higher performance. Thus, the output based on the angular velocity sensor having the higher performance is to be input to the correction amount division unit2000. A third exemplary embodiment will be described below. In the third exemplary embodiment, the basic configuration of the imaging apparatus100is similar to that according to the first exemplary embodiment (seeFIGS.1and2). Differences from the first exemplary embodiment will be mainly described below. In the first exemplary embodiment, the image stabilization ranges of the image blur correction units are respectively acquired from the lens storage unit911and the camera storage unit912, and the first division ratio that maximizes the image stabilization range is acquired. The present exemplary embodiment differs from the first exemplary embodiment in that the first division ratio is predetermined and stored in the storage unit of either the interchangeable lens100aor the camera body100bthat includes the correction division setting unit913. The imaging apparatus100is an imaging system that uses the interchangeable lens100aand the camera body100bin combination. In such an imaging system, various combinations of lenses and cameras are conceivable. However, in a certain imaging system, the difference in the image stabilization range depending on the interchangeable lens100aor the camera body100bis assumed to be small. In this case, the imaging system can determine the first division ratio from an assumed approximate image stabilization range, instead of acquiring the image stabilization ranges from the respective storage units and strictly deriving the first division ratio, so that the final division ratio is determined in an easier way than in the first exemplary embodiment. For example, in an imaging system where the operation ranges of the interchangeable lens100aand the camera body100bare considered to be approximately equal, the first division ratio can be predetermined as 1:1. 
In the present exemplary embodiment, information about the image stabilization performance is acquired from the first and second storage units, but information about the image stabilization range is not acquired since the first division ratio has already been determined. The correction division setting unit913sets a fourth division ratio as the final division ratio. In the fourth division ratio, the ratio of the first optical image stabilization unit having a relatively high image stabilization performance is made larger than that in the first division ratio. This setting enables obtaining a similar effect to the effect obtained in a case where the imaging apparatus100is operated based on the third division ratio according to the first exemplary embodiment. More specifically, it becomes possible to ensure the image stabilization range corresponding to the image blur amount which may occur during exposure and, at the same time, implement control that utilizes the image stabilization performance. It is often the case that an image blur that can be dealt with by the interchangeable lens100ais similarly ensured regardless of the focal length. On the other hand, generally, an image blur that can be dealt with by the camera body100brelatively decreases with an increase in focal length. An image blur amount Δx is represented by Formula 3, where f denotes the focal length and Δθ denotes the shake amount. Δx=f tan Δθ  (Formula 3) As understood from Formula 3, the image blur occurring on the imaging plane increases with an increase in focal length. On the other hand, the operation range of the imaging plane image blur correction unit117included in the camera body100bis invariable, and hence a small range (Δθ) of shakes can be dealt with. Accordingly, the relations may be stored in a table for reference and use. For example, for a focal length of 50 mm, the first division ratio of the interchangeable lens100aand the camera body100bis set to 1:2 (the ratio of the camera body100bis larger). For a focal length of 100 mm, the first division ratio of the interchangeable lens100aand the camera body100bis set to 1:1 (the ratios of the camera body100band the interchangeable lens100aare equal). For a focal length of 200 mm, the first division ratio of the interchangeable lens100aand the camera body100bis set to 2:1 (the ratio of the interchangeable lens100ais larger). The first division ratio may be determined based on the focal length in this way, and the correction division setting unit913may acquire the first division ratio predetermined based on the focal length of the imaging optical system from the storage unit. In the present exemplary embodiment, the fourth division ratio is acquired and set as the final division ratio. In the fourth division ratio, the ratio of the image stabilization unit having a higher performance is increased with reference to the first division ratio. In the above-described example, when the camera body100bhas a higher performance and the focal length is 100 mm, the ratio of the camera body100bmay be increased (for example, the division ratio may be changed from 1:1 to 1:2). As in the first exemplary embodiment, how much the division ratio is to be changed may be determined taking the imaging conditions into consideration. 
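The focal-length-dependent handling described above can be summarized in a short sketch. The table entries (1:2 at 50 mm, 1:1 at 100 mm, 2:1 at 200 mm) and the single-step bias come from the example in the text, while the function names and the normalization are assumptions for illustration; Formula 3 is reproduced as written.

    import math

    # Example look-up from the text: focal length (mm) -> (lens share, body share).
    FIRST_DIVISION_TABLE = {50: (1, 2), 100: (1, 1), 200: (2, 1)}

    def image_blur_amount(focal_length_mm, shake_rad):
        """Formula 3: image blur on the imaging plane, delta_x = f * tan(delta_theta)."""
        return focal_length_mm * math.tan(shake_rad)

    def fourth_division_ratio(focal_length_mm, body_has_higher_performance, bias=1):
        """Start from the stored first division ratio and increase the share of the unit
        having the higher image stabilization performance (assumed policy)."""
        lens, body = FIRST_DIVISION_TABLE[focal_length_mm]
        if body_has_higher_performance:
            body += bias    # e.g. 1:1 becomes 1:2 in the 100 mm example
        else:
            lens += bias
        total = lens + body
        return lens / total, body / total    # normalized shares for the two units

    # Example from the text: the camera body has the higher performance at 100 mm.
    lens_share, body_share = fourth_division_ratio(100, body_has_higher_performance=True)

How far the ratio is shifted (the bias above) would, as noted, be determined taking the imaging conditions into consideration.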
As described above, in the third exemplary embodiment, an imaging apparatus is provided which is capable of suitably performing the collaborative image blur correction with a simple configuration and obtaining a totally high image stabilization performance when the imaging apparatus includes a plurality of image stabilization units. While, in the above-described exemplary embodiments, the shake detection is performed by using angular velocity sensors, the shake detection may be performed by using another configuration. In an applicable configuration, for example, the shake amount is calculated from the acceleration by using an acceleration sensor, or motion information is detected from image data to calculate the shake amount of the imaging apparatus100. While, in the above-described exemplary embodiments, the image blur correction unit included in the interchangeable lens100ais the lens-type image blur correction unit, an optical image blur correction unit using a non-lens optical element such as a prism is also applicable. In the above-described first and second exemplary embodiments, there are provided imaging conditions for setting each of the first and second division ratios as the final division ratio. However, the third division ratio may be constantly set depending on the settable focal length and the size of the image stabilization range. Other Embodiments Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™, a flash memory device, a memory card, and the like. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2021-108059, filed Jun. 29, 2021, which is hereby incorporated by reference herein in its entirety.
82,669
11943537
DETAILED DESCRIPTION Certain embodiments disclosed herein provide systems and methods for initiating a rescan of a portion of a sample responsive to detecting a mechanical vibration during image acquisition that exceeds a predetermined threshold. After reading this description it will become apparent to one skilled in the art how to implement the invention in various alternative embodiments and alternative applications. However, although various embodiments of the present invention will be described herein, it is understood that these embodiments are presented by way of example only, and not limitation. As such, this detailed description of various alternative embodiments should not be construed to limit the scope or breadth of the present invention as set forth in the appended claims. 1. Example Method FIG.1is a flow diagram illustrating an example process for initiating a rescan of a portion of a glass slide responsive to a mechanical vibration according to an embodiment of the invention. The process may be carried out by a digital pathology scanning apparatus such as is later described with respect toFIGS.2A-D. Initially, in step100the digital pathology scanning apparatus begins scanning a portion of a sample, for example a specimen on a glass slide. Scanning a sample comprises moving the scanning stage in a controlled direction relative to an imaging system that digitizes an image of the sample. A processor of the digital pathology scanning apparatus controls movement of the scanning stage. The controlled direction may include any direction that the processor instructs the scanning stage to move. In one embodiment, the processor may instruct the scanning stage to move in any X,Y,Z direction for any distance and the controlled direction includes both the direction and the distance of instructed movements. Next, and during scanning of the sample, one or more sensors generate sensor data and the sensor data is received in step150by a processor of the digital pathology scanning device. The one or more sensors may include position encoders that sense position/movement information. The one or more sensors may also include accelerometers that sense position/movement information. Next, in step200the processor analyzes the sensor data. If the processor identifies any information in the sensor data that does not correspond to a movement in a controlled direction (e.g., an instruction from the processor to move the scanning stage in a certain direction for a certain distance), then in step250an event is identified that corresponds to the movement in the non-controlled direction. The processor continues to receive and analyze sensor data and identify events during the entire scanning process of the sample. For each event that is identified, in step300the processor determines an amount of movement in the non-controlled direction. For example, the processor determines an amount (e.g., a distance) of the movement in the non-controlled direction. In an embodiment, this amount may be an amount of position error recorded by a position sensor. In step350the processor also determines a duration of the movement in the non-controlled direction. If a combination of the amount and the duration of the non-controlled movement of the event exceeds a predetermined threshold, as determined in step400then in step450the processor initiates a re-scan of the portion of the sample that was being digitized at the time of the event. 
Alternatively, if only the amount of the non-controlled movement of the event exceeds a predetermined threshold, the processor may still initiate a re-scan of the portion of the sample that was being digitized at the time of the event. Alternatively, if only the duration of the non-controlled movement of the event exceeds a predetermined threshold, the processor may still initiate a re-scan of the portion of the sample that was being digitized at the time of the event. In one embodiment, a re-scan comprises re-scanning a complete stripe. However, if the event or any element of the event (e.g., duration or amount) does not exceed the predetermined threshold, then the processor continues scanning the sample. In one embodiment, movement in a non-controlled direction may, for example, be caused by a vibration that is imparted to the scanning stage by the digital pathology scanning apparatus itself or by some force outside of the digital pathology scanning apparatus. A significant problem with vibration imparted to the scanning stage is that movement of the scanning stage in a non-controlled direction can adversely impact the quality of the resulting digital slide image. For example, the focus of the digital slide image may be adversely impacted. Also, the ability of a portion of the digital slide image (e.g., a stripe) to be combined with other portions of the digital slide image may be adversely impacted. For example, if a vibration caused the scanning stage to drift such that the resulting image stripe did not overlap with its adjacent stripe, the non-overlapping area would frustrate the ability of the digital pathology scanning apparatus to combine the stripes into a whole slide image. 2. Example Embodiments In one embodiment, a digital pathology scanning apparatus includes a scanning stage that is configured to support a sample and move the sample in a controlled direction relative to an imaging system to digitize a portion of the sample. The digital pathology scanning apparatus also includes one or more sensors configured to generate sensor data during movement of the scanning stage in the controlled direction. In one embodiment, the digital pathology scanning apparatus includes three sensors and each of the sensors is configured to sense movement in a specific axis, for example a first sensor is configured to sense movement in the X axis, a second sensor is configured to sense movement in the Y axis, and a third sensor is configured to sense movement in the Z axis. In one embodiment, the sensors are position encoders. The sensor data may include position data or movement data or both. The digital pathology scanning apparatus also includes a processor that is configured to control movement of the scanning stage in the controlled direction and further configured to analyze the sensor data generated by the one or more sensors during movement of the scanning stage in the controlled direction. The processor is also configured to identify an event in the analyzed sensor data based on one or more of a duration and a distance of a movement of the scanning stage in a non-controlled direction during movement of the scanning stage in the controlled direction. The processor is also configured to initiate a re-scan of the portion of the sample being scanned by the imaging system when the identified event occurred. 
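The event evaluation of steps 300 through 450, including the amount-only and duration-only variants described above, might be expressed as follows. The threshold values, the units, the rule used to combine amount and duration (a product of normalized values), and all identifiers are assumptions made for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Event:
        """A detected movement of the scanning stage in a non-controlled direction."""
        amount_um: float      # distance of the non-controlled movement (step 300)
        duration_ms: float    # duration of the non-controlled movement (step 350)

    # Hypothetical thresholds; real values depend on the optics and the stage.
    AMOUNT_THRESHOLD_UM = 0.5
    DURATION_THRESHOLD_MS = 2.0
    COMBINED_THRESHOLD = 1.0

    def should_rescan(event, mode="combined"):
        """Step 400: decide whether the portion (e.g., stripe) being digitized when
        the event occurred must be re-scanned."""
        if mode == "amount":      # amount-only variant
            return event.amount_um > AMOUNT_THRESHOLD_UM
        if mode == "duration":    # duration-only variant
            return event.duration_ms > DURATION_THRESHOLD_MS
        # Combined variant: one possible figure of merit is the product of the
        # normalized amount and the normalized duration.
        score = (event.amount_um / AMOUNT_THRESHOLD_UM) * (event.duration_ms / DURATION_THRESHOLD_MS)
        return score > COMBINED_THRESHOLD

    if should_rescan(Event(amount_um=0.8, duration_ms=3.0)):
        pass  # step 450: initiate a re-scan of the complete stripe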
In one embodiment, the one or more sensors comprise one or more position encoders that operate in cooperation with the processor to control movement of the scanning stage in the controlled direction. In one embodiment, the one or more sensors comprise an accelerometer configured to detect a movement of the scanning stage in the non-controlled direction. In one embodiment, the one or more sensors comprise one or more position encoders configured to detect a position of the scanning stage and one or more accelerometers configured to detect a movement of the scanning stage in the non-controlled direction. In one embodiment, the predetermined threshold is based solely on a duration of a detected movement in a non-controlled direction. In one embodiment, the predetermined threshold is based solely on a distance of a detected movement in a non-controlled direction. In one embodiment, the predetermined threshold is based on a combination of a distance of a detected movement in a non-controlled direction and a duration of a detected movement in a non-controlled direction. In one embodiment, a method of digitizing a sample using a digital pathology scanning apparatus comprises moving a scanning stage in a controlled direction relative to an imaging system and digitizing a portion of the sample supported by the scanning stage during said movement of the scanning stage in the controlled direction. During movement of the scanning stage in the controlled direction, the method uses one or more sensors to generate sensor data. The sensor data may include position data or movement data or both. The method further includes analyzing the sensor data generated by the one or more sensors during movement of the scanning stage in the controlled direction and identifying an event in the sensor data based on the analysis, wherein the identified event comprises one or more of a duration and a distance of a detected movement of the scanning stage in a non-controlled direction. The method also includes initiating a re-digitizing of the portion of the sample being digitized by the imaging system when the identified event occurred, if the identified event exceeds a predetermined threshold. In one embodiment, the one or more sensors comprise position encoders that operate in cooperation with a processor to control movement of the scanning stage in the controlled direction. In one embodiment, the one or more sensors comprise an accelerometer configured to detect a movement of the scanning stage in the non-controlled direction. In one embodiment, the predetermined threshold is based solely on a duration of a detected movement in a non-controlled direction. In one embodiment, the predetermined threshold is based solely on a distance of a detected movement in a non-controlled direction. In one embodiment, the predetermined threshold is based on a combination of a distance of a detected movement in a non-controlled direction and a duration of a detected movement in a non-controlled direction. 3. Example Digital Slide Scanning Apparatus The various embodiments described herein may be implemented using a digital pathology scanning device such as described with respect to FIGS. 2A-2D. FIG. 2A is a block diagram illustrating an example processor-enabled device 550 that may be used in connection with various embodiments described herein. Alternative forms of the device 550 may also be used as will be understood by the skilled artisan.
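Before the illustrated scanner hardware is described in detail, the per-axis sensing mentioned in the embodiments above can be illustrated with a short sketch: any deviation of the encoder readings from the commanded trajectory is treated as movement in a non-controlled direction. The axis labels and the numeric values are hypothetical.

    AXES = ("x", "y", "z")    # one position sensor per axis, as in the embodiment above

    def non_controlled_components(commanded, measured):
        """Per-axis position error: deviation of the encoder reading from the commanded
        trajectory is treated as movement in a non-controlled direction."""
        return {axis: measured[axis] - commanded[axis] for axis in AXES}

    # During a stripe the stage is commanded along one axis only; residuals on the
    # other axes (and excess error on the scan axis) indicate vibration.
    commanded = {"x": 0.0, "y": 1250.0, "z": 0.0}    # micrometres, hypothetical
    measured = {"x": 0.2, "y": 1250.4, "z": -0.1}
    error = non_controlled_components(commanded, measured)

An accelerometer-based variant would instead high-pass filter the acceleration signal and flag samples whose magnitude exceeds a threshold.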
In the illustrated embodiment, the device550is presented as a digital imaging device (also referred to as a digital slide scanning apparatus, digital slide scanner, scanner, scanner system or a digital imaging device, etc.) that comprises one or more processors555, one or more memories565, one or more motion controllers570, one or more interface systems575, one or more movable stages580that each support one or more glass slides585with one or more samples590, one or more illumination systems595that illuminate the sample, one or more objective lenses600that each define an optical path605that travels along an optical axis, one or more objective lens positioners630, one or more optional epi-illumination systems635(e.g., included in a fluorescence scanner system), one or more focusing optics610, one or more line scan cameras615and/or one or more area scan cameras620, each of which define a separate field of view625on the sample590and/or glass slide585. The various elements of the scanner system550are communicatively coupled via one or more communication busses560. Although there may be one or more of each of the various elements of the scanner system550, for simplicity in the description, these elements will be described in the singular except when needed to be described in the plural to convey the appropriate information. The one or more processors555may include, for example, a central processing unit (“CPU”) and a separate graphics processing unit (“GPU”) capable of processing instructions in parallel or the one or more processors555may include a multicore processor capable of processing instructions in parallel. Additional separate processors may also be provided to control particular components or perform particular functions such as image processing. For example, additional processors may include an auxiliary processor to manage data input, an auxiliary processor to perform floating point mathematical operations, a special-purpose processor having an architecture suitable for fast execution of signal processing algorithms (e.g., digital signal processor), a slave processor subordinate to the main processor (e.g., back-end processor), an additional processor for controlling the line scan camera615, the stage580, the objective lens225, and/or a display (not shown). Such additional processors may be separate discrete processors or may be integrated with the processor555. The memory565provides storage of data and instructions for programs that can be executed by the processor555. The memory565may include one or more volatile and/or non-volatile computer-readable storage mediums that store the data and instructions, including, for example, a random access memory, a read only memory, a hard disk drive, removable storage drive, and the like. The processor555is configured to execute instructions that are stored in memory565and communicate via communication bus560with the various elements of the scanner system550to carry out the overall function of the scanner system550. The one or more communication busses560may include a communication bus560that is configured to convey analog electrical signals and may include a communication bus560that is configured to convey digital data. Accordingly, communications from the processor555, the motion controller570, and/or the interface system575via the one or more communication busses560may include both electrical signals and digital data. 
The processor555, the motion controller570, and/or the interface system575may also be configured to communicate with one or more of the various elements of the scanning system550via a wireless communication link. The motion control system570is configured to precisely control and coordinate XYZ movement of the stage580and the objective lens600(e.g., via the objective lens positioner630). The motion control system570is also configured to control movement of any other moving part in the scanner system550. For example, in a fluorescence scanner embodiment, the motion control system570is configured to coordinate movement of optical filters and the like in the epi-illumination system635. In an embodiment, the motion control system570comprises one or more sensors that are configured to generate sensor data during movement of the scanning stage580in a controlled direction in cooperation with the processor555. For example, the one or more sensors may include one or more position encoders that provide information to the processor555during movement of the scanning stage580in the controlled direction in part to allow the processor555to precisely control movement of the scanning stage580and in part to allow the processor555to analyze the position encoder information to identify any movements of the scanning stage580in a non-controlled direction. Movement in a non-controlled direction may, for example, be caused by a vibration that is imparted to the scanning stage580by the digital pathology scanning apparatus itself or by some force outside of the digital pathology scanning apparatus. Additionally, or alternatively, the one or more sensors may include one or more accelerometers that provide information to the processor555during movement of the scanning stage580in the controlled direction, wherein the information is related to movement of the scanning stage580in the controlled direction and movement of the scanning stage580in a non-controlled direction. The information from the accelerometers allows the processor555to precisely control movement of the stage in the controlled direction and also allows the processor555to analyze the accelerometer information to identify any movements of the scanning stage580in a non-controlled direction. Advantageously, the one or more sensors (e.g., position encoders or accelerometers) are configured to precisely detect the position and/or movement of the scanning stage during digitization of a sample and the processor555is configured to analyze the sensor data to identify any undesired movement of the scanning stage in a non-controlled direction based on the precisely detected position and/or movement of the scanning stage during digitization of the sample. The interface system575allows the scanner system550to interface with other systems and human operators. For example, the interface system575may include a user interface to provide information directly to an operator and/or to allow direct input from an operator. The interface system575is also configured to facilitate communication and data transfer between the scanning system550and one or more external devices that are directly connected (e.g., a printer, removable storage medium, etc.) or external devices such as an image server system, an operator station, a user station, and an administrative server system that are connected to the scanner system550via a network (not shown). The illumination system595is configured to illuminate a portion of the sample590. 
The illumination system 595 may include, for example, a light source and illumination optics. The light source could be a variable-intensity halogen light source with a concave reflective mirror to maximize light output and a KG-1 filter to suppress heat. The light source could also be any type of arc lamp, laser, or other source of light. In an embodiment, the illumination system 595 illuminates the sample 590 in transmission mode such that the line scan camera 615 and/or area scan camera 620 sense optical energy that is transmitted through the sample 590. Alternatively, or additionally, the illumination system 595 may be configured to illuminate the sample 590 in reflection mode such that the line scan camera 615 and/or area scan camera 620 sense optical energy that is reflected from the sample 590. Overall, the illumination system 595 is configured to be suitable for interrogation of the microscopic sample 590 in any known mode of optical microscopy. In an embodiment, the scanner system 550 optionally includes an epi-illumination system 635 to optimize the scanner system 550 for fluorescence scanning. Fluorescence scanning is the scanning of samples 590 that include fluorescence molecules, which are photon-sensitive molecules that can absorb light at a specific wavelength (excitation). These photon-sensitive molecules also emit light at a longer wavelength (emission). Because the efficiency of this photoluminescence phenomenon is very low, the amount of emitted light is often very low. This low amount of emitted light typically frustrates conventional techniques for scanning and digitizing the sample 590 (e.g., transmission mode microscopy). Advantageously, in an optional fluorescence scanner system embodiment of the scanner system 550, use of a line scan camera 615 that includes multiple linear sensor arrays (e.g., a time delay integration (“TDI”) line scan camera) increases the sensitivity to light of the line scan camera by exposing the same area of the sample 590 to each of the multiple linear sensor arrays of the line scan camera 615. This is particularly useful when scanning faint fluorescence samples with low emitted light. Accordingly, in a fluorescence scanner system embodiment, the line scan camera 615 is preferably a monochrome TDI line scan camera. Advantageously, monochrome images are ideal in fluorescence microscopy because they provide a more accurate representation of the actual signals from the various channels present on the sample. As will be understood by those skilled in the art, a fluorescence sample 590 can be labeled with multiple fluorescence dyes that emit light at different wavelengths, which are also referred to as “channels.” Furthermore, because the low and high end signal levels of various fluorescence samples present a wide spectrum of wavelengths for the line scan camera 615 to sense, it is desirable for the low and high end signal levels that the line scan camera 615 can sense to be similarly wide. Accordingly, in a fluorescence scanner embodiment, a line scan camera 615 used in the fluorescence scanning system 550 is a monochrome 10-bit 64-linear-array TDI line scan camera. It should be noted that a variety of bit depths for the line scan camera 615 can be employed for use with a fluorescence scanner embodiment of the scanning system 550. The movable stage 580 is configured for precise X-Y axes movement under control of the processor 555 or the motion controller 570. The movable stage may also be configured for movement in a Z axis under control of the processor 555 or the motion controller 570.
The moveable stage is configured to position the sample in a desired location during image data capture by the line scan camera615and/or the area scan camera. The moveable stage is also configured to accelerate the sample590in a scanning direction to a substantially constant velocity and then maintain the substantially constant velocity during image data capture by the line scan camera615. In an embodiment, the scanner system550may employ a high precision and tightly coordinated X-Y grid to aid in the location of the sample590on the movable stage580. In an embodiment, the movable stage580is a linear motor based X-Y stage with high precision encoders employed on both the X and the Y axis. For example, very precise nanometer encoders can be used on the axis in the scanning direction and on the axis that is in the direction perpendicular to the scanning direction and on the same plane as the scanning direction. The stage is also configured to support the glass slide585upon which the sample590is disposed. The sample590can be anything that may be interrogated by optical microscopy. For example, a glass microscope slide585is frequently used as a viewing substrate for specimens that include tissues and cells, chromosomes, DNA, protein, blood, bone marrow, urine, bacteria, beads, biopsy materials, or any other type of biological material or substance that is either dead or alive, stained or unstained, labeled or unlabeled. The sample590may also be an array of any type of DNA or DNA-related material such as cDNA, RNA or protein that is deposited on any type of slide or other substrate, including any and all samples commonly known as microarrays. The sample590may be a microtiter plate, for example a 96-well plate. Other examples of the sample590include integrated circuit boards, electrophoresis records, petri dishes, film, semiconductor materials, forensic materials, and machined parts. Objective lens600is mounted on the objective positioner630which, in an embodiment, may employ a very precise linear motor to move the objective lens600along the optical axis defined by the objective lens600. For example, the linear motor of the objective lens positioner630may include a 50 nanometer encoder. The relative positions of the stage580and the objective lens600in XYZ axes are coordinated and controlled in a closed loop manner using motion controller570under the control of the processor555that employs memory565for storing information and instructions, including the computer-executable programmed steps for overall operation of the scanning system550. In an embodiment, the objective lens600is a plan apochromatic (“APO”) infinity corrected objective with a numerical aperture corresponding to the highest spatial resolution desirable, where the objective lens600is suitable for transmission mode illumination microscopy, reflection mode illumination microscopy, and/or epi-illumination mode fluorescence microscopy (e.g., an Olympus 40×, 0.75 NA or 20×, 0.75 NA). Advantageously, objective lens600is capable of correcting for chromatic and spherical aberrations. Because objective lens600is infinity corrected, focusing optics610can be placed in the optical path605above the objective lens600where the light beam passing through the objective lens becomes a collimated light beam. 
The focusing optics 610 focus the optical signal captured by the objective lens 600 onto the light-responsive elements of the line scan camera 615 and/or the area scan camera 620 and may include optical components such as filters, magnification changer lenses, and/or the like. The objective lens 600 combined with the focusing optics 610 provides the total magnification for the scanning system 550. In an embodiment, the focusing optics 610 may contain a tube lens and an optional 2× magnification changer. Advantageously, the 2× magnification changer allows a native 20× objective lens 600 to scan the sample 590 at 40× magnification. The line scan camera 615 comprises at least one linear array of picture elements (“pixels”). The line scan camera may be monochrome or color. Color line scan cameras typically have at least three linear arrays, while monochrome line scan cameras may have a single linear array or plural linear arrays. Any type of singular or plural linear array, whether packaged as part of a camera or custom-integrated into an imaging electronic module, can also be used. For example, a three-linear-array (“red-green-blue” or “RGB”) color line scan camera or a 96-linear-array monochrome TDI camera may also be used. TDI line scan cameras typically provide a substantially better signal-to-noise ratio (“SNR”) in the output signal by summing intensity data from previously imaged regions of a specimen, yielding an increase in the SNR that is in proportion to the square root of the number of integration stages. TDI line scan cameras comprise multiple linear arrays. For example, TDI line scan cameras are available with 24, 32, 48, 64, 96, or even more linear arrays. The scanner system 550 also supports linear arrays that are manufactured in a variety of formats, including some with 512 pixels, some with 1024 pixels, and others having as many as 4096 pixels. Similarly, linear arrays with a variety of pixel sizes can also be used in the scanner system 550. The salient requirement for the selection of any type of line scan camera 615 is that the motion of the stage 580 can be synchronized with the line rate of the line scan camera 615 so that the stage 580 can be in motion with respect to the line scan camera 615 during the digital image capture of the sample 590. The image data generated by the line scan camera 615 is stored in a portion of the memory 565 and processed by the processor 555 to generate a contiguous digital image of at least a portion of the sample 590. The contiguous digital image can be further processed by the processor 555, and the processed contiguous digital image can also be stored in the memory 565. In an embodiment with two or more line scan cameras 615, at least one of the line scan cameras 615 can be configured to function as a focusing sensor that operates in combination with at least one of the line scan cameras 615 that is configured to function as an imaging sensor. The focusing sensor can be logically positioned on the same optical axis as the imaging sensor, or the focusing sensor may be logically positioned before or after the imaging sensor with respect to the scanning direction of the scanner system 550. In an embodiment with at least one line scan camera 615 functioning as a focusing sensor, the image data generated by the focusing sensor is stored in a portion of the memory 565 and processed by the one or more processors 555 to generate focus information to allow the scanner system 550 to adjust the relative distance between the sample 590 and the objective lens 600 to maintain focus on the sample during scanning.
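As a concrete illustration of the synchronization requirement and of the TDI signal-to-noise relationship noted above, the sketch below computes the stage velocity at which one line is read out per sample-referred pixel, together with the square-root SNR gain of a TDI camera. The function names and the numeric values (pixel size, line rate, magnifications) are assumptions for illustration only.

    import math

    def sample_pixel_pitch_um(camera_pixel_um, objective_mag, changer_mag=1.0):
        """Size of one camera pixel projected onto the sample."""
        return camera_pixel_um / (objective_mag * changer_mag)

    def stage_speed_um_per_s(line_rate_hz, camera_pixel_um, objective_mag, changer_mag=1.0):
        """Stage velocity at which one line is captured per sample-referred pixel,
        keeping the stage motion synchronized with the line rate."""
        return line_rate_hz * sample_pixel_pitch_um(camera_pixel_um, objective_mag, changer_mag)

    def tdi_snr_gain(num_stages):
        """SNR improvement of a TDI line scan camera, proportional to the square root
        of the number of integration stages."""
        return math.sqrt(num_stages)

    # Hypothetical numbers: 20x objective with a 2x changer (40x total), 10 um camera
    # pixels, 40 kHz line rate, 96-stage TDI.
    speed = stage_speed_um_per_s(40_000, 10.0, 20.0, 2.0)   # 10_000 um/s, i.e. 10 mm/s
    gain = tdi_snr_gain(96)                                  # about 9.8x versus a single line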
Additionally, in an embodiment the at least one line scan camera615functioning as a focusing sensor may be oriented such that each of a plurality of individual pixels of the focusing sensor is positioned at a different logical height along the optical path605. In operation, the various components of the scanner system550and the programmed modules stored in memory565enable automatic scanning and digitizing of the sample590, which is disposed on a glass slide585. The glass slide585is securely placed on the movable stage580of the scanner system550for scanning the sample590. Under control of the processor555, the movable stage580accelerates the sample590to a substantially constant velocity for sensing by the line scan camera615, where the speed of the stage is synchronized with the line rate of the line scan camera615. After scanning a stripe of image data, the movable stage580decelerates and brings the sample590to a substantially complete stop. The movable stage580then moves orthogonal to the scanning direction to position the sample590for scanning of a subsequent stripe of image data, e.g., an adjacent stripe. Additional stripes are subsequently scanned until an entire portion of the sample590or the entire sample590is scanned. For example, during digital scanning of the sample590, a contiguous digital image of the sample590is acquired as a plurality of contiguous fields of view that are combined together to form an image strip. A plurality of adjacent image strips are similarly combined together to form a contiguous digital image of a portion of the sample590or the entire sample590. The scanning of the sample590may include acquiring vertical image strips or horizontal image strips. The scanning of the sample590may be either top-to-bottom, bottom-to-top, or both (bi-directional) and may start at any point on the sample. Alternatively, the scanning of the sample590may be either left-to-right, right-to-left, or both (bi-directional) and may start at any point on the sample. Additionally, it is not necessary that image strips be acquired in an adjacent or contiguous manner. Furthermore, the resulting image of the sample590may be an image of the entire sample590or only a portion of the sample590. In an embodiment, computer-executable instructions (e.g., programmed modules or other software) are stored in the memory565and, when executed, enable the scanning system550to perform the various functions described herein. In this description, the term “computer-readable storage medium” is used to refer to any media used to store and provide computer executable instructions to the scanning system550for execution by the processor555. Examples of these media include memory565and any removable or external storage medium (not shown) communicatively coupled with the scanning system550either directly or indirectly (e.g., via a network). FIG.2Billustrates a line scan camera having a single linear array640, which may be implemented as a charge coupled device (“CCD”) array. The single linear array640comprises a plurality of individual pixels645. In the illustrated embodiment, the single linear array640has 4096 pixels. In alternative embodiments, linear array640may have more or fewer pixels. For example, common formats of linear arrays include 512, 1024, and 4096 pixels. The pixels645are arranged in a linear fashion to define a field of view625for the linear array640. The size of the field of view varies in accordance with the magnification of the scanner system550. 
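The stripe-based operation described above lends itself to a simple planning sketch: the field of view of the linear array, projected onto the sample, sets the stripe width, and the number of adjacent stripes follows from the width of the region to be scanned. The function names, the overlap parameter, and the numeric values are assumptions for illustration only.

    import math

    def field_of_view_um(num_pixels, camera_pixel_um, total_magnification):
        """Width of the linear array's field of view 625 projected onto the sample;
        it shrinks as the total magnification of the scanner system increases."""
        return num_pixels * camera_pixel_um / total_magnification

    def plan_stripes(region_width_um, num_pixels, camera_pixel_um, total_magnification, overlap_um=0.0):
        """Number of adjacent stripes needed to cover the region, each stripe being one
        constant-velocity pass followed by an orthogonal step-over."""
        stripe_width = field_of_view_um(num_pixels, camera_pixel_um, total_magnification) - overlap_um
        return math.ceil(region_width_um / stripe_width)

    # Hypothetical 4096-pixel array, 10 um pixels, 40x total magnification: each stripe
    # covers about 1024 um, so a 15 mm wide region needs about 15 stripes.
    stripes_needed = plan_stripes(15_000, 4096, 10.0, 40.0)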
FIG.2Cillustrates a line scan camera having three linear arrays, each of which may be implemented as a CCD array. The three linear arrays combine to form a color array650. In an embodiment, each individual linear array in the color array650detects a different color intensity, (e.g., red, green, or blue). The color image data from each individual linear array in the color array650is combined to form a single field of view625of color image data. FIG.2Dillustrates a line scan camera having a plurality of linear arrays, each of which may be implemented as a CCD array. The plurality of linear arrays combine to form a TDI array655. Advantageously, a TDI line scan camera may provide a substantially better SNR in its output signal by summing intensity data from previously imaged regions of a specimen, yielding an increase in the SNR that is in proportion to the square-root of the number of linear arrays (also referred to as integration stages). A TDI line scan camera may comprise a larger variety of numbers of linear arrays. For example common formats of TDI line scan cameras include 24, 32, 48, 64, 96, 120 and even more linear arrays. The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles described herein can be applied to other embodiments without departing from the spirit or scope of the invention. Thus, it is to be understood that the description and drawings presented herein represent a presently preferred embodiment of the invention and are therefore representative of the subject matter which is broadly contemplated by the present invention. It is further understood that the scope of the present invention fully encompasses other embodiments that may become obvious to those skilled in the art and that the scope of the present invention is accordingly not limited.
33,109
11943538
With regard to description of the drawings, identical or similar reference numerals may be used to refer to identical or similar components. DETAILED DESCRIPTION Embodiments of the disclosure may be described with reference to accompanying drawings. Accordingly, those of ordinary skill in the art will recognize that modification, equivalent, and/or alternative on the various embodiments described herein can be variously made without departing from the scope and spirit of the disclosure. The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device or a home appliance. The electronic devices are not limited to those described above. Various embodiments of the present disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. A singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B or C,” “at least one of A, B, and C,” and “at least one of A, B or C,” may include any one of or all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and does not limit the components in other aspect (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly or via a third element. FIG.1is a front perspective view of an electronic device100according to an embodiment.FIG.2is a rear perspective view of the electronic device100according to an embodiment. Referring toFIGS.1and2, the electronic device100may include a housing110that includes a first surface (or a front surface)110A, a second surface (or a rear surface)110B, and a third surface (or a side surface)110C surrounding the space between the first surface110A and the second surface110B. Alternatively, the housing110may refer to a structure that forms some of the first surface110A, the second surface110B, and the third surface110C. The first surface110A may be formed by a front plate102, at least a portion of which is substantially transparent (e.g., a glass plate including various coating layers or a polymer plate). The second surface110B may be formed by a back plate111that is substantially opaque. The back plate111may be formed of, for example, coated or colored glass, ceramic, a polymer, metal (e.g., aluminum, stainless steel (STS) or magnesium) or a combination of at least two of the aforementioned materials. 
The third surface110C may be formed by a side bezel structure (or a side member)118that is coupled with the front plate102and the back plate111and that contains metal and/or a polymer. Alternatively, the back plate111and the side bezel structure118may be integrally formed with each other and may contain the same material (e.g., a metallic material such as aluminum). The front plate102may include two first areas110D that curvedly and seamlessly extend from partial areas of the first surface110A toward the back plate111. The first areas110D may be located at opposite long edges of the front plate102. The back plate111may include two second areas110E that curvedly and seamlessly extend from partial areas of the second surface110B toward the front plate102. The second areas110E may be located at opposite long edges of the back plate111. Alternatively, the front plate102(or the back plate111) may include only one of the first areas110D (or the second areas110E) (or may not include a part of the first areas110D (or the second areas110E). When viewed from a side of the electronic device100, the side bezel structure118may have a first thickness (or width) at sides (e.g., short sides) not including the first areas110D or the second areas110E and may have a second thickness at sides (e.g., long sides) including the first areas110D or the second areas110E, the second thickness being smaller than the first thickness. The electronic device100may include at least one of a display101, audio modules103,104, and107(e.g., an audio module770ofFIG.24), a sensor module (e.g., a sensor module776ofFIG.24), camera modules105112, and113(e.g., a camera module780ofFIG.24), key input devices117(e.g., an input device750ofFIG.24), a light emitting element or a connector hole108(e.g., a connecting terminal778ofFIG.24). Alternatively, at least one component (e.g., the key input devices117or the light emitting element) among the aforementioned components may be omitted from the electronic device100or other component(s) may be additionally included in the electronic device100. The display101may be visually exposed through most of the front plate102. For example, at least a portion of the display101may be visually exposed through the front plate102that includes the first surface110A and the first areas110D of the third surface110C. The display101may be disposed on the rear surface of the front plate102. The periphery of the display101may be formed to be substantially the same as the shape of the adjacent outside edge of the front plate102. Alternatively, the gap between the outside edge of the display101and the outside edge of the front plate102may be substantially constant to expand the area by which the display101is visually exposed. A surface of the housing110(or the front plate102) may include a screen display area that is formed as the display101is visually exposed. For example, the screen display area may include the first surface110A and the first areas110D of the side surface110C. Alternatively, the screen display area110A and110D may include a sensing area that is configured to obtain biometric information of a user. When the screen display area110A and110D includes the sensing area, this may mean that at least a portion of the sensing area overlaps the screen display area110A and110D. 
For example, the sensing area may refer to an area capable of displaying visual information by the display 101 like other areas of the screen display area 110A and 110D and additionally obtaining biometric information (e.g., a fingerprint) of the user. The screen display area 110A and 110D of the display 101 may include an area through which the first camera module 105 (e.g., a punch hole camera) is visually exposed. For example, at least a portion of the periphery of the area through which the first camera module 105 is visually exposed may be surrounded by the screen display area 110A and 110D. The first camera module 105 may include a plurality of camera modules (e.g., the camera module 780 of FIG. 24). The display 101 may be configured such that at least one of an audio module, a sensor module, a camera module (e.g., the first camera module 305) or a light emitting element is disposed on the rear surface of the screen display area 110A and 110D. For example, the electronic device 100 may be configured such that the first camera module 105 (e.g., an under display camera (UDC)) is disposed on the rear side (e.g., the side facing the −z-axis direction) of the first surface 110A (e.g., the front surface) and/or the side surface 110C (e.g., at least one surface of the first areas 110D) so as to face toward the first surface 110A and/or the side surface 110C. For example, the first camera module 105 may be disposed under the display 101 and may not be visually exposed through the screen display area 110A and 110D. The display 101 may be coupled with, or disposed adjacent to, touch detection circuitry, a pressure sensor capable of measuring the intensity (pressure) of a touch, and/or a digitizer that detects a stylus pen of a magnetic field type. The audio modules 103, 104, and 107 may include the microphone holes 103 and 104 and the speaker hole 107. The microphone holes 103 and 104 may include the first microphone hole 103 formed in a partial area of the third surface 110C and the second microphone hole 104 formed in a partial area of the second surface 110B. A microphone for obtaining an external sound may be disposed in the microphone holes 103 and 104. The microphone may include a plurality of microphones to sense the direction of a sound. The second microphone hole 104 formed in the partial area of the second surface 110B may be disposed adjacent to the camera modules 105, 112, and 113. For example, the second microphone hole 104 may obtain sounds when the camera modules 105, 112, and 113 are executed or may obtain sounds when other functions are executed. The speaker hole 107 may include an external speaker hole 107 and a receiver hole for telephone calls. The external speaker hole 107 may be formed in a portion of the third surface 110C of the electronic device 100. Alternatively, the external speaker hole 107 and the microphone hole 103 may be implemented as a single hole. The receiver hole for telephone calls may be formed in another portion of the third surface 110C. For example, the receiver hole for telephone calls may be formed in another portion (e.g., a portion facing the +y-axis direction) of the third surface 110C that faces the portion (e.g., a portion facing the −y-axis direction) of the third surface 110C in which the external speaker hole 107 is formed. The receiver hole for telephone calls may not be formed in a portion of the third surface 110C and may be formed by the separation space between the front plate 102 (or the display 101) and the side bezel structure 118.
The electronic device100may include at least one speaker that is configured to output a sound outside the housing110through the external speaker hole107or the receiver hole for telephone call. The speaker may include a piezoelectric speaker not including the speaker hole107. The sensor module may generate an electrical signal or a data value that corresponds to an operational state inside the electronic device100or an environmental state external to the electronic device100. For example, the sensor module may include at least one of a proximity sensor, a heart rate monitor (HRM) sensor, a fingerprint sensor, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a color sensor, an infrared (IR) sensor, a biosensor, a temperature sensor, a humidity sensor or an illuminance sensor. The camera modules105,112, and113may include the first camera module105(e.g., a punch hole camera) exposed on the first surface110A of the electronic device100, the second camera module112exposed on the second surface110B, and/or the flash113. The first camera module105may be visually exposed through a portion of the screen display area110A and110D of the display101. For example, the first camera module105may be visually exposed on a partial area of the screen display area110A and110D through an opening that is formed in a portion of the display101. In another example, the first camera module105(e.g., a UDC) may be disposed on the rear surface of the display101and may not be visually exposed through the screen display area110A and110D. The second camera module112may include a plurality of cameras (e.g., a dual camera, a triple camera or a quad camera). However, the second camera module112is not necessarily limited to including the plurality of cameras and may include one camera. The first camera module105and the second camera module112may include one or more lenses, an image sensor, and/or an image signal processor (ISP). A flash113may include, for example, a light emitting diode or a xenon lamp. Alternatively, two or more lenses (an IR camera lens, a wide angle lens, and a telephoto lens) and image sensors may be disposed on one surface of the electronic device100. The key input devices117may be disposed on the third surface110C (e.g., the first areas110D and/or the second areas110E) of the housing110. Alternatively, the electronic device100may not include all or some of the key input devices117, and the key input devices117not included may be implemented in a different form, such as a soft key, on the display101. Alternatively, the key input devices117may include a sensor module that forms the sensing area that is included in the screen display area110A and110D. The connector hole108may accommodate a connector. The connector hole108may be disposed in the third surface110C of the housing110. For example, the connector hole108may be disposed in the third surface110C so as to be adjacent to at least a part of the audio modules (e.g., the microphone hole103and the speaker hole107). Alternatively, the electronic device100may include the first connector hole108capable of accommodating a connector (e.g., a universal serial bus (USB) connector) for transmitting/receiving power and/or data with an external electronic device, and/or a second connector hole capable of accommodating a connector (e.g., an earphone jack) for transmitting/receiving audio signals with an external electronic device. The electronic device100may include the light emitting element. 
For example, the light emitting element may be disposed on the first surface110A of the housing110. The light emitting element may provide state information of the electronic device100in the form of light. Alternatively, the light emitting element may provide a light source that operates in conjunction with an operation of the first camera module105. For example, the light emitting element may include an LED, an IR LED, and/or a xenon lamp. FIG.3is an exploded perspective view of the electronic device100according to an embodiment. Referring toFIG.3, the electronic device100may include a front plate120(e.g., the front plate102ofFIG.1), a display130(e.g., the display101ofFIG.1), a side member140(e.g., the side bezel structure118ofFIG.1), a printed circuit board (PCB)150, a rear case160, a battery170, a back plate180(e.g., the back plate111ofFIG.2), and an antenna. The electronic device100may not include at least one component (e.g., the rear case160) among the aforementioned components or may additionally include other component(s). Some of the components of the electronic device100illustrated inFIG.3may be the same as or similar to, some of the components of the electronic device illustrated inFIGS.1and2(e.g., the electronic device100ofFIGS.1and2), and therefore repetitive descriptions will hereinafter be omitted. The front plate120and the display130may be coupled to the side member140. For example, the front plate120and the display130may be disposed under the side member140based onFIG.3. The front plate120and the display130may be located in the +z-axis direction from the side member140. For example, the display130may be coupled to the bottom of the side member140, and the front plate120may be coupled to the bottom of the display130. The front plate120may form a portion of the outer surface (or the exterior) of the electronic device100. The display130may be disposed between the front plate120and the side member140so as to be located inside the electronic device100. The side member140may be disposed between the display130and the back plate180. For example, the side member140may be configured to surround the space between the back plate180and the display130. The side member140may include a frame structure141forming a portion of the side surface (e.g., the third surface110C ofFIG.1) of the electronic device100and a plate structure142extending inward from the frame structure141. The plate structure142may be disposed inside the frame structure141so as to be surrounded by the frame structure141. The plate structure142may be connected with the frame structure141or may be integrally formed with the frame structure141. The plate structure142may be formed of a metallic material and/or a nonmetallic (e.g., polymer) material. The plate structure142may support other components included in the electronic device100. For example, at least one of the display130, the PCB150, the rear case160or the battery170may be disposed on the plate structure142. For example, the display130may be coupled to one surface (e.g., the surface facing the +z-axis direction) of the plate structure142, and the PCB150may be coupled to an opposite surface (e.g., the surface facing the −z-axis direction) facing away from the one surface. The rear case160may be disposed between the back plate180and the plate structure142. The rear case160may be coupled to the side member140so as to overlap at least a portion of the PCB150. For example, the rear case160may face the plate structure142with the PCB150therebetween. 
A processor (e.g., a processor720ofFIG.24), memory (e.g., memory730ofFIG.24), and/or an interface (e.g., an interface777ofFIG.24) may be mounted on the PCB150. The processor may include, for example, one or more of a central processing unit (CPU), an application processor (AP), a graphics processing unit (GPU), an ISP, a sensor hub processor or a communication processor (CP). The memory may include, for example, volatile memory or nonvolatile memory. The interface may include, for example, a high-definition multimedia interface (HDMI), a USB interface, a secure digital (SD) card interface, and/or an audio interface. The interface may electrically or physically connect the electronic device100with an external electronic device and may include a USB connector, an SD card/multimedia card (MMC) connector or an audio connector. The battery170may supply power to at least one component of the electronic device100. For example, the battery170may include a primary cell that is not rechargeable, a secondary cell that is rechargeable or a fuel cell. At least a portion of the battery170may be disposed on substantially the same plane as the PCB150. The battery170may be integrally disposed inside the electronic device100or may be disposed to be detachable from the electronic device100. The antenna may be disposed between the back plate180and the battery170. The antenna may include, for example, a near field communication (NFC) antenna, a wireless charging antenna, and/or a magnetic secure transmission (MST) antenna. For example, the antenna may perform short-range communication with an external device or may wirelessly transmit and receive power required for charging. The first camera module105may be disposed on at least the plate structure142of the side member140such that a lens receives external light through a partial area of the front surface110A. For example, the lens of the first camera module105may be visually exposed through a camera area137of the front plate120. The second camera module112may be disposed on the PCB150such that a lens receives external light through a camera area184of the rear surface110B of the electronic device100. For example, the lens of the second camera module112may be visually exposed through the camera area184. The second camera module112may be disposed in at least a portion of the inner space formed in the housing110of the electronic device100and may be electrically connected to the PCB150through a connecting member (e.g., a connector). The camera area184may be formed in the rear surface310B of the back plate180. The camera area184may be formed to be at least partially transparent such that external light is incident on the lens of the second camera module112. At least a portion of the camera area184may protrude to a predetermined height from the surface of the back plate180. However, without being necessarily limited thereto, the camera area184may form substantially the same plane as the surface of the back plate180. FIG.4is a perspective view of a camera module200according to an embodiment.FIG.5is an exploded perspective view of the camera module200according to an embodiment.FIG.6illustrates a movable member250and a connecting member400of the camera module200according to an embodiment. Referring toFIGS.4,5and6, the camera module200may include a camera housing210, a lens assembly220, an image stabilization assembly230, a second circuit board290, and the connecting member400. The camera housing210may form at least a portion of the exterior of the camera module200. 
A surface of the camera housing210may form a portion of the outer surface or the exterior of the camera module200. The camera housing210may accommodate a part of other components of the camera module200. The lens assembly220may be accommodated in the camera housing210. The camera housing210may be connected with a part of the image stabilization assembly230. For example, the camera housing210may be connected with a frame242and/or a cover244of the image stabilization assembly230. The camera housing210may form the exterior of the camera module200together with the frame242and/or the cover244. The camera housing210may be integrally formed with the frame242and/or the cover244. The camera housing210may have, in one surface thereof, a light receiving area211through which a reflective member224is visually exposed. For example, the light receiving area211may be formed in a portion of a first surface210afacing the +z-axis direction of the camera housing210. For example, the light receiving area211may include an opening area (or a through-hole) through which a portion of the reflective member224is directly exposed outside the camera housing210. The light receiving area211may include a transparent area (e.g., a window cover). The first surface210aof the camera housing210may be parallel to a portion of the rear surface110B of the electronic device100or may form a portion of the rear surface110B of the electronic device100. In an embodiment, external light may be incident on the reflective member224, which is disposed in the camera housing210, through the light receiving area211. As illustrated inFIG.4, at least a portion of the reflective member224may be visually exposed outside the camera housing210through the light receiving area211. For example, at least a portion of the reflective member224may overlap the light receiving area211when the first surface210aof the camera housing210is viewed from above. The lens assembly220may be disposed in the camera housing210. The lens assembly220may include a lens unit222and the reflective member224. For example, the lens unit222and the reflective member224of the lens assembly220may be located in the camera housing210. The lens unit222and the reflective member224may be aligned with the image stabilization assembly230(or an image sensor252) in the direction of an optical axis L of a lens. For example, the optical axis L of the lens may be defined as a virtual axis extending in the direction in which light incident on the lens through the reflective member224passes through the lens. For example, the optical axis L may extend substantially parallel to the x-axis. The lens unit222may include at least one lens. For example, the lens unit222may include one lens or may include a plurality of lenses. At least a portion of the lens unit222may be accommodated in a lens carrier, and the lens carrier may be disposed in the camera housing210. The lens carrier may be configured to move in the direction of the optical axis L inside the camera housing210. For example, the lens unit222may move straight (or may move linearly) in the direction of the optical axis L together with the lens carrier. The lens unit222may be disposed between the reflective member224and the image stabilization assembly230. For example, the lens unit222may be located between the reflective member224and the image sensor252based on the direction of the optical axis L. The reflective member224, the lens unit222, and the image sensor252may be at least partially disposed on the optical axis L. 
For example, the image sensor252may be disposed in a first optical axis direction L1(e.g., the +x-axis direction) from the lens unit222, and the reflective member224may be disposed in a second optical axis direction L2(e.g., the −x-axis direction) from the lens unit222. As illustrated inFIG.5, the reflective member224, the lens unit222, and the image sensor252may be sequentially disposed along the first optical axis direction L1. Herein, external light may be incident on the reflective member224through the light receiving area211and may be reflected or refracted by the reflective member224to travel toward the lens unit222and/or the image sensor252. The reflective member224may be disposed in the camera housing210to face the image sensor252with the lens unit222therebetween. For example, the reflective member224may be located in the second optical axis direction L2(e.g., the −x-axis direction) with respect to the lens unit222. For example, the lens unit222and the image sensor252may be sequentially disposed in the first optical axis direction L1(e.g., the +x-axis direction) from the reflective member224. The reflective member224may be configured to reflect or refract external light incident through the light receiving area211. For example, external light reflected from an object may be incident on the reflective member224in a direction (e.g., the z-axis direction) perpendicular to the optical axis L through the light receiving area211. The light incident on the reflective member224may be reflected and/or refracted in the direction of the optical axis L by the reflective member224and may pass through the lens of the lens unit222, and the light passing through the lens may be incident on the image sensor252. The plurality of lenses of the lens unit222may condense the light reflected or refracted by the reflective member224. The condensed light may form an image on the image sensor252of the image stabilization assembly230. The reflective member224may include a prism or an inclined mirror. The lens assembly220may further include a focus drive unit that is configured to move at least a portion of the lens unit222in the direction of the optical axis L. For example, the focus drive unit may include a magnet that is disposed on one of the lens unit222and the camera housing210and a coil that is disposed on the other one of the lens unit222and the camera housing210. For example, the magnet and the coil may be configured to electromagnetically interact with each other. The lens unit222may be configured to move in the direction of the optical axis L by an electromagnetic force (e.g., Lorentz force) generated between the coil and the magnet. The camera module200may be configured to perform a zoom function and/or an auto focus (AF) function by moving the lens unit222in the first optical axis direction L1or the second optical axis direction L2using the focus drive unit. The image stabilization assembly230may perform an optical image stabilizer (OIS) function in response to external noise (e.g., a shaking movement of the user's hand or vibration) applied to the camera module200. The image stabilization assembly230may include a fixed member240, the movable member250, a guide member260, and a drive member270. The fixed member240may be coupled with the camera housing210so as to be fixed to the camera housing210. The fixed member240may include the cover244and the frame242. For example, the cover244and/or the frame242may be connected to the camera housing210or may be integrally formed with the camera housing210. 
The cover244and the frame242may be integrally formed with each other or may be coupled so as to be detachable from each other. For example, the cover244and the frame242may form a predetermined space in which the movable member250is disposed. For example, the frame242may include an extending portion246surrounding at least a portion of the connecting member400. The movable member250may be configured to move in a direction perpendicular to the optical axis L. The movable member250may move relative to the camera housing210and the fixed member240in the direction perpendicular to the optical axis L. For example, the image stabilization assembly230may perform an image stabilization function by moving the movable member250in the direction perpendicular to the optical axis L (e.g., the y-axis direction and/or the z-axis direction). For example, light reflected from the object may pass through the lens assembly220and may form an image on the image sensor252. The image formed on the image sensor252may be shaken by external noise. For example, the image stabilization assembly230may move the image sensor252to compensate for the shake in response to the external noise. When the image sensor252moves, the optical axis L may be located to deviate from the center of the image sensor252. The image stabilization assembly230may compensate for the image shake by moving the movable member250including the image sensor252in the direction perpendicular to the optical axis L. The movable member250may include a holder251, a first circuit board253, and the image sensor252. For example, the holder251, the first circuit board253, and the image sensor252may be coupled or connected so as to move together in the direction perpendicular to the optical axis L. The holder251may be coupled to the first circuit board253so as to move together with the first circuit board253. For example, the holder251may be coupled to the first circuit board253through insertion-coupling. The holder251may include protrusions that are press-fit into holes of the first circuit board253or may include holes into which protrusions formed on the first circuit board253are press-fit. For example, the holder251may move together with the image sensor252and the first circuit board253when the image stabilization function is performed. The holder251may have a first opening area254formed therein to be aligned with the image sensor252in the direction of the optical axis L. For example, the image sensor252may face the lens unit222through the first opening area254. Light passing through the lens unit222may be incident on the image sensor252through the first opening area254. The first circuit board253may be configured such that the image sensor252is electrically connected to one portion of the first circuit board253and the connecting member400is electrically connected to another portion of the first circuit board253. The first circuit board253may include a first portion255substantially perpendicular to the optical axis L, a second portion256disposed to be substantially perpendicular to the first portion255, and a connecting portion257connecting the first portion255and the second portion256. For example, the image sensor252may be connected to or disposed on, the first portion255of the first circuit board253, and the connecting member400may be connected to the second portion256of the first circuit board253. For example, the second portion256may be disposed substantially parallel to the optical axis L. 
The connecting portion257may be formed in a shape in which one portion is bent such that the first portion255and the second portion256are perpendicular to each other. The first circuit board253may be configured such that the first portion255, the second portion256, and the connecting portion257are integrally formed or are manufactured as separate parts and assembled together. The first portion255of the first circuit board253may include a first surface255afacing toward the lens assembly220and a second surface255bfacing away from the first surface255a. For example, the first surface255amay be a surface facing the second optical axis direction L2(e.g., the −x-axis direction), and the second surface255bmay be a surface facing the first optical axis direction L1(e.g., the +x-axis direction). The image sensor252may be disposed on the first surface255aof the first portion255of the first circuit board253. One or more coils271and273may be disposed on the second surface255bof the first portion255of the first circuit board253. The first circuit board253may be disposed such that the first opening area254of the holder251and the image sensor252are aligned with each other with respect to the optical axis L. The image sensor252may be at least partially aligned with the optical axis L. The image sensor252may be electrically connected with the first circuit board253or may be disposed on the first surface255aof the first circuit board253(e.g., the first portion255). The image sensor252may be configured to receive light passing through the lens and generate an electrical signal based on the received light signal. For example, the image sensor252may face the lens included in the lens unit222through the first opening area254of the holder251. To compensate for a shake, the image stabilization assembly230may move the movable member250(e.g., the holder251, the first circuit board253, and the image sensor252) in at least one direction perpendicular to the optical axis L in response to the direction in which the camera module200is shaken. The plurality of coils271and273may be located on the second surface255bof the first portion255of the first circuit board253. The plurality of coils271and273may include the first coil271that provides a driving force for a movement of the first circuit board253(or the image sensor252) in the y-axis direction and the second coil273that provides a driving force for a movement of the first circuit board253(or the image sensor252) in the z-axis direction. The plurality of coils271and273may interact with a plurality of magnets on the frame242of the fixed member240to provide the driving forces for moving the first circuit board253. The image stabilization assembly230(or the camera module200) may perform the image stabilization function by moving the first circuit board253(or the image sensor252) in a direction perpendicular to the optical axis L (e.g., the y-axis direction and/or the z-axis direction) by applying electrical signals to the plurality of coils271and273. For example, the first coil271and the second coil273may electromagnetically interact with the plurality of magnets on the frame242. For example, when the electrical signals are applied to the plurality of coils271and273, a magnetic field may be formed, and an electromagnetic force may be generated between the plurality of coils271and273and the plurality of magnets. 
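As an illustration of how such electrical signals could be derived, the following C sketch models a simple proportional drive loop: a shake angle measured by a gyro sensor is converted into a target shift of the image sensor252(using the small-angle relation between an angular deviation and the image displacement for an assumed effective focal length), and the remaining shift error is mapped to currents for the first coil271and the second coil273. This sketch is not the control method of the disclosed camera module200; the focal length, the gain, and the helper functions read_gyro_rate_z, read_gyro_rate_y, read_sensor_shift_y, read_sensor_shift_z, set_first_coil_current, and set_second_coil_current are hypothetical names introduced only for illustration.

/*
 * Illustrative sketch only: a minimal proportional OIS drive loop.
 * The focal length, gain, axes, and helper functions below are assumptions
 * introduced for illustration and are not taken from this disclosure.
 */
#include <math.h>

#define F_EFF_MM     9.0f     /* assumed effective focal length of the lens unit 222 */
#define KP_MA_PER_MM 120.0f   /* assumed gain: milliamperes of coil current per mm of shift error */
#define DT_S         0.001f   /* assumed control period in seconds */

/* hypothetical accessors assumed to be provided by the sensor module and driver circuits */
extern float read_gyro_rate_z(void);             /* angular rate about the z-axis, rad/s */
extern float read_gyro_rate_y(void);             /* angular rate about the y-axis, rad/s */
extern float read_sensor_shift_y(void);          /* measured shift of the movable member along S1, mm */
extern float read_sensor_shift_z(void);          /* measured shift of the movable member along S2, mm */
extern void  set_first_coil_current(float ma);   /* drives the first coil 271 (S1 direction) */
extern void  set_second_coil_current(float ma);  /* drives the second coil 273 (S2 direction) */

void ois_step(void)
{
    static float ang_z_rad, ang_y_rad;           /* integrated shake angles */

    ang_z_rad += read_gyro_rate_z() * DT_S;      /* rotation about z displaces the image along the y-axis */
    ang_y_rad += read_gyro_rate_y() * DT_S;      /* rotation about y displaces the image along the z-axis */

    /* small-angle optics: image displacement is roughly focal length times shake angle */
    float target_s1_mm = F_EFF_MM * tanf(ang_z_rad);
    float target_s2_mm = F_EFF_MM * tanf(ang_y_rad);

    /* proportional commands; the resulting coil currents interact with the magnets
       275 and 277 on the frame 242 to move the movable member 250 */
    set_first_coil_current(KP_MA_PER_MM * (target_s1_mm - read_sensor_shift_y()));
    set_second_coil_current(KP_MA_PER_MM * (target_s2_mm - read_sensor_shift_z()));
}

In practice the gains, sign conventions, and filtering would depend on the actual actuator and sensing arrangement; the sketch only shows that the drive reduces to setting a per-axis coil current in proportion to the remaining shift error.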
The movable member250may be configured to move in the y-axis direction and/or the z-axis direction relative to the lens assembly220and the fixed member240by the electromagnetic force. The guide member260may be configured to support a movement of the movable member250. For example, the guide member260may be coupled to be movable relative to the holder251and the fixed member240. For example, the guide member260may be coupled to the holder251so as to be movable in the y-axis direction and may be coupled to the frame242of the fixed member240so as to be movable in the z-axis direction. The guide member260may be coupled to the frame242such that a movement of the guide member260relative to the frame242in the y-axis direction is limited. When the image stabilization function is performed, the guide member260may move together with the movable member250or may be fixed without moving together with the movable member250. For example, the guide member260may be configured to move in the z-axis direction together with the movable member250when the movable member250moves in the z-axis direction. For example, the guide member260may be configured to be separated from the movement of the movable member250in the y-axis direction by being fixed to the frame242when the movable member250moves in the y-axis direction. For example, the guide member260may support the movement of the movable member250in the y-axis direction. The guide member260may have a second opening area261formed therein through which the plurality of coils271and273on the first circuit board253face the plurality of magnets on the frame242of the fixed member240. The second circuit board290may be configured to electrically connect the camera module200with a main circuit board (e.g., the PCB150ofFIG.3) of an electronic device (e.g., the electronic device100ofFIGS.1to3). For example, the second circuit board290may be electrically connected with the first circuit board253of the camera module200and the main circuit board150of the electronic device100. The second circuit board290may be electrically connected with the first circuit board253through the connecting member400. The second circuit board290may include a connector295. For example, the connector295may be disposed on or connected to, a portion of the second circuit board290. The connector295may be electrically connected to the main circuit board (e.g., the PCB150ofFIG.3) of the electronic device (e.g., the electronic device100ofFIGS.1to3). The second circuit board290may be fixedly disposed in the housing of the electronic device100(e.g., the housing110ofFIGS.1and2). For example, the second circuit board290may be fixed inside the housing110by connection of the connector295to the main circuit board150of the electronic device100. When the first circuit board253moves as the image stabilization function is performed, the second circuit board290may remain fixed without moving together. The connecting member400may electrically connect the first circuit board253and the second circuit board290. For example, the connecting member400may be connected to the first circuit board253and the second circuit board290and may perform a function of transferring an electrical signal and/or a control signal between the processor of the electronic device100(e.g., the processor720ofFIG.24) and the camera module200. 
For example, a control signal and/or an electrical signal generated from the processor720disposed on the main circuit board (e.g., the PCB150ofFIG.3) may be transferred to a part (e.g., the image stabilization assembly230or the lens assembly220) of the camera module200through the connecting member400. Furthermore, an image-related electrical signal generated by the image sensor252may be transferred to the processor of the electronic device100(e.g., the processor720ofFIG.24) through the connecting member400. When the first circuit board253moves relative to the second circuit board290as the image stabilization function is performed, the connecting member400, which electrically connects the first circuit board253and the second circuit board290, may be deformed while at least a portion of the connecting member400moves together. The connecting member400may include one or more flexible circuit boards410and420. For example, the flexible circuit boards410and420may include a flexible PCB (FPCB). The connecting member400may include the first flexible circuit board410connected to the first circuit board253and the second flexible circuit board420connected to the second circuit board290. The connecting member400may include two first flexible circuit boards410and two second flexible circuit boards420. For example, the two first flexible circuit boards410may be disposed on the first circuit board253so as to be symmetrical to each other, and the two second flexible circuit boards420may be disposed on the second circuit board290so as to be symmetrical to each other. However, the numbers of first flexible circuit boards410and second flexible circuit boards420are not limited to the illustrated embodiment, and the connecting member400may include one, three or more first flexible circuit boards410and one, three or more second flexible circuit boards420. The number of first flexible circuit boards410and the number of second flexible circuit boards420may differ from each other. The first flexible circuit boards410and the second flexible circuit boards420may be electrically connected. For example, the first flexible circuit boards410and the second flexible circuit boards420may be directly connected while being integrally formed with each other as the second flexible circuit boards420extend from portions of the first flexible circuit boards410. In another example, the first flexible circuit boards410and the second flexible circuit boards420may be indirectly connected through connecting circuit boards. When the movable member250moves in a direction perpendicular to the optical axis L, the second circuit board290may be fixed, and the first circuit board253may move relative to the second circuit board290. For example, the first flexible circuit boards410and the second flexible circuit boards420may be at least partially deformed or may move, in response to the movement of the first circuit board253, as will be described with reference toFIGS.9to14. The connecting member400may be configured to be a part separate from the first circuit board253and the second circuit board290. However, without being limited thereto, at least some of the connecting member400, the first circuit board253, and the second circuit board290may be integrally formed. The connecting member400, the first circuit board253, and the second circuit board290may be integrally formed and may be configured to be one circuit board structure. 
For example, the circuit board structure may be implemented with a rigid FPCB (RFPCB) so as to include the first circuit board253, the second circuit board290, and the connecting member400. FIG.7illustrates the movable member250, the guide member260, and the drive member270of the image stabilization assembly230according to an embodiment.FIG.8illustrates a coupling structure of the guide member260and the frame242of the image stabilization assembly230according to an embodiment. Referring toFIGS.7and8, the image stabilization assembly230of the camera module200may include the frame242, the movable member250, the guide member260, and the drive member270. Some of the components of the image stabilization assembly230illustrated inFIGS.7and8are the same as, or similar to, some of the components of the camera module200illustrated inFIGS.4to6, and therefore repetitive descriptions will hereinafter be omitted. The movable member250may include the holder251, the first circuit board253coupled to the holder251, and the image sensor252disposed on one area of the first circuit board253(e.g., the first surface255aof the first portion255). The movable member250may be configured to move in the direction of a first shift axis S1and the direction of a second shift axis S2perpendicular to the optical axis L (e.g., the x-axis direction) when an image stabilization function is performed. For example, the first shift axis S1and the second shift axis S2may be perpendicular to each other. For example, the first shift axis S1may be substantially parallel to the y-axis, and the second shift axis S2may be substantially parallel to the z-axis. The drive member270may be configured to provide a driving force for moving the movable member250in a direction perpendicular to the optical axis L. For example, the drive member270may generate a driving force for moving the movable member250in the direction of the first shift axis S1and/or the direction of the second shift axis S2. The drive member270may include the plurality of coils271and273and a plurality of magnets275and277. The plurality of coils271and273may be disposed on the first circuit board253of the movable member250. The plurality of magnets275and277may be disposed on the frame242of the fixed member240. Accordingly, the movable member250may be configured to move relative to the fixed member240by an interaction between the plurality of magnets275and277and the plurality of coils271and273. The plurality of coils271and273and the plurality of magnets275and277may be disposed to at least partially overlap each other when viewed in the direction of the optical axis L (e.g., the x-axis direction). For example, the plurality of coils271and273may be disposed on the second surface255bof the first portion255of the first circuit board253. The plurality of magnets275and277may be disposed on a sidewall248of the frame242to face the plurality of coils271and273. The sidewall248of the frame242and the second surface255bof the first portion255of the first circuit board253may be located to face each other in the direction of the optical axis L. The drive member270may include a first drive member270for moving the movable member250in the direction of the first shift axis S1(e.g., the y-axis direction) and a second drive member270for moving the movable member250in the direction of the second shift axis S2(e.g., the z-axis direction). The first drive member270may include the first coil271and the first magnet275. 
The first coil271may be disposed on the second surface255bof the first circuit board253, and the first magnet275may be disposed on the sidewall248of the frame242. The first coil271and the first magnet275may be disposed to at least partially overlap each other when viewed in the direction of the optical axis L (e.g., the x-axis direction). An electrical signal may be applied to the first coil271. For example, the electrical signal may be applied through the second circuit board290, the connecting member400, and/or the first circuit board253. A processor of the electronic device may control the direction and/or strength of an electric current passing through the first coil271. An electromagnetic force (e.g., Lorentz force) may be applied to the first magnet275to correspond to the direction of the electric current passing through the first coil271. The movable member250may be moved (linearly moved) in the direction of the first shift axis S1by the electromagnetic force. The first coil271may have a shape that is longer in the y-axis direction than in the z-axis direction. For example, the first coil271may include a conductive wire surrounding any axis parallel to the optical axis L (e.g., an axis parallel to the x-axis direction) or may include a conductive pattern formed in the direction surrounding the any axis (e.g., the axis parallel to the x-axis direction). For example, the first coil271may be formed such that a conductive wire or a conductive pattern that extends in the y-axis direction is longer than a conductive wire or a conductive pattern that extends in the z-axis direction. The first magnet275may be formed such that the surface facing the first coil271includes two different polarities. For example, the first magnet275may be formed such that an N-pole area and an S-pole area are arranged in the direction of the first shift axis S1(e.g., the y-axis direction). The first magnet275may be configured such that the N-pole area and the S-pole area at least partially overlap the first coil271when viewed in the direction of the optical axis L (e.g., the x-axis direction). The first drive member270may include a plurality of first coils271and a plurality of first magnets275. For example, the plurality of first coils271may include a first sub coil271-1and a second sub coil271-2. The first sub coil271-1and the second sub coil271-2may be arranged in the z-axis direction. For example, the first sub coil271-1may be located in the +z-axis direction with respect to the second sub coil271-2. For example, the plurality of first magnets275may include a first sub magnet275-1corresponding to the first sub coil271-1and a second sub magnet275-2corresponding to the second sub coil271-2. The first sub magnet275-1and the second sub magnet275-2may be arranged in the z-axis direction. For example, the first sub magnet275-1may be located in the +z-axis direction with respect to the second sub magnet275-2. The second drive member270may include the second coil273and the second magnet277. The second coil273may be disposed on the second surface255bof the first circuit board253, and the second magnet277may be disposed on the sidewall248of the frame242. The second coil273and the second magnet277may be disposed to at least partially overlap each other when viewed in the direction of the optical axis L (e.g., the x-axis direction). An electrical signal may be applied to the second coil273. 
For example, the electrical signal may be applied through the second circuit board290, the connecting member400, and/or the first circuit board253. The processor720and/or the ISP of the electronic device100may control the direction and/or strength of an electric current passing through the second coil273. An electromagnetic force (e.g., Lorentz force) may be applied to the second magnet277to correspond to the direction of the electric current passing through the second coil273. The movable member250may be moved (linearly moved) in the direction of the second shift axis S2by the electromagnetic force. The second coil273may have a shape that is longer in the z-axis direction than in the y-axis direction. For example, the second coil273may include a conductive wire surrounding any axis parallel to the optical axis L (e.g., an axis parallel to the x-axis direction) or may include a conductive pattern formed in the direction surrounding the any axis (e.g., the axis parallel to the x-axis direction). For example, the second coil273may be formed such that a conductive wire or a conductive pattern that extends in the z-axis direction is longer than a conductive wire or a conductive pattern that extends in the y-axis direction. The second magnet277may be formed such that the surface facing the second coil273includes two different polarities. For example, the second magnet277may be formed such that an N-pole area and an S-pole area are arranged in the direction of the second shift axis S2(e.g., the z-axis direction). The second magnet277may be configured such that the N-pole area and the S-pole area at least partially overlap the second coil273when viewed in the direction of the optical axis L (e.g., the x-axis direction). The second drive member270may include a plurality of second coils273and a plurality of second magnets277. For example, the plurality of second coils273may include a third sub coil273-1and a fourth sub coil273-2. The third sub coil273-1and the fourth sub coil273-2may be arranged in the y-axis direction. For example, the third sub coil273-1may be located in the +y-axis direction with respect to the fourth sub coil273-2. For example, the plurality of second magnets277may include a third sub magnet277-1corresponding to the third sub coil273-1and a fourth sub magnet277-2corresponding to the fourth sub coil273-2. The third sub magnet277-1and the fourth sub magnet277-2may be arranged in the y-axis direction. For example, the third sub magnet277-1may be located in the +y-axis direction with respect to the fourth sub magnet277-2. The plurality of coils271and273of the drive member270may be located on the first circuit board253, and the plurality of magnets275and277of the drive member270may be located on the frame242. However, the disclosure is not necessarily limited thereto. Alternatively, the plurality of coils271and273may be located on the sidewall248of the frame242, and the plurality of magnets275and277may be disposed on the first circuit board253or the holder251. Alternatively, the camera module200and/or the image stabilization assembly230may further include a separate additional connecting member (e.g., a circuit board) for applying electrical signals (e.g., currents) to the plurality of coils271and273located on the frame242. The guide member260may be located between the holder251of the movable member250and the sidewall248of the frame242. The guide member260may be coupled to the holder251and the sidewall248so as to be movable. 
For example, the guide member260may be coupled to the holder251so as to be movable in the y-axis direction and may be coupled to the sidewall248of the frame242so as to be movable in the z-axis direction. The guide member260may have the second opening area261formed therein to be aligned with the second surface255bof the first circuit board253in the direction of the optical axis L. The second surface255bof the first circuit board253may face the sidewall248of the frame242through the second opening area261. For example, the plurality of coils271and273located on the second surface255bof the first circuit board253and the plurality of magnets275and277located on the sidewall248of the frame242may face each other in the direction of the optical axis L through the second opening area261. The image stabilization assembly230may further include a first ball guide structure and a second ball guide structure for guiding a movement of the movable member250. The first ball guide structure may include one or more first balls265disposed between the guide member260and the holder251. For example, a plurality of first balls265may be formed. The holder251may include the first opening area254in which the image sensor252is located and a first peripheral area258surrounding the first opening area254. On the first peripheral area258, the holder251may have first recesses259in which at least portions of the first balls265are accommodated. As many first recesses259as the first balls265may be formed. For example, the first recesses259may be formed in a shape extending in the y-axis direction. The guide member260may have second recesses264overlapping the first recesses259in the direction of the optical axis L (e.g., the x-axis direction). The second recesses264, together with the first recesses259, may form spaces in which the first balls265are accommodated. For example, the second recesses264may be formed in a shape extending in the y-axis direction. As many second recesses264as the first balls265may be formed. The first balls265may be configured to roll in the spaces between the first recesses259and the second recesses264. For example, when the holder251moves in the y-axis direction, the first balls265may rotate while linearly moving in the y-axis direction in the spaces between the first recesses259and the second recesses264or may rotate in position in the spaces between the first recesses259and the second recesses264. The second ball guide structure may include one or more second balls266disposed between the guide member260and the sidewall248of the frame242. For example, a plurality of second balls266may be formed. The guide member260may include the second opening area261in which the plurality of coils271and273are located and a second peripheral area262surrounding the second opening area261. On the second peripheral area262, the guide member260may have third recesses263in which at least portions of the second balls266are accommodated. As many third recesses263as the second balls266may be formed. For example, the third recesses263may be formed in a shape extending in the z-axis direction. On the sidewall248, the frame242may have fourth recesses249overlapping the third recesses263in the direction of the optical axis L (e.g., the x-axis direction). The fourth recesses249, together with the third recesses263, may form spaces in which the second balls266are accommodated. For example, the fourth recesses249may be formed in a shape extending in the z-axis direction. 
As many fourth recesses249as the second balls266may be formed. The second balls266may be configured to roll in the spaces between the third recesses263and the fourth recesses249. For example, when the guide member260moves in the z-axis direction together with the holder251, the second balls266may rotate while linearly moving in the z-axis direction in the spaces between the third recesses263and the fourth recesses249or may rotate in position in the spaces between the third recesses263and the fourth recesses249. FIG.9is a perspective view of the flexible circuit board410of the connecting member400according to an embodiment.FIG.10is a sectional view of the flexible circuit board410of the connecting member400according to an embodiment.FIG.11illustrates a first layer440and a second layer450of the flexible circuit board410of the connecting member400according to an embodiment. FIG.10is a sectional view of the flexible circuit board410taken along line B-B′ inFIG.9. The flexible circuit board410illustrated inFIGS.9to11may be referred to as the first flexible circuit board410. The first flexible circuit board410and the second flexible circuit board420of the connecting member400may be formed in substantially the same shape and/or structure.FIGS.9to11may be views for describing the first flexible circuit board410, and the shape and/or structure of the first flexible circuit board410to be described with reference toFIGS.9to11may be identically applied to the second flexible circuit board420. Referring toFIGS.9to11, the flexible circuit board410(e.g., the first flexible circuit board410) of the connecting member400may include the first layer440, the second layer450, VIAs461, conductive pads441and451, and an adhesive member462. For example, the flexible circuit board410may be implemented in a structure in which the first layer440and the second layer450are stacked on each other. The flexible circuit board410may be configured such that only one portion of the first layer440and only one portion of the second layer450are coupled with each other and the remaining portion of the first layer440and the remaining portion of the second layer450are deformed by moving in directions toward or away from, each other. Identically to the structure of the first flexible circuit board410illustrated inFIGS.9to11, the second flexible circuit board420may include a first layer470, a second layer480, VIAs491, conductive pads471and481, and an adhesive member492. The first layer440and the second layer450may be connected through the VIAs461and the adhesive member462. For example, the first layer440and the second layer450may be electrically connected through the VIAs461. For example, the first layer440and the second layer450may be physically coupled through the adhesive member462disposed between a partial area (e.g., a first VIA area443) of the first layer440and a partial area (e.g., a second VIA area453) of the second layer450. For example, the adhesive member462may contain various adhesive materials (e.g., an optical clear adhesive (OCA) or a pressure sensitive adhesive (PSA)). For example, the first layer440and the second layer450may be formed in substantially the same shape. The first layer440may include the first VIA area443in which the VIAs461are located and a first pad area442on which the first conductive pads441are located. The first VIA area443may be located adjacent to one end portion of the first layer440, and the first pad area442may be located adjacent to an opposite end portion of the first layer440. 
For example, the first layer440may include a first surface445on which the first conductive pads441are disposed and a second surface446facing away from the first surface445. The first conductive pads441may be located on at least partial areas of the first surface445. The VIAs461may pass through at least portions of the second surface446. The second layer450may include the second VIA area453in which the VIAs461are located and a second pad area452on which the second conductive pads451are located. The second VIA area453may be located adjacent to one end portion of the second layer450, and the second pad area452may be located adjacent to an opposite end portion of the second layer450. For example, the second layer450may include a third surface455on which the second conductive pads451are disposed and a fourth surface456facing away from the third surface455. The second conductive pads451may be located on at least partial areas of the third surface455. The VIAs461may pass through at least portions of the fourth surface456. The first layer440and the second layer450may be disposed such that the second surface446of the first layer440and the fourth surface456of the second layer450face each other. The adhesive member462may be disposed between the first VIA area443of the second surface446of the first layer440and the second VIA area453of the fourth surface456of the second layer450. The first layer440and the second layer450may be spaced apart from each other by a predetermined gap G, and the gap G may be increased or decreased as the flexible circuit board410is deformed. As illustrated inFIG.9, the flexible circuit board410may be configured such that the gap between the first layer440and the second layer450is changed. The flexible circuit board410may be disposed between two circuit boards (e.g., a fixed circuit board (FB) and a movable circuit board (MB)) disposed parallel to each other and may electrically connect the two circuit boards, and one of the two circuit boards may be configured to move relative to the other. For example, the first layer440of the flexible circuit board410may be coupled to the fixed circuit board FB, and the second layer450may be coupled to the movable circuit board MB. The first conductive pads441of the first layer440may be electrically connected with the fixed circuit board FB, and the second conductive pads451of the second layer450may be electrically connected with the movable circuit board MB. For example, the pad areas442and452of the first layer440and the second layer450may be coupled to move together with the fixed circuit board FB and the movable circuit board MB, respectively. When the movable circuit board MB moves relative to the fixed circuit board FB, the flexible circuit board410may be deformed in response to the movement of the movable circuit board MB as the gap between the first conductive pads441and the second conductive pads451is increased or decreased. The VIAs461may electrically connect the first layer440and the second layer450. For example, a first conductive pattern444may be formed in the first layer440, and a second conductive pattern454may be formed in the second layer450. The VIAs461may be configured to electrically connect the first conductive pattern444formed in the first layer440and the second conductive pattern454formed in the second layer450. 
For example, the VIAs461may pass through the second surface446of the first layer440, the adhesive member462, and the fourth surface456of the second layer450to electrically connect the first conductive pattern444and the second conductive pattern454. The VIAs461may include through-holes that penetrate the first layer440, the adhesive member462, and the second layer450and a conductive material that fills the through-holes or is plated on the inner walls of the through-holes. The adhesive member462may be disposed between the first VIA area443of the first layer440and the second VIA area453of the second layer450. For example, the adhesive member462, which is disposed between the VIA areas443and453of the first layer440and the second layer450, may physically couple the first VIA area443of the first layer440and the second VIA area453of the second layer450. For example, the adhesive member462may be disposed on an area corresponding to the first VIA area443of the second surface446of the first layer440or may be disposed on an area corresponding to the second VIA area453of the fourth surface456of the second layer450. The conductive pads441and451may be disposed on partial areas of the first surface445of the first layer440and partial areas of the third surface455of the second layer450. The conductive pads441and451may include the first conductive pads441and the second conductive pads451. For example, the first conductive pads441may be surface-mounted or disposed on the first pad area442of the first surface445of the first layer440. For example, the second conductive pads451may be surface-mounted or disposed on the second pad area452of the third surface455of the second layer450. The conductive pads441and451may be electrically connected with at least a portion of the first circuit board (e.g., the first circuit board253ofFIGS.13to15B), the second circuit board290or the connecting circuit board430. For example, the pad areas442and452of the first layer440and the second layer450may be coupled to the first circuit board253, the second circuit board290or the connecting circuit board430to electrically connect the conductive pads441and451with the first circuit board253, the second circuit board290or the connecting circuit board430. FIG.11illustrates a plan view of the first layer440when the first surface445of the first layer440is viewed and a plan view of the second layer450when the third surface455of the second layer450is viewed. The first layer440and the second layer450may have substantially the same shape. For example, the first layer440and the second layer450of the flexible circuit board410may be manufactured by using FPCBs having substantially the same shape. A partial area of the second surface446of the first layer440and a partial area of the fourth surface456of the second layer450may be attached to each other by the adhesive member462such that the first surface445and the third surface455face away from each other. The flexible circuit board410may be configured such that the first layer440is integrally formed with the second layer450. FIG.12illustrates an image stabilization operation of the camera module200according to an embodiment.FIG.13illustrates an operation of the connecting member400of the camera module200according to an embodiment.FIG.14illustrates an operation of the connecting member400of the camera module200according to an embodiment. FIG.12is a sectional view of the camera module200taken along line A-A′ inFIG.4. 
For example,FIG.12may be a schematic view for describing the image stabilization operation of the camera module200.FIGS.13and14are plan views when the image sensor252is viewed in the +x-axis direction. Referring toFIG.12, the camera module200may include the camera housing210, the reflective member224, a lens226, the image sensor252, and the first circuit board253. For example, the lens226may be included in the lens unit illustrated inFIGS.5and6(e.g., the lens unit222ofFIGS.5and6). For example, the image sensor252and the first circuit board253may be included in the movable member250. The camera module200may perform an image stabilization function by moving the first circuit board253and the image sensor252in a direction perpendicular to the optical axis L. For example, in response to external noise (e.g., vibration or a shaking movement of a user's hand), the camera module200may correct image shake by moving the first circuit board253, on which the image sensor252is disposed, in at least one of the directions of the first shift axis S1or the second shift axis S2perpendicular to the optical axis L. The reflective member224and the lens226may be disposed in the camera housing210. The camera housing210may have the light receiving area211formed therein through which external light is incident on the reflective member224. For example, light incident on the reflective member224through the light receiving area211in the direction perpendicular to the optical axis L may be refracted and/or reflected in the direction of the optical axis L by the reflective member224and may travel toward the lens226and the image sensor252. The first circuit board253may be disposed on the camera housing210so as to be movable in the direction perpendicular to the optical axis L. The image sensor252may be disposed on the first circuit board253. The image sensor252may be disposed on one surface of the first circuit board253to face the lens226. The image sensor252may be electrically connected with the first circuit board253. For example, the reflective member224, the lens226, and the image sensor252may be aligned in the direction of the optical axis L. The first circuit board253may be configured to move relative to the camera housing210in the direction of the first shift axis S1and/or the direction of the second shift axis S2. The image sensor252may move together with the first circuit board253and may move relative to the reflective member224and the lens226accordingly. The camera module200may align the optical axis L of the lens226and the image sensor252to a specified position by moving the first circuit board253in at least one of two directions perpendicular to the optical axis L using the drive member270. Referring toFIGS.13and14, the camera module200may include the first circuit board253on which the image sensor252is disposed, the second circuit board290on which the connector295is disposed, and the connecting member400connecting the first circuit board253and the second circuit board290. The first circuit board253may include the first portion255on which the image sensor252is disposed and the second portion256extending from the first portion255at a right angle. For example, the image sensor252may be surface-mounted on the first surface255aof the first portion255. For example, the first flexible circuit board410may be connected to the second portion256. 
For example, on the second portion256, the first circuit board253may have a conductive area to which the first flexible circuit board410(e.g., the first conductive pads441of the first flexible circuit board410) is electrically connected. The second circuit board290may include a third portion291on which the connector295is disposed and a fourth portion293extending from the third portion291. For example, the connecting member400(e.g., the second flexible circuit board420) may be connected to the fourth portion293. For example, on the fourth portion293, the second circuit board290may have a conductive area to which the first conductive pads471of the second flexible circuit board420are electrically connected. The connecting member400may electrically connect the first circuit board253and the second circuit board290. For example, the first flexible circuit board410of the connecting member400may be connected to the first circuit board253, and the second flexible circuit board420of the connecting member400may be connected to the second circuit board290. Accordingly, the first circuit board253and the second circuit board290may be electrically connected through the connecting member400. The connecting member400may include the first flexible circuit board410connected to the first circuit board253, the second flexible circuit board420connected to the second circuit board290, and the connecting circuit board430electrically connecting the first flexible circuit board410and the second flexible circuit board420. For example, the first flexible circuit board410and the second flexible circuit board420may be formed in the structure and/or shape described with reference toFIGS.9to11. The first flexible circuit board410may be disposed between the first circuit board253and the connecting circuit board430(e.g., a first connecting portion431). For example, the first flexible circuit board410may be disposed between the second portion256of the first circuit board253and the first connecting portion431of the connecting circuit board430. For example, the first flexible circuit board410may be disposed substantially perpendicular to the first portion255and may be disposed perpendicular to the image sensor252accordingly. The first flexible circuit board410may electrically connect the first circuit board253and the connecting circuit board430. For example, a part of the first flexible circuit board410may be electrically connected to the second portion256of the first circuit board253, and another part of the first flexible circuit board410may be electrically connected to the first connecting portion431of the connecting circuit board430. The first flexible circuit board410may be configured such that the first layer440of the first flexible circuit board410is connected to one of the first circuit board253and the connecting circuit board430and the second layer450of the first flexible circuit board410is connected to the other one of the first circuit board253and the connecting circuit board430. For example, the first layer440of the first flexible circuit board410may be coupled to the second portion256such that the first conductive pads441are electrically connected with the conductive area of the second portion256. For example, the second layer450of the first flexible circuit board410may be coupled to the first connecting portion431such that the second conductive pads451are electrically connected with a conductive area of the first connecting portion431. 
The first layer440and the second layer450of the first flexible circuit board410may be electrically connected through the VIAs461. For example, the second portion256electrically connected with the first conductive pads441of the first layer440may be electrically connected, through the VIAs461, with the first connecting portion431connected with the second conductive pads451of the second layer450. The second flexible circuit board420may be disposed between the second circuit board290and the connecting circuit board430(e.g., a second connecting portion432). For example, the second flexible circuit board420may be disposed between the fourth portion293of the second circuit board290and the second connecting portion432of the connecting circuit board430. The second flexible circuit board420may be disposed substantially perpendicular to the first portion255and may be disposed perpendicular to the image sensor252accordingly. The second flexible circuit board420may electrically connect the second circuit board290and the connecting circuit board430. For example, a part of the second flexible circuit board420may be electrically connected to the fourth portion293of the second circuit board290, and another part of the second flexible circuit board420may be electrically connected to the second connecting portion432of the connecting circuit board430. The second flexible circuit board420may be configured such that the first layer470of the second flexible circuit board420is connected to one of the second circuit board290and the connecting circuit board430and the second layer480of the second flexible circuit board420is connected to the other one of the second circuit board290and the connecting circuit board430. The first layer470of the second flexible circuit board420may be coupled to the fourth portion293such that the first conductive pads471are electrically connected with the conductive area of the fourth portion293. The second layer480of the second flexible circuit board420may be coupled to the second connecting portion432such that the second conductive pads481are electrically connected with a conductive area of the second connecting portion432. The first layer470and the second layer480of the second flexible circuit board420may be electrically connected through the VIAs491. The fourth portion293electrically connected with the first conductive pads471of the first layer470may be electrically connected, through the VIAs491, with the second connecting portion432connected with the second conductive pads481of the second layer480. The connecting circuit board430may electrically connect the first flexible circuit board410and the second flexible circuit board420. The second layer450of the first flexible circuit board410and the second layer480of the second flexible circuit board420may be coupled to the connecting circuit board430. The connecting circuit board430may include the first connecting portion431connected with the first flexible circuit board410, the second connecting portion432connected with the second flexible circuit board420, and a third connecting portion433connecting the first connecting portion431and the second connecting portion432. The first connecting portion431and the second connecting portion432may be disposed perpendicular to each other, and the third connecting portion433may be partially bent to connect the first connecting portion431and the second connecting portion432. The first connecting portion431may be coupled to be electrically connected with the first flexible circuit board410. 
The first connecting portion431may include the conductive area to which the first flexible circuit board410(e.g., the second conductive pads451of the first flexible circuit board410) is electrically connected. The second connecting portion432may be coupled to be electrically connected with the second flexible circuit board420. The second connecting portion432may include the conductive area to which the second conductive pads481of the second flexible circuit board420are electrically connected. The first flexible circuit board410, the second flexible circuit board420, and the connecting circuit board430connecting the first flexible circuit board410and the second flexible circuit board420may be configured as separate parts. However, the structure of the connecting member400is not limited to the illustrated embodiment. The connecting member400may be integrally formed. The connecting member400may not include the connecting circuit board430and may be configured such that the second layer450of the first flexible circuit board410and the second layer480of the second flexible circuit board420are directly connected. In another example, the connecting member400may be implemented with an RFPCB so as to include a rigid circuit board portion (e.g., the connecting circuit board430) and a flexible circuit board portion (e.g., the first flexible circuit board410and the second flexible circuit board420). Hereinafter, an operation of the connecting member400when the first circuit board253moves as illustrated inFIG.12will be described with reference toFIGS.13and14. FIGS.13and14illustrate an operation in which the first circuit board253moves a predetermined distance in the direction of the first shift axis S1and/or the direction of the second shift axis S2perpendicular to the optical axis L as the image stabilization function of the camera module200is performed. The first flexible circuit board410and the second flexible circuit board420may be deformed such that portions of the first layers440and470and portions of the second layers450and480move toward or away from each other in response to a movement of the first circuit board253in the direction of the first shift axis S1and/or the direction of the second shift axis S2. When the camera module200moves the image sensor252(e.g., the first circuit board253or the movable member250ofFIGS.5to7) in the direction of the first shift axis S1, the first flexible circuit board410may be deformed in response to the gap between the first circuit board253and the first connecting portion431. Referring toFIGS.13and14, when the first circuit board253moves in a +S1direction (e.g., the +y-axis direction), the second circuit board290and the connecting circuit board430may be in a fixed state, and the gap between the second portion256of the first circuit board253and the first connecting portion431of the connecting circuit board430may be increased. When the first circuit board253moves, the gap between the first conductive pads441and the second conductive pads451may be increased as the first conductive pads441of the first layer440of the first flexible circuit board410are coupled to the second portion256of the first circuit board253and the second conductive pads451of the second layer450are coupled to the first connecting portion431of the connecting circuit board430. The first flexible circuit board410may be deformed in a form in which the gap between the first layer440and the second layer450is gradually increased from the VIAs461toward the conductive pads441and451. 
Prior to the movement of the first circuit board253, the first conductive pads441and the second conductive pads451of the first flexible circuit board410may be spaced apart from each other by a first gap d1. When the first circuit board253moves a specified distance in the +S1direction, the first conductive pads441and the second conductive pads451of the first flexible circuit board410may be spaced apart from each other by a second gap d2greater than the first gap d1. When the first circuit board253moves in a −S1direction opposite to the +S1direction in the state ofFIG.13, the first conductive pads441and the second conductive pads451of the first flexible circuit board410may be spaced apart from each other by a gap smaller than the first gap d1. When the camera module200moves the image sensor252(e.g., the first circuit board253or the movable member250ofFIGS.5to7) in the direction of the second shift axis S2, the second flexible circuit board420may be deformed in response to the gap between the second circuit board290and the second connecting portion432. Referring toFIGS.13and14, when the first circuit board253moves in a +S2direction (e.g., the +z-axis direction), the second circuit board290may be in a fixed state, and the gap between the fourth portion293of the second circuit board290and the second connecting portion432of the connecting circuit board430may be increased as the connecting circuit board430moves in the +S2direction together with the first circuit board253. When the first circuit board253moves, the gap between the first conductive pads471and the second conductive pads481may be increased as the first conductive pads471of the first layer470of the second flexible circuit board420are coupled to the fourth portion293of the second circuit board290and the second conductive pads481of the second layer480are coupled to the second connecting portion432of the connecting circuit board430. The second flexible circuit board420may be deformed in a form in which the gap between the first layer470and the second layer480is gradually increased from the VIAs491toward the conductive pads471and481. Prior to the movement of the first circuit board253, the first conductive pads471and the second conductive pads481of the second flexible circuit board420may be spaced apart from each other by the first gap d1. When the connecting circuit board430moves a specified distance in the +S2direction together with the first circuit board253, the first conductive pads471and the second conductive pads481of the second flexible circuit board420may be spaced apart from each other by the second gap d2greater than the first gap d1. When the first circuit board253moves in a −S2direction opposite to the +S2direction in the state ofFIG.13, the first conductive pads471and the second conductive pads481of the second flexible circuit board420may be spaced apart from each other by a gap smaller than the first gap d1. Referring toFIGS.13and14, the connecting member400may include two first flexible circuit boards410and two second flexible circuit boards420. The first flexible circuit board410may include a first sub circuit board410-1and a second sub circuit board410-2that are disposed between the second portion256of the first circuit board253and the first connecting portion431of the connecting circuit board430so as to be spaced apart from each other. 
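The change from the first gap d1to the second gap d2can be approximated with a simple geometric sketch; the relations below are illustrative only, since the disclosure gives no dimensions. Treating the conductive pads as translating rigidly with the boards to which they are bonded, a shift of the moving board by a distance \Delta along the relevant shift axis changes the pad-to-pad gap roughly as

d_2 \approx d_1 + \Delta \quad (\text{shift that widens the gap}), \qquad d \approx d_1 - \Delta \quad (\text{opposite shift},\ \Delta < d_1).

If each layer is further idealized as a rigid strip of length \ell running from its pad area to the VIA fold, the reach of the fold measured from the midpoint between the two pad areas is h(g) = \sqrt{\ell^{2} - (g/2)^{2}}, which shrinks as the gap g widens; under this idealization the VIA areas are drawn toward the conductive pads as the layers move apart, which is consistent with the movement of the VIA portions described with reference toFIGS.13to15B. Real layers bend rather than pivot, so these expressions indicate only the direction and rough scale of the deformation.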
The second flexible circuit board420may include a third sub circuit board420-1and a fourth sub circuit board420-2that are disposed between the fourth portion293of the second circuit board290and the second connecting portion432of the connecting circuit board430so as to be spaced apart from each other. The first sub circuit board410-1and the second sub circuit board410-2may be disposed such that the VIAs461of the first sub circuit board410-1and the VIAs461of the second sub circuit board410-2face each other. The distance between the VIAs461of the first sub circuit board410-1and the VIAs461of the second sub circuit board410-2may be increased or decreased in response to a movement of the first circuit board253. The first sub circuit board410-1and the second sub circuit board410-2may be configured such that when the first circuit board253moves in the +S1direction, the distance between the VIAs461of the first sub circuit board410-1and the VIAs461of the second sub circuit board410-2is increased as the first layers440and the second layers450move away from each other. However, the directions of the first sub circuit board410-1and the second sub circuit board410-2are not limited to the illustrated embodiment and may be changed according to various embodiments (e.g., refer toFIGS.15A and15B). The third sub circuit board420-1and the fourth sub circuit board420-2may be disposed such that the VIAs491of the third sub circuit board420-1and the VIAs491of the fourth sub circuit board420-2face each other. The distance between the VIAs491of the third sub circuit board420-1and the VIAs491of the fourth sub circuit board420-2may be increased or decreased in response to a movement of the first circuit board253. The third sub circuit board420-1and the fourth sub circuit board420-2may be configured such that when the first circuit board253moves in the +S2direction, the distance between the VIAs491of the third sub circuit board420-1and the VIAs491of the fourth sub circuit board420-2is increased as the first layers470and the second layers480move away from each other. However, the directions of the third sub circuit board420-1and the fourth sub circuit board420-2are not limited to the illustrated embodiment and may be changed according to various embodiments (e.g., refer toFIGS.15A and15B). A pair of first flexible circuit boards410and a pair of second flexible circuit boards420may be provided. However, this is illustrative, and the disclosure is not necessarily limited thereto. The number of first flexible circuit boards410and the number of second flexible circuit boards420may be changed depending on the size of the camera module200. For example, when the camera module200has a relatively small size, at least one of the first flexible circuit board410or the second flexible circuit board420may include one flexible circuit board. In another example, when the camera module200has a relatively large size, at least one of the first flexible circuit board410or the second flexible circuit board420may include three or more flexible circuit boards. The first circuit board253, the second circuit board290, and the connecting member400may be integrally formed. The camera module200may include a circuit board structure (e.g., the first circuit board253, the second circuit board290, and the connecting member400) that is electrically connected with the PCB150of the electronic device100. 
The circuit board structure may include a first circuit board portion253(e.g., the first circuit board253) having the image sensor252disposed thereon, a second circuit board portion290(e.g., the second circuit board290) having the connector295disposed thereon, and a third circuit board portion400(e.g., the connecting member400) flexibly extending from part of the first circuit board portion253to part of the second circuit board portion290. The third circuit board portion400may include a first flexible portion410(e.g., the first flexible circuit board410) connected with the first circuit board portion253and a second flexible portion420(e.g., the second flexible circuit board420) connected with the second circuit board portion290. The first flexible portion410and the second flexible portion420may include first layers440and470, second layers450and480, and VIAs461and491, respectively. The third circuit board portion400may further include the connecting circuit board430connecting the first flexible portion410and the second flexible portion420. The circuit board structure may include a rigid PCB portion that includes the first circuit board portion253, the second circuit board portion290, and the rigid portion430and a flexible PCB portion that includes the first flexible portion410and the second flexible portion420. The circuit board structure may be implemented with an RFPCB. FIG.15Aillustrates an operation of a connecting member400of a camera module200according to an embodiment.FIG.15Billustrates an operation of the connecting member400of the camera module200according to an embodiment. FIGS.15A and15Bmay be views illustrating the camera module200in which the arrangement of at least a part of flexible circuit boards410and420is changed, compared to the camera module200illustrated inFIGS.13and14. Some of the components of the camera module200illustrated inFIGS.15A and15Bare substantially the same as or similar to, some of the components of the camera module200illustrated inFIGS.13and14, and therefore repetitive descriptions will hereinafter be omitted. Referring toFIGS.15A and15B, the camera module200may include a first circuit board253on which an image sensor252is disposed, a second circuit board290on which a connector295is disposed, and the connecting member400connecting the first circuit board253and the second circuit board290. The first flexible circuit board410of the connecting member400may include a first sub circuit board410-1and a second sub circuit board410-2, and the second flexible circuit board420of the connecting member400may include a third sub circuit board420-1and a fourth sub circuit board420-2. The direction in which the first flexible circuit board410is disposed may differ from the direction in which the second flexible circuit board420is disposed. The first sub circuit board410-1and the second sub circuit board410-2may be disposed such that conductive pads441and451of the first sub circuit board410-1and conductive pads441and451of the second sub circuit board410-2are located adjacent to each other. For example, when the image sensor252is viewed from above, the conductive pads441and451of the first sub circuit board410-1and the conductive pads441and451of the second sub circuit board410-2may be disposed to face each other in the z-axis direction. 
Based on the drawings, the first sub circuit board410-1may be disposed such that the conductive pads441and451face downward (e.g., the −z-axis direction) and VIAs461face upward (e.g., the +z-axis direction), and the second sub circuit board410-2may be disposed such that VIAs461face downward (e.g., the −z-axis direction) and the conductive pads441and451face upward (e.g., the +z-axis direction). However, the arrangement direction of the first flexible circuit board410is not limited to the illustrated embodiment. The third sub circuit board420-1and the fourth sub circuit board420-2may be disposed such that VIAs491of the third sub circuit board420-1and VIAs491of the fourth sub circuit board420-2are located adjacent to each other. For example, when the image sensor252is viewed from above, the VIAs491of the third sub circuit board420-1and the VIAs491of the fourth sub circuit board420-2may be disposed to face each other in the y-axis direction. Based on the drawings, the third sub circuit board420-1may be disposed such that conductive pads471and481face leftward (e.g., the −y-axis direction) and the VIAs491face rightward (e.g., the +y-axis direction), and the fourth sub circuit board420-2may be disposed such that the VIAs491face leftward (e.g., the −y-axis direction) and conductive pads471and481face rightward (e.g., the +y-axis direction). However, the arrangement direction of the second flexible circuit board420is not limited to the illustrated embodiment. The conductive pads441of the first sub circuit board410-1and the second sub circuit board410-2may be fixed to the first circuit board253, and the conductive pads451of the first sub circuit board410-1and the second sub circuit board410-2may be fixed to a connecting circuit board430. Each of the first sub circuit board410-1and the second sub circuit board410-2may be configured such that in response to a movement of the first circuit board253, the portion where the VIAs461are formed moves in a direction perpendicular to the direction of movement of the first circuit board253. The first sub circuit board410-1and the second sub circuit board410-2may be configured such that when the first circuit board253moves in the +S1direction, the VIAs461of the first sub circuit board410-1are pulled in the −S2direction and the VIAs461of the second sub circuit board410-2are pulled in the +S2direction as first layers440and second layers450move away from each other. The arrangement of the flexible circuit boards410and420illustrated inFIGS.15A and15Bis illustrative, and the disclosure is not limited thereto. The arrangement of the first flexible circuit board410and the second flexible circuit board420may be changed to be different from that in the illustrated embodiment. For example, identically to the arrangement of the first flexible circuit board410illustrated inFIGS.15A and15B, the third sub circuit board420-1and the fourth sub circuit board420-2may be disposed such that the conductive pads471and481are located adjacent to each other. In another example, the first flexible circuit board410may be disposed such that the VIAs461of the first sub circuit board410-1and the conductive pads441and451of the second sub circuit board410-2are located adjacent to each other.
In such a case, the first flexible circuit board410may be disposed such that the VIAs461of the first sub circuit board410-1and the VIAs461of the second sub circuit board410-2face the same direction and the conductive pads441and451of the first sub circuit board410-1and the conductive pads441and451of the second sub circuit board410-2face the same direction. Likewise, the second flexible circuit board420may be disposed such that the VIAs491of the third sub circuit board420-1and the conductive pads471and481of the fourth sub circuit board420-2are located adjacent to each other. InFIGS.13to15B, the connecting member400may include a pair of first flexible circuit boards410and a pair of second flexible circuit boards420. The pair of first flexible circuit boards410may include a first sub circuit board410-1and a second sub circuit board410-2spaced apart from each other. The pair of second flexible circuit boards420may include a third sub circuit board420-1and a fourth sub circuit board420-2spaced apart from each other. The first flexible circuit board410may be configured such that the first sub circuit board410-1and the second sub circuit board410-2are integrally formed with each other. The first flexible circuit board410may be implemented as one flexible circuit board by connecting the first layer440of the first sub circuit board410-1and the first layer440of the second sub circuit board410-2and connecting the second layer450of the first sub circuit board410-1and the second layer450of the second sub circuit board410-2. The second flexible circuit board420may be configured such that the third sub circuit board420-1and the fourth sub circuit board420-2are integrally formed with each other. The second flexible circuit board420may be implemented as one flexible circuit board by connecting the first layer470of the third sub circuit board420-1and the first layer470of the fourth sub circuit board420-2and connecting the second layer480of the third sub circuit board420-1and the second layer480of the fourth sub circuit board420-2. FIG.16illustrates an image stabilization operation of a camera module300according to an embodiment.FIG.17illustrates an operation of a connecting member400of the camera module300according to an embodiment. FIGS.16and17illustrate the camera module300having a different structure from the camera module illustrated inFIGS.4,5, and12(e.g., the camera module200ofFIGS.4,5, and12) and some components of the camera module300. The camera module200ofFIGS.4,5, and12may be a camera module (e.g., a folded camera) in which the direction in which external light is incident on the camera module200and the optical axis L of the lens226are perpendicular to each other, and the camera module300ofFIGS.16and17may be a camera module (e.g., a direct type camera) in which the direction in which external light is incident on the camera module300and the optical axis of a lens are parallel to each other. Referring toFIG.16, the camera module300may include a camera housing310, a lens assembly320, an image sensor340, and a first circuit board330. For example, unlike the camera module200ofFIG.12, the camera module300ofFIG.16may not include a reflective member (e.g., the reflective member224ofFIG.5) because the camera module300does not have to reflect and/or refract a travel path of external light. The camera module300may perform an image stabilization function by moving the first circuit board330and the image sensor340in a direction perpendicular to an optical axis L (e.g., the z-axis). 
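The amount by which the first circuit board330and the image sensor340must be moved is not specified in the passage above. As a purely illustrative sketch of one common way a sensor-shift amount is derived (not a description of this disclosure), the code below converts a gyroscope angular-rate sample into the in-plane displacement that keeps the projected image stationary; the function name, the sample values, and the assumed 6 mm effective focal length are hypothetical.

```python
import math

def sensor_shift_um(gyro_rate_dps: float, dt_s: float, focal_length_mm: float) -> float:
    """Illustrative sensor-shift relation for optical image stabilization.

    A shake through an angle theta displaces the projected image on the
    sensor plane by roughly f * tan(theta), so moving the image sensor by
    the same amount in the opposite direction keeps the image stationary.
    """
    theta_rad = math.radians(gyro_rate_dps * dt_s)    # shake angle accumulated over one sample
    shift_mm = focal_length_mm * math.tan(theta_rad)  # displacement of the projected image
    return shift_mm * 1000.0                          # millimetres -> micrometres

# Example: a 0.05-degree shake seen through a 6 mm effective focal length calls
# for a sensor shift of roughly 5 micrometres perpendicular to the optical axis.
print(round(sensor_shift_um(gyro_rate_dps=50.0, dt_s=0.001, focal_length_mm=6.0), 2))  # 5.24
```

Whatever its magnitude, the resulting displacement of the first circuit board330relative to the relatively fixed second circuit board350is what the deformable connecting member400accommodates.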
The camera module300may correct image shake by moving the first circuit board330, on which the image sensor340is disposed, in at least one of two directions (e.g., the x-axis direction and the y-axis direction) perpendicular to the optical axis L. The lens assembly320including a lens322may be disposed in the camera housing310. The camera housing310may have a light receiving area311formed therein through which external light is incident. The external light may be incident on the lens322and the image sensor340through the light receiving area311. The first circuit board330may be disposed on the camera housing310so as to be movable in the direction perpendicular to the optical axis L. The image sensor340may be disposed on the first circuit board330. The image sensor340may be disposed on one surface of the first circuit board330to face the lens322. The first circuit board330may be configured to move in the x-axis direction and/or the y-axis direction relative to the camera housing310. The image sensor340may move together with the first circuit board330and may move relative to the lens322accordingly. The camera module300may align the optical axis L of the lens322and the image sensor340to a specified position by moving the first circuit board330in at least one of the two directions perpendicular to the optical axis L using a drive member270. Referring toFIG.17, the camera module300may include the first circuit board330on which the image sensor340is disposed, a second circuit board350(e.g., the second circuit board290ofFIG.12) on which a connector353is disposed, and the connecting member400that connects the first circuit board330and the second circuit board350. The connecting member400may electrically connect the first circuit board330and the second circuit board350. For example, one portion of the connecting member400may be connected to the first circuit board330, and another portion of the connecting member400may be connected to the second circuit board350. Accordingly, the first circuit board330and the second circuit board350may be electrically connected through the connecting member400. InFIG.17, the connecting member400may be substantially the same as the connecting member400of the camera module described with reference toFIGS.13and14, and the contents described with reference toFIGS.13and14may be identically applied to the structure in which the connecting member400ofFIG.17is electrically/physically connected to the first circuit board330and the second circuit board350. As illustrated inFIG.17, the first circuit board330may be configured to move a predetermined distance in the direction perpendicular to the optical axis L as the image stabilization function of the camera module300is performed. A first flexible circuit board410and a second flexible circuit board420may be deformed such that portions of first layers440and470and portions of second layers450and480move toward or away from each other in response to a movement of the first circuit board330in the x-axis direction and/or the y-axis direction. When the first circuit board330moves in the y-axis direction, the gap between a second portion332of the first circuit board330and a first connecting portion431of a connecting circuit board430may be increased or decreased. 
For example, as at least a portion of the first layer440moves together with the first circuit board330, the first flexible circuit board410between the second portion332and the first connecting portion431may be deformed with a partial increase or decrease in the gap between the first layer440and the second layer450. When the first circuit board330moves in the x-axis direction, the gap between a fourth portion352of the second circuit board350and a second connecting portion432of the connecting circuit board430may be increased or decreased. For example, as at least a portion of the second layer480moves together with the connecting circuit board430, the second flexible circuit board420between the fourth portion352and the second connecting portion432may be deformed with a partial increase or decrease in the gap between the first layer470and the second layer480. FIG.18illustrates a connecting structure of a first circuit board253, a second circuit board290, and a connecting member400′ of a camera module200according to an embodiment. FIG.18illustrates an embodiment in which the connecting member400′ does not include the connecting circuit board430and a first flexible circuit board410′ and a second flexible circuit board420′ of the connecting member400′ are integrally formed with each other, compared to the camera module200illustrated inFIGS.13and14. Referring toFIG.18, the camera module200may include a movable member250, a guide member260, the second circuit board290, and the connecting member400′. The movable member250may include an image sensor252and the first circuit board253on which the image sensor252is disposed. The first circuit board253may include a first portion255on which the image sensor252is disposed and a second portion256that extends from the first portion255at a right angle and to which the connecting member400′ is connected. The second circuit board290may include a third portion291on which a connector295is disposed and a fourth portion293that extends from the third portion291and to which the connecting member400′ is connected. The connecting member400′ may be connected to the second portion256of the first circuit board253and the fourth portion293of the second circuit board290. The connector295of the second circuit board290may be fixedly coupled to the main circuit board150of the electronic device100. According to an embodiment, when the movable member250moves according to an image stabilization operation, tension by the connecting member400′ may be applied between the first circuit board253and the second circuit board290. The tension may obstruct the movement of the first circuit board253and the movable member250. The connecting member400′ may be deformable in response to the movement of the first circuit board253. The connecting member400′ may include the first flexible circuit board410′ connected to the first circuit board253and the second flexible circuit board420′ connected to the second circuit board290. The first flexible circuit board410′ and the second flexible circuit board420′ may include first layers440′ and470′ and second layers450′ and480′, respectively. At least a portion of the first layer440′ of the first flexible circuit board410′ may be coupled to the second portion256of the first circuit board253. At least a portion of the first layer470′ of the second flexible circuit board420′ may be coupled to the fourth portion293of the second circuit board290. 
The first flexible circuit board410′ may be configured such that the first layer440′ and the second layer450′ are electrically connected through VIAs461. The first layer440′ and the second layer450′ of the first flexible circuit board410′ may be physically coupled through an adhesive member (e.g., the adhesive member462ofFIG.9) disposed on the areas where the VIAs461are formed. The second flexible circuit board420′ may be configured such that the first layer470′ and the second layer480′ are electrically connected through VIAs (e.g., the VIAs491ofFIGS.13and14). The first layer470′ and the second layer480′ of the second flexible circuit board420′ may be physically coupled through an adhesive member (e.g., the adhesive member462ofFIG.9) disposed on the areas where the VIAs491are formed (e.g., the VIA areas443and453ofFIG.11). The first flexible circuit board410′ and the second flexible circuit board420′ may be at least partially connected. The second layer450′ of the first flexible circuit board410′ and the second layer480′ of the second flexible circuit board420′ may be at least partially connected with each other. The second layer450′ of the first flexible circuit board410′ and the second layer480′ of the second flexible circuit board420′ may be integrally formed with each other. The second layer480′ of the second flexible circuit board420′ may extend from at least a portion of the second layer450′ of the first flexible circuit board410′. The second layer480′ of the second flexible circuit board420′ may extend from an edge of the second layer450′ of the first flexible circuit board410′ and may extend substantially perpendicular to the second layer450′ of the first flexible circuit board410′ to face the fourth portion293of the second circuit board290. The second layer450′ of the first flexible circuit board410′ and the second layer480′ of the second flexible circuit board420′ may be provided to form substantially one layer by bending and/or cutting one FPCB. The camera module200may be configured to linearly move the movable member250in the direction of the first shift axis S1(e.g., the y-axis direction) using a first coil271. When the movable member250moves in the +S1direction, the first circuit board253may move in the +S1direction relative to the second circuit board290that is relatively fixed. When the first circuit board253moves, the first layer440′ of the first flexible circuit board410′ may move in the +S1direction together with the first circuit board253, and the gap between the first layer440′ and the second layer450′ of the first flexible circuit board410′ may be increased. In contrast, when the movable member250moves in the −S1direction, the first layer440′ of the first flexible circuit board410′ may move in the −S1direction together with the first circuit board253, and the gap between the first layer440′ and the second layer450′ of the first flexible circuit board410′ may be decreased. The camera module200may be configured to linearly move the movable member250in the direction of the second shift axis S2(e.g., the z-axis direction) using a second coil273. In the illustrated embodiment, when the movable member250moves in the +S2direction, the first circuit board253may move in the +S2direction relative to the second circuit board290that is relatively fixed. 
When the first circuit board253moves, the second layer480′ of the second flexible circuit board420′ may move in the +S2direction together with the first flexible circuit board410′ (or the first circuit board253), and the gap between the first layer470′ and the second layer480′ of the second flexible circuit board420′ may be increased. In contrast, when the movable member250moves in the −S2direction, the second layer480′ of the second flexible circuit board420′ may move in the −S2direction together with the first flexible circuit board410′, and the gap between the first layer470′ and the second layer480′ of the second flexible circuit board420′ may be decreased. FIG.19is a plan view of a camera module500according to an embodiment. FIG.20is a perspective view of the camera module500according to an embodiment.FIG.21is an exploded perspective view of the camera module500according to an embodiment. FIG.20may be a view in which a first surface511aof a camera housing510is omitted such that the inside of the camera housing510is visible. FIGS.19to21illustrate the camera module500having a different structure from the camera module illustrated inFIGS.4,5, and12(e.g., the camera module200ofFIGS.4,5, and12). The camera module500ofFIGS.19to21may be referred to as the direct type camera module having a structure in which the direction in which external light is incident and the optical axis of a lens are parallel to each other, like the camera module ofFIG.16(e.g., the camera module300ofFIG.16). For example,FIGS.13and14may be schematic views illustrating the camera module500and some components included in the camera module500illustrated inFIGS.19to21. Referring toFIGS.19to21, the camera module500may include the camera housing510, a lens assembly520, an image stabilization assembly530, a support member580, a second circuit board591, and a connecting member600. Some of the components of the camera module500illustrated inFIGS.19to21may be substantially the same as or similar to, some of the components of the camera module200illustrated inFIGS.4to6or the camera module300illustrated inFIG.16, and therefore repetitive descriptions will hereinafter be omitted. The camera housing510may form at least a portion of the exterior of the camera module500. The camera housing510may accommodate a part of other components of the camera module500. For example, at least a part of the lens assembly520, at least a part of the image stabilization assembly530, the support member580, and/or the connecting member600may be accommodated in the camera housing510. The camera housing510may include a cover511and a plate513. The camera housing510may be formed by a coupling of the cover511and the plate513. The components of the camera module500may be accommodated in the space between the cover511and the plate513. The cover511and the plate513may be integrally formed with each other. The lens assembly520may be fixedly disposed in the camera housing510. The lens assembly520may be coupled to the cover511of the camera housing510. The cover511may have an opening515formed therein in which the lens assembly520is disposed. The opening515may be formed in a portion of the first surface511a(e.g., the surface facing the +z-axis direction) of the camera housing510. The lens assembly520may be disposed in the opening515. The camera housing510may be configured such that at least a part of the lens assembly520is visually exposed outside the camera housing510through the opening515.
The lens assembly520may overlap the opening515when the first surface511aof the camera housing510is viewed from above. The lens assembly520may be partially disposed in the camera housing510. At least a part of the lens assembly520may be visually exposed outside the camera housing510through the opening515of the camera housing510. As the lens assembly520is disposed in the opening515, a lens522may be configured to receive external light. The lens assembly520(or the lens522) may be visually exposed inside the opening515when the first surface511aof the camera housing510is viewed from above. The lens assembly520may be fixed to the camera housing510. The lens assembly520may be coupled to the inside of the opening515of the cover511and may be fixedly disposed in the camera housing510accordingly. In various embodiments, when a movable member550(or an image sensor552) moves in a direction perpendicular to the optical axis L of the lens522, the lens assembly520may be in a state of being relatively fixed to the camera housing510. The lens assembly520may be aligned with the image sensor552in the direction of the optical axis L of the lens522. The lens assembly520may partially overlap the image sensor552when the first surface511aof the camera housing510is viewed from above. The optical axis L of the lens522may refer to a virtual axis extending in the direction in which external light passes through the lens522. The optical axis L of the lens522may be substantially perpendicular to the first surface511aof the camera housing510. The optical axis L of the lens522may extend substantially parallel to the z-axis direction. The image stabilization assembly530may include a fixed member540, the movable member550, a guide member560, and a drive member570. The fixed member540may refer to a structure, on the basis of which the movable member550makes a relative movement. The fixed member540may refer to components relatively fixed with respect to a movement of the movable member550in an image stabilization operation and may include the camera housing510and the support member580. As the camera module500is configured such that the lens assembly520is fixed to the fixed member540and the movable member550moves relative to the fixed member540, the position of the movable member550relative to the lens assembly520may be changed, and an image stabilization function may be performed based on the change. The movable member550may be configured to move in the direction perpendicular to the optical axis L relative to the fixed member540. The movable member550may move in at least one direction perpendicular to the optical axis L relative to the fixed member540and the lens assembly520. The image stabilization assembly530may perform the image stabilization function by moving the movable member550in the direction perpendicular to the optical axis L (e.g., the x-axis direction and/or the y-axis direction). The image stabilization assembly530may compensate for image shake by moving the movable member550including the image sensor552in the x-axis direction and/or the y-axis direction with respect to the relatively fixed lens assembly520. The movable member550may include a holder551(e.g., the holder251ofFIGS.5to7), the image sensor552, and a first circuit board553. The holder551, the image sensor552, and the first circuit board553may be combined or connected so as to move together in the direction perpendicular to the optical axis L. The holder551may be coupled with the first circuit board553.
For example, at least a portion of the holder551may be coupled to the first circuit board553such that the holder551moves together with the first circuit board553. When the image stabilization operation is performed, the holder551, together with the image sensor552and the first circuit board553, may move relative to the fixed member540or the lens assembly520. The holder551may have a first opening area554formed therein to be aligned with the lens assembly520in the direction of the optical axis L. The lens assembly520may face the image sensor552through the first opening area554. Light passing through the lens522may be incident on the image sensor552through the first opening area554. A plurality of magnets575and577may be disposed on the holder551. The holder551may include magnet support portions556and557formed on a first peripheral area555surrounding the first opening area554. The magnet support portions556and557may include the first magnet support portion556in which the first magnet575is seated and the second magnet support portion557in which the second magnet577is seated. The first magnet575may be fixedly disposed in the first magnet support portion556, and the second magnet577may be fixedly disposed in the second magnet support portion557. The holder551may have on the first peripheral area555thereof, first recesses558in which at least portions of first balls565are accommodated. As many first recesses558as the first balls565may be formed. The first recesses558may be formed in a shape extending in the x-axis direction. However, the shape of the first recesses558is not limited to the illustrated embodiment. The first circuit board553may be coupled to the holder551so as to move together with the holder551. The first circuit board553may be electrically connected with the image sensor552and the connecting member600. The image sensor552may be connected to or disposed on, one surface (e.g., the surface facing the +z-axis direction) of the first circuit board553. The image sensor552may be mounted on the surface facing substantially the same direction as the first surface511aof the camera housing510so as to partially face the lens assembly520. The connecting member600may be connected to at least a portion of the periphery of the first circuit board553. The structure in which the connecting member600and the first circuit board553are connected will be described below in more detail with reference toFIG.23. The guide member560may guide and/or support a movement of the movable member550. The guide member560may be located between the holder551of the movable member550and the cover511of the camera housing510. The guide member560may be coupled so as to be movable relative to the cover511and the holder551. The guide member560may be coupled to the holder551so as to be movable in the x-axis direction and may be coupled to the cover511of the camera housing510so as to be movable in the y-axis direction. The guide member560may be coupled to the cover511such that a movement of the guide member560relative to the cover511in the x-axis direction is limited. In an embodiment, in the image stabilization operation, the guide member560may move together with the movable member550or may be fixed without moving together with the movable member550. The guide member560may be configured to move in the y-axis direction together with the movable member550when the movable member550moves in the y-axis direction. 
When the movable member550moves in the x-axis direction, a movement of the guide member560in the x-axis direction may be limited, and thus the guide member560may be separated from the movement of the movable member550in the x-axis direction. The guide member560may guide or support the movement of the movable member550in the x-axis direction in the image stabilization operation. The guide member560may move together with the movable member550in a stabilization operation of moving the movable member550(or the image sensor552) in the y-axis direction and may be fixed to the fixed member540(e.g., the cover511) without moving together with the movable member550in a stabilization operation of moving the movable member550in the x-axis direction. The guide member560may have a second opening area561formed therein to be aligned with the lens assembly520in the direction of the optical axis L. The second opening area561may be aligned with the first opening area554of the holder551in the direction of the optical axis L. The lens assembly520may face the image sensor552through the first opening area554and the second opening area561. Light passing through the lens522may be incident on the image sensor552through the first opening area554and the second opening area561. The second opening area561may overlap the first opening area554and the magnet support portions556and557of the holder551. The second opening area561may be aligned with the first opening area554and the magnet support portions556and557in the direction perpendicular to the first surface511a(e.g., the z-axis direction). Based onFIG.22, the first opening area554and the magnet support portions556and557may be located inside the second opening area561when the camera module500is viewed in the +z-axis direction. A second surface (e.g., the surface facing the −z-axis direction) of the cover511that faces away from the first surface511amay face at least a portion (e.g., the magnet support portions556and557) of the holder551through the second opening area561. A plurality of coils571and573located on the second surface of the cover511and the plurality of magnets575and577located in the magnet support portions556and557of the holder551may face each other in the direction of the optical axis L through the second opening area561. The guide member560may include a second peripheral area562surrounding the second opening area561. The plurality of magnets575and577may be located in the second opening area561. The guide member560may have on the second peripheral area562thereof, second recesses563in which at least portions of second balls566are accommodated. As many second recesses563as the second balls566may be formed. The second recesses563may be formed in a shape extending in the y-axis direction. However, the shape of the second recesses563is not limited to the illustrated embodiment. The drive member570may be configured to provide a driving force for moving the movable member550in at least one direction perpendicular to the optical axis L. The drive member570may generate a driving force or a physical force for moving the movable member550in the first shift axis S1and/or the direction of the second shift axis S2. The drive member570may include the plurality of coils571and573and the plurality of magnets575and577. The plurality of coils571and573may be disposed in the camera housing510of the fixed member540. The plurality of magnets575and577may be disposed on the holder551of the movable member550. 
The plurality of coils571and573may be fixedly disposed on the cover511of the camera housing510, and the plurality of magnets575and577may be fixedly disposed in the magnet support portions556and557of the holder551to face the plurality of coils571and573in the direction of the optical axis L. The movable member550may be configured to move relative to the fixed member540by an electromagnetic interaction between the plurality of magnets575and577and the plurality of coils571and573. The plurality of coils571and573and the plurality of magnets575and577may be disposed to be aligned in the direction of the optical axis L (e.g., the z-axis direction). The plurality of magnets575and577may be disposed in the magnet support portions556and557of the holder551to face the second surface of the cover511(e.g., the surface facing away from the first surface511aor the surface facing the −z-axis direction). A coil flexible circuit board595may be disposed on the second surface of the cover511, and the plurality of coils571and573may be disposed on the coil flexible circuit board595to face the plurality of magnets575and577. The plurality of coils571and573may be mounted on one area (e.g., the area facing the −z-axis direction) of the coil flexible circuit board595so as to be electrically connected with the coil flexible circuit board595. The coil flexible circuit board595may be configured such that one portion is attached to the second surface of the cover511and another portion is connected to the second circuit board591. The coil flexible circuit board595may be formed in a shape extending from the cover511toward the second circuit board591. The drive member570may include the first coil571and the first magnet575for moving the movable member550in the x-axis direction and the second coil573and the second magnet577for moving the movable member550in the y-axis direction. The first magnet575may be fixedly disposed in the first magnet support portion556of the holder551. The first coil571may be fixedly disposed on the cover511(or the coil flexible circuit board595) to face the first magnet575in the direction of the optical axis L. The second magnet577may be fixedly disposed in the second magnet support portion557of the holder551. The second coil573may be fixedly disposed on the cover511(or the coil flexible circuit board595) to face the second magnet577in the direction of the optical axis L. The camera module500may perform the image stabilization function by moving the movable member550(or the image sensor552) in the direction perpendicular to the optical axis L (e.g., the x-axis direction and/or the y-axis direction) by applying electrical signals to the plurality of coils571and573. For example, when the electrical signals are applied to the plurality of coils571and573, a magnetic field may be formed, and an electromagnetic force may be generated between the plurality of coils571and573and the plurality of magnets575and577. The movable member550may be configured to move in the x-axis direction and/or the y-axis direction relative to the lens assembly520and the fixed member540by the electromagnetic force. The image stabilization assembly530may further include a first ball guide structure and a second ball guide structure for guiding a movement of the movable member550. The first ball guide structure may include one or more first balls565disposed between the guide member560and the holder551. For example, a plurality of first balls565may be formed. 
Third recesses that overlap the first recesses558in the direction of the optical axis L may be formed on the guide member560. The first balls565may be configured to roll in the spaces between the first recesses558of the holder551and the third recesses of the guide member560. The second ball guide structure may include one or more second balls566disposed between the guide member560and the cover511. For example, a plurality of second balls566may be formed. Although not illustrated, fourth recesses that overlap the second recesses563in the direction of the optical axis L may be formed on the cover511. The second balls566may be configured to roll in the spaces between the second recesses563of the guide member560and the fourth recesses of the cover511. The support member580may support at least a portion of the connecting member600and at least a portion of the coil flexible circuit board595. The support member580may be located in the camera housing510. The support member580may be coupled to the cover511so as to be fixedly disposed in the camera housing510. The connecting member600and the coil flexible circuit board595may be mounted on at least portions of the support member580. The connecting member600and the coil flexible circuit board595may extend from the inside of the camera housing510to the outside of the camera housing510across the support member580. As at least a portion of the connecting member600is mounted on the support member580, the support member580may support the connecting member600such that the connecting member600is deformed in response to a movement of the first circuit board553. For example, at least a portion of the connecting member600mounted on the support member580may be fixed to the support member580. The connecting member600may be deformed in response to a movement of the first circuit board553based on the portion fixed to the support member580. The second circuit board591may be configured to electrically connect the camera module500with a main circuit board of an electronic device. The second circuit board591may be electrically connected with the camera module500, the main circuit board150, and the FPCB595. The second circuit board591may be electrically connected with the first circuit board553of the camera module500through the connecting member600. The second circuit board591may be electrically connected with the main circuit board150through a connector593. The second circuit board591may include a first portion591ato which the connecting member600and the coil flexible circuit board595are connected and a second portion591bextending from the first portion591a. The connector593may be disposed on the second portion591b. The connecting member600and/or the coil flexible circuit board595may be electrically connected to the first portion591a. The second circuit board591may have on the first portion591athereof, a conductive area with which at least a portion of the connecting member600and/or at least a portion of the coil flexible circuit board595make electrical contact. The second circuit board591may be fixedly disposed in the housing110of the electronic device100. The second circuit board591may be fixed inside the housing110by connection of the connector593to the main circuit board150. When the first circuit board553moves in the image stabilization operation, the second circuit board591may remain fixed without moving together with the first circuit board553. The connecting member600may electrically connect the first circuit board553and the second circuit board591. 
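The drive scheme described above (electrical signals in the coils571and573producing an electromagnetic force on the magnets575and577) is recited only at the level of structure; the disclosure does not specify how the coil signals are generated. The sketch below shows one conventional way such a loop could be closed, assuming a proportional-derivative controller per shift axis and some form of position feedback, for example a Hall sensor; the class name AxisDriver, the gains, and the current limit are illustrative assumptions and are not taken from this document.

```python
class AxisDriver:
    """Illustrative per-axis coil driver for a sensor-shift stabilization loop.

    The coil current is chosen so that the electromagnetic force between the
    coil and the facing magnet pulls the movable member toward the target
    position along one shift axis. Gains, limits, and the feedback source
    are assumptions, not values from this disclosure.
    """

    def __init__(self, kp: float = 0.8, kd: float = 0.05, max_current_ma: float = 120.0):
        self.kp = kp                          # proportional gain (mA per micrometre of error)
        self.kd = kd                          # derivative gain
        self.max_current_ma = max_current_ma  # assumed output limit of the coil driver
        self.prev_error_um = None

    def update(self, target_um: float, measured_um: float, dt_s: float) -> float:
        """Return the coil current (mA) for one control period."""
        error_um = target_um - measured_um
        if self.prev_error_um is None:
            derivative = 0.0  # no derivative term on the very first sample
        else:
            derivative = (error_um - self.prev_error_um) / dt_s
        self.prev_error_um = error_um
        current_ma = self.kp * error_um + self.kd * derivative
        # Clamp to the assumed driver output range.
        return max(-self.max_current_ma, min(self.max_current_ma, current_ma))

# Example: one driver per shift axis, updated once per millisecond from
# a target shift (e.g., derived from a gyroscope) and a measured position.
x_axis, y_axis = AxisDriver(), AxisDriver()
ix = x_axis.update(target_um=5.2, measured_um=0.0, dt_s=0.001)   # ~4.16 mA
iy = y_axis.update(target_um=-3.1, measured_um=0.0, dt_s=0.001)  # ~-2.48 mA
```

In such a scheme the connecting member600only has to tolerate the small resulting displacement of the first circuit board553while keeping it electrically tied to the fixed second circuit board591, which is the deformation behavior detailed with reference toFIGS.22and23.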
The connecting member600may be connected to the first circuit board553and the second circuit board591. When the first circuit board553moves relative to the second circuit board591in the image stabilization operation, the connecting member600, which electrically connects the first circuit board553and the second circuit board591, may be deformed while at least a portion of the connecting member600moves together. The connecting member600may include a first connecting member600aand a second connecting member600b. The first connecting member600aand the second connecting member600bmay include identical components and may be formed in substantially the same shape or similar shapes. The connecting member600may include one or more flexible circuit boards or flexible portions610,620,630, and640. The flexible portions610,620,630, and640may include the FPCB. The flexible portions610,620,630, and640may be integrally formed by using one FPCB or may be formed by connecting or coupling a plurality of FPCBs. The connecting member600may include a flexible portion and a rigid portion. The flexible portion may include the first portions610, the second portions620, the bending portions630, and the extending portions640. The rigid portion may include contact portions650disposed on end portions of the extending portions640and brought into contact with the second circuit board591. The connecting member600may be implemented with an RFPCB. A specific shape and a movement operation of the connecting member600will be described below in more detail with reference toFIGS.22and23. FIG.22illustrates the movable member550and the connecting member600of the camera module500according to an embodiment.FIG.23illustrates the connecting member600and the first circuit board553of the camera module500according to an embodiment. Referring toFIGS.22and23, the camera module500may include the camera housing510, the lens assembly520, the movable member550, the guide member560, the support member580, and the connecting member600. Some of the components of the camera module500illustrated inFIGS.22and23may be the same as or similar to, some of the components of the camera module500illustrated inFIGS.19to21, and repetitive descriptions will hereinafter be omitted. The movable member550may include the first circuit board553and the image sensor552disposed on the first circuit board553. The image sensor552may be disposed on a first surface553aof the first circuit board553, and the connecting member600may be connected to edges of the first circuit board553. A contact portion650of the connecting member600may be connected with the second circuit board591fixedly coupled to the main circuit board of the electronic device. The connecting member600may be configured such that the portions connecting the contact portions650and the first circuit board553are deformable in response to a movement of the first circuit board553. The connecting member600may include the first connecting member600aand the second connecting member600bconnected to opposite sides of the first circuit board553. The first connecting member600amay be connected to a first edge553bof the first circuit board553, and the second connecting member600bmay be connected to a second edge553cfacing the first edge553b. The first connecting member600aand the second connecting member600bmay be formed in substantially the same structure or similar structures. 
The connecting member600may include the first portions610, the second portions620, the bending portions630, the extending portions640, and the contact portions650. The first portions610, the second portions620, the bending portions630, and the extending portions640may flexibly extend to connect the contact portions650and the first circuit board553. The first portions610, the second portions620, and the bending portions630may be partially deformed as the first circuit board553moves in the direction of the first shift axis S1or the direction of the second shift axis S2when the contact portions650are fixed to the second circuit board591. For example, at least parts of the extending portions640may be mounted on the support member580, and the first portions610, the second portions620, and the bending portions630may be deformed while partially moving with respect to the extending portions640mounted on the support member580. The first portions610may be connected to the edges553band553cof the first circuit board553. The second portions620may be connected to the extending portions640. The bending portion630may be connected, at opposite end portions, to the first portions610and the second portions620. The bending portions630may physically and electrically connect the first portions610and the second portions620. The first portions610may be located at the edges of the first circuit board553in the direction of the first shift axis S1, and the second portions620may be located at the edge of the first circuit board553in the direction of the second shift axis S2. The first portions610and the second portions620may be located in directions perpendicular to each other with respect to the first circuit board553. The first portions610, the second portions620, and the bending portions630may be disposed substantially perpendicular to the first circuit board553. In an embodiment, each of the bending portions630may be bent such that one portion partially faces the first portion610and another portion partially faces the second portion620. The bending portion630may include a third portion632, at least part of which faces the first portion610and a fourth portion634, at least part of which faces the second portion620. A reinforcing member691may be disposed on part of the bending portion630. The reinforcing member691may be disposed on the portion where the third portion632and the fourth portion634are connected and may be bent depending on the shape of the bending portion630. The reinforcing member691may support the bending portion630such that the bending portion630remains bent. The reinforcing member691may be implemented with a material having a specified rigidity. The first portion610and the third portion632may be electrically connected through VIAs. The first portion610and the third portion632may be electrically connected through a first VIA portion660having one or more VIAs formed therein. An adhesive member that attaches the first portion610and the third portion632may be disposed on the first VIA portion660. The second portion620and the fourth portion634may be electrically connected through VIAs. The second portion620and the fourth portion634may be electrically connected through a second VIA portion670having one or more VIAs formed therein. An adhesive member that attaches the second portion620and the fourth portion634may be disposed on the second VIA portion670. The connecting member600may be understood as a change of the structure of the connecting member400illustrated inFIGS.13and14. 
Compared to the connecting member400ofFIGS.13and14, the connecting member600ofFIGS.22and23may be a structure integrally formed by using one flexible circuit board. The connecting member600ofFIGS.22and23may be a structure changed such that in the connecting member400, the first flexible circuit board410and the second flexible circuit board420are implemented with one flexible circuit board so as to be connected without the connecting circuit board430, the first flexible circuit board410is integrally formed with the second portion256of the first circuit board253, and the second flexible circuit board420is integrally formed with the fourth portion293of the second circuit board290. The first portion610and the third portion632may be referred to as the first layer440and the second layer450of the first flexible circuit board410illustrated inFIGS.13and14. The second portion620and the fourth portion634may be referred to as the first layer470and the second layer480of the second flexible circuit board420illustrated inFIGS.13and14. The third portion632and the fourth portion634of the bending portion630may be understood as the second layer450of the first flexible circuit board410and the second layer480of the second flexible circuit board420that are changed so as to be integrally formed and directly connected with each other without being connected through the connecting circuit board430in the connecting member400illustrated inFIGS.13and14. The first VIA portion660may be referred to as the portion where the VIAs461and the adhesive member462are disposed in the first flexible circuit board410illustrated inFIGS.13and14. The second VIA portion670may be referred to as the portion where the VIAs491and the adhesive member492are disposed in the second flexible circuit board420ofFIGS.13and14. Hereinafter, an operation in which the connecting member600is deformed in an image stabilization operation will be described. The camera module500may perform the image stabilization operation by moving the movable member550(e.g., the first circuit board553and the image sensor552) in the direction of the first shift axis S1or the second shift axis S2perpendicular to the optical axis L. For example, when the movable member550moves, the lens522of the lens assembly520may be in a state of being relatively fixed to the camera housing510. When the image stabilization operation is performed in the direction of the first shift axis S1(e.g., the x-axis direction), the first circuit board553and the image sensor552may move in the direction of the first shift axis S1relative to the guide member560and the camera housing510that are relatively fixed. The first portion610and the third portion632of the connecting member600may be spaced apart from each other by a predetermined gap when viewed in the direction of the optical axis L. The predetermined gap may be increased or decreased as the first circuit board553and the image sensor552move in the direction of the first shift axis S1. When the first circuit board553moves in the +S1direction (e.g., the +x-axis direction), the first portion610of the first connecting member600aand the first portion610of the second connecting member600bmay move in the +S1direction together with the first circuit board553. The gap between the first portion610and the third portion632of the first connecting member600amay be decreased, and the gap between the first portion610and the third portion632of the second connecting member600bmay be increased. 
When the first circuit board553moves in the −S1direction (e.g., the −x-axis direction), the first portion610of the first connecting member600aand the first portion610of the second connecting member600bmay move in the −S1direction together with the first circuit board553. The gap between the first portion610and the third portion632of the first connecting member600amay be increased, and the gap between the first portion610and the third portion632of the second connecting member600bmay be decreased. When the image stabilization operation is performed in the direction of the second shift axis S2(e.g., the y-axis direction), the first circuit board553and the image sensor552, together with the guide member560, may move in the direction of the second shift axis S2relative to the camera housing510that is relatively fixed. The second portion620and the fourth portion634of the connecting member600may be spaced apart from each other by a predetermined gap when viewed in the direction of the optical axis L. The predetermined gap may be increased or decreased as the first circuit board553and the image sensor552move in the direction of the second shift axis S2. When the first circuit board553moves in the +S2direction (e.g., the +y-axis direction), the fourth portion634of the first connecting member600aand the fourth portion634of the second connecting member600bmay move in the +S2direction together with the first circuit board553. The gap between the second portion620and the fourth portion634of the first connecting member600aand the gap between the second portion620and the fourth portion634of the second connecting member600bmay be decreased. When the first circuit board553moves in the −S2direction (e.g., the −y-axis direction), the fourth portion634of the first connecting member600aand the fourth portion634of the second connecting member600bmay move in the −S2direction together with the first circuit board553. The gap between the second portion620and the fourth portion634of the first connecting member600aand the gap between the second portion620and the fourth portion634of the second connecting member600bmay be increased. FIG.24is a block diagram illustrating an electronic device701in a network environment700according to an embodiment. Referring toFIG.24, the electronic device701in the network environment700may communicate with an electronic device702via a first network798(e.g., a short-range wireless communication network) or at least one of an electronic device704or a server708via a second network799(e.g., a long-range wireless communication network). The electronic device701may communicate with the electronic device704via the server708. The electronic device701may include a processor720, memory730, an input module750, a sound output module755, a display module760, an audio module770, a sensor module776, an interface777, a connecting terminal778, a haptic module779, a camera module780, a power management module788, a battery789, a communication module790, a subscriber identification module (SIM)796or an antenna module797. In some embodiments, at least one of the components (e.g., the connecting terminal778) may be omitted from the electronic device701or one or more other components may be added in the electronic device701. In some embodiments, some of the components (e.g., the sensor module776, the camera module780or the antenna module797) may be implemented as a single component (e.g., the display module760). 
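Referring back to the image stabilization operation described with reference toFIGS.22and23, the gap changes of the first connecting member 600a and the second connecting member 600b may be summarized, purely as a non-limiting illustration, by the following sketch; the nominal gap value, the sign conventions, and the function names are hypothetical and are not part of the embodiments described above.

# Illustrative sketch only (not part of the embodiments above): models the
# gap changes of the connecting members 600a and 600b described with
# reference to FIGS. 22 and 23. The nominal gap and units are assumptions.

from dataclasses import dataclass

@dataclass
class MemberGaps:
    s1_gap: float  # gap between a first portion 610 and a third portion 632
    s2_gap: float  # gap between a second portion 620 and a fourth portion 634

def gaps_after_shift(dx: float, dy: float, nominal: float = 0.5) -> dict:
    """Return hypothetical gaps of members 600a/600b after the first circuit
    board 553 shifts by (dx, dy) along the first/second shift axes S1/S2.

    +S1 shift: the S1 gap of 600a decreases and that of 600b increases.
    +S2 shift: the S2 gaps of both 600a and 600b decrease (and vice versa).
    """
    member_600a = MemberGaps(s1_gap=nominal - dx, s2_gap=nominal - dy)
    member_600b = MemberGaps(s1_gap=nominal + dx, s2_gap=nominal - dy)
    return {"600a": member_600a, "600b": member_600b}

# Example: shift 0.1 along +S1 and 0.05 along -S2 during stabilization.
print(gaps_after_shift(0.1, -0.05))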
The processor720may execute, for example, software (e.g., a program740) to control at least one other component (e.g., a hardware or software component) of the electronic device701coupled with the processor720, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor720may store a command or data received from another component (e.g., the sensor module776or the communication module790) in volatile memory732, process the command or the data stored in the volatile memory732, and store resulting data in non-volatile memory734. According to an embodiment, the processor720may include a main processor721(e.g., a CPU or an AP) or an auxiliary processor723(e.g., a GPU, a neural processing unit (NPU), an ISP, a sensor hub processor or a CP) that is operable independently from or in conjunction with, the main processor721. For example, when the electronic device701includes the main processor721and the auxiliary processor723, the auxiliary processor723may be adapted to consume less power than the main processor721or to be specific to a specified function. The auxiliary processor723may be implemented as separate from or as part of the main processor721. The auxiliary processor723may control at least some of functions or states related to at least one component (e.g., the display module760, the sensor module776or the communication module790) among the components of the electronic device701, instead of the main processor721while the main processor721is in an inactive (e.g., sleep) state or together with the main processor721while the main processor721is in an active state (e.g., executing an application). According to an embodiment, the auxiliary processor723(e.g., an ISP or a CP) may be implemented as part of another component (e.g., the camera module780or the communication module790) functionally related to the auxiliary processor723. According to an embodiment, the auxiliary processor723(e.g., the neural processing unit) may include a hardware structure specified for artificial intelligence model processing. An artificial intelligence model may be generated by machine learning. Such learning may be performed, e.g., by the electronic device701where the artificial intelligence is performed or via a separate server (e.g., the server708). Learning algorithms may include, but are not limited to, e.g., supervised learning, unsupervised learning, semi-supervised learning or reinforcement learning. The artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent DNN (BRDNN), deep Q-network, or a combination of two or more thereof but is not limited thereto. The artificial intelligence model may, additionally or alternatively, include a software structure other than the hardware structure. The memory730may store various data used by at least one component (e.g., the processor720or the sensor module776) of the electronic device701. The various data may include, for example, software (e.g., the program740) and input data or output data for a command related thereto. The memory730may include the volatile memory732or the non-volatile memory734. 
The program740may be stored in the memory730as software, and may include, for example, an operating system (OS)742, middleware744or an application746. The input module750may receive a command or data to be used by another component (e.g., the processor720) of the electronic device701, from the outside (e.g., a user) of the electronic device701. The input module750may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button) or a digital pen (e.g., a stylus pen). The sound output module755may output sound signals to the outside of the electronic device701. The sound output module755may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing record. The receiver may be used for receiving incoming calls. According to an embodiment, the receiver may be implemented as separate from or as part of the speaker. The display module760may visually provide information to the outside (e.g., a user) of the electronic device701. The display module760may include, for example, a display, a hologram device or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to an embodiment, the display module760may include a touch sensor adapted to detect a touch or a pressure sensor adapted to measure the intensity of force incurred by the touch. The audio module770may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module770may obtain the sound via the input module750or output the sound via the sound output module755or a headphone of an external electronic device (e.g., an electronic device702) directly (e.g., wiredly) or wirelessly coupled with the electronic device701. The sensor module776may detect an operational state (e.g., power or temperature) of the electronic device701or an environmental state (e.g., a state of a user) external to the electronic device701, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module776may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an IR sensor, a biometric sensor, a temperature sensor, a humidity sensor or an illuminance sensor. The interface777may support one or more specified protocols to be used for the electronic device701to be coupled with the external electronic device (e.g., the electronic device702) directly (e.g., wiredly) or wirelessly. According to an embodiment, the interface777may include, for example, an HDMI, a USB interface, an SD card interface or an audio interface. A connecting terminal778may include a connector via which the electronic device701may be physically connected with the external electronic device (e.g., the electronic device702). According to an embodiment, the connecting terminal778may include, for example, a HDMI connector, a USB connector, an SD card connector or an audio connector (e.g., a headphone connector). The haptic module779may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module779may include, for example, a motor, a piezoelectric element or an electric stimulator. 
The camera module780may capture a still image or moving images. According to an embodiment, the camera module780may include one or more lenses, image sensors, ISPs or flashes. The power management module788may manage power supplied to the electronic device701. According to one embodiment, the power management module788may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery789may supply power to at least one component of the electronic device701. According to an embodiment, the battery789may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable or a fuel cell. The communication module790may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device701and the external electronic device (e.g., the electronic device702, the electronic device704or the server708) and performing communication via the established communication channel. The communication module790may include one or more CPs that are operable independently from the processor720(e.g., the AP) and supports a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module790may include a wireless communication module792(e.g., a cellular communication module, a short-range wireless communication module or a global navigation satellite system (GNSS) communication module) or a wired communication module794(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network798(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct or IR data association (IrDA)) or the second network799(e.g., a long-range communication network, such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet or a computer network (e.g., LAN or wide area network (WAN)). These various types of communication modules may be implemented as a single component (e.g., a single chip) or may be implemented as multi components (e.g., multi chips) separate from each other. The wireless communication module792may identify and authenticate the electronic device701in a communication network, such as the first network798or the second network799, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module796. The wireless communication module792may support a 5G network, after a 4G network, and next-generation communication technology, e.g., new radio (NR) access technology. The NR access technology may support enhanced mobile broadband (eMBB), massive machine type communications (mMTC) or ultra-reliable and low-latency communications (URLLC). The wireless communication module792may support a high-frequency band (e.g., the mmWave band) to achieve, e.g., a high data transmission rate. The wireless communication module792may support various technologies for securing performance on a high-frequency band, such as, e.g., beamforming, massive multiple-input and multiple-output (massive MIMO), full dimensional MIMO (FD-MIMO), array antenna, analog beam-forming or large scale antenna. 
The wireless communication module792may support various requirements specified in the electronic device701, an external electronic device (e.g., the electronic device704) or a network system (e.g., the second network799). According to an embodiment, the wireless communication module792may support a peak data rate (e.g., 20 Gbps or more) for implementing eMBB, loss coverage (e.g., 164 dB or less) for implementing mMTC or U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL) or a round trip of 1 ms or less) for implementing URLLC. The antenna module797may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device701. According to an embodiment, the antenna module797may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., a PCB). According to an embodiment, the antenna module797may include a plurality of antennas (e.g., array antennas). In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network798or the second network799, may be selected, for example, by the communication module790(e.g., the wireless communication module792) from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module790and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module797. The antenna module797may form a mmWave antenna module. According to an embodiment, the mmWave antenna module may include a PCB, an RFIC disposed on a first surface (e.g., the bottom surface) of the PCB or adjacent to the first surface and capable of supporting a designated high-frequency band (e.g., the mmWave band), and a plurality of antennas (e.g., array antennas) disposed on a second surface (e.g., the top or a side surface) of the PCB or adjacent to the second surface and capable of transmitting or receiving signals of the designated high-frequency band. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI) or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device701and the external electronic device704via the server708coupled with the second network799. Each of the electronic devices702or704may be a device of a same type as or a different type, from the electronic device701. According to an embodiment, all or some of operations to be executed at the electronic device701may be executed at one or more of the external electronic devices702,704or708. For example, if the electronic device701should perform a function or a service automatically or in response to a request from a user or another device, the electronic device701, instead of or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. 
The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device701. The electronic device701may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, mobile edge computing (MEC) or client-server computing technology may be used, for example. The electronic device701may provide ultra low-latency services using, e.g., distributed computing or MEC. Alternatively, the external electronic device704may include an internet-of-things (IoT) device. The server708may be an intelligent server using machine learning and/or a neural network. According to an embodiment, the external electronic device704or the server708may be included in the second network799. The electronic device701may be applied to intelligent services (e.g., smart home, smart city, smart car or healthcare) based on 5G communication technology or IoT-related technology. FIG.25is a block diagram800illustrating the camera module780according to various embodiments. Referring toFIG.25, the camera module780may include a lens assembly810, a flash820, an image sensor830, an image stabilizer840, memory850(e.g., buffer memory) or an ISP860. The lens assembly810may collect light emitted or reflected from an object whose image is to be taken. The lens assembly810may include one or more lenses. According to an embodiment, the camera module780may include a plurality of lens assemblies810. In such a case, the camera module780may form, for example, a dual camera, a 360-degree camera or a spherical camera. Some of the plurality of lens assemblies810may have the same lens attribute (e.g., view angle, focal length, auto-focusing, f number or optical zoom) or at least one lens assembly may have one or more lens attributes different from those of another lens assembly. The lens assembly810may include, for example, a wide-angle lens or a telephoto lens. The flash820may emit light that is used to reinforce light reflected from an object. According to an embodiment, the flash820may include one or more light emitting diodes (LEDs)(e.g., a red-green-blue (RGB) LED, a white LED, an IR LED or an ultraviolet (UV) LED) or a xenon lamp. The image sensor830may obtain an image corresponding to an object by converting light emitted or reflected from the object and transmitted via the lens assembly810into an electrical signal. According to an embodiment, the image sensor830may include one selected from image sensors having different attributes, such as an RGB sensor, a black-and-white (BW) sensor, an IR sensor or a UV sensor, a plurality of image sensors having the same attribute or a plurality of image sensors having different attributes. Each image sensor included in the image sensor830may be implemented using, for example, a charged coupled device (CCD) sensor or a complementary metal oxide semiconductor (CMOS) sensor. The image stabilizer840may move the image sensor830or at least one lens included in the lens assembly810in a particular direction or control an operational attribute (e.g., adjust the read-out timing) of the image sensor830in response to the movement of the camera module780or the electronic device701including the camera module780.
This allows compensating for at least part of a negative effect (e.g., image blurring) by the movement on an image being captured. According to an embodiment, the image stabilizer840may sense such a movement by the camera module780or the electronic device701using a gyro sensor or an acceleration sensor disposed inside or outside the camera module780. According to an embodiment, the image stabilizer840may be implemented, for example, as an OIS. The memory850may store, at least temporarily, at least part of an image obtained via the image sensor830for a subsequent image processing task. For example, if image capturing is delayed due to shutter lag or multiple images are quickly captured, a raw image obtained (e.g., a Bayer-patterned image, a high-resolution image) may be stored in the memory850, and its corresponding copy image (e.g., a low-resolution image) may be previewed via the display module760. Thereafter, if a specified condition is met (e.g., by a user's input or system command), at least part of the raw image stored in the memory850may be obtained and processed, for example, by the ISP860. According to an embodiment, the memory850may be configured as at least part of the memory730or as a separate memory that is operated independently from the memory730. The ISP860may perform one or more image processing with respect to an image obtained via the image sensor830or an image stored in the memory850. The one or more image processing may include, for example, depth map generation, three-dimensional (3D) modeling, panorama generation, feature point extraction, image synthesizing or image compensation (e.g., noise reduction, resolution adjustment, brightness adjustment, blurring, sharpening or softening). Additionally or alternatively, the ISP860may perform control (e.g., exposure time control or read-out timing control) with respect to at least one (e.g., the image sensor830) of the components included in the camera module780. An image processed by the ISP860may be stored back in the memory850for further processing or may be provided to an external component (e.g., the memory730, the display module760, the electronic device702, the electronic device704or the server708) outside the camera module780. According to an embodiment, the ISP860may be configured as at least part of the processor720or as a separate processor that is operated independently from the processor720. If the ISP860is configured as a separate processor from the processor720, at least one image processed by the ISP860may be displayed, by the processor720, via the display module760as it is or after being further processed. According to an embodiment, the electronic device701may include a plurality of camera modules780having different attributes or functions. In such a case, at least one of the plurality of camera modules780may form, for example, a wide-angle camera and at least another of the plurality of camera modules780may form a telephoto camera. Similarly, at least one of the plurality of camera modules780may form, for example, a front camera and at least another of the plurality of camera modules780may form a rear camera. An electronic device100according to an embodiment of the disclosure may include a housing110having a main circuit board150disposed therein and a camera module200, at least part of which is disposed in the housing110, the camera module200being electrically connected with the main circuit board150. 
The camera module200may include a camera housing210, a lens assembly220, at least part of which is accommodated in the camera housing210, the lens assembly220including a lens226, a movable member250including an image sensor252and a first circuit board253electrically connected with the image sensor252, the movable member250being coupled to the camera housing210so as to be movable in a direction perpendicular to an optical axis L of the lens226, a second circuit board290, at least part of which is electrically connected with the main circuit board150, and a connecting member400that electrically connects the first circuit board253and the second circuit board290. The connecting member400may include a first flexible circuit board410connected with the first circuit board253and a second flexible circuit board420connected with the second circuit board290, and the first flexible circuit board410and the second flexible circuit board420may be configured to be electrically connected. Each of the first flexible circuit board410and the second flexible circuit board420may include a first layer440and470, a second layer450and480disposed to face the first layer440and470, and a VIA461and491that electrically connects the first layer440and470and the second layer450and480. Herein, each of the first flexible circuit board410and the second flexible circuit board420may be configured to be deformed in a shape in which, when the movable member moves, a gap between a partial area of the first layer440and470and a partial area of the second layer450and480is decreased or increased as the first circuit board253moves relative to the second circuit board290. The movable member250may be configured to move in a direction of at least one of a first shift axis S1or a second shift axis S2perpendicular to the optical axis L. The first shift axis S1and the second shift axis S2may be perpendicular to each other. The first flexible circuit board410may be configured to be deformed in response to a movement of the movable member250in the direction of the first shift axis S1. The second flexible circuit board420may be configured to be deformed in response to a movement of the movable member250in the direction of the second shift axis S2. Herein, each of the first flexible circuit board410and the second flexible circuit board420may be configured such that the first layer440and470and the second layer450and480are formed in substantially the same shape. The first layer440of the first flexible circuit board410may be connected to the first circuit board253. The first layer470of the second flexible circuit board420may be connected to the second circuit board290. The second layer450of the first flexible circuit board410and the second layer480of the second flexible circuit board420may be electrically connected. The first layer440and470may include a first pad area442having a first conductive pad441and471formed thereon and a first VIA area443having the VIA461and491formed therein. The second layer450and480may include a second pad area452having a second conductive pad451and481formed thereon and a second VIA area453having the VIA461and491formed therein. Herein, each of the first flexible circuit board410and the second flexible circuit board420may further include an adhesive member462and492disposed between the first VIA area443and the second VIA area453, and the VIA461and491may pass through at least part of the first VIA area443, the second VIA area453, and the adhesive member462and492. 
The first layer440and470and the second layer450and480may be configured such that the first VIA area443and the second VIA area453are physically coupled through the adhesive member462and492and a gap between the first pad area442and the second pad area452is changed based on areas coupled by the adhesive member462and492. The connecting member400may further include a connecting circuit board430that electrically connects the first flexible circuit board410and the second flexible circuit board420. The first flexible circuit board410may be disposed between the first circuit board253and the connecting circuit board430. The second flexible circuit board420may be disposed between the second circuit board290and the connecting circuit board430. The first flexible circuit board410may be configured such that at least part of the first layer440of the first flexible circuit board410is coupled to the first circuit board253and at least part of the second layer450of the first flexible circuit board410is coupled to the connecting circuit board430. The second flexible circuit board420may be configured such that at least part of the first layer470of the second flexible circuit board420is coupled to the second circuit board290and at least part of the second layer480of the second flexible circuit board420is coupled to the connecting circuit board430. The first circuit board253may include a first portion255on which the image sensor252is disposed and a second portion256that extends from the first portion255at a right angle. The first flexible circuit board410may be connected to the second portion256such that the first layer440and the second layer450are parallel to the second portion256. The second circuit board290may include a third portion291on which a connector295is disposed and a fourth portion293that extends from the third portion291and that is disposed perpendicular to the first portion255and the second portion256of the first circuit board253. The second flexible circuit board420may be connected to the fourth portion293such that the first layer470and the second layer480are parallel to the fourth portion293. The connecting member400may further include a connecting circuit board430that connects the first flexible circuit board410and the second flexible circuit board420. The connecting circuit board430may include a first connecting portion431that faces the second portion256of the first circuit board253in parallel and a second connecting portion432that extends from the first connecting portion431at a right angle and faces the fourth portion293of the second circuit board290in parallel. The first flexible circuit board410may be disposed between the second portion256and the first connecting portion431, and the second flexible circuit board420may be disposed between the fourth portion293and the second connecting portion432. Herein, each of the first flexible circuit board410and the second flexible circuit board420may be configured such that a first conductive pad441and471is disposed on a partial area of the first layer440and470and a second conductive pad451and481is disposed on a partial area of the second layer450and480. The first flexible circuit board410may electrically connect the first circuit board253and the connecting circuit board430as the first conductive pad441is coupled to the second portion256and the second conductive pad451is coupled to the first connecting portion431. 
The second flexible circuit board420may electrically connect the second circuit board290and the connecting circuit board430as the first conductive pad471is coupled to the fourth portion293and the second conductive pad481is coupled to the second connecting portion432. The second circuit board290may be fixedly disposed in the housing110. The first circuit board253may be configured to move in a direction of a first shift axis S1and a direction of a second shift axis S2perpendicular to the optical axis L relative to the second circuit board290together with the movable member250. A distance between the second portion256of the first circuit board253and the first connecting portion431may be changed when the first circuit board253moves in the direction of the first shift axis S1. A distance between the fourth portion293of the second circuit board290and the second connecting portion432may be changed when the first circuit board253moves in the direction of the second shift axis S2. A camera module200according to an embodiment of the disclosure may include a camera housing210, a lens assembly220, at least part of which is accommodated in the camera housing210, the lens assembly220including a lens226, and a circuit board structure for electrical connection of the camera module200. The circuit board structure may include a first circuit board portion253on which an image sensor252is disposed, a second circuit board portion290on which a connector295is disposed, and a third circuit board portion400, at least part of which flexibly extends from the first circuit board portion253toward the second circuit board portion290to connect the first circuit board portion253and the second circuit board portion290. The third circuit board portion400may include a first flexible portion410connected to the first circuit board portion253and a second flexible portion420connected to the second circuit board portion290. Each of the first flexible portion410and the second flexible portion420may include a first layer440and470, a second layer450and480disposed to face the first layer440and470, an adhesive member462and492disposed between a partial area of the first layer440and470and a partial area of the second layer450and480, and a VIA461and491that passes through the adhesive member462and492to electrically connect the first layer440and470and the second layer450and480. The first circuit board portion253may be configured to move relative to the second circuit board portion290. Each of the first flexible portion410and the second flexible portion420may be configured to be deformed in a shape in which a gap G between a partial area of the first layer440and470and a partial area of the second layer450and480is decreased or increased as the first circuit board portion253moves. The first layer440and470may include a first pad area442having a first conductive pad441and471formed thereon and a first VIA area443having the VIA461and491formed therein. The second layer450and480may include a second pad area452having a second conductive pad451and481formed thereon and a second VIA area453having the VIA461and491formed therein. The adhesive member462and492may be disposed between the first VIA area443and the second VIA area453. The first layer440and470and the second layer450and480may be configured such that the first VIA area443and the second VIA area453are physically coupled through the adhesive member462and492and a gap G between the first pad area442and the second pad area452is changed based on areas coupled by the adhesive member462and492.
The circuit board structure may include a rigid circuit board portion including the first circuit board portion253and the second circuit board portion290and a flexible circuit board portion including the first flexible portion410and the second flexible portion420, and the rigid circuit board portion and the flexible circuit board portion may be integrally formed with each other. As used in connection with various embodiments of the disclosure, the term “module” may include a unit implemented in hardware, software or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC). Various embodiments as set forth herein may be implemented as software (e.g., the program740) including one or more instructions that are stored in a storage medium (e.g., internal memory736or external memory738) that is readable by a machine (e.g., the electronic device701). For example, a processor (e.g., the processor720) of the machine (e.g., the electronic device701) may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Wherein, the term “non-transitory” simply means that the storage medium is a tangible device, and does not include a signal, but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)) or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., PlayStore™) or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store or a relay server. According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments, one or more of the above-described components may be omitted or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component.
In such a case, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program or another component may be carried out sequentially, in parallel, repeatedly or heuristically or one or more of the operations may be executed in a different order or omitted or one or more other operations may be added. While the disclosure has been particularly shown and described with reference to certain embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
DETAILED DESCRIPTION
Many of the innovations described herein are made with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding. It may be evident, however, that different innovations can be practiced without these specific details. In other instances, well-known structures and components are shown in block diagram form to facilitate describing the innovations.
FIG. 1A depicts a dollhouse view 100 of an example environment, such as a house, according to some embodiments. The dollhouse view 100 gives an overall view of the example environment captured by an environmental capture system (discussed herein). A user may interact with the dollhouse view 100 on a user system by toggling between different views of the example environment. For example, the user may interact with area 110 to trigger a floorplan view of the first floor of the house, as seen in FIG. 1B. In some embodiments, the user may interact with icons in the dollhouse view 100, such as icons 120, 130, and 140, to provide a walkthrough view (e.g., for a 3D walkthrough), a floorplan view, or a measurement view, respectively.
FIG. 1B depicts a floorplan view 200 of the first floor of the house according to some embodiments. The floorplan view is a top-down view of the first floor of the house. The user may interact with areas of the floorplan view, such as the area 150, to trigger an eye-level view of a particular portion of the floorplan, such as a living room. An example of the eye-level view of the living room can be found in FIG. 2, which may be part of a virtual walkthrough. The user may interact with a portion of the floorplan 200 corresponding to the area 150 of FIG. 1B. The user may move a view around the room as if the user was actually in the living room. In addition to a horizontal 360° view of the living room, the user may also view or navigate the floor or ceiling of the living room. Furthermore, the user may traverse the living room to other parts of the house by interacting with particular areas of the portion of the floorplan 200, such as areas 210 and 220. When the user interacts with the area 220, the environmental capture system (ECS) may provide a walking-style transition from the area of the house substantially corresponding to the region of the house depicted by area 150 to an area of the house substantially corresponding to the region of the house depicted by the area 220.
FIG. 3 depicts one example of an environmental capture system 300 according to some embodiments. The environmental capture system 300 includes a lens 310, a housing 320, a mount attachment 330, and a moveable cover 340. When in use, the environmental capture system 300 may be positioned in an environment such as a room. The environmental capture system 300 may be positioned on a support (e.g., tripod). The moveable cover 340 may be moved to reveal a lidar and a mirror that is capable of spinning. Once activated, the environmental capture system 300 may take a burst of images and then turn using a motor. The environmental capture system 300 may turn on the mount attachment 330. While turning, the lidar may take measurements (while turning, the environmental capture system may not take images). Once directed to a new direction, the environmental capture system may take another burst of images before turning to the next direction. For example, once positioned, a user may command the environmental capture system 300 to start a sweep.
The sweep may be as follows:
(1) Exposure estimation and then take HDR RGB images; rotate 90 degrees capturing depth information, also referred to herein interchangeably as depth data.
(2) Exposure estimation and then take HDR RGB images; rotate 90 degrees capturing depth data.
(3) Exposure estimation and then take HDR RGB images; rotate 90 degrees capturing depth data.
(4) Exposure estimation and then take HDR RGB images; rotate 90 degrees (total 360 degrees) capturing depth data.
For each burst, there may be any number of images at different exposures. The environmental capture system may blend any number of the images of a burst together while waiting for another frame and/or waiting for the next burst. The lens 310 may be a part of a lens assembly. Further details of the lens assembly are provided in connection with the description of FIG. 7. The lens 310 is strategically placed at a center of an axis of rotation 305 of the environmental capture system 300. In this example, the axis of rotation 305 is on the x-y plane. By placing the lens 310 at the center of the axis of rotation 305, a parallax effect may be eliminated or reduced. Parallax is an error that arises due to the rotation of the image capture device about a point that is not a no-parallax point (NPP). In this example, the NPP can be found in the center of the lens's entrance pupil. In some embodiments, the environmental capture system 300 may include a motor for turning the environmental capture system 300 about the mount attachment 330. In some embodiments, a motorized mount may move the environmental capture system 300 along a horizontal axis, a vertical axis, or both. In some embodiments, the motorized mount may rotate or move in the x-y plane. The use of a mount attachment 330 may allow for the environmental capture system 300 to be coupled to a motorized mount, tripod, or the like to stabilize the environmental capture system 300 to reduce or minimize shaking. In another example, the mount attachment 330 may be coupled to a motorized mount that allows the environmental capture system 300 to rotate at a steady, known speed, which aids the lidar in determining the (x, y, z) coordinates of each laser pulse of the lidar.
FIG. 4 depicts a rendering of an environmental capture system 400 in some embodiments. The rendering shows the environmental capture system 400 (which may be an example of the environmental capture system 300 of FIG. 3) from a variety of views, such as a front view 410, a top view 420, a side view 430, and a back view 440. In these renderings, the environmental capture system 400 may include an optional hollow portion depicted in the side view 430. The lens depicted on the front view 410 may be a part of a lens assembly. Like the environmental capture system 300, the lens of the environmental capture system 400 is strategically placed at a center of an axis of rotation. The lens may include a large field of view. In various embodiments, the lens depicted on the front view 410 is recessed and the housing is flared such that the wide-angle lens is directly at the no-parallax point (e.g., directly above a mid-point of the mount and/or motor) but still may take images without interference from the housing. In view 430, a mirror 450 is revealed. A lidar may emit a laser pulse to the mirror (for example, in a direction that is opposite or orthogonal about a substantially vertical axis to the lens view).
The laser pulse may hit the mirror 450, which may be angled (e.g., at a 90 degree angle). The mirror 450 may be coupled to an internal motor that turns the mirror such that the laser pulses of the lidar may be emitted and/or received at many different angles around the environmental capture system 400.
FIG. 5 is a depiction of the laser pulses from the lidar about the environmental capture system 400 in some embodiments. In this example, the laser pulses are emitted at the spinning mirror 450. The laser pulses may be emitted and received perpendicular to a horizontal axis 602 (see FIG. 6A) of the environmental capture system 400. The mirror 450 may be angled such that laser pulses from the lidar are directed away from the environmental capture system 400. In some examples, the angle of the angled surface of the mirror may be 90 degrees or at or between 60 degrees and 120 degrees. In some embodiments, while the environmental capture system 400 is stationary and in operation, the environmental capture system 400 may take a burst of images through the lens. The environmental capture system 400 may turn on a horizontal motor between bursts of images. While turning along the mount, the lidar of the environmental capture system 400 may emit and/or receive laser pulses which hit the spinning mirror 450. The lidar may generate depth signals from the received laser pulse reflections and/or generate depth data. In some embodiments, the depth data may be associated with coordinates about the environmental capture system 400. Similarly, pixels or parts of images may be associated with the coordinates about the environmental capture system 400 to enable the creation of the 3D visualization (e.g., an image from different directions, a 3D walkthrough, or the like) to be generated using the images and the depth data. As shown in FIG. 5, the lidar pulses may be blocked by the bottom portion of the environmental capture system 400. It will be appreciated that the mirror 450 may spin consistently while the environmental capture system 400 moves about the mount, or the mirror 450 may spin more slowly when the environmental capture system 400 starts to move and again when the environmental capture system 400 slows to stop (e.g., maintaining a constant speed between the starting and stopping of the mount motor). The lidar may receive depth data from the pulses. Due to movement of the environmental capture system 400 and/or the increase or decrease of the speed of the mirror 450, the density of depth data about the environmental capture system 400 may be inconsistent (e.g., more dense in some areas and less dense in others).
FIG. 6A depicts a side view of the environmental capture system 400. In this view, the mirror 450 is depicted and may spin about a horizontal axis. The pulse 604 may be emitted by the lidar at the spinning mirror 450 and may be emitted perpendicular to the horizontal axis 602. Similarly, the pulse 604 may be received by the lidar in a similar manner. Although the lidar pulses are discussed as being perpendicular to the horizontal axis 602, it will be appreciated that the lidar pulses may be at any angle relative to the horizontal axis 602 (e.g., the mirror angle may be at any angle, including between 60 and 120 degrees). In various embodiments, the lidar emits pulses opposite a front side (e.g., front side 604) of the environmental capture system 400 (e.g., in a direction opposite of the center of the field of view of the lens or towards the back side 606). As discussed herein, the environmental capture system 400 may turn about vertical axis 608.
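As a non-limiting illustration of how a depth data point may be associated with coordinates about the environmental capture system 400, the following sketch converts one lidar range sample into an (x, y, z) point from the mount azimuth about the vertical axis 608 and the angle of the spinning mirror 450 about the horizontal axis 602; the axis conventions, the mirror offset value, and the function names are assumptions rather than the disclosed implementation.

# Illustrative sketch only: associates one lidar range sample with (x, y, z)
# coordinates about the environmental capture system 400. The axis
# conventions, mirror offset, and function names are assumptions.

import math

def lidar_point(range_m: float, mount_azimuth_deg: float,
                mirror_angle_deg: float, mirror_offset_m: float = 0.05):
    """Return an assumed (x, y, z) point for one laser pulse return.

    mount_azimuth_deg: rotation of the housing about the vertical axis 608.
    mirror_angle_deg:  rotation of the mirror 450 about the horizontal axis
                       602; 0 degrees is assumed to direct the pulse upward.
    mirror_offset_m:   assumed distance from the mount's rotation axis to the
                       point where the mirror 450 redirects the pulse (this
                       offset produces the blind cylinder noted for FIG. 6A).
    """
    az = math.radians(mount_azimuth_deg)
    el = math.radians(mirror_angle_deg)
    # The scan "disk" is perpendicular to the horizontal axis 602, which is
    # assumed to point along the heading of the housing.
    heading = (math.cos(az), math.sin(az), 0.0)
    in_plane = (-math.sin(az), math.cos(az), 0.0)
    direction = (in_plane[0] * math.sin(el),
                 in_plane[1] * math.sin(el),
                 math.cos(el))
    # The pulse origin is the mirror, assumed to sit behind the rotation axis.
    origin = (-mirror_offset_m * heading[0], -mirror_offset_m * heading[1], 0.0)
    return (origin[0] + range_m * direction[0],
            origin[1] + range_m * direction[1],
            origin[2] + range_m * direction[2])

# Example: a 3.2 m return while facing 90 degrees with the mirror 45 degrees
# off vertical.
print(lidar_point(3.2, 90.0, 45.0))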
In various embodiments, the environmental capture system 400 takes images and then turns 90 degrees, thereby taking a fourth set of images when the environmental capture system 400 completes turning 270 degrees from the original starting position where the first set of images was taken. As such, the environmental capture system 400 may generate four sets of images between turns totaling 270 degrees (e.g., assuming that the first set of images was taken before the initial turning of the environmental capture system 400). In various embodiments, the images from a single sweep (e.g., the four sets of images) of the environmental capture system 400 (e.g., taken in a single full rotation or a rotation of 270 degrees about the vertical axis) are sufficient, along with the depth data acquired during the same sweep, to generate the 3D visualization without any additional sweeps or turns of the environmental capture system 400. It will be appreciated that, in this example, lidar pulses are emitted and directed by the spinning mirror in a position that is distant from the point of rotation of the environmental capture system 400 (e.g., the lens may be at the no-parallax point while the mirror may be in a position behind the lens relative to the front of the environmental capture system 400). Since the lidar pulses are directed by the mirror 450 at a position that is off the point of rotation, the lidar may not receive depth data from a cylinder running from above the environmental capture system 400 to below the environmental capture system 400. In this example, the radius of the cylinder (e.g., the cylinder being a lack of depth information) may be measured from the center of the point of rotation of the motor mount to the point where the mirror 450 directs the lidar pulses. Further, in FIG. 6B, cavity 610 is depicted. In this example, the environmental capture system 400 includes the spinning mirror within the body of the housing of the environmental capture system 400. There is a cut-out section from the housing. The laser pulses may be reflected by the mirror out of the housing and then reflections may be received by the mirror and directed back to the lidar to enable the lidar to create depth signals and/or depth data. The base of the body of the environmental capture system 400 below the cavity 610 may block some of the laser pulses. The cavity 610 may be defined by the base of the environmental capture system 400 and the rotating mirror. As depicted in FIG. 6B, there may still be a space between an edge of the angled mirror and the housing of the environmental capture system 400 containing the lidar. In various embodiments, the lidar is configured to stop emitting laser pulses if the speed of rotation of the mirror drops below a rotating safety threshold (e.g., if there is a failure of the motor spinning the mirror or the mirror is held in place). In this way, the lidar may be configured for safety and reduce the possibility that a laser pulse will continue to be emitted in the same direction (e.g., at a user's eyes).
FIG. 6B depicts a view from above the environmental capture system 400 in some embodiments. In this example, the front of the environmental capture system 400 is depicted with the lens recessed and directly above the center of the point of rotation (e.g., above the center of the mount). The front of the camera is recessed for the lens and the front of the housing is flared to allow the field of view of the image sensor to be unobstructed by the housing. The mirror 450 is depicted as pointing upwards.
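As a non-limiting sketch of the sweep sequence described above (an exposure-estimated HDR burst at each of four headings, with depth data collected while rotating 90 degrees between headings), the following routine illustrates the ordering of operations; the device interface, the blending step, and all function names are hypothetical.

# Illustrative sketch only: the ordering of one sweep (four HDR bursts with
# 90-degree turns in between). The `device` interface, the blending step,
# and all function names are hypothetical.

def blend_burst(burst):
    """Placeholder HDR blend: pixel-wise mean of differently exposed frames,
    assuming each frame is a flat sequence of pixel values."""
    return [sum(values) / len(values) for values in zip(*burst)]

def run_sweep(device, headings=(0, 90, 180, 270)):
    """Capture one sweep: an image burst at each heading, depth while turning."""
    images, depth_points = [], []
    for heading in headings:
        # Exposure estimation, then a burst of HDR RGB images at this heading.
        exposures = device.estimate_exposures()
        burst = [device.capture_image(ev) for ev in exposures]
        # The burst may be blended while waiting for the next burst.
        images.append((heading, blend_burst(burst)))
        # Rotate 90 degrees about the vertical axis while the lidar and the
        # spinning mirror collect depth data; the last turn completes 360.
        depth_points.extend(device.rotate_and_scan(degrees=90))
    # Bundle imagery, depth data, and sweep properties (GPS, timestamp).
    return {"images": images, "depth": depth_points,
            "gps": device.gps_fix(), "timestamp": device.clock()}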
FIG. 7 depicts a rendering of the components of one example of the environmental capture system 300 according to some embodiments. The environmental capture system 700 includes a front cover 702, a lens assembly 704, a structural frame 706, a lidar 708, a front housing 710, a mirror assembly 712, a GPS antenna 714, a rear housing 716, a vertical motor 718, a display 720, a battery pack 722, a mount 724, and a horizontal motor 726. In various embodiments, the environmental capture system 700 may be configured to scan, align, and create a 3D mesh outdoors in full sun as well as indoors. This removes a barrier to adoption found in other systems, which are indoor-only tools. The front cover 702, the front housing 710, and the rear housing 716 make up a part of the housing. In one example, the front cover may have a width, w, of 75 mm. The lens assembly 704 may include a camera lens that focuses light onto an image capture device. The image capture device may capture an image of a physical environment. The user may place the environmental capture system 700 to capture one portion of a floor of a building, to obtain a panoramic image of the one portion of the floor. The environmental capture system 700 may be moved to another portion of the floor of the building to obtain a panoramic image of another portion of the floor. In one example, the depth of field of the image capture device is 0.5 meters to infinity. FIG. 8A depicts example lens dimensions in some embodiments and FIG. 8B depicts an example lens design specification in some embodiments. In some embodiments, the image capture device is a complementary metal-oxide-semiconductor (CMOS) image sensor. In various embodiments, the image capture device is a charged coupled device (CCD). In one example, the image capture device is a red-green-blue (RGB) sensor. In one embodiment, the image capture device is an infrared (IR) sensor. The lens assembly 704 may give the image capture device a wide field of view. In some examples, the lens assembly 704 has an HFOV of at least 148 degrees and a VFOV of at least 94 degrees. In one example, the lens assembly 704 has a field of view of 150°, 180°, or within a range of 145° to 180°. Image capture of a 360° view around the environmental capture system 700 may be obtained, in one example, with three or four separate image captures from the image capture device of the environmental capture system 700. The output of the lens assembly 704 may be a digital image of one area of the physical environment. The images captured by the lens assembly 704 may be stitched together to form a 2D panoramic image of the physical environment. A 3D panoramic image may be generated by combining the depth data captured by the lidar 708 with the 2D panoramic image generated by stitching together multiple images from the lens assembly 704. In some embodiments, the images captured by the environmental capture system 400 are stitched together by an image processing system, such as image stitching and processor system 1105, by user system 1110, and/or by the environmental capture system 400. In various embodiments, the environmental capture system 400 generates a “preview” or “thumbnail” version of a 2D panoramic image. The preview or thumbnail version of the 2D panoramic image may be presented on a user system 1110 such as an iPad, personal computer, smartphone, or the like. In some embodiments, the environmental capture system 400 may generate a mini-map of a physical environment representing an area of the physical environment.
In various embodiments, the image processing system generates the mini-map representing the area of the physical environment. The images captured by the lens assembly 704 may include capture device location data that identifies or indicates a capture location of a 2D image. For example, in some implementations, the capture device location data can include global positioning system (GPS) coordinates associated with a 2D image. In other implementations, the capture device location data can include position information indicating a relative position of the capture device (e.g., the camera and/or a 3D sensor) to its environment, such as a relative or calibrated position of the capture device to an object in the environment, another camera in the environment, another device in the environment, or the like. In some implementations, this type of location data can be determined by the capture device (e.g., the camera and/or a device operatively coupled to the camera comprising positioning hardware and/or software) in association with the capture of an image and received with the image. The placement of the lens assembly 704 is not arbitrary. By placing the lens assembly 704 at the center, or substantially at the center, of the axis of rotation, the parallax effect may be reduced. In some embodiments, the structural frame 706 holds the lens assembly 704 and the lidar 708 in a particular position and may help protect the components of this example of the environmental capture system. The structural frame 706 may serve to aid in rigidly mounting the lidar 708 and placing the lidar 708 in a fixed position. Furthermore, the fixed position of the lens assembly 704 and the lidar 708 enables a fixed relationship for aligning the depth data with the image information to assist with creating the 3D images. The 2D image data and depth data captured in the physical environment can be aligned relative to a common 3D coordinate space to generate a 3D model of the physical environment. In various embodiments, the lidar 708 captures depth information of a physical environment. When the user places the environmental capture system 700 in one portion of a floor of a building, the lidar 708 may obtain depth information of objects. The lidar 708 may include an optical sensing module that can measure the distance to a target or objects in a scene by utilizing pulses from a laser to irradiate the target or scene and measuring the time it takes photons to travel to the target and return to the lidar 708. The measurement may then be transformed into a grid coordinate system by using information derived from a horizontal drive train of the environmental capture system 700. In some embodiments, the lidar 708 may return depth data points every 10 microseconds (usec) with a timestamp (of an internal clock). The lidar 708 may sample a partial sphere (small holes at top and bottom) every 0.25 degrees. In some embodiments, with a data point every 10 usec and 0.25 degrees, there may be 14.40 milliseconds per "disk" of points and 1440 disks to make a sphere, which is nominally 20.7 seconds. One advantage of utilizing lidar at a lower wavelength (e.g., 905 nm, 900-940 nm, or the like) is that it allows the environmental capture system 700 to determine depth information for an outdoor environment or an indoor environment with bright light.
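The timing figures above follow directly from the stated sampling rate and angular spacing; the short calculation below simply reproduces them (a worked check, not additional functionality of the described system).

    POINT_PERIOD_S = 10e-6        # one depth data point every 10 usec
    ANGULAR_STEP_DEG = 0.25       # one sample every 0.25 degrees

    points_per_disk = int(360 / ANGULAR_STEP_DEG)              # 1440 points per "disk"
    seconds_per_disk = points_per_disk * POINT_PERIOD_S        # 0.0144 s, i.e., 14.40 ms
    disks_per_sphere = 1440                                    # stated number of disks per sphere
    seconds_per_sphere = disks_per_sphere * seconds_per_disk   # 20.736 s, nominally 20.7 s

    print(points_per_disk, seconds_per_disk, seconds_per_sphere)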
The placement of the lens assembly 704 and the lidar 708 may allow the environmental capture system 700 or a digital device in communication with the environmental capture system 700 to generate a 3D panoramic image using the depth data from the lidar 708 and the lens assembly 704. In some embodiments, the 2D and 3D panoramic images are not generated on the environmental capture system 400. The output of the lidar 708 may include attributes associated with each laser pulse sent by the lidar 708. The attributes include the intensity of the laser pulse, the number of returns, the current return number, a classification point, RGB values, GPS time, the scan angle, the scan direction, or any combination thereof. The depth of field may be (0.5 m; infinity), (1 m; infinity), or the like. In some embodiments, the depth of field is 0.2 m to 1 m and infinity. In some embodiments, the environmental capture system 700 captures four separate RGB images using the lens assembly 704 while the environmental capture system 700 is stationary. In various embodiments, the lidar 708 captures depth data in four different instances while the environmental capture system 700 is in motion, moving from one RGB image capture position to another RGB image capture position. In one example, the 3D panoramic image is captured with a 360° rotation of the environmental capture system 700, which may be called a sweep. In various embodiments, the 3D panoramic image is captured with a less than 360° rotation of the environmental capture system 700. The output of the sweep may be a sweep list (SWL), which includes image data from the lens assembly 704, depth data from the lidar 708, and properties of the sweep, including the GPS location and a timestamp of when the sweep took place. In various embodiments, a single sweep (e.g., a single 360 degree turn of the environmental capture system 700) captures sufficient image and depth information to generate a 3D visualization (e.g., by the digital device in communication with the environmental capture system 700 that receives the imagery and depth data from the environmental capture system 700 and creates the 3D visualization using only the imagery and depth data from the environmental capture system 700 captured in the single sweep). In some embodiments, the images captured by the environmental capture system 400 may be blended, stitched together, and combined with the depth data from the lidar 708 by an image stitching and processing system discussed herein. In various embodiments, the environmental capture system 400 and/or an application on the user system 1110 may generate a preview or thumbnail version of a 3D panoramic image. The preview or thumbnail version of the 3D panoramic image may be presented on the user system 1110 and may have a lower image resolution than the 3D panoramic image generated by the image processing system. After the lens assembly 704 and the lidar 708 capture the images and depth data of the physical environment, the environmental capture system 400 may generate a mini-map representing an area of the physical environment that has been captured by the environmental capture system 400. In some embodiments, the image processing system generates the mini-map representing the area of the physical environment. After capturing images and depth data of a living room of a home using the environmental capture system 400, the environmental capture system 400 may generate a top-down view of the physical environment.
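The sweep list described above can be pictured as a simple record grouping the captured imagery, the depth data, and the sweep properties. The sketch below is a hypothetical container written for illustration only; the field names and types are assumptions rather than the actual SWL format.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class SweepList:
        # Hypothetical stand-in for the sweep list (SWL) described in the text.
        images: List[bytes] = field(default_factory=list)   # RGB captures from the lens assembly
        depth_points: List[Tuple[float, float, float]] = field(default_factory=list)  # lidar (x, y, z)
        gps_location: Tuple[float, float] = (0.0, 0.0)       # latitude, longitude of the sweep
        timestamp: float = 0.0                               # when the sweep took place

    swl = SweepList(gps_location=(37.7749, -122.4194), timestamp=1700000000.0)
    swl.images.append(b"compressed image bytes")
    swl.depth_points.append((1.2, 0.4, 2.0))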
A user may use this information to determine areas of the physical environment in which the user has not captured or generated 3D panoramic images. In one embodiment, the environmental capture system 700 may interleave image capture by the image capture device of the lens assembly 704 with depth information capture by the lidar 708. For example, the image capture device may capture an image from the physical environment, and then the lidar 708 obtains depth information from the physical environment. Once the lidar 708 obtains depth information, the image capture device may move on to capture an image at another location in the physical environment, and then the lidar 708 obtains depth information from another portion, thereby interleaving image capture and depth information capture. In some embodiments, where the lidar 708 has a field of view of at least 145°, depth information of all objects in a 360° view of the environmental capture system 700 may be obtained by the environmental capture system 700 in three or four scans. In another example, the lidar 708 may have a field of view of at least 150°, 180°, or between 145° and 180°. An increase in the field of view of the lens reduces the amount of time required to obtain visual and depth information of the physical environment around the environmental capture system 700. The lidar 708 may utilize the mirror assembly 712 to direct the laser at different scan angles. In some embodiments, the mirror assembly 712 may be a dielectric mirror with a hydrophobic coating or layer. The mirror assembly 712 may be coupled to the vertical motor 718 that rotates the mirror assembly 712 when in use. By capturing images with multiple levels of exposure and using a 900 nm based lidar 708, the environmental capture system 700 may capture images outside in bright sunlight or inside with bright lights or sunlight glare from windows. In some embodiments, the mount 724 provides a connector for the environmental capture system 700 to connect to a platform such as a tripod or mount. The horizontal motor 726 may rotate the environmental capture system 700 in an x-y plane. In some embodiments, the horizontal motor 726 may provide information to a grid coordinate system to determine (x, y, z) coordinates associated with each laser pulse. In various embodiments, due to the broad field of view of the lens, the positioning of the lens around the axis of rotation, and the lidar device, the horizontal motor 726 may enable the environmental capture system 700 to scan quickly. In various embodiments, the mount 724 may include a quick release adapter. The holding torque may be, for example, >2.0 Nm, and the durability of the capture operation may be up to or beyond 70,000 cycles. For example, the environmental capture system 700 may enable construction of a 3D mesh of a standard home with a distance between sweeps greater than 8 m. A time to capture, process, and align an indoor sweep may be under 45 seconds. In one example, a time frame from the start of a sweep capture to when the user can move the environmental capture system 700 may be less than 15 seconds. In various embodiments, these components provide the environmental capture system 700 the ability to align scan positions outdoors as well as indoors and therefore create seamless walk-through experiences between indoor and outdoor environments (this may be a high priority for hotels, vacation rentals, real estate, construction documentation, CRE, and as-built modeling and verification).
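A minimal sketch of the interleaving described above is given below. The camera, lidar, and turntable objects and their methods are assumed interfaces invented for illustration; they are not the actual firmware of the described system.

    def perform_sweep(camera, lidar, turntable, stops=4):
        # Hypothetical interleaving loop: capture an image while stationary,
        # then collect depth data while rotating to the next stop.
        sweep = []
        for _ in range(stops):
            image = camera.capture_burst()            # stationary RGB capture at this stop
            lidar.start()
            turntable.rotate_degrees(360.0 / stops)   # depth data accumulates during the turn
            depth = lidar.stop_and_read()
            sweep.append((image, depth))
        return sweep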
The environmental capture system 700 may also create an "outdoor dollhouse" or outdoor mini-map. The environmental capture system 700, as shown herein, may also improve the accuracy of the 3D reconstruction, mainly from a measurement perspective. The ability for the user to tune the scan density may also be a plus. These components may also enable the environmental capture system 700 to capture wide empty spaces (e.g., longer range). Generating a 3D model of wide empty spaces may require the environmental capture system to scan and capture 3D data and depth data from a greater distance range than generating a 3D model of smaller spaces. In various embodiments, these components enable the environmental capture system 700 to align SWLs and reconstruct the 3D model in a similar way for indoor as well as outdoor use. These components may also enable the environmental capture system 700 to perform geo-localization of 3D models (which may ease integration with Google street view and help align outdoor panoramas if needed). In some embodiments, the image and depth data may then be sent to a capture application (e.g., on a device in communication with the environmental capture system 700, such as a smart device or an image capture system on a network). In some embodiments, the environmental capture system 700 may send the image and depth data to the image processing system for processing and generating the 2D panoramic image or the 3D panoramic image. In various embodiments, the environmental capture system 700 may generate a sweep list of the captured RGB images and the depth data from a 360-degree revolution of the environmental capture system 700. The sweep list may be sent to the image processing system for stitching and aligning. The output of the sweep may be a SWL, which includes image data from the lens assembly 704, depth data from the lidar 708, and properties of the sweep, including the GPS location and a timestamp of when the sweep took place. FIG. 9A depicts a block diagram 900 of an example of an environmental capture system according to some embodiments. The block diagram 900 includes a power source 902, a power converter 904, an input/output (I/O) printed circuit board assembly (PCBA) 906, a system on module (SOM) PCBA 908, a user interface 910, a lidar 912, a mirror brushless direct current (BLDC) motor 914, a drive train 916, a wide FOV (WFOV) lens 918, and an image sensor 920. The power converter 904 may change the voltage level from the power source 902 to a lower or higher voltage level so that it may be utilized by the electronic components of the environmental capture system. The environmental capture system may utilize 4×18650 Li-Ion cells in a 4S1P configuration, that is, four cells connected in series in a single parallel string. In some embodiments, the I/O PCBA 906 may include elements that provide Wi-Fi, GPS, Bluetooth, an inertial measurement unit (IMU), motor drivers, and microcontrollers. In some embodiments, the I/O PCBA 906 includes a microcontroller for controlling the horizontal motor and encoding horizontal motor controls as well as controlling the vertical motor and encoding vertical motor controls. The SOM PCBA 908 may include a central processing unit (CPU) and/or graphics processing unit (GPU), memory, and a mobile interface. The SOM PCBA 908 may control the lidar 912, the image sensor 920, and the I/O PCBA 906. The SOM PCBA 908 may determine the (x, y, z) coordinates associated with each laser pulse of the lidar 912 and store the coordinates in a memory component of the SOM PCBA 908.
In some embodiments, the SOM PCBA 908 may store the coordinates in the image processing system of the environmental capture system 400. In addition to the coordinates associated with each laser pulse, the SOM PCBA 908 may determine additional attributes associated with each laser pulse, including the intensity of the laser pulse, the number of returns, the current return number, a classification point, RGB values, GPS time, the scan angle, and the scan direction. The user interface 910 may include physical buttons or switches with which the user may interact. The buttons or switches may provide functions such as turning the environmental capture system on and off, scanning a physical environment, and others. In some embodiments, the user interface 910 may include a display such as the display 720 of FIG. 7. The SOM PCBA 908 may determine the coordinates based on the location of the drive train 916. In various embodiments, the lidar 912 may include one or more lidar devices. Multiple lidar devices may be utilized to increase the lidar resolution. In some embodiments, the drive train 916 includes a vertical monogon mirror and motor. In this example, the drive train 916 may include a BLDC motor, an external Hall effect sensor, a magnet (paired with the Hall effect sensor), a mirror bracket, and a mirror. The placement of the components of the environmental capture system is such that the lens assembly and the lidar are substantially placed at a center of an axis of rotation. This may reduce the image parallax that occurs when an image capture system is not placed at the center of the axis of rotation. An image capture device may include the WFOV lens 918 and the image sensor 920. The image sensor 920 may be a CMOS image sensor. In one embodiment, the image sensor 920 is a charge-coupled device (CCD). In some embodiments, the image sensor 920 is a red-green-blue (RGB) sensor. In one embodiment, the image sensor 920 is an IR sensor. FIG. 9B depicts a block diagram of an example SOM PCBA 908 of the environmental capture system according to some embodiments. The SOM PCBA 908 may include a communication component 922, a lidar control component 924, a lidar location component 926, a user interface component 928, a classification component 930, a lidar datastore 932, and a captured image datastore 934. In some embodiments, the communication component 922 may send and receive requests or data between any of the components of the SOM PCBA 908 and components of the environmental capture system of FIG. 9A. In various embodiments, the lidar control component 924 may control various aspects of the lidar. For example, the lidar control component 924 may send a control signal to the lidar 912 to start sending out laser pulses. The control signal sent by the lidar control component 924 may include instructions on the frequency of the laser pulses. In some embodiments, the lidar location component 926 may utilize GPS data to determine the location of the environmental capture system. In various embodiments, the lidar location component 926 utilizes the position of the mirror assembly to determine the scan angle and the (x, y, z) coordinates associated with each laser pulse. The lidar location component 926 may also utilize the IMU to determine the orientation of the environmental capture system. The user interface component 928 may facilitate user interaction with the environmental capture system. In some embodiments, the user interface component 928 may provide one or more user interface elements with which a user may interact.
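To illustrate the kind of transformation described above, the following hedged sketch converts a single lidar return into grid coordinates from its range, the mirror (vertical) angle, and the horizontal drive-train angle. The function name and the spherical-coordinate convention are assumptions for illustration, not the actual computation performed by the SOM PCBA 908.

    import math

    def pulse_to_xyz(range_m, vertical_deg, horizontal_deg):
        # Convert one lidar return into (x, y, z) grid coordinates using the
        # mirror (vertical) angle and the horizontal drive-train angle.
        v = math.radians(vertical_deg)
        h = math.radians(horizontal_deg)
        ground = range_m * math.cos(v)       # projection onto the horizontal plane
        return (ground * math.cos(h),        # x
                ground * math.sin(h),        # y
                range_m * math.sin(v))       # z

    print(pulse_to_xyz(3.0, 30.0, 45.0))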
The user interface provided by the user interface component 928 may be sent to the user system 1110. For example, the user interface component 928 may provide to the user system (e.g., a digital device) a visual representation of an area of a floorplan of a building. As the user places the environmental capture system in different parts of the story of the building to capture and generate 3D panoramic images, the environmental capture system may generate the visual representation of the floorplan. The user may place the environmental capture system in an area of the physical environment to capture and generate 3D panoramic images in that region of the house. Once the 3D panoramic image of the area has been generated by the image processing system, the user interface component may update the floorplan view with a top-down view of the living room area depicted in FIG. 1B. In some embodiments, the floorplan view 200 may be generated by the user system 1110 after a second sweep of the same home or floor of a building has been captured. The lidar datastore 932 may be any structure and/or structures suitable for storing captured lidar data (e.g., an active database, a relational database, a self-referential database, a table, a matrix, an array, a flat file, a document-oriented storage system, a non-relational No-SQL system, an FTS-management system such as Lucene/Solr, and/or the like). The image datastore 408 may store the captured lidar data. However, the lidar datastore 932 may be utilized to cache the captured lidar data in cases where the communication network 404 is non-functional. For example, in cases where the environmental capture system 400 and the user system 1110 are in a remote location with no cellular network or in a region with no Wi-Fi, the lidar datastore 932 may store the captured lidar data until they can be transferred to the image datastore 934. FIGS. 10A-10C depict a process for the environmental capture system 400 for taking images in some embodiments. As depicted in FIGS. 10A-10C, the environmental capture system 400 may take a burst of images at different exposures. A burst of images may be a set of images, each with a different exposure. The first image burst happens at time 0.0. The environmental capture system 400 may receive the first frame and then assess the frame while waiting for the second frame. FIG. 10A indicates that the first frame is blended before the second frame arrives. In some embodiments, the environmental capture system 400 may process each frame to identify pixels, color, and the like. Once the next frame arrives, the environmental capture system 400 may process the recently received frame and then blend the two frames together. In various embodiments, the environmental capture system 400 performs image processing to blend the sixth frame and further assess the pixels in the blended frame (e.g., the frame that may include elements from any number of the frames of the image burst). During the last step prior to or during movement (e.g., turning) of the environmental capture system 400, the environmental capture system 400 may optionally transfer the blended image from the graphics processing unit to CPU memory. The process continues in FIG. 10B. At the beginning of FIG. 10B, the environmental capture system 400 conducts another burst. The environmental capture system 400 may compress the blended frames and/or all or parts of the captured frames using JXR.
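The caching behavior described above can be sketched as a small buffer that holds points locally and forwards them once connectivity returns. The class below is a hypothetical illustration; the remote-store interface and method names are assumptions, not part of the described datastores.

    class LidarCache:
        # Hypothetical local cache standing in for the lidar datastore: points
        # are held on-device until the remote image datastore is reachable.
        def __init__(self, remote_store):
            self.remote = remote_store
            self.pending = []

        def record(self, point):
            self.pending.append(point)
            self.flush()

        def flush(self):
            if not self.remote.is_reachable():
                return                      # no Wi-Fi or cellular: keep caching locally
            while self.pending:
                self.remote.upload(self.pending.pop(0))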
Like FIG. 10A, a burst of images may be a set of images, each with a different exposure (the length of exposure for each frame in the set may be the same and in the same order as in the other bursts covered in FIGS. 10A and 10C). The second image burst happens at time 2 seconds. The environmental capture system 400 may receive the first frame and then assess the frame while waiting for the second frame. FIG. 10B indicates that the first frame is blended before the second frame arrives. In some embodiments, the environmental capture system 400 may process each frame to identify pixels, color, and the like. Once the next frame arrives, the environmental capture system 400 may process the recently received frame and then blend the two frames together. In various embodiments, the environmental capture system 400 performs image processing to blend the sixth frame and further assess the pixels in the blended frame (e.g., the frame that may include elements from any number of the frames of the image burst). During the last step prior to or during movement (e.g., turning) of the environmental capture system 400, the environmental capture system 400 may optionally transfer the blended image from the graphics processing unit to CPU memory. After turning, the environmental capture system 400 may continue the process by conducting another color burst (e.g., after turning 180 degrees) at about time 3.5 seconds. The environmental capture system 400 may compress the blended frames and/or all or parts of the captured frames using JXR. The burst of images may be a set of images, each with a different exposure (the length of exposure for each frame in the set may be the same and in the same order as in the other bursts covered in FIGS. 10A and 10C). The environmental capture system 400 may receive the first frame and then assess the frame while waiting for the second frame. FIG. 10B indicates that the first frame is blended before the second frame arrives. In some embodiments, the environmental capture system 400 may process each frame to identify pixels, color, and the like. Once the next frame arrives, the environmental capture system 400 may process the recently received frame and then blend the two frames together. In various embodiments, the environmental capture system 400 performs image processing to blend the sixth frame and further assess the pixels in the blended frame (e.g., the frame that may include elements from any number of the frames of the image burst). During the last step prior to or during movement (e.g., turning) of the environmental capture system 400, the environmental capture system 400 may optionally transfer the blended image from the graphics processing unit to CPU memory. The last burst happens at time 5 seconds in FIG. 10C. The environmental capture system 400 may compress the blended frames and/or all or parts of the captured frames using JXR. The burst of images may be a set of images, each with a different exposure (the length of exposure for each frame in the set may be the same and in the same order as in the other bursts covered in FIGS. 10A and 10B). The environmental capture system 400 may receive the first frame and then assess the frame while waiting for the second frame. FIG. 10C indicates that the first frame is blended before the second frame arrives. In some embodiments, the environmental capture system 400 may process each frame to identify pixels, color, and the like. Once the next frame arrives, the environmental capture system 400 may process the recently received frame and then blend the two frames together.
In various embodiments, the environmental capture system 400 performs image processing to blend the sixth frame and further assess the pixels in the blended frame (e.g., the frame that may include elements from any number of the frames of the image burst). During the last step prior to or during movement (e.g., turning) of the environmental capture system 400, the environmental capture system 400 may optionally transfer the blended image from the graphics processing unit to CPU memory. The dynamic range of an image capture device is a measure of how much light an image sensor can capture. The dynamic range is the difference between the darkest area and the brightest area of an image. There are many ways to increase the dynamic range of the image capture device, one of which is to capture multiple images of the same physical environment using different exposures. An image captured with a short exposure will capture brighter areas of the physical environment, while a long exposure will capture darker areas of the physical environment. In some embodiments, the environmental capture system may capture multiple images with six different exposure times. Some or all of the images captured by the environmental capture system are used to generate 2D images with high dynamic range (HDR). One or more of the captured images may be used for other functions such as ambient light detection, flicker detection, and the like. A 3D panoramic image of the physical environment may be generated based on four separate image captures of the image capture device and four separate depth data captures of the lidar device of the environmental capture system. Each of the four separate image captures may include a series of image captures of different exposure times. A blending algorithm may be used to blend the series of image captures with the different exposure times to generate one of four RGB image captures, which may be utilized to generate a 2D panoramic image. For example, the environmental capture system may be used to capture a 3D panoramic image of a kitchen. Images of one wall of the kitchen may include a window; an image captured with a shorter exposure may provide the view out the window but may leave the rest of the kitchen underexposed. In contrast, another image captured with a longer exposure may provide the view of the interior of the kitchen. The blending algorithm may generate a blended RGB image by blending the view out the window of the kitchen from one image with the rest of the kitchen's view from another image. In various embodiments, the 3D panoramic image may be generated based on three separate image captures of the image capture device and four separate depth data captures of the lidar device of the environmental capture system. In some embodiments, the number of image captures and the number of depth data captures may be the same. In one embodiment, the number of image captures and the number of depth data captures may be different. After capturing a first of a series of images with one exposure time, a blending algorithm receives the first of the series of images, calculates initial intensity weights for that image, and sets that image as a baseline image for combining the subsequently received images. In some embodiments, the blending algorithm may utilize a graphics processing unit (GPU) image processing routine such as a "blend_kernel" routine. The blending algorithm may receive subsequent images that may be blended with previously received images.
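A minimal sketch of this kind of exposure blending is shown below. It folds each incoming frame into a running weighted average using a simple well-exposedness weight; it is an illustrative stand-in assuming frames normalized to [0, 1], not the "blend_kernel" GPU routine referenced above.

    import numpy as np

    def blend_burst(frames):
        # frames: list of same-shape float32 arrays in [0, 1], one per exposure.
        acc = None
        wsum = None
        for frame in frames:
            w = 1.0 - np.abs(frame - 0.5) * 2.0    # favor well-exposed (mid-tone) pixels
            w = np.clip(w, 1e-3, None)
            if acc is None:
                acc, wsum = frame * w, w.copy()    # first frame becomes the baseline
            else:
                acc += frame * w                   # blend each new frame as it arrives
                wsum += w
        return acc / wsum

    # Example: blend three synthetic exposures of a 2x2 gray image.
    print(blend_burst([np.full((2, 2), v, dtype=np.float32) for v in (0.2, 0.5, 0.9)]))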
In some embodiments, the blending algorithm may utilize a variation of the blend_kernel GPU image processing routine. In one embodiment, the blending algorithm utilizes other methods of blending multiple images, such as determining the difference between the darkest and brightest parts, or contrast, of the baseline image to determine if the baseline image may be overexposed or underexposed. For example, a contrast value less than a predetermined contrast threshold means that the baseline image is overexposed or underexposed. In one embodiment, the contrast of the baseline image may be calculated by taking an average of the light intensity of the image or of a subset of the image. In some embodiments, the blending algorithm calculates an average light intensity for each row or column of the image. In some embodiments, the blending algorithm may determine a histogram of each of the images received from the image capture device and analyze the histogram to determine the light intensities of the pixels which make up each of the images. In various embodiments, the blending may involve sampling colors within two or more images of the same scene, including along objects and seams. If there is a significant difference in color between the two images (e.g., within a predetermined threshold of color, hue, brightness, saturation, and/or the like), a blending module (e.g., on the environmental capture system 400 or the user device 1110) may blend a region of a predetermined size in both images along the position where the difference occurs. In some embodiments, the greater the difference in color or image at a position in the image, the greater the amount of space around or near the position that may be blended. In some embodiments, after blending, the blending module (e.g., on the environmental capture system 400 or the user device 1110) may re-scan and sample colors along the image(s) to determine if there are other differences in image or color that exceed the predetermined threshold of color, hue, brightness, saturation, and/or the like. If so, the blending module may identify the portions within the image(s) and continue to blend that portion of the image. The blending module may continue to resample the images along the seam until there are no further portions of the images to blend (e.g., any differences in color are below the predetermined threshold(s)).
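The sample-then-blend loop described above could look roughly like the following sketch, which checks the color difference across a stitch line and feathers a small band only where the difference exceeds a threshold. The threshold, band width, and function name are assumptions for illustration rather than the described blending module.

    import numpy as np

    def blend_seam(pano, seam_col, threshold=0.08, width=8):
        # pano: HxWx3 float array in [0, 1]; seam_col: column index of the stitch line.
        lcol = pano[:, seam_col - 1].astype(np.float32)
        rcol = pano[:, seam_col].astype(np.float32)
        diff = np.abs(lcol - rcol).mean(axis=-1)                 # per-row color difference
        for row in np.where(diff > threshold)[0]:
            band = pano[row, seam_col - width: seam_col + width]
            alpha = np.linspace(0.0, 1.0, band.shape[0])[:, None]
            band[:] = band[0] * (1 - alpha) + band[-1] * alpha   # linear feather across the band
        return pano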
Similarly, or alternatively, the functionality of the 3D and panoramic capture and stitching system 1102 and the image stitching and processor system 1106 may be performed by the user system 1110 and/or the image stitching and processor system 1106. The 3D panoramic capture and stitching system 1102 may be utilized by a user to capture multiple 2D images of an environment, such as the inside of a building and/or the outside of the building. For example, the user may utilize the 3D and panoramic capture and stitching system 1102 to capture multiple 2D images of the first scene of the physical environment 1112 provided by the environmental capture system 400. The 3D and panoramic capture and stitching system 1102 may include an aligning and stitching system 1114. Alternately, the user system 1110 may include the aligning and stitching system 1114. The aligning and stitching system 1114 may be software, hardware, or a combination of both configured to provide guidance to the user of an image capture system (e.g., on the 3D and panoramic capture and stitching system 1102 or the user system 1110) and/or process images to enable improved panoramic pictures to be made (e.g., through stitching, aligning, cropping, and/or the like). The aligning and stitching system 1114 may be on a computer-readable media (described herein). In some embodiments, the aligning and stitching system 1114 may include a processor for performing functions. An example of the first scene of the physical environment 1112 may be any room, real estate, or the like (e.g., a representation of a living room). In some embodiments, the 3D and panoramic capture and stitching system 1102 is utilized to generate 3D panoramic images of indoor environments. The 3D panoramic capture and stitching system 1102 may, in some embodiments, be the environmental capture system 400 discussed with regard to FIG. 4. In some embodiments, the 3D panoramic capture and stitching system 1102 may be in communication with a device for capturing images and depth data as well as software (e.g., the environmental capture system 400). All or part of the software may be installed on the 3D panoramic capture and stitching system 1102, the user system 1110, the environmental capture system 400, or any combination thereof. In some embodiments, the user may interact with the 3D and panoramic capture and stitching system 1102 via the user system 1110. The 3D and panoramic capture and stitching system 1102 or the user system 1110 may obtain multiple 2D images. The 3D and panoramic capture and stitching system 1102 or the user system 1110 may obtain depth data (e.g., from a lidar device or the like). In various embodiments, an application on the user system 1110 (e.g., a smart device of the user such as a smartphone or tablet computer) or an application on the environmental capture system 400 may provide visual or auditory guidance to the user for taking images with the environmental capture system 400. Graphical guidance may include, for example, a floating arrow on a display of the environmental capture system 400 (e.g., on a viewfinder or LED screen on the back of the environmental capture system 400) to guide the user on where to position and/or point an image capture device. In another example, the application may provide audio guidance on where to position and/or point the image capture device. In some embodiments, the guidance may allow the user to capture multiple images of the physical environment without the help of a stabilizing platform such as a tripod.
In one example, the image capture device may be a personal device such as a smartphone, tablet, media tablet, laptop, or the like. The application may provide direction on the position for each sweep, to approximate the no-parallax point based on the position of the image capture device, location information from the image capture device, and/or a previous image from the image capture device. In some embodiments, the visual and/or auditory guidance enables the capture of images that can be stitched together to form panoramas without a tripod and without camera positioning information (e.g., indicating a location, position, and/or orientation of the camera from a sensor, GPS device, or the like). The aligning and stitching system 1114 may align or stitch 2D images (e.g., captured by the user system 1110 or the 3D panoramic capture and stitching system 1102) to obtain a 2D panoramic image. In some embodiments, the aligning and stitching system 1114 utilizes a machine learning algorithm to align or stitch multiple 2D images into a 2D panoramic image. The parameters of the machine learning algorithm may be managed by the aligning and stitching system 1114. For example, the 3D and panoramic capture and stitching system 1102 and/or the aligning and stitching system 1114 may recognize objects within the 2D images to aid in aligning the images into a 2D panoramic image. In some embodiments, the aligning and stitching system 1114 may utilize depth data and the 2D panoramic image to obtain a 3D panoramic image. The 3D panoramic image may be provided to the 3D and panoramic stitching system 1102 or the user system 1110. In some embodiments, the aligning and stitching system 1114 determines 3D/depth measurements associated with recognized objects within a 3D panoramic image and/or sends one or more 2D images, depth data, 2D panoramic image(s), and/or 3D panoramic image(s) to the image stitching and processor system 1106 to obtain a 2D panoramic image or a 3D panoramic image with a pixel resolution that is greater than the 2D panoramic image or the 3D panoramic image provided by the 3D and panoramic capture and stitching system 1102. The image stitching and processor system 1106 may process 2D images captured by the image capture device (e.g., the environmental capture system 400 or a user device such as a smartphone, personal computer, media tablet, or the like) and stitch them into a 2D panoramic image. The 2D panoramic image processed by the image stitching and processor system 1106 may have a higher pixel resolution than the panoramic image obtained by the 3D and panoramic capture and stitching system 1102. In some embodiments, the image stitching and processor system 1106 receives and processes the 3D panoramic image to create a 3D panoramic image with a pixel resolution that is higher than that of the received 3D panoramic image. The higher pixel resolution panoramic images may be provided to an output device with a higher screen resolution than the user system 1110, such as a computer screen, projector screen, and the like. In some embodiments, the higher pixel resolution panoramic images may provide the output device with a panoramic image in greater detail that may be magnified. The user system 1110 may facilitate communication between users and other associated systems. In some embodiments, the user system 1110 may be or include one or more mobile devices (e.g., smartphones, cell phones, smartwatches, or the like). The user system 1110 may include one or more image capture devices.
The one or more image capture devices can include, for example, RGB cameras, HDR cameras, video cameras, IR cameras, and the like. The 3D and panoramic capture and stitching system 1102 and/or the user system 1110 may include two or more capture devices arranged in relative positions to one another on or within the same mobile housing such that their collective fields of view span up to 360°. In some embodiments, pairs of image capture devices capable of generating stereo-image pairs (e.g., with slightly offset yet partially overlapping fields of view) can be used. For example, the user system 1110 may include two image capture devices with vertical stereo offset fields-of-view capable of capturing vertical stereo image pairs. In some embodiments, the user system 1110, the environmental capture system 400, or the 3D and panoramic capture and stitching system 1102 may generate and/or provide image capture position and location information. For example, the user system 1110 or the 3D and panoramic capture and stitching system 1102 may include an inertial measurement unit (IMU) to assist in determining position data in association with one or more image capture devices that capture the multiple 2D images. The user system 1110 may include a global positioning system (GPS) sensor to provide GPS coordinate information in association with the multiple 2D images captured by one or more image capture devices. In some embodiments, users may interact with the aligning and stitching system 1114 using a mobile application installed on the user system 1110. The 3D and panoramic capture and stitching system 1102 may provide images to the user system 1110. A user may utilize the aligning and stitching system 1114 on the user system 1110 to view images and previews. In various embodiments, the aligning and stitching system 1114 may be configured to provide or receive one or more 3D panoramic images from the 3D and panoramic capture and stitching system 1102 and/or the image stitching and processor system 1106. In some embodiments, the 3D and panoramic capture and stitching system 1102 may provide to the user system 1110 a visual representation of a portion of a floorplan of a building which has been captured by the 3D and panoramic capture and stitching system 1102. The user of the user system 1110 may navigate the space around the area and view different rooms of the house. In some embodiments, the user of the user system 1110 may display the 3D panoramic images, such as the example 3D panoramic image, as the image stitching and processor system 1106 completes the generation of the 3D panoramic image. In various embodiments, the user system 1110 generates a preview or thumbnail of the 3D panoramic image. The preview 3D panoramic image may have an image resolution that is lower than a 3D panoramic image generated by the 3D and panoramic capture and stitching system 1102. FIG. 12 is a block diagram of an example of the aligning and stitching system 1114 according to some embodiments. The aligning and stitching system 1114 includes a communication module 1202, an image capture position module 1204, a stitching module 1206, a cropping module 1208, a graphical cut module 1210, a blending module 1211, a 3D image generator 1214, a captured 2D image datastore 1216, a 3D panoramic image datastore 1218, and a guidance module 1220.
It may be appreciated that there may be any number of modules of the aligning and stitching system 1114 that perform one or more different functions as described herein. In some embodiments, the aligning and stitching system 1114 includes an image capture module configured to receive images from one or more image capture devices (e.g., cameras). The aligning and stitching system 1114 may also include a depth module configured to receive depth data from a depth device such as a lidar, if available. The communication module 1202 may send and receive requests, images, or data between any of the modules or datastores of the aligning and stitching system 1114 and components of the example environment 1100 of FIG. 11. Similarly, the aligning and stitching system 1114 may send and receive requests, images, or data across the communication network 1104 to any device or system. In some embodiments, the image capture position module 1204 may determine image capture device position data of an image capture device (e.g., a camera, which may be a stand-alone camera, smartphone, media tablet, laptop, or the like). Image capture device position data may indicate a position and orientation of an image capture device and/or lens. In one example, the image capture position module 1204 may utilize the IMU of the user system 1110, camera, digital device with a camera, or the 3D and panoramic capture and stitching system 1102 to generate position data of the image capture device. The image capture position module 1204 may determine the current direction, angle, or tilt of one or more image capture devices (or lenses). The image capture position module 1204 may also utilize the GPS of the user system 1110 or the 3D and panoramic capture and stitching system 1102. For example, when a user wants to use the user system 1110 to capture a 360° view of the physical environment, such as a living room, the user may hold the user system 1110 in front of them at eye level to start to capture one of multiple images which will eventually become a 3D panoramic image. To reduce the amount of parallax in the images and to capture images better suited for stitching and generating 3D panoramic images, it may be preferable if the one or more image capture devices rotate at the center of the axis of rotation. The aligning and stitching system 1114 may receive position information (e.g., from the IMU) to determine the position of the image capture device or lens. The aligning and stitching system 1114 may receive and store a field of view of the lens. The guidance module 1220 may provide visual and/or audio information regarding a recommended initial position of the image capture device. The guidance module 1220 may make recommendations for positioning the image capture device for subsequent images. In one example, the guidance module 1220 may provide guidance to the user to rotate and position the image capture device such that the image capture device rotates close to a center of rotation. Further, the guidance module 1220 may provide guidance to the user to rotate and position the image capture device such that subsequent images are substantially aligned based on characteristics of the field of view and/or image capture device. The guidance module 1220 may provide the user with visual guidance. For example, the guidance module 1220 may place markers or an arrow in a viewer or display on the user system 1110 or the 3D and panoramic capture and stitching system 1102. In some embodiments, the user system 1110 may be a smartphone or tablet computer with a display.
When taking one or more pictures, the guidance module 1220 may position one or more markers (e.g., different color markers or the same markers) on an output device and/or in a viewfinder. The user may then use the markers on the output device and/or viewfinder to align the next image. There are numerous techniques for guiding the user of the user system 1110 or the 3D and panoramic capture and stitching system 1102 to take multiple images for ease of stitching the images into a panorama. When taking a panorama from multiple images, the images may be stitched together. To improve the time, efficiency, and effectiveness of stitching the images together, with a reduced need for correcting artifacts or misalignments, the image capture position module 1204 and the guidance module 1220 may assist the user in taking multiple images in positions that improve the quality, time efficiency, and effectiveness of image stitching for the desired panorama. For example, after taking the first picture, the display of the user system 1110 may include two or more objects, such as circles. Two circles may appear to be stationary relative to the environment, and two circles may move with the user system 1110. When the two stationary circles are aligned with the two circles that move with the user system 1110, the image capture device and/or the user system 1110 may be aligned for the next image. In some embodiments, after an image is taken by an image capture device, the image capture position module 1204 may take a sensor measurement of the position of the image capture device (e.g., including orientation, tilt, and the like). The image capture position module 1204 may determine one or more edges of the image that was taken by calculating the location of the edge of the field of view based on the sensor measurement. Additionally, or alternatively, the image capture position module 1204 may determine one or more edges of the image by scanning the image taken by the image capture device, identifying objects within that image (e.g., using machine learning models discussed herein), determining one or more edges of the image, and positioning objects (e.g., circles or other shapes) at the edge of a display on the user system 1110. The image capture position module 1204 may display two objects within a display of the user system 1110 that indicate the positioning of the field of view for the next picture. These two objects may indicate positions in the environment that represent where there is an edge of the last image. The image capture position module 1204 may continue to receive sensor measurements of the position of the image capture device and calculate two additional objects in the field of view. The two additional objects may be the same width apart as the previous two objects. While the first two objects may represent an edge of the taken image (e.g., the far right edge of the image), the next two additional objects representing an edge of the field of view may be on the opposite edge (e.g., the far left edge of the field of view). By having the user physically align the first two objects on the edge of the image with the additional two objects on the opposite edge of the field of view, the image capture device may be positioned to take another image that can be more effectively stitched together without a tripod. This process can continue for each image until the user determines the desired panorama has been captured.
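As a hedged sketch of the underlying arithmetic (not the described guidance module itself), the helper below treats the right edge of the last image and the left edge of the current field of view as azimuths computed from the device yaw and the lens HFOV; the device is lined up for the next capture when the difference approaches zero. The function name and angle convention are illustrative assumptions.

    def edge_alignment_error(last_yaw_deg, current_yaw_deg, hfov_deg):
        # Right edge of the previous image versus left edge of the current field of view.
        prev_right_edge = last_yaw_deg + hfov_deg / 2.0
        curr_left_edge = current_yaw_deg - hfov_deg / 2.0
        # Wrap the difference into (-180, 180] so the error is a signed rotation.
        return ((prev_right_edge - curr_left_edge + 180.0) % 360.0) - 180.0

    print(edge_alignment_error(0.0, 148.0, 148.0))  # 0.0: edges meet, aligned for the next image
    print(edge_alignment_error(0.0, 120.0, 148.0))  # 28.0: rotate further before capturing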
Although multiple objects are discussed herein, it will be appreciated that the image capture position module 1204 may calculate the position of one or more objects for positioning the image capture device. The objects may be any shape (e.g., circular, oblong, square, emoji, arrows, or the like). In some embodiments, the objects may be of different shapes. In some embodiments, there may be a difference between the distance between the objects that represent the edge of a captured image and the distance between the objects of the field of view. The user may be guided to move forward or to move away to enable there to be sufficient distance between the objects. Alternately, the size of the objects in the field of view may change to match a size of the objects that represent an edge of a captured image as the image capture device approaches the correct position (e.g., by coming closer or farther away from a position that will enable the next image to be taken in a position that will improve stitching of the images). In some embodiments, the image capture position module 1204 may utilize objects in an image captured by the image capture device to estimate the position of the image capture device. For example, the image capture position module 1204 may utilize GPS coordinates to determine the geographical location associated with the image. The image capture position module 1204 may use the position to identify landmarks that may be captured by the image capture device. The image capture position module 1204 may include a 2D machine learning model to convert 2D images into 2D panoramic images. The image capture position module 1204 may include a 3D machine learning model to convert 2D images to 3D representations. In one example, a 3D representation may be utilized to display a three-dimensional walkthrough or visualization of an interior and/or exterior environment. The 2D machine learning model may be trained to stitch or assist in stitching two or more 2D images together to form a 2D panorama image. The 2D machine learning model may, for example, be a neural network trained with 2D images that include physical objects in the images as well as object identifying information to train the 2D machine learning model to identify objects in subsequent 2D images. The objects in the 2D images may assist in determining position(s) within a 2D image to assist in determining edges of the 2D image, warping in the 2D image, and alignment of the image. Further, the objects in the 2D images may assist in determining artifacts in the 2D image, blending of an artifact or border between two images, positions to cut images, and/or positions to crop the images. In some embodiments, the 2D machine learning model may, for example, be a neural network trained with 2D images that include depth information (e.g., from a lidar device or structured light device of the user system 1110 or the 3D and panoramic capture and stitching system 1102) of the environment as well as include physical objects in the images to identify the physical objects, the position of the physical objects, and/or the position of the image capture device/field of view. The 2D machine learning model may identify physical objects as well as their depth relative to other aspects of the 2D images to assist in the alignment and positioning of two 2D images for stitching (or to stitch the two 2D images). The 2D machine learning model may include any number of machine learning models (e.g., any number of models generated by neural networks or the like).
The 2D machine learning model may be stored on the 3D and panoramic capture and stitching system 1102, the image stitching and processor system 1106, and/or the user system 1110. In some embodiments, the 2D machine learning model may be trained by the image stitching and processor system 1106. The image capture position module 1204 may estimate the position of the image capture device (a position of the field of view of the image capture device) based on a seam between two or more 2D images from the stitching module 1206, the image warping from the cropping module 1208, and/or the graphical cut from the graphical cut module 1210. The stitching module 1206 may combine two or more 2D images, based on the seam between the two or more 2D images from the stitching module 1206, the image warping from the cropping module 1208, and/or a graphical cut, to generate a 2D panoramic image which has a field of view that is greater than the field of view of each of the two or more images. The stitching module 1206 may be configured to align or "stitch together" two different 2D images providing different perspectives of the same environment to generate a panoramic 2D image of the environment. For example, the stitching module 1206 can employ known or derived (e.g., using techniques described herein) information regarding the capture positions and orientations of respective 2D images to assist in stitching two images together. The stitching module 1206 may receive two 2D images. The first 2D image may have been taken immediately before the second image or within a predetermined period of time. In various embodiments, the stitching module 1206 may receive positioning information of the image capture device associated with the first image and then positioning information associated with the second image. The positioning information may be associated with an image based on positioning data from the IMU, the GPS, and/or information provided by the user at the time the image was taken. In some embodiments, the stitching module 1206 may utilize a 2D machine learning model for scanning both images to recognize objects within both images, including objects (or parts of objects) that may be shared by both images. For example, the stitching module 1206 may identify a corner, a pattern on a wall, furniture, or the like shared at opposite edges of both images. The stitching module 1206 may align edges of the two 2D images based on the positioning of the shared objects (or parts of objects), positioning data from the IMU, positioning data from the GPS, and/or information provided by the user and then combine the two edges of the images (i.e., "stitch" them together). In some embodiments, the stitching module 1206 may identify a portion of the two 2D images that overlap each other and stitch the images at the position that is overlapped (e.g., using the positioning data and/or the results of the 2D machine learning model). In various embodiments, the 2D machine learning model may be trained to use the positioning data from the IMU, positioning data from the GPS, and/or information provided by the user to combine or stitch the two edges of the images. In some embodiments, the 2D machine learning model may be trained to identify common objects in both 2D images to align and position the 2D images and then combine or stitch the two edges of the images.
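For comparison with the machine-learning approach described above, a conventional feature-based way to align and stitch two overlapping 2D images is sketched below using OpenCV. This is an illustrative alternative, not the described 2D machine learning model, and the parameter values are assumptions.

    import cv2
    import numpy as np

    def stitch_pair(img_a, img_b):
        # Detect features shared by both images, estimate a homography, and
        # warp img_b into img_a's frame before compositing side by side.
        orb = cv2.ORB_create(2000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
        src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img_a.shape[:2]
        canvas = cv2.warpPerspective(img_b, H, (w * 2, h))  # img_b rendered in img_a's frame
        canvas[:, :w] = img_a                               # naive composite; seams are blended later
        return canvas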
In further embodiments, the 2D machine learning model may be trained to use the positioning data and object recognition to align and position the 2D images and then stitch the two edges of the images together to form all or part of the panoramic 2D image. The stitching module 1206 may utilize depth information for the respective images (e.g., pixels in the respective images, objects in the respective images, or the like) to facilitate aligning the respective 2D images to one another in association with generating a single 2D panoramic image of the environment. The cropping module 1208 may resolve issues with two or more 2D images where the image capture device was not held in the same position when the 2D images were captured. For example, while capturing an image, the user may position the user system 1110 in a vertical position. However, while capturing another image, the user may position the user system at an angle. The resultant images may not be aligned and may suffer from parallax effects. Parallax effects may occur when foreground and background objects do not line up in the same way in the first image and the second image. The cropping module 1208 may utilize the 2D machine learning model (by applying positioning information, depth information, and/or object recognition) to detect changes in the position of the image capture device in two or more images and then measure the amount of change in position of the image capture device. The cropping module 1208 may warp one or multiple 2D images so that the images may be able to line up together to form a panoramic image when the images are stitched, while at the same time preserving certain characteristics of the images, such as keeping a straight line straight. The output of the cropping module 1208 may include the number of pixel columns and rows to offset each pixel of the image to straighten out the image. The amount of offset for each image may be outputted in the form of a matrix representing the number of pixel columns and pixel rows to offset each pixel of the image. In some embodiments, the cropping module 1208 may determine the amount of image warping to perform on one or more of the multiple 2D images captured by the image capture devices of the user system 1110 based on one or more image capture positions from the image capture position module 1204, the seam between two or more 2D images from the stitching module 1206, the graphical cut from the graphical cut module 1210, or the blending of colors from the blending module 1211. The graphical cut module 1210 may determine where to cut or slice one or more of the 2D images captured by the image capture device. For example, the graphical cut module 1210 may utilize the 2D machine learning model to identify objects in both images and determine that they are the same object. The image capture position module 1204, the cropping module 1208, and/or the graphical cut module 1210 may determine that the two images cannot be aligned, even if warped. The graphical cut module 1210 may utilize the information from the 2D machine learning model to identify sections of both images that may be stitched together (e.g., by cutting out a part of one or both images to assist in alignment and positioning). In some embodiments, the two 2D images may overlap in at least a portion of the physical world represented in the images. The graphical cut module 1210 may identify an object, such as the same chair, in both images.
However, the images of the chair may not line up, such that the resulting panoramic would be distorted and would not correctly represent the portion of the physical world, even after image capture positioning and image warping by the cropping module1208. The graphical cut module1210may select one of the two images of the chair to be the correct representation (e.g., based on misalignment, positioning, and/or artifacts of one image when compared to the other) and cut the chair from the image with misalignment, errors in positioning, and/or artifacts. The stitching module1206may subsequently stitch the two images together. The graphical cut module1210may try both combinations, for example, cutting the image of the chair from the first image and stitching the first image, minus the chair, to the second image, to determine which graphical cut generates a more accurate panoramic image. The output of the graphical cut module1210may be a location to cut one or more of the multiple 2D images corresponding to the graphical cut that generates the more accurate panoramic image. The graphical cut module1210may determine how to cut or slice one or more of the 2D images captured by the image capture device based on one or more image capture positions from the image capture position module1204, stitching, or seam between two or more 2D images from the stitching module1206, the image warping from the cropping module1208, and the graphical cut from the graphical cut module1210. The blending module1211may blend colors at the seams (e.g., stitching) between two images so that the seams are invisible. Variation in lighting and shadows may cause the same object or surface to be outputted in slightly different colors or shades. The blending module may determine the amount of color blending required based on one or more image capture positions from the image capture position module1204, stitching, image colors along the seams from both images, the image warping from the cropping module1208, and/or the graphical cut from the graphical cut module1210. In various embodiments, the blending module1211may receive a panorama from a combination of two 2D images and then sample colors along the seam of the two 2D images. The blending module1211may receive seam location information from the image capture position module1204to enable the blending module1211to sample colors along the seam and determine differences. If there is a significant difference in color along a seam between the two images (e.g., exceeding a predetermined threshold of color, hue, brightness, saturation, and/or the like), the blending module1211may blend a predetermined portion of both images along the seam at the position where there is the difference. In some embodiments, the greater the difference in color or image along the seam, the greater the amount of space along the seam of the two images that may be blended. In some embodiments, after blending, the blending module1211may re-scan and sample colors along the seam to determine if there are other differences in image or color that exceed the predetermined threshold of color, hue, brightness, saturation, and/or the like. If so, the blending module1211may identify the portions along the seam and continue to blend that portion of the image. The blending module1211may continue to resample the images along the seam until there are no further portions of the images to blend (e.g., any differences in color are below the predetermined threshold(s)). 
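By way of illustration only, the following is a minimal sketch of the iterative seam blending described above, assuming the panorama is an RGB NumPy array and the seam is a single vertical pixel column at index seam_x; the color threshold, band width, and pass count are hypothetical parameters and not values from this disclosure.

```python
# Minimal sketch of iterative seam blending (illustrative, not this disclosure's implementation).
import numpy as np

def blend_seam(pano: np.ndarray, seam_x: int,
               color_threshold: float = 12.0,  # hypothetical color-difference threshold
               band: int = 8,                  # hypothetical half-width of the blend band
               max_passes: int = 5) -> np.ndarray:
    pano = pano.astype(np.float32)
    for _ in range(max_passes):
        left = pano[:, seam_x - 1, :]
        right = pano[:, seam_x, :]
        # Per-row color difference across the seam (Euclidean distance in RGB).
        diff = np.linalg.norm(left - right, axis=1)
        rows = np.where(diff > color_threshold)[0]
        if rows.size == 0:
            break  # no remaining differences above the threshold
        left_ref = pano[rows, seam_x - 1, :].copy()
        right_ref = pano[rows, seam_x, :].copy()
        # Cross-fade a band of pixels around the seam for the rows that differ.
        for offset in range(-band, band):
            x = seam_x + offset
            if x < 0 or x >= pano.shape[1]:
                continue
            alpha = (offset + band) / (2.0 * band)  # 0 at the left edge of the band, ~1 at the right
            blended = (1.0 - alpha) * left_ref + alpha * right_ref
            pano[rows, x, :] = 0.5 * pano[rows, x, :] + 0.5 * blended
    return pano.astype(np.uint8)
```

After each pass the seam is resampled, so blending repeats only where the remaining difference still exceeds the threshold, mirroring the resample-and-reblend loop described above.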
The 3D image generator1214may receive 2D panoramic images and generate 3D representations. In various embodiments, the 3D image generator1214utilizes a 3D machine learning model to transform the 2D panoramic images into 3D representations. The 3D machine learning model may be trained using 2D panoramic images and depth data (e.g., from a lidar sensor or structured light device) to create 3D representations. The 3D representations may be tested and reviewed for curation and feedback. In some embodiments, the 3D machine learning model may be used with 2D panoramic images and depth data to generate the 3D representations. In various embodiments, the accuracy, speed of rendering, and quality of the 3D representation generated by the 3D image generator1214are greatly improved by utilizing the systems and methods described herein. For example, by rendering a 3D representation from 2D panoramic images that have been aligned, positioned, and stitched using methods described herein (e.g., by alignment and positioning information provided by hardware, by improved positioning caused by the guidance provided to the user during image capture, by cropping and changing warping of images, by cutting images to avoid artifacts and overcome warping, by blending images, and/or any combination), the accuracy, speed of rendering, and quality of the 3D representation are improved. Further, it will be appreciated that by utilizing 2D panoramic images that have been aligned, positioned, and stitched using methods described herein, training of the 3D machine learning model may be greatly improved (e.g., in terms of speed and accuracy). Further, in some embodiments, the 3D machine learning model may be smaller and less complex because of the reduction of processing and learning that would have been used to overcome misalignments, errors in positioning, warping, poor graphic cutting, poor blending, artifacts, and the like to generate reasonably accurate 3D representations. The trained 3D machine learning model may be stored in the 3D and panoramic capture and stitching system1102, image stitching and processor system106, and/or the user system1110. In some embodiments, the 3D machine learning model may be trained using multiple 2D images and depth data from the image capture device of the user system1110and/or the 3D and panoramic capture and stitching system1102. In addition, the 3D image generator1214may be trained using image capture position information associated with each of the multiple 2D images from the image capture position module1204, seam locations to align or stitch each of the multiple 2D images from the stitching module1206, pixel offset(s) for each of the multiple 2D images from the cropping module1208, and/or the graphical cut from the graphical cut module1210. In some embodiments, the 3D machine learning model may be used with 2D panoramic images, depth data, image capture position information associated with each of the multiple 2D images from the image capture position module1204, seam locations to align or stitch each of the multiple 2D images from the stitching module1206, pixel offset(s) for each of the multiple 2D images from the cropping module1208, and/or the graphical cut from the graphical cut module1210to generate the 3D representations. The stitching module1206may be a part of a 3D model that converts multiple 2D images into 2D panoramic or 3D panoramic images. In some embodiments, the 3D model is a machine learning algorithm, such as a 3D-from-2D prediction neural network model. 
The cropping module1208may be a part of a 3D model that converts multiple 2D images into 2D panoramic or 3D panoramic images. In some embodiments, the 3D model is a machine learning algorithm, such as a 3D-from-2D prediction neural network model. The graphical cut module1210may be a part of a 3D model that converts multiple 2D images into 2D panoramic or 3D panoramic images. In some embodiments, the 3D model is a machine learning algorithm, such as a 3D-from-2D prediction neural network model. The blending module1211may be a part of a 3D machine learning model that converts multiple 2D images into 2D panoramic or 3D panoramic images. In some embodiments, the 3D model is a machine learning algorithm, such as a 3D-from-2D prediction neural network model. The 3D image generator1214may generate a weighting for each of the image capture position module1204, the cropping module1208, the graphical cut module1210, and the blending module1211, which may represent the reliability or a “strength” or “weakness” of the module. In some embodiments, the sum of the weightings of the modules equals 1. In cases where depth data is not available for the multiple 2D images, the 3D image generator1214may determine depth data for one or more objects in the multiple 2D images captured by the image capture device of the user system1110. In some embodiments, the 3D image generator1214may derive the depth data based on images captured by stereo-image pairs. The 3D image generator can evaluate stereo image pairs to determine data about the photometric match quality between the images at various depths (a more intermediate result), rather than determining depth data from a passive stereo algorithm. The 3D image generator1214may be a part of a 3D model that converts multiple 2D images into 2D panoramic or 3D panoramic images. In some embodiments, the 3D model is a machine learning algorithm, such as a 3D-from-2D prediction neural network model. The captured 2D image datastore1216may be any structure and/or structures suitable for captured images and/or depth data (e.g., an active database, a relational database, a self-referential database, a table, a matrix, an array, a flat file, a documented-oriented storage system, a non-relational No-SQL system, an FTS-management system such as Lucene/Solar, and/or the like). The captured 2D image datastore1216may store images captured by the image capture device of the user system1110. In various embodiments, the captured 2D image datastore1216stores depth data captured by one or more depth sensors of the user system1110. In various embodiments, the captured 2D image datastore1216stores image capture device parameters associated with the image capture device, or capture properties associated with each of the multiple image captures, or depth information captures used to determine the 2D panoramic image. In some embodiments, the image datastore1108stores panoramic 2D panoramic images. The 2D panoramic images may be determined by the 3D and panoramic capture and stitching system1102or the image stitching and processor system106. Image capture device parameters may include lighting, color, image capture lens focal length, maximum aperture, angle of tilt, and the like. Capture properties may include pixel resolution, lens distortion, lighting, and other image metadata. FIG.13depicts a flow chart1300of a 3D panoramic image capture and generation process according to some embodiments. 
In step1302, the image capture device may capture multiple 2D images using the image sensor920and the WFOV lens918ofFIG.9A. The wider FOV means that the environmental capture system400will require fewer scans to obtain a 360° view. The WFOV lens918may also be wider horizontally as well as vertically. In some embodiments, the image sensor920captures RGB images. In one embodiment, the image sensor920captures black and white images. In step1304, the environmental capture system may send the captured 2D images to the image stitching and processor system1106. The image stitching and processor system1106may apply a 3D modeling algorithm to the captured 2D images to generate a panoramic 2D image. In some embodiments, the 3D modeling algorithm is a machine learning algorithm to stitch the captured 2D images into a panoramic 2D image. In some embodiments, step1304may be optional. In step1306, the lidar912and WFOV lens918ofFIG.9Amay capture lidar data. The wider FOV means that the environmental capture system400will require fewer scans to obtain a 360° view. In step1308, the lidar data may be sent to the image stitching and processor system1106. The image stitching and processor system1106may input the lidar data and the captured 2D image into the 3D modeling algorithm to generate the 3D panoramic image. The 3D modeling algorithm is a machine learning algorithm. In step1310, the image stitching and processor system1106generates the 3D panoramic image. The 3D panoramic image may be stored in the image datastore408. In one embodiment, the 3D panoramic image generated by the 3D modeling algorithm is stored in the image stitching and processor system1106. In some embodiments, the 3D modeling algorithm may generate a visual representation of the floorplan of the physical environment as the environmental capture system is utilized to capture various parts of the physical environment. In step1312, image stitching and processor system1106may provide at least a portion of the generated 3D panoramic image to the user system1110. The image stitching and processor system1106may provide the visual representation of the floorplan of the physical environment. The order of one or more steps of the flow chart1300may be changed without affecting the end product of the 3D panoramic image. For example, the environmental capture system may interleave image capture with the image capture device with lidar data or depth information capture with the lidar912. For example, the image capture device may capture an image of section of the physical environment with the image capture device, and then lidar912obtains depth information from section1605. Once the lidar912obtains depth information from section, the image capture device may move on to capture an image of another section, and then lidar912obtains depth information from section, thereby interleaving image capture and depth information capture. In some embodiments, the devices and/or systems discussed herein employ one image capture device to capture 2D input images. In some embodiments, the one or more image capture devices1116can represent a single image capture device (or image capture lens). In accordance with some of these embodiments, the user of the mobile device housing the image capture device can be configured to rotate about an axis to generate images at different capture orientations relative to the environment, wherein the collective fields of view of the images span up to 360° horizontally. 
In various embodiments, the devices and/or systems discussed herein may employ two or more image capture devices to capture 2D input images. In some embodiments, the two or more image capture devices can be arranged in relative positions to one another on or within the same mobile housing such that their collective fields of view span up to 360°. In some embodiments, pairs of image capture devices can be used capable of generating stereo-image pairs (e.g., with slightly offset yet partially overlapping fields of view). For example, the user system1110(e.g., the device comprises the one or more image capture devices used to capture the 2D input images) can comprise two image capture devices with horizontal stereo offset fields of-view capable of capturing stereo image pairs. In another example, the user system1110can comprise two image capture devices with vertical stereo offset fields-of-view capable of capturing vertical stereo image pairs. In accordance with either of these examples, each of the cameras can have fields-of-view that span up to 360. In this regard, in one embodiment, the user system1110can employ two panoramic cameras with vertical stereo offsets capable of capturing pairs of panoramic images that form stereo pairs (with vertical stereo offsets). The positioning component1118may include any hardware and/or software configured to capture user system position data and/or user system location data. For example, the positioning component1118includes an IMU to generate the user system1110position data in association with the one or more image capture devices of the user system1110used to capture the multiple 2D images. The positioning component1118may include a GPS unit to provide GPS coordinate information in association with the multiple 2D images captured by one or more image capture devices. In some embodiments, the positioning component1118may correlate position data and location data of the user system with respective images captured using the one or more image capture devices of the user system1110. Various embodiments of the apparatus provide users with 3D panoramic images of indoor as well as outdoor environments. In some embodiments, the apparatus may efficiently and quickly provide users with 3D panoramic images of indoor and outdoor environments using a single wide field-of-view (FOV) lens and a single light and detection and ranging sensors (lidar sensor). The following is an example use case of an example apparatus described herein. The following use case is of one of the embodiments. Different embodiments of the apparatus, as discussed herein, may include one or more similar features and capabilities as that of the use case. FIG.14depicts a flow chart of a 3D and panoramic capture and stitching process1400according to some embodiments. The flow chart ofFIG.14refers to the 3D and panoramic capture and stitching system1102as including the image capture device, but, in some embodiments, the data capture device may be the user system1110. In step1402, the 3D and panoramic capture and stitching system1102may receive multiple 2D images from at least one image capture device. The image capture device of the 3D and panoramic capture and stitching system1102may be or include a complementary metal-oxide-semiconductor (CMOS) image sensor. In various embodiments, the image capture device is a charged coupled device (CCD). In one example, the image capture device is a red-green-blue (RGB) sensor. In one embodiment, the image capture device is an IR sensor. 
Each of the multiple 2D images may have partially overlapping fields of view with at least one other image of the multiple 2D images. In some embodiments, at least some of the multiple 2D images combine to create a 360° view of the physical environment (e.g., indoor, outdoor, or both). In some embodiments, all of the multiple 2D images are received from the same image capture device. In various embodiments, at least a portion of the multiple 2D images is received from two or more image capture devices of the 3D and panoramic capture and stitching system1102. In one example, the multiple 2D images include a set of RGB images and a set of IR images, where the IR images provide depth data to the 3D and panoramic capture and stitching system1102. In some embodiments, each 2D image may be associated with depth data provided from a lidar device. Each of the 2D images may, in some embodiments, be associated with positioning data. In step1404, the 3D and panoramic capture and stitching system1102may receive capture parameters and image capture device parameters associated with each of the received multiple 2D images. Image capture device parameters may include lighting, color, image capture lens focal length, maximum aperture, a field of view, and the like. Capture properties may include pixel resolution, lens distortion, lighting, and other image metadata. The 3D and panoramic capture and stitching system1102may also receive the positioning data and the depth data. In step1406, the 3D and panoramic capture and stitching system1102may take the received information from steps1402and1404for stitching the 2D images to form a 2D panoramic image. The process of stitching the 2D images is further discussed with regard to the flowchart ofFIG.15. In step1408, the 3D and panoramic capture and stitching system1102may apply a 3D machine learning model to generate a 3D representation. The 3D representation may be stored in a 3D panoramic image datastore. In various embodiments, the 3D representation is generated by the image stitching and processor system1106In some embodiments, the 3D machine learning model may generate a visual representation of the floorplan of the physical environment as the environmental capture system is utilized to capture various parts of the physical environment. In step1410, the 3D and panoramic capture and stitching system1102may provide at least a portion of the generated 3D representation or model to the user system1110. The user system1110may provide the visual representation of the floorplan of the physical environment. In some embodiments, the user system1110may send the multiple 2D images, capture parameters, and image capture parameters to the image stitching and processor system1106. In various embodiments, the 3D and panoramic capture and stitching system1102may send the multiple 2D images, capture parameters, and image capture parameters to the image stitching and processor system1106. The image stitching and processor system1106may process the multiple 2D images captured by the image capture device of the user system1110and stitch them into a 2D panoramic image. The 2D panoramic image processed by the image stitching and processor system1106may have a higher pixel resolution than the 2D panoramic image obtained by the 3D and panoramic capture and stitching system1102. 
In some embodiments, the image stitching and processor system106may receive the 3D representation and output a 3D panoramic image with pixel resolution that is higher than that of the received 3D panoramic image. The higher pixel resolution panoramic images may be provided to an output device with a higher screen resolution than the user system1110, such as a computer screen, projector screen, and the like. In some embodiments, the higher pixel resolution panoramic images may provide to the output device a panoramic image in greater detail and may be magnified. FIG.15depicts a flow chart showing further detail of one step1406of the 3D and panoramic capture and stitching process ofFIG.14. In step1502, the image capture position module1204may determine image capture device position data associated with each image captured by the image capture device. The image capture position module1204may utilize the IMU of the user system1110to determine the position data of the image capture device (or the field of view of the lens of the image capture device). The position data may include the direction, angle, or tilt of one or more image capture devices when taking one or more 2D images. One or more of the cropping module1208, the graphical cut module1210, or the blending module1212may utilize the direction, angle, or tilt associated with each of the multiple 2D images to determine how to warp, cut, and/or blend the images. In step1504, the cropping module1208may warp one or more of the multiple 2D images so that two images may be able to line up together to form a panoramic image and while at the same time preserving specific characteristics of the images such as keeping a straight line straight. The output of the cropping module1208may include the number of pixel columns and rows to offset each pixel of the image to straighten out the image. The amount of offset for each image may be outputted in the form of a matrix representing the number of pixel columns and pixel rows to offset each pixel of the image. In this embodiment, the cropping module1208may determine the amount of warping each of the multiple 2D images requires based on the image capture pose estimation of each of the multiple 2D images. In step1506, the graphical cut module1210determines where to cut or slice one or more of the multiple 2D images. In this embodiment, the graphical cut module1210may determine where to cut or slice each of the multiple 2D images based on the image capture pose estimation and the image warping of each of the multiple 2D images. In step1508, the stitching module1206may stitch two or more images together using the edges of the images and/or the cuts of the images. The stitching module1206may align and/or position images based on objects detected within the images, warping, cutting of the image, and/or the like. In step1510, the blending module1212may adjust the color at the seams (e.g., stitching of two images) or the location on one image that touches or connects to another image. The blending module1212may determine the amount of color blending required based on one or more image capture positions from the image capture position module1204, the image warping from the cropping module1208, and the graphical cut from the graphical cut module1210. The order of one or more steps of the 3D and panoramic capture and stitching process1400may be changed without affecting the end product of the 3D panoramic image. 
For example, the environmental capture system may interleave image capture with the image capture device with lidar data or depth information capture. For example, the image capture device1616may capture an image of a section or portion of the physical environment, and then the lidar obtains depth information from the section or portion, or other sections or portions. Once the lidar obtains depth information from the section or portion, or other sections or portions, the image capture device may then capture an image of another section, and then the lidar obtains depth information from the section, or other sections, thereby interleaving image capture and depth information capture. Additional example embodiments that overcome the stated limitations of the prior art, and that may share the following common set of elements, are now described.
Lidar system(s). A lidar system is described below with reference toFIGS.17aand17b, as one example of a depth information capture device that may be used according to embodiments of the invention. The salient elements include a lidar transceiver1720which sources laser pulses and detects reflected laser pulses, and a rotating mirror1710that directs the pulses into a plane1705shown inFIG.17A.FIG.17Ashows an embodiment that aligns that plane to be perpendicular to the horizontal plane using a second axis of rotation1715that is in the horizontal plane. However,FIG.17Bshows there is a continuum of combinations of mirror angle, laser angle, and second axis of rotation1715angles that can achieve a lidar scanning plane that is substantially vertical. The origin of the lidar system1725is specified as the intersection of the laser transmit beam with the mirror.
Imaging capture system(s). Also referred to as a camera system or imaging system in some embodiments, the salient parts of this system are the lens and the sensor array (e.g., Charge Coupled Device image sensor, or CMOS image sensor). In example embodiments, a wide-angle lens or a fish-eye lens can be used to obtain a larger horizontal field of view (HFOV) and/or vertical field of view (VFOV).
Frame. A common mechanical frame to which the camera system and the lidar system are attached. The frame establishes the geometric relationship between the imaging system's frame of reference and the lidar system's frame of reference.
A means of rotating the image capture system.
A means of rotating the depth information capture devices (e.g., lidar, structured light projection, etc.).
Processors to control the elements of the system and to process the image and lidar data that is acquired to create panoramic 3D models. This processing may also be completed within the Environmental Capture System (ECS) or it may be shared with other processors, in part or whole, in the associated ecosystem. The associated ecosystem of the ECS includes additional systems that may interface with the ECS via communication networks1825as shown inFIG.18. ECS1805can communicate with other systems in the network (e.g., control systems1815, data storage centers1820, and processing centers1810) via the communication networks1825.
Sensors for ascertaining the state of operation of the machine such as IMU, accelerometers, level sensors, GPS, etc.
Communication system. Communicates the data acquired to external processing system(s), external data storage system(s), and external control systems.
Other support systems, e.g., storage, control and power. 
The ECS apparatus and the associated methods of operation and methods of data processing disclosed herein have several differentiators over the prior art:
The horizontal rotation of the ECS is about a substantially vertical axis that, in some embodiments, passes through the NPP (no parallax point) of the image capture system. This facilitates the blending of images with overlapping fields of view.
The sequence of operations disclosed herein with respect to some embodiments interleaves image captures with lidar data capture as the ECS is rotated 360 degrees or less. This is different from the existing ECS systems, which capture the lidar data in its entirety separately from capturing the image data in its entirety, which requires extra revolutions of the ECS and is therefore slower than the embodiments described herein.
At each position of the ECS where images are captured, a number of different exposures may be taken. Blending these images together results in a higher quality of images over a wide dynamic range of lighting conditions.
An embodiment, shown inFIG.19, minimizes the stitching artifacts by placing the axis of rotation of the ECS (first axis of rotation)1960through the No Parallax Point (NPP)1910of the image capture system. The NPP is the center of the lens of the image capture system.FIG.19further shows both the image capture system and the lidar system attached to the common frame of the ECS1940. Therefore, a rotation of the mechanical frame causes both the image capture system1950and lidar system1930to rotate by the same amount. The motor for driving the rotation of the ECS around the first axis may be onboard the ECS or it may be part of an external support device such as a tripod. The image capture system has a HFOV (horizontal field of view)1905about its central axis. In the embodiment shown the lidar system has a vertical scanning plane1920that is perpendicular to the central axis of the camera. Note the NPP1910is a distance A1from the lidar scanning plane1920. In this example the lidar scanning plane is vertical. A side view of the lidar scanning plane is shown inFIG.20. This figure shows the distribution of the laser beam reflecting off the rotating mirror at various angles θ2005in the vertical plane with respect to the horizontal plane. Note that the origin for the lidar data2010is defined as the intersection of the laser beam from the lidar transmitter (in the lidar transceiver) with the surface of the rotating mirror. Further note that the lidar transmitted scanning beams are blocked if the beams are emitted in the direction of the frame2020of the ECS. With the exception of this case, the lidar system is able to calculate the distance a surface is from the ECS lidar system origin in the direction the beam was launched by recording the roundtrip time from the time of launch of a laser pulse to the return of some portion of the reflected beam energy of that pulse from the targeted surface. 
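As a simple illustration of the time-of-flight calculation described above, the following sketch computes the one-way distance as half the round-trip time multiplied by the speed of light; the function and variable names are illustrative only and are not from this disclosure.

```python
# Minimal sketch of the lidar time-of-flight distance calculation (illustrative only).
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(roundtrip_time_s: float) -> float:
    """One-way distance from the lidar origin to the reflecting surface, in meters."""
    return 0.5 * roundtrip_time_s * SPEED_OF_LIGHT_M_PER_S

# Example: a 20 ns round trip corresponds to roughly 3 meters.
print(tof_distance_m(20e-9))  # ~2.998
```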
In order to acquire the two-dimensional (2D) images necessary to construct a 2D panoramic picture, the horizontal directions in which the camera is pointed are determined to provide sufficient overlap in the field of views to facilitate stitching.FIGS.21A,21B,21C and21Dare top views of the field of view of the ECS image capture system showing overlapping field of views in four horizontal rotation positions, or directions, Ø=0 degrees, Ø=90 degrees, Ø=180 degrees, and Ø=270 degrees, where images are captured, and where Ø is the horizontal direction of the camera around the first axis of rotation. In this embodiment the HFOV is approximately 145 degrees, therefore the overlap between a first FOV2110and a second FOV2120is 55 degrees, between the second FOV2120and a third FOV2130is 55 degrees, and between the third FOV2130and a fourth FOV2140is 55 degrees. In general, between adjacent FOVs, the overlap is 55 degrees, in this embodiment. In other embodiments the degree of overlap may be less or more than 55 degrees. Lidar scanning for the apparatus shown inFIG.19involves the lidar system being rotated off axis since the first axis of rotation1960for the ECS does not go through the origin of the lidar system, rather it goes through the NPP1910of the image capture system.FIG.22Ais a top view of successive lidar scans (2205,2210,2215,2220,2225) that shows as the first axis of rotation transitions from 0 degrees to 90 degrees. Each of the lidar scans contains data from one completed revolution of the second axis of rotation.FIG.22Bis a top view of successive lidar scans (2225,2230,2235,2240,2245) as the first axis of rotation moves from 90 degrees to 180 degrees.FIG.22Cillustrates the combination of scans depicted inFIGS.22A and22B.FIG.22Creveals a gap2260in lidar scan coverage between the lidar scan2205at 0 degrees and the lidar scan2245at 180 degrees. FIG.23shows more specifically the geometry of the gap2260and β1, the additional amount of rotation that is required to close the gap. Tan(β1)=2A1/d1. A1is the distance between NPP1910and the origin2330of the lidar system.2A1is the distance between the lidar scanning plane for the case where the image capture angle Ø is 0°2305and the lidar scanning plane when the image capture angle Ø is 180°2310. d1is the distance from the origin2330of the lidar system and the closest object2320in the view of the lidar system when the ECS system is oriented at 0°, or equivalently d1is the distance from the origin2340and the closest object2320in the view of the lidar system when the ECS is oriented at 180°. A few typical cases are 1) A1(inches)=6, d1(inches)=24, and β1=26.6 degrees; 2) A1(inches)=6, d1(inches)=36, and β1=18.4 degrees; and 3) A1(inches)=6, d1(inches)=48, and β1=14.0 degrees. TogetherFIG.24A,FIG.24B,FIG.24CandFIG.24Drepresent two complete series of 360-degree lidar scans in increments of 90 degrees from 0 degrees to 360 degrees. The cross-hatched areas represent the lidar coverage for each of the incremental scans.2405aand2405bdepict the lidar scan segments as the ECS rotates from 0° to 90°.2410aand2410bdepict the lidar scan segments as the ECS rotates from 90° to 180°.2415aand2415bdepict the lidar scan segments as the ECS rotates from 180° to 270°.2420aand2420bdepict the lidar scan segments as the ECS rotates from 270° to 360°. By twice scanning the full 360 degrees, additional, or duplicative, depth information is provided that can be used for various purposes. 
This additional information is supplied in the form of additional points in the cloud of points which can provide finer resolution to the contours and texture of the surfaces of the environment captured by the ECS. Consider that each scan gives almost two quadrants of information, according to one embodiment. This implies that the information provided by two 360 degree scans is approximately eight quadrants of scanning information, i.e., two times that of a 360-degree single scan. The scans cover the same surface areas but at slightly different angles and at different times. This information may be used to identify movement of an object or person in motion and, furthermore, to provide complete spatial information of the path of that object or person as it or they move through the aggregate view of the lidar system. The additional information can also be used as a cross check for the integrity of a given scan. If the information for a specific scan is not consistent with other scan information, then a flag can be raised, which if corroborated with other flags can result in a request for a rescan. The processing of this information can be accomplished such that the rescan can be performed before the ECS is moved to another location. This may potentially save the time of the operator tasked with acquiring a known good set of data before moving to another location.

Embodiments for Data Acquisition

The following embodiments apply to the apparatus described herein, where the ECS vertical axis of rotation passes through the NPP of an image capture system and the lidar scans are off the first axis of rotation, as discussed above with reference toFIG.19. The embodiments reduce the time of acquisition of the image and depth information by interleaving the image capture processes with the depth information capture process, with one objective of acquiring data that can be used to generate an image of a 360-degree scene in a single rotation (or less) of the ECS. The relative positions where image capture occurs may be determined in part by the horizontal field of view (HFOV) of the imaging system and the amount of overlap between adjacent HFOVs that is desired.FIGS.21A-Dare an example of a top view of the ECS oriented at 4 sequential positions of rotation around the first axis of rotation, i.e., at 0 degrees, 90 degrees, 180 degrees and 270 degrees about a substantially vertical axis and their associated fields of view2110,2120,2130and2140. In this example embodiment the fields of view are large enough to ensure overlap between successive images taken at the 4 sequential positions. Mathematically this equates to restricting the angle of rotation of the image capture system to be less than the horizontal field of view of the image capture system. For sufficiently large horizontal fields of view the number of horizontal angular positions may be three. In practice the overlap should be enough to facilitate the task of stitching the images together to create a 360-degree panoramic view of the environment. 
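The following sketch illustrates the two geometric relationships discussed above: the overlap between successive fields of view (HFOV minus the rotation step) and the gap-closure angle β1 from Tan(β1)=2A1/d1; the printed values reproduce the example numbers given earlier, and the function names are illustrative only.

```python
# Minimal sketch of the capture-geometry relationships (illustrative only).
# Angles are in degrees; A1 and d1 are in inches, following the text.
import math

def fov_overlap_deg(hfov_deg: float, rotation_step_deg: float) -> float:
    """Overlap between adjacent fields of view; positive only if the step is less than the HFOV."""
    return hfov_deg - rotation_step_deg

def gap_closure_angle_deg(a1_inches: float, d1_inches: float) -> float:
    """Additional rotation beta1 needed to close the lidar coverage gap: tan(beta1) = 2*A1/d1."""
    return math.degrees(math.atan2(2.0 * a1_inches, d1_inches))

print(fov_overlap_deg(145.0, 90.0))      # 55 degrees, as in the 145-degree HFOV example
print(gap_closure_angle_deg(6.0, 24.0))  # ~26.6 degrees
print(gap_closure_angle_deg(6.0, 36.0))  # ~18.4 degrees
print(gap_closure_angle_deg(6.0, 48.0))  # ~14.0 degrees
```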
Step two2510, from t=t1to t=t2, the lidar system acquires depth data, as the ECS is rotated around the first axis of rotation from the first position to a second angle position, for example, from 0 degrees to 90 degrees, for quadrants2405aand2405b. Step three2515, from t=t2to t=t3, images are captured at the second angle position of the first axis of rotation, for example, at 90 degrees, for the field of view2120. Step four2520, from t=t3to t=t4, the lidar system acquires depth data, as the ECS is rotated around the first axis of rotation from the second angle position to a third angle position, for example, from 90 degrees to 180 degrees, for quadrants2410aand2410b. Step Five2525, from t=t4to t=t5, images are captured at the third angle position of the first axis of rotation, for example, at 180 degrees, for the field of view2130. Step Six2530, from t=t5to t=t6, the lidar system acquires depth data, as the ECS is rotated around the first axis of rotation from third position to a fourth angle position that is the third angle position plus the angle β1, for example, from 180 degrees to 180 degrees+β1(gap closure angle). Because the lidar system is positioned off the first axis of rotation the ECS is rotated an additional angle, β1, to cover a gap in lidar scan coverage, as discussed earlier. Step Seven2535, from t=t6to t=t7, continue the ECS rotation around the 1st axis of rotation from the fourth position to a fifth position, for example, from 180 degrees+β1to 270 degrees. Step Eight2540, from t=t7to t=t8, images are captured at the fifth angle position of the first axis of rotation, for example, at 270 degrees, for the field of view2140. Thus, at blocks2505,2515,2525and2540, images are captured. At each of these steps multiple images may be captured, each at different exposures. Furthermore, image processing may also be included in these steps to blend and then stitch the images together or to validate the completeness and quality of the images. This may result in repeating the capture of certain images at specific exposures. The purpose of doing so at this point to avoid doing so at a later time, which may result in the operator revisiting the location. FIG.26illustrates another example embodiment in which the image acquisition positions are at 0 degrees, 120 degrees and 240 degrees and in which the image capture system has a sufficiently large horizontal field of view, e.g., 155 degrees. The algorithm for this method is a time sequence of steps (similar to the above described embodiment), where t indicates time: Step one2605, from t=0 to t=t1, images are captured at a first angle position of a first axis of rotation, for example, at 0 degrees, for a first field of view. Step two2610, from t=t1to t=t2, the lidar system acquires depth data, as the ECS is rotated around the first axis of rotation from the first angle position to the second angle position, for example, from 0 degrees to 120 degrees, to obtain depth data for first and second portions of a 360-degree scene. Step three2615, from t=t2to t=t3, images are captured at the second angle position of the first axis of rotation, for example, at 120 degrees, for a second field of view. Step four2620, from t=t3to t=t4, the lidar system acquires depth data, as the ECS is rotated around the first axis of rotation from the second angle position to a third angle position, for example, from 120 degrees to 240 degrees, to obtain depth data for third and fourth portions of the 360-degree scene. 
Step Five2625, from t=t4to t=t5, images are captured at the third angle position of the first axis of rotation, for example, at 240 degrees, for a third field of view. Step Six2630, from t=t5to t=t6, the lidar system acquires depth data, as the ECS is rotated around the first axis of rotation from the third angle position to a fourth angle position, for example, from 240 degrees to 360 degrees, to obtain depth data again for the first and second portions of the 360-degree scene. Thus, at blocks2605,2615and2625, images are captured. These steps may also include images captured at different exposures. Furthermore, image processing may also be included in these steps to blend and/or stitch the images together or to validate the completeness and quality of the images. The panoramic 3D model of the environment surrounding the ECS combines both the image capture data and the depth information capture data. To facilitate this process, it is helpful to convert the coordinate system representing the lidar cloud of points into a common reference frame that is consistent with either the reference frame used for the image capture or the reference frame used for the depth information capture. The coordinate system used to describe this common reference is a Cartesian coordinate system (x, y, z). Note that other coordinate systems could be used (e.g., a spherical coordinate system or a cylindrical coordinate system). The origin2705for the (x, y, z) coordinate system is defined as the NPP1910. The Z-axis is the first axis of rotation. The dotted line2715represents the lidar vertical plane as seen from the top view. The choice for the orientation of the XY plane is consistent with, for example, the image capture system, with the X-axis set to the Ø=0 direction, as shown inFIG.21A. FIG.27shows the conversion equations for converting the coordinates of a point2720in the lidar cloud of points at (Ø, θ, Dtof) to its equivalent coordinates (x, y, z) using the common Cartesian frame of reference:

x = Dtof cos θ sin Ø − A1 cos Ø
y = −(Dtof cos θ cos Ø + A1 sin Ø)
z = Dtof sin θ

The salient parameters for this conversion are:
Dtof is the distance as measured from the lidar origin2710(intersection of the transmit laser beam with the lidar mirror surface) to the point on the surface of the environment. It is half of the round-trip time of flight (TOF) multiplied by the speed of light, where the round trip time is defined as the time it takes the laser pulse to travel from its reflection point on the lidar mirror to the contact point on the surface of the environment and back again to the lidar mirror.
Ø is the angle around the first axis of rotation.
θ is the angle around the second axis of rotation as shown inFIG.20.
A1 is the distance from the lidar origin2710to the NPP (no parallax point)2705.
Note it has been assumed for the purpose of simplicity that the lidar origin is collinear with the line that passes through the NPP and is perpendicular to the image sensor plane.
FIG.16depicts a block diagram of an example digital device1602according to some embodiments. Any of the user system1110, the 3D panoramic capture and stitching system1102, and the image stitching and processor system may comprise an instance of the digital device1602. Digital device1602comprises a processor1604, a memory1606, a storage1608, an input device1610, a communication network interface1612, an output device1614, an image capture device1616, and a positioning component1618. Processor1604is configured to execute executable instructions (e.g., programs). 
In some embodiments, the processor1604comprises circuitry or any processor capable of processing the executable instructions. Memory1606stores data. Some examples of memory1606include storage devices, such as RAM, ROM, RAM cache, virtual memory, etc. In various embodiments, working data is stored within memory1606. The data within memory1606may be cleared or ultimately transferred to storage1608. Storage1608includes any storage configured to retrieve and store data. Some examples of storage1608include flash drives, hard drives, optical drives, and/or magnetic tape. Each of memory1606and storage1608comprises a computer-readable medium, which stores instructions or programs executable by processor1604. The input device1610is any device that inputs data (e.g., touch keyboard, stylus). Output device1614outputs data (e.g., speaker, display, virtual reality headset). It will be appreciated that storage1608, input device1610, and an output device1614may be optional. In some embodiments, the output device1614is optional. For example, routers/switchers may comprise processor1604and memory1606as well as a device to receive and output data (e.g., a communication network interface1612and/or output device1614). The digital device1602may be coupled to a network (e.g., communication network104) via the communication network interface1612. Communication network interface1612may support communication over an Ethernet connection, a serial connection, a parallel connection, and/or an ATA connection. Communication network interface1612may also support wireless communication (e.g., 802.11 a/b/g/n, WiMAX, LTE, Wi-Fi). It will be apparent that the communication network interface1612may support many wired and wireless standards. A component may be hardware or software. In some embodiments, the component may configure one or more processors to perform functions associated with the component. Although different components are discussed herein, it will be appreciated that the server system may include any number of components performing any or all functionality discussed herein. The digital device1602may include one or more image capture devices1616. The one or more image capture devices1616can include, for example, RGB cameras, HDR cameras, video cameras, and the like. The one or more image capture devices1616can also include a video camera capable of capturing video in accordance with some embodiments. In some embodiments, one or more image capture devices1616can include an image capture device that provides a relatively standard field-of-view (e.g., around 75°). In other embodiments, the one or more image capture devices1616can include cameras that provide a relatively wide field-of-view (e.g., from around 120° up to 360°), such as a fisheye camera, and the like (e.g., the digital device1602may include or be included in the environmental capture system400).
124,346
11943540
DETAILED DESCRIPTION

Specific embodiments of the disclosure are described herein in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. Embodiments of the disclosure provide for automatic exposure (AE) control of images in an imaging sensor system. In particular, embodiments provide for computing exposure values (EV) to be used to determine AE settings of an imaging sensor. Depending on the particular imaging system, an EV may be a function of the exposure time (ET) (shutter speed) and one or more of the analog gain (AG), the F-number (aperture size), and the digital gain (DG) to be used to capture an image. For example, imaging sensor systems used in automotive applications may not be configured to change the aperture size and therefore EV may be a function of ET and AG. Approaches to computation of EVs described herein are based on target characteristics of an image indicative of exposure settings that yield well exposed images. In some embodiments, the target characteristics that form the basis of the EV computation are target brightness, target percentage of low tone pixels, and target percentage of high tone pixels. FIG.1is a simplified block diagram illustrating an example of AE control in an imaging sensor system in accordance with some embodiments. The imaging sensor system includes a system on a chip (SoC)100coupled to an image sensor103to receive raw image sensor data. The SoC100includes an image signal processor (ISP)101and a processor102for executing AE control software104. The ISP101includes functionality to receive the raw image sensor data and perform various image processing operations on the raw sensor data to generate processed images. The image processing operations may include decompanding, defective pixel correction (DPC), lens shading correction, spatial noise filtering, brightness and contrast enhancement, demosaicing, and color enhancement. In addition, the ISP101is configured to generate statistics from the images for the AE computations and to provide the generated AE statistics to the processor102. The AE statistics for an image may be a downsampled version of the image in which each pixel in the downsampled image is generated by averaging pixels in the original image. The AE control software104uses the AE statistics (e.g., a downsampled image) generated for a current image, referred to as the image at time t herein, to determine an EV for one or more subsequent images to be captured by the image sensor103. The AE control software104computes exposure values (EV) using a cost function based on target characteristics of an image, i.e., target brightness of an image and target percentages (also called target probabilities) of low tone and high tone pixels in an image, indicative of exposure settings that yield well exposed images. More specifically, for each image, the cost function minimizes the normalized deviation of the mean brightness of the image from a target mean brightness, the normalized deviation of the probability of low tone pixels in the image from a target percentage (or target probability) of low tone pixels, and the normalized deviation of the probability of high tone pixels in the image from a target percentage (or target probability) of high tone pixels. Equations for these normalized deviations are provided below and Table 1 provides definitions for the variables in these equations. 
The normalized deviation from a target brightness is computed as per

DMB = |TMB − IMB(t)| / Brange.

The normalized deviation from a target percentage of low tone pixels is computed as per

DPL = |TPL − pL(t)|.

The normalized deviation from a target percentage of high tone pixels is computed as per

DPH = |TPH − pH(t)|.

TABLE 1
Variable    Definition
TMB         Target mean brightness
IMB(t)      Mean brightness of image I at time t
Brange      Brightness range
TPL         Target probability of low-tone pixels
pL(t)       Probability of low-tone pixels at time t
TPH         Target probability of high-tone pixels
pH(t)       Probability of high-tone pixels at time t

The AE control software computes the cost function as per

c = α^(−DMB) * β^(−DPL) * γ^(−DPH),

where α, β, and γ ≥ 1 are weights provided as parameters to the AE control software104. The target mean brightness TMB, the target probability of low-tone pixels TPL, and the target probability of high-tone pixels TPH are also provided as parameters to the AE control software104. The AE control software104computes the brightness range Brange, the mean brightness at time t IMB(t), the probability of low-tone pixels at time t pL(t), and the probability of high-tone pixels at time t pH(t) using the downsampled image. The brightness range Brange may be the maximum allowed pixel value in the image, which depends on the pixel bit depth of the image. The mean brightness IMB(t) may be computed as the sum of all the pixel values in the image divided by the number of pixels in the image. The probability of low tone pixels pL(t) may be computed as the number of pixels in the image below a low tone threshold divided by the total number of pixels in the image. The probability of high tone pixels pH(t) may be computed as the number of pixels in the image above a high tone threshold divided by the total number of pixels in the image. The low tone threshold and high tone threshold are provided as parameters to the AE control software104. In some embodiments, the values of α, β, and γ may be chosen based on the impact reaching each target has on achieving a well exposed image. For example, if meeting the target brightness goal is a better measure of a well exposed image than the target percentage of low-tone pixels, then α>β. In some embodiments, a machine learning approach is used to learn values of α, β, and γ. Such embodiments are described below. The AE control software104computes an EV for time t as per

EV(t) = c * EV(t−1) if c ≥ Tc, and EV(t) = EV(t−1) otherwise,

where Tc is a threshold cost value representing a minimum cost for adjusting the EV. The threshold cost Tc is provided as a parameter to the AE control software104. The resulting EV(t) is then used to generate exposure settings, e.g., AG and ET, for the image sensor103. In some embodiments, the AE control software104implements a stability lock that is used to ensure that the EV is not changed based on small or anomalous changes in scene brightness between images. The stability lock determines whether or not the computed value of EV(t) is to be used to determine the exposure settings for the image sensor103. Table 2 shows pseudo code for the stability lock in accordance with some embodiments. 
In this pseudo code, the lock state of locked indicates whether the AE computations are locked (not to be used) or unlocked, the lock count is used to ensure that the EV is not changed based on a single faulty frame, e.g., an image with a very high brightness as compared to the previous image, Run AE indicates whether or not the computed value of EV(t) is to be used, ycurr is the current average scene brightness, yavg is the average brightness over a user-specified number of previous images, and TBD is the maximum allowable brightness deviation from TMB.

TABLE 2
Run AE = 1
If (locked) and (c < Tc):
→ Run AE = 0
If ( ( (|ycurr(t − 1) − yavg(t)| < TBD) and (c < 2 * Tc) ) or (c < Tc) ):
→ lock count ++
Else If (lock count > 0):
→ lock count −−
If (lock count > 3):
→ lock count = 3
→ locked = 1
→ Run AE = 0
Else If (lock count > 0) and (locked):
→ Run AE = 0
If (Run AE):
→ Perform AE
→ locked = 0

FIG.2is a flow diagram of a method for automatic exposure (AE) control in an imaging sensor system in accordance with some embodiments. For example purposes, the method is explained in reference to the example imaging sensor system ofFIG.1. The method assumes the AE control software104has been initialized with values for the parameters α, β, γ, Tc, TBD, TMB, TPL, and TPH. Initially, statistics for AE (e.g., a downsampled image) generated for an image captured at time t by the image sensor103are received by the AE control software104. The AE statistics are generated by the ISP101and stored in memory for access by the AE control software104. The AE control software104computes202an exposure value EV(t) as previously described herein using the AE statistics for the image. Exposure settings are then computed204for the image sensor103based on the value of EV(t) and output206to the image sensor103. In some embodiments, a stability lock as previously described herein may be executed after EV(t) is computed to determine if the exposure settings for the image sensor103are to be computed using the computed EV(t) or if the exposure settings are to be unchanged. FIG.3is a block diagram of an example system for learning values for the AE weights α, β, and γ. The system includes a weight learning component300coupled to a virtual imaging system302configured to execute the above described approach for computing EV(t) and a scene database304coupled to the virtual imaging system302. Values for the weights are learned by executing experiments on the virtual imaging system302with various combinations of weight values and starting EVs. Quality metrics are computed based on the results of these experiments and used to determine the weight values. The scene database304stores multiple scenes, each including multiple images and corresponding EVs for the multiple images. More specifically, for a given scene, the scene database304stores multiple images of the scene where each image was captured with a different EV. The range of EVs used to capture the images may be selected to cover the EV range of an imaging sensor but need not include every possible EV. In some embodiments, the images corresponding to the EVs in an EV range are captured sequentially in a small amount of time, e.g., less than one minute. The same EV range may be used for each scene in the scene database304. For example, for a particular image sensor, the scene database304may include fifty scenes with images collected at EV={1, 1.2, 1.4, . . . , 8}. 
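As a consolidated illustration of the EV computation described above, the following is a minimal sketch that computes the normalized deviations, the cost c, and the EV update from a downsampled statistics image. It is not the AE control software104itself; all parameter values are hypothetical, the statistics image is assumed to be an 8-bit array, and the cost follows the c = α^(−DMB) * β^(−DPL) * γ^(−DPH) expression given above.

```python
# Minimal sketch of the cost-based EV computation (illustrative, hypothetical parameters).
import numpy as np

ALPHA, BETA, GAMMA = 2.0, 1.5, 1.5   # weights, each >= 1 (hypothetical values)
T_MB, T_PL, T_PH = 110.0, 0.02, 0.02 # target brightness and tone probabilities (hypothetical)
LOW_TONE, HIGH_TONE = 16, 240        # tone thresholds (hypothetical)
T_C = 0.9                            # threshold cost for adjusting the EV (hypothetical)
B_RANGE = 255.0                      # brightness range for 8-bit statistics pixels

def cost(stats: np.ndarray) -> float:
    i_mb = float(stats.mean())                 # mean brightness IMB(t)
    p_l = float((stats < LOW_TONE).mean())     # probability of low-tone pixels pL(t)
    p_h = float((stats > HIGH_TONE).mean())    # probability of high-tone pixels pH(t)
    d_mb = abs(T_MB - i_mb) / B_RANGE
    d_pl = abs(T_PL - p_l)
    d_ph = abs(T_PH - p_h)
    # c = alpha^(-DMB) * beta^(-DPL) * gamma^(-DPH), as given above.
    return (ALPHA ** -d_mb) * (BETA ** -d_pl) * (GAMMA ** -d_ph)

def update_ev(stats: np.ndarray, ev_prev: float) -> float:
    c = cost(stats)
    # EV(t) = c * EV(t-1) if c >= Tc, and EV(t) = EV(t-1) otherwise.
    return c * ev_prev if c >= T_C else ev_prev
```

The stability lock of Table 2 would sit on top of update_ev, deciding whether the computed EV(t) is actually applied to the sensor settings.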
The virtual imaging system 302 includes a virtual imaging system control component 306, an image interpolation component 310, and an AE computation component 308. The image interpolation component 310 is coupled to the scene database 304 and is configured to provide images to the AE computation component at specified EVs. Inputs to the interpolation component 310 include a scene identifier, e.g., a scene name or number, and an EV. The image interpolation component 310 is configured to search the images in the scene database 304 corresponding to the identified scene for an image captured at the requested EV. If such an image is found, the image is provided to the AE computation component 308. If such an image is not found, the image interpolation component 310 is configured to interpolate an image corresponding to the requested EV and provide the interpolated image to the AE computation component 308. In some embodiments, the image interpolation component 310 generates the interpolated image using two images from the identified scene that were captured at EVs immediately above and below the requested EV. More specifically, the image interpolation component 310 generates an interpolated image IEV(t) as per

IEV(t) = (EV(t)+ / (EV(t)+ + EV(t)−)) × IEV(t)+ + (EV(t)− / (EV(t)+ + EV(t)−)) × IEV(t)−

where EV(t) is the requested EV, IEV(t)+ is the scene image from the scene database 304 with the closest EV larger than EV(t), EV(t)+ is the EV of IEV(t)+, IEV(t)− is the scene image from the scene database 304 with the closest EV smaller than EV(t), and EV(t)− is the EV of IEV(t)−. The AE computation component 308 is configured to compute an EV based on the image received from the image interpolation component 310. The EV is computed as per EV(t) = c × EV(t−1) if c ≥ Tc, and EV(t) = EV(t−1) otherwise, as previously described herein. The AE computation component 308 computes the brightness range Brange, the mean brightness at time t, IMB(t), the probability of low-tone pixels at time t, pL(t), and the probability of high-tone pixels at time t, pH(t), based on the image received from the image interpolation component 310. The target mean brightness TMB, the target probability of low-tone pixels TPL, the target probability of high-tone pixels TPH, the threshold cost Tc, and values for the weights α, β, γ are input parameters to the AE computation component 308. In some embodiments, the AE computation component 308 is also configured to implement a stability lock as previously described herein. The virtual imaging system control component 306 is configured to control the operation of the virtual imaging system 302. Inputs to the virtual imaging system control component 306 include the parameters for the image interpolation component 310 and the AE computation component 308, which include those previously mentioned herein, as well as an initial EV for the image interpolation component 310, a scene identifier, and a number of time steps to execute the virtual imaging system 302. The virtual imaging system control component 306 includes functionality to cause the image interpolation component 310 and the AE computation component 308 to execute for the specified number of time steps. A time step includes generation of an image at the requested EV (either the initial one or one output by the AE computation component 308 in the previous time step) by the image interpolation component 310 and computation of an EV based on the generated image by the AE computation component 308.
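The image interpolation described above can be sketched in Python as follows. A scene is represented here as a plain dict mapping EVs to images, standing in for one scene of the scene database 304; the function name and the dict representation are assumptions for illustration, and the blending weights follow the interpolation relation as reconstructed above.

```python
import numpy as np

def interpolate_image(scene, requested_ev, tol=1e-6):
    """Return an image for requested_ev, interpolating between scene images.

    scene: dict mapping EV -> image (numpy array) for one scene.
    requested_ev is assumed to lie within the range of EVs captured for
    the scene; if an image at the requested EV exists it is returned
    directly, otherwise the two images bracketing the requested EV are
    blended with weights EV+/(EV+ + EV-) and EV-/(EV+ + EV-).
    """
    evs = sorted(scene.keys())
    for ev in evs:
        if abs(ev - requested_ev) < tol:        # exact match in the database
            return scene[ev]

    ev_minus = max(ev for ev in evs if ev < requested_ev)   # closest EV below
    ev_plus = min(ev for ev in evs if ev > requested_ev)    # closest EV above
    w_plus = ev_plus / (ev_plus + ev_minus)
    w_minus = ev_minus / (ev_plus + ev_minus)
    return (w_plus * scene[ev_plus].astype(np.float64)
            + w_minus * scene[ev_minus].astype(np.float64))
```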
The virtual imaging system control component306is further configured to capture the EV output by the AE computation component308after each time step and to provide the captured EVs to the weight learning component300. The weight learning component300is configured to cause the virtual imaging system302to execute experiments using candidate values of α, β, γ and various starting EVs and to compute quality metrics for each combination of candidate values of α, β, γ. The goal is to identify values for α, β, γ that result in EVs within a goal range of EVs identified as producing well exposed images in the scenes in the scene database304. The goal range of EVs may be determined, for example, by having an expert view all the images in the scene database304and identify the images that have good exposure in the opinion of the expert. The EVs for the identified images can then be used to set the goal range of EVs. In some embodiments, multiple experts may be used and the goal range of EVs determined from the opinions of the multiple experts. The weight learning component300is configured to receive a goal range of EVs, a range of candidate weight values for α, β, γ to be tried, a set of starting EVs, and a number of time steps from a user as well as values for TMB, TPL, TPH, Tc, TBD, and the low and high tone thresholds for the AE computation component308. The weight learning component300causes the virtual imaging system302to execute an experiment for the number of time steps for each scene in the scene database304for each possible combination of α, β, γ from the range of candidate weight values and starting EV and receives from the virtual imaging system302a set of EVs generated by the AE computation component308for each experiment. A combination of values of α, β, γ selected from the range of candidate weight values may be referred to as a candidate set of weight values herein. The set of EVs includes one EV value for each time step. More specifically, the weight learning component300receives a set of EVs for each experiment described by a 5-tuple (α, β, γ, S, EV(0)) where α, β, γ are the candidate weight values used for the experiment, S is the scene identifier for the experiment, and EV(0) is the starting EV for the experiment. For example, the range of candidate weight values may be [1, 2, 3], the set of starting EVs may be [1, 4, 8], and the number of time steps may be 10. The possible combinations of α, β, γ, i.e., the candidate sets of weight values, are [1, 1, 1], [1, 1, 2], [1, 1, 3], [1, 2, 1], [1, 2, 2], [1, 2, 3], . . . [3, 3, 3]. An experiment is executed for each of these candidate sets of weight values with each starting EV for each scene. For example, experiments are executed with α, β, γ=[1, 1, 1] and a starting EV of 1 for each scene in the database. Then, experiments are executed with α, β, γ=[1, 1, 1] and a starting EV of 4 for each scene in the database. Then, experiments are executed with α, β, γ=[1, 1, 1] and a starting EV of 8 for each scene in the database. This process is repeated for each possible combination of α, β, γ in the range of candidate weight values. After all the experiments are executed, the weight learning component300will have a set of ten EVs for each experiment. The weight learning component300is configured to compute quality metrics based on the sets of EVs generated by each experiment. 
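The experiment grid and the convergence-based quality metrics summarized here (and detailed in the following paragraphs) can be sketched as follows. The callable run_virtual_system stands in for the virtual imaging system 302, the per-step duration passed to the metric helper is an assumption (the description expresses convergence time in terms of the sensor frame rate), and all names are illustrative rather than the components of FIG. 3.

```python
from itertools import product

def run_experiments(run_virtual_system, scenes, weight_range, start_evs, n_steps):
    """Execute one experiment per (alpha, beta, gamma, scene, EV(0)) 5-tuple.

    run_virtual_system(alpha, beta, gamma, scene, ev0, n_steps) is assumed
    to return the list of EVs produced over n_steps time steps.
    """
    results = {}
    for alpha, beta, gamma in product(weight_range, repeat=3):
        for ev0 in start_evs:
            for scene in scenes:
                key = (alpha, beta, gamma, scene, ev0)
                results[key] = run_virtual_system(alpha, beta, gamma, scene,
                                                  ev0, n_steps)
    return results

def converged_at(evs, goal_lo, goal_hi):
    """1-based time step at which the EVs enter the goal range and stay
    inside it for all remaining steps, or None if they never do."""
    for t in range(len(evs)):
        if all(goal_lo <= ev <= goal_hi for ev in evs[t:]):
            return t + 1
    return None

def metrics_for_weights(results, weights, goal_lo, goal_hi, time_per_step):
    """Percentage of converged experiments and average convergence time for
    one candidate set of weight values (alpha, beta, gamma)."""
    runs = [evs for (a, b, g, _, _), evs in results.items() if (a, b, g) == weights]
    steps = [converged_at(evs, goal_lo, goal_hi) for evs in runs]
    converged = [s for s in steps if s is not None]
    pct = 100.0 * len(converged) / len(runs) if runs else 0.0
    avg_time = (sum(converged) / len(converged)) * time_per_step if converged else None
    return pct, avg_time
```

For the worked example below (goal range [2, 3], EVs 8, 6, 4, 2, 1, 2, 2, 2, 2, 2), converged_at returns 6, matching the described convergence at time step 6.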
For example, in some embodiments, the quality metrics are computed for each possible candidate set of weight values for α, β, γ, and are the percentage of experiments using a candidate set of weight values that converged to the goal EV range, the probability of convergence using the candidate set of weight values, and the average convergence time using the candidate set of weight values. An experiment for a candidate set of weight values converges if the EVs generated for a scene using the candidate set of weight values achieve an EV value within the goal range of EV values and stay within the goal range for consecutive time steps. For example, assume the goal range of EV values is [2, 3], the starting EV value is 8, the number of time steps is ten, and the resulting EVs for an experiment for a scene are 8, 6, 4, 2, 1, 2, 2, 2, 2, 2. The experiment converged to an EV within the goal range at time step 6 and remained converged for the final four time steps. The convergence time for a combination of α, β, γ is the number of time steps the AE computation component308is called until the EV is within the goal range and convergence occurs in the remaining time steps of an experiment multiplied by the frame rate of the imaging sensor used to capture the images in the scene data base304. Continuing the previous example, convergence began at time step 6, so the convergence time step is 6 and the convergence time is 6 times the frame rate. FIG.4is a flow diagram of a method for learning values for the AE weights α, β, and γ in accordance with some embodiments. For example purposes, the method is explained in reference to the example system ofFIG.3. The method assumes that one or more experts have viewed the images in the scene database304and a goal range of EVs has been determined. Initially, the weight learning component300receives400parameters for learning the weight values including a range of candidate weight values for α, β, γ to be tried, a set of starting EVs, a number of time steps n for each experiment as well as values for TMB, TPL, TPH, Tc, TBD, and the low and high tone thresholds and initializes the virtual imaging system302with the values for TMB, TPL, TPH, Tc, TBD, and the low and high tone thresholds. The weight learning component300sets401values of α, β, γ for the virtual imaging system302selected from the range of candidate weight values and sets402a value for the starting EV, EV(0), selected from the set of starting EVs. The weight learning component300then causes an experiment to be executed on the virtual imaging system302for a scene in the scene database304for the specified number of time steps and receives406the resulting EVs from the virtual imaging system302. If there is another scene408in the scene database for which an experiment has not been executed for the current values of α, β, γ and EV(0), the weight learning component300causes an experiment to be executed on the virtual imaging system302for another scene in the scene database304using the current values of α, β, γ and EV(0). 
Once an experiment has been performed for all scenes408in the scene database304with the current values of α, β, γ and EV(0), if there is another starting EV410in the set of starting EVs that has not been used in experiments with the current values of α, β, γ, then the weight learning component300sets402EV(0) to the next starting EV and causes an experiment to be executed on the virtual imaging system302for each scene408in the scene database304using the current values of α, β, γ and EV(0) and receives406the resulting EVs. Once an experiment has been executed for all scenes408in the scene database304with the current values of α, β, γ and all starting EVs410in the set of starting EVs, if there is another candidate set412of values of α, β, γ in the range of candidate weight values for which experiments have not been executed, then the weight learning component300sets401the values of α, β, γ to another set of values in the range of weight values and causes an experiment to be executed404on the virtual imaging system302for each scene408for each starting EV410in the set of starting EVs with the current values of α, β, γ and receives406the resulting EVs. Once an experiment has been executed for all scenes408in the scene database304for all combinations of values of α, β, γ in the range of candidate weight values and values of EV(0) in the set of starting EVs, the weight learning component300uses the EVs from each experiment to compute quality metrics for each combination of values of α, β, γ, i.e., each candidate set of weight values. Computation of example quality metrics is previously described herein. The user may then use these quality metrics to select a combination of values of α, β, γ to be used for automatic exposure control in an imaging sensor system. FIG.5is a high level block diagram of an example multiprocessor system-on-a-chip (SoC)500that may be configured to perform embodiments of automatic exposure (AE) control as described herein. In particular, the example SoC500is an embodiment of the TDA4VM SoC available from Texas Instruments, Inc. A high level description of the components of the SoC500is provided herein. More detailed descriptions of example components may be found in “TDA4VM Jacinto™ Automotive Processors for ADAS and Autonomous Vehicles Silicon Revisions 1.0 and 1.1,” Texas Instruments, SPRSP36J, February, 2019, revised August, 2021, pp. 1-323, which is incorporated by reference herein. The SoC500includes numerous subsystems across different domains such as one dual-core 64-bit Arm® Cortex®-A72 microprocessor subsystem504, a microcontroller unit (MCU) island506, based on two dual-core Arm® Cortex®-R5F MCUs, four additional dual-core Arm® Cortex®-R5F MCUs512in the main domain, two C66x floating point digital signal processors (DSPs)508, one C71x floating point, vector DSP510, that includes a deep-learning matrix multiply accelerator (MMA), and 3D graphics processing unit (GPU)512. The SoC500further includes a memory subsystem514including up to 8 MB of on-chip static random access memory (SRAM), an internal DMA engine, a general purpose memory controller (GPMC), and an external memory interface (EMIF) module (EMIF). In addition, the SoC500includes a capture subsystem516with two camera streaming interfaces, a vision processing accelerator (VPAC)502including one or more image signal processors (ISPs), a depth and motion processing accelerator (DMPAC)518, and a video acceleration module520. 
The SoC500also includes a display subsystem522, an ethernet subsystem524, a navigator subsystem526, various security accelerators528, support for system services530, and a variety of other interfaces532. Software instructions implementing AE control software as described herein may be stored in the memory subsystem514(e.g., a computer readable medium) and may execute on one or more programmable processors of the SOC500, e.g., the DSP510. Further, the one or more ISPs in the VPAC502may be, for example, embodiments of the ISP101ofFIG.1. FIG.6is a simplified block diagram of a computer system600that may be used to execute a system for learning values for the AE weights α, β, and γ as previously described herein. The computer system600includes a processing unit630equipped with one or more input devices604(e.g., a mouse, a keyboard, or the like), and one or more output devices, such as a display608, or the like. In some embodiments, the display608may be touch screen, thus allowing the display608to also function as an input device. The display may be any suitable visual display unit such as, for example, a computer monitor, an LED, LCD, or plasma display, a television, a high definition television, or a combination thereof. The processing unit630includes a central processing unit (CPU)618, memory614, a storage device616, a video adapter612, an I/O interface610, a video decoder622, and a network interface624connected to a bus. The bus may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, video bus, or the like. The CPU618may be any suitable type and suitable combination of electronic data processors. For example, the CPU618may include one or more processors from Intel Corp. or Advanced Micro Devices, Inc., one or more Reduced Instruction Set Computers (RISC), one or more Application-Specific Integrated Circuits (ASIC), one or more digital signal processors (DSP), or the like. The memory614may be any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), flash memory, a combination thereof, or the like. Further, the memory614may include ROM for use at boot-up, and DRAM for data storage for use while executing programs. The storage device616(e.g., a computer readable medium) may include any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus. The storage device616may be, for example, one or more of a hard disk drive, a magnetic disk drive, an optical disk drive, or the like. Software instructions implementing a system for learning values for the AE weights α, β, and γ as described herein may be stored on the storage device616. The scene database may also be stored on the storage device616or may be accessed via the network interface624. The software instructions may be initially stored in a computer-readable medium such as a compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device and loaded and executed by the CPU618. In some cases, the software instructions may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium. 
In some cases, the software instructions may be distributed to the computer system600via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another computer system (e.g., a server), etc. The video adapter612and the I/O interface610provide interfaces to couple external input and output devices to the processing unit630. As illustrated inFIG.6, examples of input and output devices include the display608coupled to the video adapter612and the mouse/keyboard604coupled to the I/O interface610. The network interface624allows the processing unit630to communicate with remote units via a network. The network interface624may provide an interface for a wired link, such as an Ethernet cable or the like, and/or a wireless link via, for example, a local area network (LAN), a wide area network (WAN) such as the Internet, a cellular network, any other similar type of network and/or any combination thereof. Other Embodiments While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope disclosed herein. For example, embodiments are described herein in reference to a particular technique for interpolating images to generate an image with a desired EV. In other embodiments, other techniques for interpolation may be used. In another example, embodiments are described herein in which a particular cost function is used for AE control in an imaging sensor system. In other embodiments, other cost functions representative of achieving well exposed images as viewed by one or more experts may be used. In another example, embodiments are described herein in which a particular approach to learning weight values for a cost function for AE control is used. In other embodiments, other learning techniques may be used. In another example, embodiments are described herein in reference to an example stability lock. In other embodiments, the stability lock may not be used or other approaches for implementing the stability lock may be used. In another example, embodiments are described herein in which AE statistics may be a downsampled image. In other embodiments, different or additional statistics may be available for AE, e.g., an image histogram. It is therefore contemplated that the appended claims will cover any such modifications of the embodiments as fall within the true scope of the disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION According to embodiments, a new auto-setting method is provided. It comprises several phases among which a learning phase and a calibration phase for obtaining information and an operation phase for dynamically auto-setting a camera in any situation, when environmental conditions change. A new calibration phase may be triggered when environmental conditions change significantly and when items of information obtained during previous calibration phases are no longer efficient. It has been observed that since most network cameras monitor distant targets, the aperture value is generally set so that focus is achieved for any objects positioned more than about one meter from the cameras. As a result, the trade-off to be attained is generally mainly directed to gain and shutter speed that is to say to noise and motion blur. However, the inventors have observed that there exist circumstances in which optimizing the aperture value has a significant impact on the overall system efficiency. Therefore, depending on the use of the network cameras, the trade-off to be attained may be directed to gain and shutter speed or to gain, shutter speed, and aperture. FIG.1schematically illustrates an example of a video surveillance system wherein embodiments of the invention may be implemented. Video surveillance system100includes a plurality of network cameras denoted110a,110b, and110c, for example network cameras of the Internet Protocol (IP) type, generically referred to as IP cameras110. Network cameras110, also referred to as source devices, are connected to a central site140via a backbone network130. In a large video surveillance system, backbone network130is typically a wide area network (WAN) such as the Internet. According to the illustrated example, central site140comprises a video manager system (VMS)150used to manage the video surveillance system, an auto-setting server160used to perform an automatic setting of cameras110, and a set of recording servers170configured to store the received video streams, a set of video content analytics (VCA) servers180configured to analyse the received video streams, and a set of displays185configured to display received video streams. All the modules are interconnected via a dedicated infrastructure network145that is typically a local area network (LAN), for example a local area network based on Gigabit Ethernet. Video manager system150may be a device containing a software module that makes it possible to configure, control, and manage the video surveillance system, for example via an administration interface. Such tasks are typically carried out by an administrator (e.g. administrator190) who is in charge of configuring the overall video surveillance system. In particular, administrator190may use video manager system150to select a source encoder configuration for each source device of the video surveillance system. In the state of the art, it is the only means to configure the source video encoders. The set of displays185may be used by operators (e.g. operators191) to watch the video streams corresponding to the scenes shot by the cameras of the video surveillance system. The auto-setting server160contains a module for setting automatically or almost automatically parameters of cameras110. It is described in more detail by reference toFIG.2. 
Administrator 190 may use the administration interface of the video manager system 150 to set input parameters of the auto-setting algorithm described with reference to FIGS. 3 to 7, carried out in the auto-setting server 160.

FIG. 2 is a schematic block diagram of a computing device for implementing embodiments of the invention. It may be embedded in the auto-setting server 160 described with reference to FIG. 1. The computing device 200 comprises a communication bus connected to:
a central processing unit 210, such as a microprocessor, denoted CPU;
an I/O module 220 for receiving data from and sending data to external devices. In particular, it may be used to retrieve images from source devices;
a read only memory 230, denoted ROM, for storing computer programs for implementing embodiments;
a hard disk 240, denoted HD;
a random access memory 250, denoted RAM, for storing the executable code of the method of embodiments of the invention, in particular an auto-setting algorithm, as well as registers adapted to record variables and parameters;
a user interface 260, denoted UI, used to configure input parameters of embodiments of the invention. As mentioned above, an administration user interface may be used by an administrator of the video surveillance system.
The executable code may be stored either in the random access memory 250, in the hard disk 240, or in a removable digital medium (not represented) such as a disk or a memory card. The central processing unit 210 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 210 may execute instructions from the main RAM memory 250 relating to a software application after those instructions have been loaded, for example, from the program ROM 230 or the hard disk 240.

FIG. 3 is a block diagram illustrating an example of an auto-setting method making it possible to automatically set parameters of a source device, typically a camera, according to embodiments of the invention. As illustrated, a first phase is a learning phase (reference 300). According to embodiments, it is performed before the installation of the considered camera, for example during the development of a software application for processing images. Preferably, the learning phase is not specific to a type of camera (i.e. it is advantageously generic). During this phase, a relation or a function is established between a quality value (relating to the result of the image processing) and all or most of the relevant variables that are needed to estimate such a processing result quality. These relevant variables may include image quality-dependent parameters and/or scene-dependent parameters. As described hereafter, this relation or function, denoted quality function, may depend on a type of the missions that can be handled by any camera. An objective of the learning phase is to obtain a quality function which is able to state prima facie the quality of an image in the context of a particular mission, as a function of parameters which have an impact on the mission.
According to particular embodiments, the output of the learning phase is a quality function that may be expressed as follows: fquality(missions)(image quality, scene), where missions is a type of mission; image quality is a set of parameters that may comprise a blur value, a noise value, and a contrast value; and scene is a set of parameters that may comprise a target size, a target velocity, and/or a target distance. Therefore, in particular embodiments, the output of the learning phase may be expressed as follows: fquality(missions)(noise, blur, contrast, target_size, target_velocity, target_distance). The quality function fquality may be a mathematical relation or an n-dimensional array associating a quality value with a set of n parameter values, e.g. values of noise, blur, contrast, target size, target velocity, and target distance. As denoted with reference 305, the type of mission to be handled by the camera may be chosen by a user (or an installer) during installation of the camera or later on. Likewise, a user may select a region of interest (ROI) corresponding to a portion of an image to be processed. As illustrated with the use of dotted lines, this step is optional. As illustrated, after a user has selected a type of mission, the quality function obtained from the learning phase may be written as follows: fquality(image quality, scene), or, according to the given example: fquality(noise, blur, contrast, target_size, target_velocity, target_distance). Alternatively, the auto-setting algorithm may be configured for a particular type of mission and the whole captured scene may be considered. It is observed here that there exist two sources of blur, the motion blur and the focus blur. A second phase (reference 310) is directed to calibration. This is typically carried out during installation of the camera and aims at measuring scene values from the actual scene according to the settings of the camera, as well as at obtaining parameter values depending on the camera settings. This may take from a few minutes to a few tens of minutes. As explained hereafter, in particular with reference to FIGS. 4 and 7, it makes it possible to determine quality processing values according to the actual scene and the current camera settings. According to embodiments, the calibration phase is run only once. The outputs of this phase may comprise: scene values (for example target size, target velocity, and target distance); image quality values (for example noise, blur, and contrast) that may be determined as a function of the camera settings (for example gain, shutter speed, and aperture); and image metrics (for example luminance) that may be determined as a function of the camera settings (for example gain, shutter speed, and aperture). They can be expressed as follows:
scene-related parameters:
  target_size
  target_speed
  target_distance
image quality:
  noise = fnoise_calibration(gain, shutter speed, aperture)
  blur = fblur_calibration(gain, shutter speed, aperture)
  contrast = fcontrast_calibration(gain, shutter speed, aperture)
image metrics:
  luminance = fluminance_calibration(gain, shutter speed, aperture)
The functions (fnoise_calibration, fblur_calibration, fcontrast_calibration, fluminance_calibration) may be mathematical relations or 3-dimensional arrays associating values with sets of 3 parameter values (gain, shutter speed, and aperture).
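As a concrete illustration of storing the calibration outputs as 3-dimensional arrays indexed by the selected gain, shutter speed, and aperture values, the following Python sketch uses plain numpy arrays. The class and method names are assumptions made for illustration; they are not components of the described embodiments, and the measurement of the stored values is assumed to happen elsewhere.

```python
import numpy as np

class CalibrationTables:
    """Calibration outputs stored as 3-D arrays indexed by (gain, shutter, aperture).

    gains, shutters, apertures are the discrete setting values selected for
    calibration; each table holds one measured value per (G, S, A) triplet.
    """
    def __init__(self, gains, shutters, apertures):
        self.gains = list(gains)
        self.shutters = list(shutters)
        self.apertures = list(apertures)
        shape = (len(self.gains), len(self.shutters), len(self.apertures))
        self.noise = np.zeros(shape)       # noise_cal(G, S, A)
        self.blur = np.zeros(shape)        # blur_cal(G, S, A)
        self.contrast = np.zeros(shape)    # contrast_cal(G, S, A)
        self.luminance = np.zeros(shape)   # I_cal(G, S, A)

    def _index(self, g, s, a):
        return (self.gains.index(g), self.shutters.index(s),
                self.apertures.index(a))

    def record(self, g, s, a, noise, blur, contrast, luminance):
        """Store the values measured on images captured with settings (g, s, a)."""
        idx = self._index(g, s, a)
        self.noise[idx] = noise
        self.blur[idx] = blur
        self.contrast[idx] = contrast
        self.luminance[idx] = luminance

    def lookup_luminance(self, g, s, a):
        """I_cal(G, S, A): luminance measured during calibration for these settings."""
        return self.luminance[self._index(g, s, a)]
```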
Alternatively, the functions (fnoise_calibration, fblur_calibration, fcontrast_calibration, fluminance_calibration) may be mathematical relations or 3-dimensional arrays associating values with sets of 2 parameter values (gain and shutter speed). A third phase (reference315) is directed to operation. It is performed during the operational use of the camera to improve its settings. It is preferably executed in a very short period of time, for example less than one second, and without perturbation for the camera, except for changing camera settings (i.e. it is a non-invasive phase). It is used to select suitable camera settings, preferably the most suitable camera settings. To that end, data obtained during the calibration phase are used to calculate good settings, preferably the best settings, according to the quality function determined during the learning phase, in view of the current environmental conditions. Indeed, the environmental conditions, typically lighting, may be different from the environmental conditions corresponding to the calibration. Accordingly, the calibration data must be adjusted to fit the current environmental conditions. Next, the adjusted data are used to calculate the best settings. This may be an iterative process since the adjustments of the calibration data are more accurate when camera settings get closer to the optimal settings. Such an operation phase is preferably carried out each time a new change of camera settings is needed. The output of the operation phase is a camera setting, for example a set of gain, shutter speed, and aperture values. During the operation phase a test may be performed to determine whether or not the items of information determined during the calibration phase make it possible to obtain accurate results. If the items of information determined during the calibration phase do not make it possible to obtain accurate results, some steps of the calibration phase may be carried out again, as discussed with reference toFIG.6c. Learning Phase Video surveillance cameras can be used in quite different contexts that is to say to conduct different “missions” or “tasks”. For example, some cameras may be used to provide an overall view, making it possible to analyse wide areas, for example for crowd management or detection of intruders, while others may be used to provide detailed views, making it possible, for example to recognize faces or license plates and others may be used to control the proper functioning of machinery, for example in factories. Depending on the type of mission, the constraints associated with the camera may be quite different. In particular, the impact of the noise, blur, and/or contrast is not the same depending on the mission. For example, the blur has generally a high impact on missions for which details are of importance, e.g. for face or license plate readability. In other cases, the noise may have more impact, for example when scenes are monitored continuously by humans (due to the higher eye strain experienced on noisy videos). As set forth above, an objective of the learning phase is to get a quality function which is able to state prima facie the quality of an image in the context of a particular type of missions, as a function of parameters which have an impact on the missions. According to embodiments, such parameters may be the followings:the parameters which represent a quality of images provided by the camera, which depend on the camera settings. 
Such parameters may comprise the noise, the blur, and/or the contrast; and the parameters that are directed to the scene and the mission to be performed, referred to as scene-dependent parameters hereafter, their values being referred to as scene values. Their number and their nature depend on the type of missions. These parameters may comprise a size of targets, and/or a velocity of the targets, and/or a distance of the targets from the camera. The values of these parameters may be predetermined, may be determined by a user, or may be estimated, for example by image analysis. They do not have a direct impact on the image quality but play a role in how difficult it is to fulfil a mission. For example, the noise has more impact on smaller targets than on larger targets, so the perceived quality of noisy images will be worse when targets are smaller. Regarding the image quality, it has been observed that the noise, the blur, and the contrast are generally the most relevant parameters. Nevertheless, camera settings have an impact on other parameters that may be considered as representative of the image quality, for example on the depth-of-field and/or the white balance. However, it is observed that for particular applications, due to hyperfocal settings in video surveillance systems, the depth of field may not be very relevant. It is also to be noted that the white balance is generally efficiently handled by the camera auto-mode. Accordingly, and for the sake of clarity, the following description is based on the noise, the blur, and the contrast as image quality parameters. However, it must be understood that other parameters may be used. Regarding the scene-dependent parameters, it has been observed that the target size, the target velocity, and the target distance are generally the most relevant parameters. Therefore, for the sake of clarity, although other parameters may be used, the following description is based on these three parameters. Accordingly, the quality function determined in the learning phase may generally be expressed as follows: fquality(missions)(noise, blur, contrast, target_size, target_velocity, target_distance), or as a set of functions (one function per type of mission denoted mission<i>): fquality(noise, blur, contrast, target_size, target_velocity, target_distance) for mission<i>, or as a function corresponding to a predetermined type of mission for which a video surveillance system is to be used: fquality(noise, blur, contrast, target_size, target_velocity, target_distance). Such a function makes it possible, during the operation phase, to select efficient camera settings for the mission to be carried out, in view of the noise, blur, contrast, target velocity, and target size corresponding to the current camera settings (according to the results obtained during the calibration phase). For the sake of illustration, this function may be scaled between 0 (very low quality) and 1 (very high quality). According to embodiments, the quality function is set by an expert who determines how to penalize the noise, blur, and contrast for a considered type of mission. For the sake of illustration, the quality function may be the following:

fquality = 3 × (Vnoise × Vblur × Vcontrast) / (Vnoise + Vblur + Vcontrast)

where Vnoise, Vblur, and Vcontrast represent values for the noise, blur, and contrast parameters, respectively. As described above, the blur comprises a motion blur component and a focus blur component.
Therefore, the blur may be expressed as follows: blur = blurA + blurS, where blurA represents the value of the focus blur and blurS represents the value of the motion blur. The quality function fquality makes it possible to determine a quality value as a function of general image characteristics such as the noise, blur, and contrast, and of scene characteristics such as target size, for a particular mission. However, this function cannot be used directly since it is not possible to determine a priori the noise, blur, and contrast, since these parameters cannot be set on a camera.

Calibration Phase

The objective of the calibration phase is to measure in-situ, on the actual camera and the actual scene, all the data that are required to calculate a quality value from an fquality function as determined during the learning phase. Accordingly, the calibration phase comprises four objectives (or only three if the focus is not to be set):
determining or measuring the scene-dependent parameters, for example a target size, a target velocity, and a target distance;
setting a focus;
estimating functions to establish a link between each of the image quality parameters (for example the noise, blur, and contrast) and the camera settings (for example the gain (G), the shutter speed (S), and the aperture (A)) as follows:
  noise = fnoise_calibration(G, S, A), in short noisecal(G, S, A)
  blur = fblur_calibration(G, S, A), in short blurcal(G, S, A)
  contrast = fcontrast_calibration(G, S, A), in short contrastcal(G, S, A)
estimating a function to establish a link between an image metric (for example the luminance) and the camera settings (for example the gain (G), the shutter speed (S), and the aperture (A)). According to embodiments, luminance is used during the operation phase to infer new calibration functions when scene lighting is modified. It may be expressed as follows: luminance = fluminance_calibration(G, S, A), in short Ical(G, S, A).

FIG. 4a is a block diagram illustrating a first example of steps carried out during a calibration phase of an auto-setting method as illustrated in FIG. 3. As illustrated, a first step (step 400) is directed to selecting camera settings. According to embodiments, this step comprises exploring the manifold of all camera setting values, for example all triplets of gain, shutter speed, and aperture values, and selecting a set of representative triplets in order to reduce the number of camera settings to analyse. According to other embodiments, this step comprises exploring the manifold of all pairs of gain and shutter speed values, and selecting a set of representative pairs in order to reduce the number of camera settings to analyse. For the sake of illustration, the shutter speed values to be used may be selected as follows: S0 = min(S) and Si+1 = Si × 2, with index i varying from 0 to n so that Sn ≤ max(S) and Sn+1 > max(S), and where min(S) is the smallest shutter speed and max(S) is the highest shutter speed. If the shutter speeds the camera may accept are discrete values, the shutter speeds are selected so that their values are the closest to the ones selected according to the previous relation (corresponding to a logarithmic scale). Similarly, the gain values to be used may be selected according to a uniform linear scale as follows: G0 = min(G) and Gi+1 is determined such that

I(Gi+1) / I(Gi) ≈ I(Si+1) / I(Si)

with index i varying from 0 to n such that Gn ≤ max(G) and Gn+1 > max(G), and where I is the luminance of the image, min(G) is the smallest gain, and max(G) is the highest gain.
Likewise, the aperture values to be used may be selected according to a uniform linear scale, like the gain values, as follows: A0 = min(A) and Ai+1 is determined such that

I(Ai+1) / I(Ai) ≈ I(Si+1) / I(Si)

As a consequence, the gain, shutter speed, and aperture values have an equivalent scale in terms of impact on the luminance. In other words, if the luminance of the image is increased by a value Δ when the shutter speed value or the aperture value goes from one value to the next, the gain value is selected such that the luminance is also increased by the value Δ when moving from the current gain value to the next one. After having selected a set of gain, shutter speed, and aperture values at step 400, images are obtained from the camera set to these values (step 405). For the sake of illustration, three to ten images may be obtained, preferably during a short period of time, for each triplet (G, S, A) of gain, shutter speed, and aperture values. In order to optimize the time for obtaining these images and the stability of the camera during acquisition of the images, the change of camera settings is preferably minimized, i.e. the settings of the camera are preferably changed from one gain, shutter speed, and/or aperture value to the next ones (since it takes a longer time for a camera to proceed to large changes in gain, shutter speed, and aperture). Therefore, according to embodiments, images are obtained as follows for the selected gain, shutter speed, and aperture values:
the aperture is set to its minimum value (min(A));
the gain is set to its minimum value (min(G)) and all the selected values of the shutter speed are set one after the other according to their ascending order (from min(S) to max(S)), a number of three to ten images being obtained for each triplet of values (G, S, A);
the value of the gain is set to the next selected one and all the selected values of the shutter speed are set one after the other according to their descending order (from max(S) to min(S)), a number of three to ten images being obtained for each triplet of values (G, S, A);
the previous step is repeated with the next values of the gain until images have been obtained for all selected values of the gain and shutter speed; and
the three previous steps are repeated with the next values of the aperture until images have been obtained for all selected values of the gain, shutter speed, and aperture.
Next, after having obtained images for all the selected values of the gain, shutter speed, and aperture, an image metric is measured for all the obtained images (step 410), here the luminance, and an image quality analysis is performed for each of these images (step 415). The measurement of the luminance aims at determining a relation between the luminance of an image and the camera settings used when obtaining this image, for example gain, shutter speed, and aperture values. For each obtained image, the luminance is computed and associated with the corresponding gain, shutter speed, and aperture values so as to determine the corresponding function or to build a 3-dimensional array wherein a luminance is associated with a triplet of gain, shutter speed, and aperture values (denoted Ical(G, S, A)). According to embodiments, the luminance corresponds to the mean of the pixel values (i.e. intensity values) over all the pixels of the image. According to embodiments, the entropy of the images is also computed during measurement of the luminance for making it possible to determine a contrast value during the image quality analysis.
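A minimal sketch of the luminance measurement of step 410 follows, assuming the images captured for each selected (G, S, A) triplet are available as numpy arrays and that the capture set is represented as a dict keyed by the triplet; the helper names and this representation are illustrative assumptions only.

```python
import numpy as np

def measure_luminance(images):
    """Luminance for one (G, S, A) triplet: the mean pixel (intensity) value,
    averaged over the three to ten images captured with those settings."""
    return float(np.mean([img.mean() for img in images]))

def build_ical(captures):
    """Build I_cal(G, S, A) from captures, a dict mapping each selected
    (gain, shutter speed, aperture) triplet to its list of captured images."""
    return {gsa: measure_luminance(imgs) for gsa, imgs in captures.items()}
```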
Like the luminance, the entropy is computed for each of the obtained images and associated with the corresponding gain, shutter speed, and aperture values so as to determine the corresponding function or to build a 3-dimensional array wherein an entropy is associated with a triplet of gain, shutter speed, and aperture values (denoted Ecal(G, S, A)). According to embodiments, measurement of the entropy comprises the steps of:
determining the histogram of the image pixel values, for each channel (i.e. for each component), that is to say counting the number of pixels ci for each possible pixel value (for example for i varying from 0 to 255 if each component is coded with 8 bits); and
computing the Shannon entropy according to the following relation:

E = −Σi=0..255 (ci/n) × log2(ci/n)

where n is the total number of pixels in all channels.
As described hereafter, the entropy may be determined as a function of the luminance (and not of the camera settings, e.g. gain, shutter speed, and aperture). Such a relationship between the entropy and the luminance can be considered as valid for any environmental conditions (and not only the environmental conditions associated with the calibration). Therefore, after having computed an entropy and a luminance for each of the obtained images, the entropy values are associated with the corresponding luminance values so as to determine the corresponding function or to build a 1-dimensional array wherein entropy is associated with luminance (denoted E(I)). Turning back to FIG. 4a and as described above, the image quality analysis (step 415) aims at determining image quality parameter values, for example values of noise, blur, and contrast, from the images obtained at step 405, in order to establish a relationship between each of these parameters and the camera settings used for obtaining the corresponding images. During this step, a relationship between the contrast and the luminance is also established. Noise values are measured for the obtained images and the measured values are associated with the corresponding gain, shutter speed, and aperture values so as to determine the corresponding function or to build a 3-dimensional array wherein a noise value is associated with a triplet of gain, shutter speed, and aperture values (noisecal(G, S, A)). According to an embodiment, the noise of an image is determined as a function of a set of several images (obtained in a short period of time) corresponding to the same camera settings and as a result of the following steps:
removing the motion pixels, i.e. the pixels corresponding to objects in motion, or in other words, removing the foreground;
computing a temporal variance for each pixel (i.e., the variance of the fluctuation of each pixel value over time, for each channel); and
computing a global noise value for the set of images as the mean value of the computed variances over all pixels and all channels.
The obtained values make it possible to establish a relationship between the noise and the camera settings. Likewise, blur values are computed for the obtained images so as to establish a relationship between the blur and the camera settings. Each blur value corresponds to the addition of a motion blur value and a focus blur value.
According to embodiments, a motion blur value is determined as a function of a target velocity and of a shutter speed according to the following relation: blurS = vtarget × shutter_speed, where vtarget is the target velocity, the motion blur value being given in pixels, the target velocity being given in pixels/second, and the shutter speed being given in seconds. Therefore, in view of the environmental conditions associated with the calibration phase (denoted "calibration environmental conditions"), the motion blur may be determined as follows: blurS,cal(S) = vtarget × S. The target velocity may be predetermined, set by a user, or measured from a sequence of images as described hereafter. The focus blur may be determined according to different solutions. According to particular embodiments, the solution to be used is determined as a function of whether or not targets of interest are moving. This can be set by a user or determined by image analysis. If the targets of interest are moving, they are detected in the obtained images, typically by using a standard image processing algorithm, and their size is determined by using knowledge of the targets such as their real size and camera optical settings. Indeed, it is observed that targets generally belong to specific classes (for example humans, cars, bikes, trucks, etc.) and thus, they can be recognized and analyzed as a function of statistical information, for example to determine their size. This makes it possible to compute the distance of the targets to the camera and to build a distance map within a considered region of interest. A distance map typically represents the distribution of target distances for locations of the considered region of interest, or a distance value for locations of the considered region of interest, that can be expressed as follows: distance = fdistance(x, y), with x and y being the pixel coordinates, i.e. the row and column indices of each pixel. FIG. 4b illustrates an example of steps carried out for building a distance map of moving targets. On the contrary, if the targets (or at least a part of the targets) are stationary, the whole range of the focus may be explored while recording images for the different focus values that are used. The obtained images are analyzed and, for locations of the considered region of interest, the focus leading to the sharpest images is determined so as to construct a focus map for the considered region of interest. A focus map typically represents the distribution of focus to be used for locations of the considered region of interest, or a focus value to be used for locations of the considered region of interest, that can be expressed as follows: focus = ffocus(x, y). FIG. 4c illustrates an example of steps carried out for building a focus map of stationary targets. Next, the distance map or the focus map, depending on whether or not targets are moving, is used to compute an optimal focus and a focus blur as a function of aperture values, based on geometric optics calculation. It is observed that the function establishing a relation between target distances and locations within a considered region of interest is very close to the function establishing a relation between focus values and locations within this considered region of interest, since an optimal focus value for a target only depends on the distance between this target and the camera.
As a consequence, determining the optimal focus (Foptimum) for a considered region of interest may consist in analyzing this region of interest while varying the focus, or in computing an optimal focus in view of the target distances within this region of interest. From this optimal focus, a focus blur may be determined by analyzing the region of interest or may be estimated as a function of the target distances within this region of interest. According to embodiments, the optimal focus and the focus blur may be determined as a function of the distance map or focus map, denoted fmap(x, y), as follows.
For moving targets:

Foptimum = argminF ⟨ F × |1/fmap(x, y) − 1/dF| ⟩(x, y)∈ROI
and
blurA = A × Foptimum × ⟨ |1/fmap(x, y) − 1/dF| ⟩(x, y)∈ROI

where ⟨·⟩x corresponds to the operator "mean over the variable x", argminx corresponds to the operator "argmin over the variable x", and dF is the focal distance, that is to say the real distance of an object from the camera for which the representation in the image is sharp for the current focus value. If it is not available, it can be retrieved from the image distance, denoted ν, that corresponds to the distance between the lens and the sensor, according to the following relation: 1/dF + 1/ν = 1/F.
For stationary targets:

Foptimum = argminF ⟨ |1 − fmap(x, y)/F| ⟩(x, y)∈motionlessROI
and
blurA = A × ⟨ |1 − fmap(x, y)/Foptimum| ⟩(x, y)∈motionlessROI

where motionlessROI corresponds to the considered region of interest wherein areas where motions are detected have been removed, as described by reference to FIG. 4c. It is to be noted that the units of the results are given in the USI (m) for the focus blur and for the optimal focus. Regarding the focus blur, it is preferably expressed in pixels. This can be done according to the following formula: blurA,pixels = blurA,USI × resolution / sensor_size, where resolution and sensor_size represent the resolution in pixels and the sensor size in USI, respectively. The blur, comprising the motion blur and the focus blur (blur = blurS + blurA), is computed for each of the obtained images according to the previous relations and the obtained values are associated with the corresponding shutter speed and aperture values (the gain does not affect the blur) so as to determine the corresponding function or to build a 2-dimensional array wherein a blur value is associated with shutter speed and aperture values (blurcal(S, A)). Similarly, the contrast is computed for each of the obtained images. It may be obtained from the entropy according to the following relation: contrast = 2^entropy / 2^max_entropy, where, for example, max_entropy is equal to 8 when the processed images are RGB images and each component is encoded over 8 bits. Accordingly, the contrast contrastcal(G, S, A) may be obtained from the entropy Ecal(G, S, A). In other words, contrast values may be expressed as a function of the gain, of the shutter speed, and of the aperture values from the entropy expressed as a function of the gain, of the shutter speed, and of the aperture values. Likewise, the contrast contrast(I) expressed as a function of the luminance may be obtained from the entropy E(I) that is also expressed as a function of the luminance.
This can be done as a result of the following steps:
measuring the entropy of each of the obtained images;
determining the relationships between the measured entropy values and the camera settings, for example the gain, the shutter speed, and the aperture, denoted Ecal(G, S, A);
obtaining the previously determined relationships between the luminance values and the camera settings, for example the gain, the shutter speed, and the aperture, denoted Ical(G, S, A);
discarding selected camera settings corresponding to gain values leading to noise values that exceed a predetermined noise threshold (the noise may have an impact on the entropy when the noise is too large and thus, by limiting noise to variance values below a predetermined threshold, for example 5 to 10, the impact is significantly reduced);
gathering the remaining entropy values and luminance values, that are associated with gain, shutter speed, and aperture values, to obtain a reduced data collection of entropy and luminance values sharing the same camera settings. This data collection makes it possible to establish the relationships between entropy and luminance values, for example by using simple regression functions such as a linear interpolation on the entropy and luminance values; and
determining the relationships between the contrast and the entropy as a function of the luminance, for example according to the following relation:

contrast(I) = 2^E(I) / 2^max_entropy

Turning back to FIG. 4a, it is illustrated how scene-dependent parameter values, for example target size and/or target velocity, may be obtained. To that end, short sequences of consecutive images, also called chunks, are obtained. For the sake of illustration, ten to twenty chunks representative of the natural diversity of the targets are obtained. According to particular embodiments, chunks are recorded by using the auto-mode (although the result is not perfect, the chunk analysis is robust to the blur and to the noise and thus, does not lead to significant errors). A motion detector of the camera can be used to detect motion and thus, to select chunks to be obtained. The recording duration depends on the time it takes to get enough targets to reach statistical significance (10 to 20 targets is generally enough). Depending on the case, it can take only a few minutes to several hours (if very few targets are spotted per hour). In order to avoid waiting, it is possible to use chunk fetching instead of chunk recording (i.e. if the camera had already been used prior to the calibration step, the corresponding videos may be retrieved and used). Alternatively, according to other embodiments, a user of the video surveillance system may be enabled to select the chunks to be used. The main advantage of this solution comes from the fact that such a user may know which chunks are representative of the targets that should be monitored by the system. Therefore, fewer chunks may be considered when the user manages to ensure that the relevant chunks have been chosen. It may even be possible for a user to select a single chunk. This makes the chunk determination and analysis process faster. To enable chunk selection, a dedicated user interface may be provided in the camera configuration user interface (e.g. as a specific tab in said camera configuration user interface). This enables a user to easily select chunks while configuring the camera.
In addition, once chunks have been selected, a user interface may also advantageously provide access to the selected chunks and enable the selection to be edited by adding or removing chunks. This enables a user to check which chunks have been used for a given camera, and possibly to decide to replace them. Chunks may be selected from among a set of existing recordings already recorded by a considered camera and displayed through a dedicated user interface; in this case, the user may be enabled to specify chunks as fragments of a recording, typically by indicating a start time and an end time. Multiple chunks may be specified from a single recording. Another solution may consist in enabling the user to record a chunk with a considered camera. By doing so, the user can easily create a chunk that contains the types of targets that should be monitored. In any case, it may be also advantageous to clearly indicate to the user the status of chunks, i.e. whether the chunks have to be specified, whether the chunks are being processed, or whether the chunks have been processed. As a matter of fact, this enables a potential user who may be involved in the process to understand the behavior of the system: as long as the chunks have not been obtained, auto-setting cannot be fully operational. Once they have been obtained (from user or automatically), indicating that they are being processed enables the user to understand that auto-setting is not yet fully operational but that it will soon be. Finally, when chunks have been obtained and processed, the user can understand that auto-setting is fully operational (provided other steps of the auto-setting process have also been successfully performed). After being obtained, the chunks are analyzed to detect targets (step425) to make it possible to estimate (step430) their size and preferably their velocity and distance (for moving targets). This estimating step may comprise performing a statistical analysis of the values of the parameters of interest (e.g. target size, target velocity). Next, the mean, median, or any other suitable value extracted from the distribution of parameter values is computed and used as the value of reference. The velocity of targets can be very accurately derived by tracking some points of interest of the target. By using this in combination with a background subtraction method (e.g. the known MOG or MOG2 method described, for example, in Zoran Zivkovic and Ferdinand van der Heijden, “Efficient adaptive density estimation per image pixel for the task of background subtraction”. Pattern recognition letters, 27(7):773-780, 2006), it is possible to avoid the detection of the fixed points of interest from the background and thus, to determine velocity with high accuracy even with blurry targets. The target velocity is simply the main velocity of points of interest. FIG.5illustrates an example of the distribution of the target velocity (or, similarly, the distribution of the velocity of the points of interest). From such a representation, a target velocity value may be obtained. For the sake of illustration, it can be chosen so as to correspond to the mean velocity for given targets. For the sake of illustration, one can choose a value corresponding to the “median 80%”, i.e. a velocity value such that 80% of velocities are under this value and 20% of velocities are over this value. The target size can be obtained through methods as simple as background subtraction, or more sophisticated ones like target detection algorithms (e.g. 
face recognition, human detection, or license plate recognition), which are more directly related to the detection of the targets corresponding to the task. Deep learning methods are also very effective. Outliers can be removed by using consensus-derived methods, or by using combinations of background subtraction and target detection at the same time. However, since only statistical results are obtained, it does not matter if some errors exist with such algorithms, since the errors should be averaged out to zero. This tolerance to errors makes such methods robust. FIG.4bis a block diagram illustrating an example of steps carried out for building a distance map of moving targets from a region of interest (ROI), a focus value (F), and images. As illustrated, a first step is directed to target detection in a given region of interest in images (step450), for example in the images of the chunks obtained in step420. Detecting targets may be based on standard algorithms. For the sake of illustration, there exist deep learning-based computer vision methods that are really efficient for detecting well-known targets such as humans, pets, and vehicles, with a low error rate. Examples of such methods are known as “You Only Look Once” (YOLO, https://arxiv.org/pdf/1612.08242.pdf), “Single Shot MultiBox Detector” (SSD, https://arxiv.org/pdf/1512.02325.pdf), and “Faster RCNN” (https://arxiv.org/pdf/1506.01497.pdf). This makes it possible to localize the targets of interest depending on their types. As a result, for each analyzed image, a bounding box surrounding the identified target is obtained. The bounding box height and width correspond approximately to the target size denoted bounding_box_size. Next, the poses of the detected targets are estimated (step455). This can be done by using similar computer vision techniques based on the image of each detected target, which make it possible to determine the angle of the target relative to the camera and thus, to estimate its pose. Next, a target size is obtained for as many locations as possible of the considered region of interest, resulting in a target size map (step460). To that end, the real size, for example in pixels, is estimated for each detected target, for example according to the following formula: target_size = bounding_box_size / cos(α), where bounding_box_size is the apparent size of the detected target as obtained in step450, and α is the angle of the detected target relative to the camera as obtained in step455. The results for all the detected targets and all the analyzed images are concatenated. It is to be noted that for each detected target, a point of reference can be used, for example the centroid of the bounding box, and the target size can be associated with this point. As a result, a collection of sizes associated with locations in the considered region of interest is obtained. It can be represented as a list of target sizes {size0, size1, . . . , sizen} associated with a list of coordinates {(x0,y0), (x1,y1), . . . , (xn,yn)}, where sizei is the target size value corresponding to the location having index i. These results are then used to obtain a map of the target size (target size map). This can be done by using a regression method, such as linear or nonlinear regression (e.g. svm, gradient boosting, or even deep-learning techniques) applied to the size results associated with the coordinates. Next, the distance map is computed (step465).
This can be done by converting the obtained target size map, where sizes are expressed in pixels, into a distance map, for example according to the following formula, applied to each location of the target size map: distance = F × real_size / pixel_size, where F is the focus value used during image acquisition (corresponding to the settings of the camera), real_size is the real-world size of the target (that may be determined statistically by using a priori knowledge about the targets, for example, it can be set that the mean size of adults is ~1.75 m; in order to increase the accuracy of such a value, the median size or any other statistic derived from the size of the targets can also be used), and pixel_size is obtained for each point of the considered region of interest from the target size map computed during step460. The result is a distance map, i.e. a function distance = f(x, y) for each location (x,y) of the considered region of interest. Steps450to465ofFIG.4bmay be carried out during steps425and430ofFIG.4a. FIG.4cis a block diagram illustrating an example of steps carried out for building a focus map of stationary targets from a region of interest and images. It is observed that stationary targets like machinery or buildings can be very diverse in nature. Therefore, since every building is unique and since there are so many existing machines, recognizing such types of targets according to common features would not be efficient. However, these targets being stationary or at least partially stationary, it is possible to compare their representation in different images, in particular in images obtained with different focus values, so as to determine an optimal focus value for each area of the considered region of interest, making it possible to build a focus map associating a focus value with each location of the considered region of interest. As illustrated inFIG.4c, a first step is directed to sampling the focus values that are available in the camera in order to get a finite number of focus values and to obtain at least one image for each sampled focus value (step470). For the sake of illustration, a linear sampling of the focus values F may be performed, or a more sophisticated sampling such as a linear sampling of the inverse value of focus values 1/F. According to other embodiments, the default sampling of the camera can be used (it being noted that most of the cameras have only a limited number of available focus values). The images corresponding to each of the sampled focus values are preferably obtained from the obtained chunks (for example the chunks obtained at step420). Next, the obtained images are analyzed to identify areas where motion is detected (step475). According to embodiments, areas wherein significant movements are detected are not considered. Such detection can be based on a standard motion detection mechanism, by using a motion activity threshold. As a result, a subpart of the considered region of interest where no motion or small movements have been detected (i.e. corresponding to the considered region of interest wherein the areas where movements have been detected are removed) is obtained. It is referred to as the motionless region of interest (denoted motionlessROI). Next, an optimal focus is determined for each location of the motionless region of interest (step480), so that the obtained sharpness is at a maximum value. In other words, the focus that provides the maximum microcontrast in the vicinity of this point is determined.
As a consequence, the focus blur is minimum (as close as possible to 0) for this focus value. Several techniques make it possible to analyse the blur or the microcontrast of a point or some points. Accordingly, for each location of the motionless region of interest, a focus value providing a maximum microcontrast is obtained, leading to a list of locations or points denoted {(x0,y0), (x1,y1), . . . , (xn,yn)} and to a list of corresponding focus values denoted {focus0, focus1, . . . , focusn}, where focusi is the focus value corresponding to a maximum microcontrast for the location having index i. Since the motionless region of interest may comprise areas where small movements have been detected, the previous analysis may lead to sources of uncertainty and thus to outliers. In order to increase the accuracy and remove these outliers, a regression may be performed on the focus values and locations using well-known regression techniques such as linear or nonlinear regression (e.g. svm, gradient boosting, or even deep-learning techniques) to obtain a mapping associating a focus value with a location for each point of the motionless region of interest (denoted focus = f(x, y)). As described above, this mapping is referred to as the focus map.
Operation Phase
As described previously, the operation phase aims at improving camera settings, preferably at determining optimal (or near-optimal) camera settings for a current mission and current environmental conditions, without significantly perturbing the use of the camera. To that end, the operation phase is based on a prediction mechanism (and not on an exploration/measurement mechanism). It uses, in particular, the quality function (fquality) determined in the learning phase, the relationships between image quality parameters and camera settings (e.g. noisecal(G, S, A), blurcal(G, S, A), and contrastcal(G, S, A)) determined during the calibration phase, scene-dependent parameters also determined during the calibration phase, and image metrics relating to images obtained with the current camera settings. Indeed, since the environmental conditions of the calibration phase and the current environmental conditions (i.e. during the operation phase) are not the same, the new relationships between image quality parameters and camera settings should be predicted so as to determine camera settings as a function of the quality function, without perturbing the camera. According to embodiments, the noise may be predicted from the gain, independently of the shutter speed and the aperture. Moreover, it is independent of lighting conditions. Therefore, the relationships between the noise and the gain for the current environmental conditions may be expressed as follows: noisecurrent(G) = noisecal(G), wherein the noise value associated with a given gain value corresponds to the mean noise for this gain and all the shutter speed values associated with it. If a noise value should be determined for a gain value that has not been selected during the calibration phase (i.e., if there is a gain value for which there is no corresponding noise value), a linear interpolation may be carried out. Table 1 in the Appendix gives an example of the relationships between the noise and the gain. Still according to embodiments, the blur may be determined as a function of the target velocity and the shutter speed (motion blur) and of the aperture (focus blur), as described above. It does not depend on lighting conditions.
Accordingly, the relationships between the blur and the shutter speed and the aperture for the current environmental conditions may be expressed as follows: blurcurrent(G, S, A) = blurcal(S, A). Table 2 in the Appendix gives an example of the relationships between the blur and the shutter speed. Still according to embodiments, prediction of the contrast as a function of the camera settings according to the current environmental conditions (denoted contrastcurrent(G, S, A)) comprises prediction of the luminance as a function of the camera settings for the current environmental conditions (denoted Icurrent(G, S, A)) and the use of the relationships between the contrast and the luminance (contrast(I)) according to the following relation: contrastcurrent(G, S, A) = contrast(Icurrent(G, S, A)). Prediction of the luminance as a function of the camera settings for the current environmental conditions (Icurrent(G, S, A)) may be based on the luminance expressed as a function of the camera settings for the calibration environmental conditions (noted Ical(G, S, A)) and on a so-called shutter shift method. The latter is based on the assumption that there is a formal similarity between a change in lighting conditions and a change in shutter speed. Based on this assumption, the current luminance Iact may be expressed as follows: Iact = Icurrent(Gact, Sact, Aact) = Ical(Gact, Sact + ΔS, Aact), where (Gact, Sact, Aact) are the current camera settings and ΔS is a shutter speed variation. Therefore, the relationship between the luminance and the camera settings for the current environmental conditions may be determined as follows:
interpolating the computed luminance values Ical(G, S, A) to obtain a continuous or pseudo-continuous function;
for the current gain Gact, determining ΔS so that Ical(Gact, Sact + ΔS, Aact) = Iact, for example by using the inverse function of the luminance expressed as a function of the shutter speed (for the current gain Gact), i.e. the shutter speed expressed as a function of the luminance, and computing ΔS as ΔS = S(Iact) − Sact; and
determining the whole function Icurrent(G, S, A) by using the formula Icurrent(G, S, A) = Ical(G, S + ΔS, A).
However, while the assumption that there is a formal similarity between a change in lighting conditions and a change in shutter speed is correct in the vicinity of the current camera settings, it is not always true for distant camera settings. Accordingly, an iterative process may be used to determine the camera settings to be used, as described hereafter. Table 3 in the Appendix gives an example of the relationships between the contrast and the gain, the shutter speed, and the aperture. After having predicted the image quality parameters for the current environmental conditions, optimization of the current camera settings may be carried out.
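A minimal sketch of the shutter shift prediction described above is given below, before turning to the optimization itself. It assumes that the calibration samples Ical are available, for the current gain and aperture, as arrays of shutter speeds and corresponding luminance values; the helper names (shutter_shift, predict_luminance) and the synthetic numbers are illustrative assumptions, not taken from the original description.

```python
import numpy as np

def shutter_shift(cal_shutters, cal_luminances, S_act, I_act):
    """Estimate dS such that Ical(Gact, Sact + dS, Aact) = Iact, using the inverse
    function S(I) of the calibrated luminance-versus-shutter-speed curve."""
    cal_shutters = np.asarray(cal_shutters, dtype=float)
    cal_luminances = np.asarray(cal_luminances, dtype=float)
    order = np.argsort(cal_luminances)
    S_of_I = lambda I: np.interp(I, cal_luminances[order], cal_shutters[order])
    return S_of_I(I_act) - S_act

def predict_luminance(cal_shutters, cal_luminances, dS, S):
    """Icurrent(S) = Ical(S + dS) for the considered gain and aperture."""
    return np.interp(S + dS, cal_shutters, cal_luminances)

# Synthetic calibration data: luminance grows with exposure (shutter) time.
cal_S = np.array([1/2000, 1/1000, 1/500, 1/250, 1/125, 1/60])
cal_I = np.array([10.0, 20.0, 40.0, 75.0, 140.0, 230.0])
dS = shutter_shift(cal_S, cal_I, S_act=1/500, I_act=25.0)  # scene darker than at calibration
print(dS, predict_luminance(cal_S, cal_I, dS, 1/250))
```

Under this sketch, a scene darker than the calibration conditions yields a negative ΔS, i.e. the calibration curve is read at a shorter equivalent shutter speed.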
This optimization may be based on a grid search algorithm according to the following steps:
sampling the manifold of possible gain, shutter speed, and aperture values to create a 3D grid of different (Gpred, Spred, Apred) triplets;
for each of the (Gpred, Spred, Apred) triplets, denoted (Gpred,i, Spred,j, Apred,k), computing the values of the image quality parameters according to the previous predictions (noisecurrent(Gpred,i), blurcurrent(Spred,j, Apred,k), and contrastcurrent(I(Gpred,i, Spred,j, Apred,k)));
for each (Gpred,i, Spred,j, Apred,k) triplet, computing a score as a function of the quality function determined during the learning phase, of the current mission (missionact), and of the computed values of the image quality parameters as follows: scorei,j,k = fquality(missionact)(noisecurrent(Gpred,i), blurcurrent(Spred,j, Apred,k), contrastcurrent(I(Gpred,i, Spred,j, Apred,k)), target_size, target_velocity), where the target_size and target_velocity values have been calculated during the calibration phase;
identifying the best score (or one of the best scores), i.e. max(scorei,j,k), to determine the camera settings to be used, i.e. (Gnext, Snext, Anext) = argmax(scorei,j,k).
Table 4 in the Appendix gives an example of the relationships between the score and the gain, the shutter speed, and the aperture. In order to improve the accuracy of the camera settings, the latter may be determined on an iterative basis (in particular to take into account that the assumption that there is a formal similarity between a change in lighting conditions and a change in shutter speed is not always true for distant camera settings). Accordingly, after the next camera settings have been determined, as described above, and set, the luminance corresponding to these next camera settings is predicted (Ipred = Icurrent(Gnext, Snext, Anext)), a new image corresponding to these camera settings is obtained, and the luminance of this image is computed. The predicted luminance and the computed luminance are compared. If the difference between the predicted luminance and the computed luminance exceeds a threshold, for example a predetermined threshold, the process is repeated to determine new camera settings. The process may be repeated until the difference between the predicted luminance and the computed luminance is less than the threshold or until the camera settings are stable. It is to be noted that regions of interest (ROIs) may be taken into account for determining image quality parameter values (in such a case, the image quality parameter values are determined from the ROIs only) and for optimizing camera settings. FIG.6aillustrates a first example of steps for determining new camera settings during the operational use of a camera, without perturbing the use of the camera. This may correspond at least partially to step315inFIG.3. As illustrated, first steps are directed to:
obtaining images (step600) from a camera set with current camera settings, from which an actual luminance (Iact) may be computed,
obtaining these camera settings (step605), i.e. the actual gain, the shutter speed, and aperture (Gact, Sact, Aact) in the given example, and
obtaining the relationships (step615) between the contrast and the camera settings for the calibration environmental conditions (contrastcal(G, S, A)), between the contrast and the luminance (contrast(I)), and between the luminance and the camera settings for the calibration environmental conditions (Ical(G, S, A)).
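The grid search and scoring described before the discussion of FIG. 6a can be sketched as follows. The predicted relationships and the quality function are passed in as Python callables; the toy quality function and all numeric values are purely illustrative assumptions, not values from the original description.

```python
import itertools

def search_best_settings(gains, shutters, apertures,
                         noise_current, blur_current, contrast_current, luminance_current,
                         f_quality, target_size, target_velocity):
    """Score every (G, S, A) triplet of the grid and return the best one."""
    best_score, best_triplet = float("-inf"), None
    for G, S, A in itertools.product(gains, shutters, apertures):
        noise = noise_current(G)
        blur = blur_current(S, A)
        contrast = contrast_current(luminance_current(G, S, A))
        score = f_quality(noise, blur, contrast, target_size, target_velocity)
        if score > best_score:
            best_score, best_triplet = score, (G, S, A)
    return best_triplet, best_score

# Toy quality function: reward contrast, penalize noise and velocity-scaled blur.
f_quality = lambda n, b, c, size, vel: c - 0.5 * n - 2.0 * b * vel / max(size, 1.0)
settings, score = search_best_settings(
    gains=[0, 6, 12], shutters=[1/1000, 1/250, 1/60], apertures=[2.8, 5.6],
    noise_current=lambda G: G / 12.0,
    blur_current=lambda S, A: S * 100.0 + 1.0 / A,
    contrast_current=lambda I: min(I / 255.0, 1.0),
    luminance_current=lambda G, S, A: (G + 1) * S * 5000.0 / A,
    f_quality=f_quality, target_size=80.0, target_velocity=3.0)
print(settings, score)
```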
Next, the relationships between the luminance and the camera settings for the current environmental conditions (Icurrent(G, S, A)) and the relationship between the contrast and the camera settings for the current environmental conditions (contrastcurrent(G, S, A)) are predicted (step620), for example using the method and formula described above. In parallel, before, or after, the quality function (fquality), the relationships between the noise and the camera settings for the calibration environmental conditions (noisecal(G, S, A)), the relationships between the blur and the camera settings for the calibration environmental conditions (blurcal(G, S, A)), and the scene-dependent parameter values, e.g. the target size and preferably the target velocity, are obtained (step625). Next, these relationships as well as the relationships between the contrast and the camera settings for the current environmental conditions (contrastcurrent(G, S, A)) are used to predict image quality parameter values for possible gain, shutter speed, and aperture values (step630). As described above, these image quality parameter values may be computed for different (Gpred, Spred, Apred) triplets forming a 3D grid. These image quality parameter values are then used with the scene-dependent parameter values to compute scores according to the previously obtained quality function (step635). According to embodiments, a score is computed for each of the predicted image quality parameter values. Next, optimized camera settings are selected as a function of the obtained scores and the settings of the camera are modified accordingly (step640). According to embodiments, it is determined whether or not predetermined criteria are met (step645), for example whether or not the actual luminance of an obtained image is close to the predicted luminance. If the criteria are met, the process is stopped until a new optimization of the camera settings is to be made. Otherwise, if the criteria are not met, new camera settings are estimated, as described above. According to embodiments and as described above, prediction of the luminance as a function of the camera settings for the current environmental conditions (Ipred(G, S, A) or Icurrent(G, S, A)) may be based on the luminance expressed as a function of the camera settings for the calibration environmental conditions (Ical(G, S, A)) and computed according to the shutter shift method. However, it has been observed that the accuracy of the results obtained according to these embodiments improves as the current environmental conditions get closer to the calibration environmental conditions, and decreases as the current environmental conditions deviate from the calibration environmental conditions. This may lead to prediction errors, e.g. when trying at night to apply the results of a calibration performed at the brightest hours of a day for an outdoor camera. Accordingly, it may be efficient to determine the relationships between the luminance and the camera settings for different calibration environmental conditions i (denoted Icali(G,S,A)), i varying, for example, from 0 to n.
In such a case, the relationships between the luminance and the camera settings to be used for the current environmental conditions may be selected from among all the relationships between the luminance and the camera settings determined during the calibration phase (Icali(G, S, A)) so that: i = argmini(|Iact − Icali(Gact, Sact, Aact)|). In other words, the relationships associated with the calibration environmental conditions i are selected so as to minimize the gap between the measured luminance (Iact) and the luminance (Icali(Gact, Sact, Aact)) obtained in the same conditions (i.e. for the same G, S, and A as in the current situation). FIG.6billustrates a second example of steps for determining new camera settings during the operational use of a camera, without perturbing the use of the camera. As illustrated, the steps are similar to those described with reference toFIG.6aexcept steps615′ and620′. According to the illustrated example, step615′ is similar to step615described with reference toFIG.6aexcept that several relationships between the luminance and the camera settings (Icali(G, S, A)) corresponding to different environmental conditions i are obtained. In step620′, the relationships between the luminance and the camera settings corresponding to the calibration environmental conditions i that are the closest to the current environmental conditions are selected (i.e. i is determined), and the relationships between the luminance and the camera settings for the current environmental conditions (Icurrent(G, S, A)) and the relationship between the contrast and the camera settings for the current environmental conditions (contrastcurrent(G, S, A)) are predicted, for example using the method and formula described above. It has been observed that such a way of determining the relationships between the luminance and the camera settings provides accurate results as long as the current environmental conditions are not too far from the calibration environmental conditions. As a consequence, if the current environmental conditions are too far from the calibration environmental conditions, it may be appropriate to determine new relationships between the luminance and the camera settings. Therefore, according to particular embodiments, the relationships between the luminance and the camera settings (Ical(G, S, A)) for the current environmental conditions may be determined if the latter are too different from the calibration environmental conditions. Indeed, the obtained relationships between the luminance and the camera settings should correspond to environmental conditions uniformly spanning the whole manifold of environment conditions. However, since there is no way of setting the environment conditions, it is not possible to obtain relationships between the luminance and the camera settings at will, for example during a complete calibration process. Accordingly, it may be useful to detect when environmental conditions are suitable for obtaining new relationships between the luminance and the camera settings and then, possibly, obtain these new relationships. This can be done during operational use of the camera. Obtaining the relationships between the luminance and the camera settings may consist in carrying out steps400,405, and410(at least the step of measuring image metrics Ical(G,S, A)) described with reference toFIG.4a, for the current environmental conditions.
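The selection of the closest calibration conditions expressed by the argmin relation above can be sketched as follows; the two example calibration tables ("day" and "dusk") and all numbers are illustrative assumptions only.

```python
import numpy as np

def select_calibration(cal_tables, G_act, S_act, A_act, I_act):
    """Return the index i minimizing |Iact - Icali(Gact, Sact, Aact)|.

    cal_tables: list of callables, cal_tables[i](G, S, A) -> calibrated luminance
    for calibration environmental conditions i.
    """
    gaps = [abs(I_act - cal(G_act, S_act, A_act)) for cal in cal_tables]
    return int(np.argmin(gaps))

# Two illustrative calibration conditions with different scene brightness.
day = lambda G, S, A: (G + 1) * S * 8000.0 / A
dusk = lambda G, S, A: (G + 1) * S * 1500.0 / A
i = select_calibration([day, dusk], G_act=6, S_act=1/250, A_act=2.8, I_act=20.0)
print(i)  # index of the calibration set whose luminance best matches the measurement
```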
According to a particular embodiment, detection of environmental conditions that should trigger obtaining relationships between the luminance and the camera settings for the current environmental conditions may be based on direct measurements of the current environmental conditions via a sensor, for example a light meter. By comparing the current output of the sensor (environment_valueact) with its output(s) during the calibration phase (environment_valuecalibration), one may determine whether or not the relationships between the luminance and the camera settings should be determined for the current environmental conditions. For example, if the difference between these outputs is greater than a predetermined threshold (|environment_valueact − environment_valuecalibration| > threshold), the relationships between the luminance and the camera settings are determined for the current environmental conditions. Still according to a particular embodiment, the environmental conditions may be determined indirectly through the images, by comparing the luminance value (Iact) of a current image with the corresponding one associated with the calibration environmental conditions (i.e. the luminance associated with the corresponding camera settings, Icalibration(Gact, Sact, Aact)). Again, for the sake of illustration, if the difference between these values is greater than a predetermined threshold (|Iact − Icalibration(Gact, Sact, Aact)| > threshold), the relationships between the luminance and the camera settings are determined for the current environmental conditions. Still according to a particular embodiment, triggering a step of obtaining the relationships between the luminance and the camera settings for the current environmental conditions is based on measuring a prediction error. This can be done by comparing the predicted luminance value (Ipred(Gact, Sact, Aact) or Icurrent(Gact, Sact, Aact)) with the luminance value (Iact) of a current image. To that end, predicted luminance values are advantageously stored after setting new camera settings (e.g. step640inFIG.6aor6b). Still for the sake of illustration, if the difference between these values is greater than a predetermined threshold (|Iact − Ipred(Gact, Sact, Aact)| > threshold), the relationships between the luminance and the camera settings are determined for the current environmental conditions. Alternatively, the relationships between the luminance and the camera settings are determined for the current environmental conditions if |(Ipred(Gact, Sact, Aact) − Iact) / (Iact − Imax/2)| > threshold, where Imax represents the maximum possible luminance value. It is observed that the last embodiment is generally more efficient than the others in that it is based on a parameter (luminance prediction) that is to be optimized. Moreover, it does not require any additional sensor. It is further observed that determining the relationships between the luminance and the camera settings is an invasive process for the camera since images from this camera are not usable for other purposes during such a step. It may take a few minutes. For this reason, approval from the user is preferably requested before carrying out such a step. FIG.6cillustrates another example of steps for determining new camera settings during the operational use of a camera, while perturbing as little as possible the use of the camera. Steps600to640are similar to the corresponding steps described by reference toFIG.6b.
As illustrated, once camera settings have been modified, the camera is used for its purpose on a standard basis (step650). In parallel, a prediction error (PredE) is estimated (step655). Such a prediction error is typically based on the predicted luminance value (Ipred(Gact, Sact, Aact) or Icurrent(Gact, Sact, Aact)) and the current luminance value (Iact), as described above. Next, this prediction error is compared to a threshold (θ) (step660). If the prediction error is greater than the threshold, it is preferably proposed to a user to measure the luminance for several camera settings so as to obtain new relationships between the luminance and the camera settings (Incal(G, S, A)) (step665). As described above, this step is optional. If it is determined that the luminance is to be measured for several camera settings according to the current environmental conditions (denoted n) for obtaining new relationships between the luminance and the camera settings (Incal(G, S, A)), these steps are carried out (step670). As mentioned above, this can be done by carrying out steps400,405, and410(at least the step of measuring image metrics Incal(G,S,A)) described in reference toFIG.4, for the current environmental conditions. Then, the camera settings are determined and the settings of the camera are modified as described above, for example by reference toFIG.6b. According to particular embodiments, the calibration data are associated with environmental conditions corresponding to a single given time (i.e. the calibration data are associated with a single given type of environmental conditions). In such a case, new calibration data corresponding to new environmental conditions are stored in lieu of the previous calibration data. While the process described above aims at optimizing camera settings on a request basis, for example upon request of a user, it is possible to control automatically the triggering of the process of auto-setting camera parameters. It is also possible to pre-determine camera settings so that as soon as conditions have changed significantly, new settings are applied instantaneously without calculations. Such an automatic process presents several advantages, among which are:
the whole operation phase is automated and can be run continuously without any user decision;
the time needed to make changes of camera settings is much reduced between the decision to change and the change itself; and
such an auto-setting-monitored system is able to react very quickly to a sudden change of environment conditions such as on/off lighting.
To that end, the current camera setting values and the luminance value should be obtained on a regular basis. The other steps of the operation phase remain basically the same since computations are based on these values and on values determined during the calibration phase. According to particular embodiments, predicting image quality parameter values (e.g. steps620and630inFIG.6a), determining scores for camera settings (e.g. step635inFIG.6a), and enabling selection of camera settings are carried out in advance, for example at the end of the calibration phase, for all (or many) possible measurement values such as the gain, shutter speed, aperture, and luminance (G, S, A, I). This leads to a best camera setting function that gives optimized camera settings as a function of camera settings and luminance in view of the values obtained during the calibration phase.
Such a best camera setting function may be expressed as follows: (Gnext, Snext, Anext) = best_camera_settings(G, S, A, I). To determine such a continuous function, a simple data regression or an interpolation may be used. The operation phase then mainly consists in measuring the current camera setting values and the luminance of the current image (Gact, Sact, Aact, Iact) and determining optimized camera settings as a result of the best camera setting function determined during the calibration phase. If the determined optimal camera setting values (Gnext, Snext, Anext) are different from the current values (Gact, Sact, Aact), the camera settings are changed. FIG.7is a block diagram illustrating a second example of steps carried out during a calibration phase of an auto-setting method as illustrated inFIG.3. The steps illustrated inFIG.7differ as a whole from those ofFIG.4in that they comprise steps of predicting image quality parameter values (step700), of determining scores for camera settings and luminance values (step705), and of determining a function for determining camera settings (step710), for all possible camera setting values and for all possible luminance values (G, S, A, I). FIGS.8and9are sequence diagrams illustrating an example of steps carried out during a calibration phase of an auto-setting method as illustrated inFIG.3. Step810corresponds to the recording of images generated with different camera parameters, e.g., different values of gain and shutter speed, and comprises steps811to817. In step811, controller801requests to camera803the minimal and maximal values of gain, shutter speed, and aperture it supports. Upon reception of request811, the camera transmits its upper and lower bounds of gain, shutter speed, and aperture to the controller. Based on the obtained bounds of gain, shutter speed, and aperture, the controller determines intermediate values of gain, shutter speed, and aperture (step813). An example of a method for determining intermediate values of gain, shutter speed, and aperture is described at step400inFIG.4. The different triplets of (G, S, A) values form a manifold. In a variant, the camera transmits triplets of (G, S, A) values to the controller, which selects at least a subset of the obtained triplets of (G, S, A) values to form a manifold. In step814, the controller requests reception of a video stream to the camera. Upon reception of request814, the camera starts transmission of a video stream. In step816, the controller selects a triplet of (G, S, A) values of the manifold and sets the gain, shutter speed, and aperture parameters of the camera with the selected triplet of values. The controller analyses the received stream and detects a modification of the image parameters. The analysis may be launched after a predetermined amount of time or when detecting that the characteristics of the obtained images are rather stable, since the modification of gain, shutter speed, and aperture values may temporarily lead to the generation of images with variable/changing characteristics. For a given triplet of (G, S, A) values, N images are recorded and stored in the controller memory802(step817). The recording of N images (with N>1) is useful for computing noise. Steps816and817are carried out for each triplet (G, S, A) of the manifold determined at step813. Steps816and817are similar to step405inFIG.4. Step820is an analysis of the stored images, and comprises steps821,822and823.
In step821, the controller retrieves, for a given triplet of values (G, S, A), the associated images stored in the controller memory, and an image metric is measured for all the obtained images (e.g., the luminance) (step822). The measurement of the luminance aims at determining a relation between the luminance of an image and the camera settings used when obtaining this image, for example a gain, a shutter speed, and an aperture value. For each obtained image, the luminance is computed and associated with the corresponding gain, shutter speed, and aperture values so as to determine the corresponding function or to build a 3-dimensional array wherein a luminance is associated with a triplet of gain, shutter speed, and aperture values (denoted Ical(G, S, A)). According to embodiments, the luminance corresponds to the mean of pixel values (i.e. intensity values) of the image. According to embodiments, the entropy of the images is also computed during measurement of the luminance for making it possible to determine a contrast value during the image quality analysis. Like the luminance, the entropy is computed for each of the obtained images and associated with the corresponding gain, shutter speed, and aperture values so as to determine the corresponding function or to build a 3-dimensional array wherein an entropy is associated with a triplet of gain, shutter speed, and aperture values (denoted Ecal(G, S, A)). According to embodiments, image quality parameter values are also computed, for example values of noise from the images obtained at step821, in order to establish a relationship between each of these parameters and the camera settings used for obtaining the corresponding images (similarly to step415inFIG.4). Then, the image metrics (e.g., luminance and entropy values) and the image quality parameter values of the given (G, S, A) values are stored in the controller memory (step823). Steps821to823are applied to each triplet (G, S, A) of values of the manifold. FIG.9is a sequence diagram illustrating an example of steps carried out during a calibration phase of an auto-setting method as illustrated inFIG.3, and may be applied following the method ofFIG.8. Step910is a chunk retrieval method, and comprises steps911to915. In step911, recording server904requests a video stream to the camera. Upon reception of request911, the video stream is transmitted to the recording server (step912). The recording server may apply basic image analysis techniques, such as image motion detection, and store the relevant parts of the video streams (named “chunks”), e.g. parts of video streams with moving targets. In step913, controller801requests chunks to the recording server. Upon reception of request913, the recording server transmits chunks previously stored to the controller. This step is similar to step420inFIG.4. In a variant, the camera may apply basic image analysis techniques, and at step911′, the controller directly requests chunks to the camera. Upon reception of request911′ from the controller, the camera transmits chunks to the controller. In step915, chunks are selected and analysed (step920) by applying computer vision-based techniques (step921), thereby determining scene-dependent parameter values (i.e., related to target size and optionally to target velocity). This step is similar to step430inFIG.4. Finally, the determined scene-dependent parameter values are stored in the controller memory (step922).
FIG.10is a sequence diagram illustrating an example of steps carried out during an operation phase of an auto-setting method as illustrated inFIG.3. In step1011, the controller requests an image to the camera. Upon reception of request1011, the camera transmits an image to the controller. Then, the controller determines the current luminance value Iact of the obtained image. In step1014, the controller requests to the camera its current camera settings (Gact, Sact, Aact), which are transmitted to the controller at step1015. These steps are similar to steps600and605inFIGS.6aand6b. In step1016, the controller obtains the relationships between the contrast and the camera settings for the calibration environmental conditions (contrastcal(G, S, A)), between the contrast and the luminance (contrast(I)), and between the luminance and the camera settings for the calibration environmental conditions (Ical(G, S, A)). In parallel, before, or after, the quality function (fquality), the relationships between the noise and the camera settings for the calibration environmental conditions (noisecal(G, S, A)), the relationships between the blur and the camera settings for the calibration environmental conditions (blurcal(G, S, A)), and the scene-dependent parameter values, e.g. the target size and preferably the target velocity, are obtained. This step is similar to steps615and625inFIG.6a. At step1017, based on the relationships obtained at step1016, a triplet (Gbest, Sbest, Abest) of “best” values is determined, and optionally, at step1018, it is determined whether it is different from the current camera settings (Gact, Sact, Aact). If true, the controller sets the camera parameters with the “best” values (step1019). In a variant, at step1014, the controller requests the current camera settings (Gact, Sact, Aact) to the controller memory, which are transmitted to the controller at step1015. Then, steps1016to1019are applied, and the triplet (Gbest, Sbest, Abest) of “best” values is stored in the controller memory. While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention being not restricted to the disclosed embodiment. Other variations on the disclosed embodiment can be understood and performed by those skilled in the art, in carrying out the claimed invention, from a study of the drawings, the disclosure and the appended claims. Such variations may derive, in particular, from combining embodiments as set forth in the summary of the invention and/or in the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.
APPENDIX
TABLE 1: relationships between the noise and the gain
Gain:   G0                G1                G2                . . .  Gn
Noise:  noisecurrent(G0)  noisecurrent(G1)  noisecurrent(G2)  . . .  noisecurrent(Gn)
TABLE 2: relationships between the blur and the shutter speed (motion blur) and the aperture (focus blur)
Shutter speed / aperture:
        S0                   S1                   . . .  Sn
A0      blurcurrent(S0, A0)  blurcurrent(S1, A0)  . . .  blurcurrent(Sn, A0)
A1      blurcurrent(S0, A1)  blurcurrent(S1, A1)  . . .  blurcurrent(Sn, A1)
. . .
An      blurcurrent(S0, An)  blurcurrent(S1, An)  . . .  blurcurrent(Sn, An)
81,795
11943542
DETAILED DESCRIPTION Embodiments of the present invention relate to imaging systems that include single-photon avalanche diodes (SPADs). Some imaging systems include image sensors that sense light by converting impinging photons into electrons or holes that are integrated (collected) in pixel photodiodes within the sensor array. After completion of an integration cycle, collected charge is converted into a voltage, which is supplied to the output terminals of the sensor. In complementary metal-oxide semiconductor (CMOS) image sensors, the charge to voltage conversion is accomplished directly in the pixels themselves, and the analog pixel voltage is transferred to the output terminals through various pixel addressing and scanning schemes. The analog pixel voltage can also be later converted on-chip to a digital equivalent and processed in various ways in the digital domain. In single-photon avalanche diode (SPAD) devices (such as the ones described in connection withFIGS.1-4), on the other hand, the photon detection principle is different. The light sensing diode is biased above its breakdown point, and when an incident photon generates an electron or hole, this carrier initiates an avalanche breakdown with additional carriers being generated. The avalanche multiplication may produce a current signal that can be easily detected by readout circuitry associated with the SPAD. The avalanche process can be stopped (or quenched) by lowering the diode bias below its breakdown point. Each SPAD may therefore include a passive and/or active quenching circuit for halting the avalanche. This concept can be used in two ways. First, the arriving photons may simply be counted (e.g., in low light level applications). Second, the SPAD pixels may be used to measure photon time-of-flight (ToF) from a synchronized light source to a scene object point and back to the sensor, which can be used to obtain a 3-dimensional image of the scene. FIG.1is a circuit diagram of an illustrative SPAD device202. As shown inFIG.1, SPAD device202includes a SPAD204that is coupled in series with quenching circuitry206between a first supply voltage terminal210(e.g., a ground power supply voltage terminal) and a second supply voltage terminal208(e.g., a positive power supply voltage terminal). In particular, SPAD device202includes a SPAD204having an anode terminal connected to power supply voltage terminal210and a cathode terminal connected directly to quenching circuitry206. SPAD device202that includes SPAD204connected in series with a quenching resistor206is sometimes referred to collectively as a photo-triggered unit or “microcell.” During operation of SPAD device202, supply voltage terminals208and210may be used to bias SPAD204to a voltage that is higher than the breakdown voltage (e.g., bias voltage Vbias is applied to terminal208). Breakdown voltage is the largest reverse voltage that can be applied to SPAD204without causing an exponential increase in the leakage current in the diode. When SPAD204is reverse biased above the breakdown voltage in this manner, absorption of a single-photon can trigger a short-duration but relatively large avalanche current through impact ionization. Quenching circuitry206(sometimes referred to as quenching element206) may be used to lower the bias voltage of SPAD204below the level of the breakdown voltage. Lowering the bias voltage of SPAD204below the breakdown voltage stops the avalanche process and corresponding avalanche current. There are numerous ways to form quenching circuitry206. 
Quenching circuitry206may be passive quenching circuitry or active quenching circuitry. Passive quenching circuitry may, without external control or monitoring, automatically quench the avalanche current once initiated. For example,FIG.1shows an example where a resistor component is used to form quenching circuitry206. This is an example of passive quenching circuitry. This example of passive quenching circuitry is merely illustrative. Active quenching circuitry may also be used in SPAD device202. Active quenching circuitry may reduce the time it takes for SPAD device202to be reset. This may allow SPAD device202to detect incident light at a faster rate than when passive quenching circuitry is used, improving the dynamic range of the SPAD device. Active quenching circuitry may modulate the SPAD quench resistance. For example, before a photon is detected, quench resistance is set high and then once a photon is detected and the avalanche is quenched, quench resistance is minimized to reduce recovery time. SPAD device202may also include readout circuitry212. There are numerous ways to form readout circuitry212to obtain information from SPAD device202. Readout circuitry212may include a pulse counting circuit that counts arriving photons. Alternatively or in addition, readout circuitry212may include time-of-flight circuitry that is used to measure photon time-of-flight (ToF). The photon time-of-flight information may be used to perform depth sensing. In one example, photons may be counted by an analog counter to form the light intensity signal as a corresponding pixel voltage. The ToF signal may be obtained by also converting the time of photon flight to a voltage. The example of an analog pulse counting circuit being included in readout circuitry212is merely illustrative. If desired, readout circuitry212may include digital pulse counting circuits. Readout circuitry212may also include amplification circuitry if desired. The example inFIG.1of readout circuitry212being coupled to a node between diode204and quenching circuitry206is merely illustrative. Readout circuitry212may be coupled to terminal208or any desired portion of the SPAD device. In some cases, quenching circuitry206may be considered integral with readout circuitry212. Because SPAD devices can detect a single incident photon, the SPAD devices are effective at imaging scenes with low light levels. Each SPAD may detect the number of photons that are received within a given period of time (e.g., using readout circuitry that includes a counting circuit). However, as discussed above, each time a photon is received and an avalanche current initiated, the SPAD device must be quenched and reset before being ready to detect another photon. As incident light levels increase, the reset time becomes limiting to the dynamic range of the SPAD device (e.g., once incident light levels exceed a given level, the SPAD device is triggered immediately upon being reset). Multiple SPAD devices may be grouped together to help increase dynamic range.FIG.2is a circuit diagram of an illustrative group220of SPAD devices202. The group or array of SPAD devices may sometimes be referred to as a silicon photomultiplier (SiPM). As shown inFIG.2, silicon photomultiplier220may include multiple SPAD devices that are coupled in parallel between first supply voltage terminal208and second supply voltage terminal210.FIG.2shows N SPAD devices202coupled in parallel (e.g., SPAD device202-1, SPAD device202-2, SPAD device202-3, SPAD device202-4, . . . , SPAD device202-N). 
More than two SPAD devices, more than ten SPAD devices, more than one hundred SPAD devices, more than one thousand SPAD devices, etc. may be included in a given silicon photomultiplier220. Each SPAD device202may sometimes be referred to herein as a SPAD pixel202. Although not shown explicitly inFIG.2, readout circuitry for the silicon photomultiplier220may measure the combined output current from all of SPAD pixels in the silicon photomultiplier. Configured in this way, the dynamic range of an imaging system including the SPAD pixels may be increased. Each SPAD pixel is not guaranteed to have an avalanche current triggered when an incident photon is received. The SPAD pixels may have an associated probability of an avalanche current being triggered when an incident photon is received. There is a first probability of an electron being created when a photon reaches the diode and then a second probability of the electron triggering an avalanche current. The total probability of a photon triggering an avalanche current may be referred to as the SPAD's photon-detection efficiency (PDE). Grouping multiple SPAD pixels together in the silicon photomultiplier therefore allows for a more accurate measurement of the incoming incident light. For example, if a single SPAD pixel has a PDE of 50% and receives one photon during a time period, there is a 50% chance the photon will not be detected. With the silicon photomultiplier220ofFIG.2, chances are that two of the four SPAD pixels will detect the photon, thus improving the provided image data for the time period. The example ofFIG.2in which the plurality of SPAD pixels202share a common output in silicon photomultiplier220is merely illustrative. In the case of an imaging system including a silicon photomultiplier having a common output for all of the SPAD pixels, the imaging system may not have any resolution in imaging a scene (e.g., the silicon photomultiplier can just detect photon flux at a single point). It may be desirable to use SPAD pixels to obtain image data across an array to allow a higher resolution reproduction of the imaged scene. In cases such as these, SPAD pixels in a single imaging system may have per-pixel readout capabilities. Alternatively, an array of silicon photomultipliers (each including more than one SPAD pixel) may be included in the imaging system. The outputs from each pixel or from each silicon photomultiplier may be used to generate image data for an imaged scene. The array may be capable of independent detection (whether using a single SPAD pixel or a plurality of SPAD pixels in a silicon photomultiplier) in a line array (e.g., an array having a single row and multiple columns or a single column and multiple rows) or an array having more than ten, more than one hundred, or more than one thousand rows and/or columns. While there are a number of possible use cases for SPAD pixels as discussed above, the underlying technology used to detect incident light is the same. All of the aforementioned examples of devices that use SPAD pixels may collectively be referred to as SPAD-based semiconductor devices. A silicon photomultiplier with a plurality of SPAD pixels having a common output may be referred to as a SPAD-based semiconductor device. An array of SPAD pixels with per-pixel readout capabilities may be referred to as a SPAD-based semiconductor device. An array of silicon photomultipliers with per-silicon-photomultiplier readout capabilities may be referred to as a SPAD-based semiconductor device. 
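To make the probability reasoning above concrete, the short sketch below computes the expected number of detections and the probability of at least one detection for a group of SPAD pixels, assuming each pixel receives one photon and detects it independently with probability PDE. It is an illustration only and is not taken from the original description.

```python
def expected_detections(n_pixels: int, pde: float) -> float:
    """Expected number of SPAD pixels that register their photon."""
    return n_pixels * pde

def prob_at_least_one_detection(n_pixels: int, pde: float) -> float:
    """Probability that at least one pixel in the group detects its photon."""
    return 1.0 - (1.0 - pde) ** n_pixels

# With a PDE of 50%, a single pixel misses half of the time, while a group of
# four pixels detects two photons on average and registers at least one event
# about 94% of the time.
print(expected_detections(1, 0.5), prob_at_least_one_detection(1, 0.5))
print(expected_detections(4, 0.5), prob_at_least_one_detection(4, 0.5))
```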
FIG.3illustrates a silicon photomultiplier30. As shown inFIG.3, SiPM30has a third terminal35which is capacitively coupled to each cathode terminal31in order to provide a fast readout of the avalanche signals from the SPADs33. When one of the SPADs33emits a current pulse, part of the resulting change in voltage at the cathode31will be coupled via the mutual capacitance into the third (“fast”) output terminal35. Using the third terminal35for readout avoids the compromised transient performance resulting from the relatively large RC time constant associated with the biasing circuit that biases the top terminal of the quenching resistor. It will be appreciated by those skilled in the art that silicon photomultipliers include major bus lines44and minor bus lines45as illustrated inFIG.4. The minor bus lines45may connect directly to each individual microcell25. The minor bus lines45are then coupled to the major bus lines44which connect to the bond pads associated with terminals37and35. Typically, the minor bus lines45extend vertically between the columns of microcells25, whereas the major bus lines44extend horizontally adjacent the outer row of the microcells25. An imaging system10with a SPAD-based semiconductor device is shown inFIG.5. Imaging system10may be an electronic device such as a digital camera, a computer, a cellular telephone, a medical device, or other electronic device. Imaging system10may be an imaging system on a vehicle (sometimes referred to as a vehicular imaging system). Imaging system10may be used for LIDAR applications. Imaging system10may sometimes be referred to as a SPAD-based imaging system. Imaging system10may include one or more SPAD-based semiconductor devices14(sometimes referred to as semiconductor devices14, devices14, SPAD-based image sensors14, or image sensors14). One or more lenses28may optionally cover each semiconductor device14. During operation, lenses28(sometimes referred to as optics28) may focus light onto SPAD-based semiconductor device14. SPAD-based semiconductor device14may include SPAD pixels that convert the light into digital data. The SPAD-based semiconductor device may have any number of SPAD pixels (e.g., hundreds, thousands, millions, or more). In some SPAD-based semiconductor devices, each SPAD pixel may be covered by a respective color filter element and/or microlens. SPAD-based semiconductor device14may include circuitry such as control circuitry50. The control circuitry for the SPAD-based semiconductor device may be formed either on-chip (e.g., on the same semiconductor substrate as the SPAD devices) or off-chip (e.g., on a different semiconductor substrate than the SPAD devices). The control circuitry may control operation of the SPAD-based semiconductor device. For example, the control circuitry may operate active quenching circuitry within the SPAD-based semiconductor device, may control a bias voltage provided to bias voltage supply terminal208of each SPAD, may control/monitor the readout circuitry coupled to the SPAD devices, etc. The SPAD-based semiconductor device14may optionally include additional circuitry such as logic gates, digital counters, time-to-digital converters, bias circuitry (e.g., source follower load circuits), sample and hold circuitry, correlated double sampling (CDS) circuitry, amplifier circuitry, analog-to-digital (ADC) converter circuitry, data output circuitry, memory (e.g., buffer circuitry), address circuitry, etc. Any of the aforementioned circuits may be considered part of the control circuitry50ofFIG.5.
Image data from SPAD-based semiconductor device14may be provided to image processing circuitry16. Image processing circuitry16may be used to perform image processing functions such as automatic focusing functions, depth sensing, data formatting, adjusting white balance and exposure, implementing video image stabilization, face detection, etc. For example, during automatic focusing operations, image processing circuitry16may process data gathered by the SPAD pixels to determine the magnitude and direction of lens movement (e.g., movement of lens28) needed to bring an object of interest into focus. Image processing circuitry16may process data gathered by the SPAD pixels to determine a depth map of the scene. In some cases, some or all of control circuitry50may be formed integrally with image processing circuitry16. Imaging system10may provide a user with numerous high-level functions. In a computer or advanced cellular telephone, for example, a user may be provided with the ability to run user applications. To implement these functions, the imaging system may include input-output devices22such as keypads, buttons, input-output ports, joysticks, and displays. Additional storage and processing circuitry such as volatile and nonvolatile memory (e.g., random-access memory, flash memory, hard drives, solid state drives, etc.), microprocessors, microcontrollers, digital signal processors, application specific integrated circuits, and/or other processing circuits may also be included in the imaging system. Input-output devices22may include output devices that work in combination with the SPAD-based semiconductor device. For example, a light-emitting component52may be included in the imaging system to emit light (e.g., infrared light or light of any other desired type). Light-emitting component52may be a laser, light-emitting diode, or any other desired type of light-emitting component. Semiconductor device14may measure the reflection of the light off of an object to measure distance to the object in a LIDAR (light detection and ranging) scheme. Control circuitry50that is used to control operation of the SPAD-based semiconductor device may also optionally be used to control operation of light-emitting component52. Image processing circuitry16may use known times (or a known pattern) of light pulses from the light-emitting component while processing data from the SPAD-based semiconductor device. In general, it may be desirable for SPAD devices to have a high photon detection efficiency (PDE). The total probability of a photon triggering an avalanche current may be referred to as the SPAD's photon-detection efficiency (PDE). It may be desirable for the SPAD to have a higher PDE, as this improves the sensitivity and performance of the SPAD. However, a high PDE may also limit the dynamic range of the SPAD. Due to the high sensitivity provided by the high PDE, a SPAD with high PDE may have a low maximum input photon rate (which is equal to 1/(PDE×recovery time)). A high PDE therefore causes saturation and low signal-to-noise ratio (SNR) in high light conditions. FIG.6is a graph showing saturation rate as a function of photon detection efficiency. As shown, the saturation rate (sometimes referred to as saturation level) is at a maximum when the photon detection efficiency is at a minimum. As photon detection efficiency increases, the saturation rate decreases. 
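The relation quoted above between PDE, recovery time, and the maximum input photon rate (1/(PDE × recovery time)) can be illustrated with the sketch below; the 20 ns recovery time and the PDE values are arbitrary example numbers, not figures from the original description.

```python
def max_input_photon_rate(pde: float, recovery_time_s: float) -> float:
    """Maximum sustainable input photon rate of a SPAD: 1 / (PDE x recovery time).
    A higher PDE (or a longer recovery time) lowers the rate at which the device saturates."""
    return 1.0 / (pde * recovery_time_s)

recovery = 20e-9  # illustrative 20 ns recovery (dead) time
for pde in (0.1, 0.3, 0.5):
    rate = max_input_photon_rate(pde, recovery)
    print(f"PDE={pde:.0%}: saturates around {rate:.2e} photons/s")
```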
FIGS.7and8illustrate the effect of ambient light conditions on a SPAD-based semiconductor device.FIG.7is a graph of detection probability versus distance for a SPAD-based semiconductor device (e.g., a silicon photomultiplier) operating in low light conditions.FIG.8is a graph of detection probability versus distance for a SPAD-based semiconductor device (e.g., a silicon photomultiplier) operating in high light conditions. Distance probability may refer to applications where the SPAD-based semiconductor device14is used in combination with light-emitting component52to measure distance to an object (e.g., LIDAR applications). In LIDAR applications, semiconductor device14may measure the reflection of the light off of an object to measure distance to the object. Detection probability refers to the probability that the semiconductor device14correctly measures the distance to the imaged object. As objects move further away, it may be more difficult to correctly measure the distance to the object. As shown inFIG.7, in low light conditions, the detection probability may start at approximately 100% at low distances. As the distance increases, the detection probability may remain at approximately 100% until distance D1. At distance D1, the detection probability starts to decrease (with increasing distance) as shown in the graph. Distance D1may be between 100 and 200 meters, greater than 100 meters, greater than 50 meters, etc. The SPAD-based semiconductor device profiled inFIG.7may have a relatively high photon detection efficiency (PDE). Consequently, in low light conditions the detection probability is approximately 100% for a relatively long range. However, the performance of the SPAD-based semiconductor device may be limited in high ambient light conditions. As discussed in connection withFIG.6, due to the high photon detection efficiency, the SPAD-based semiconductor device has a low saturation rate. As shown inFIG.8, in high ambient light conditions, the detection probability may start at a level that is less than 100%. The detection probability also may decrease from its peak starting at distance D2. Distance D2inFIG.8may be less than distance D1inFIG.7. Due to saturation of the SPAD-based semiconductor device caused by the high light levels, the ambient light may be difficult to distinguish from the light from light-emitting component52. Therefore, the detection probability is reduced in the high light conditions. In high light conditions, it is therefore desirable for the photon detection efficiency to be reduced to increase the saturation level of the SPAD devices in the SPAD-based semiconductor device. To optimize performance of the SPAD-based semiconductor device in a wide range of ambient light conditions, the SPAD-based semiconductor may operate using a high dynamic range exposure scheme. When only one type of exposure is used by the SPAD-based semiconductor device, there will be a tradeoff between high light level performance and low light level performance. For example, if a low photon detection efficiency is used, the SPAD-based semiconductor device may have improved high light level performance but a lower overall detection range. If a high photon detection efficiency is used, the SPAD-based semiconductor device may have a far range of effective detection in low ambient light conditions but may perform poorly in high ambient light conditions. 
To leverage the advantages of both a low PDE and a high PDE, the SPAD-based semiconductor device may use two sub-exposures, one with a low PDE and one with a high PDE. The image data from both of the sub-exposures may be used by image processing circuitry to produce a single high dynamic range depth map. The SPAD-based semiconductor device may therefore dynamically switch between a low PDE and a high PDE during operation. To achieve this control over PDE, the over-bias voltage of the SPAD devices may be modulated. The over-bias voltage may refer to the amount that the bias voltage (e.g., the bias voltage at terminal208inFIG.1) exceeds the breakdown voltage of the SPAD. Breakdown voltage is the largest reverse voltage that can be applied to a SPAD without causing an exponential increase in the leakage current in the diode. The more the bias voltage exceeds the breakdown voltage (e.g., the larger the over-bias voltage or over-bias amount), the more sensitive the SPAD becomes. FIG.9is a graph illustrating how photon detection efficiency increases with increasing over-bias. Increasing the over-bias amount may result in an increase in PDE, as shown by the graph ofFIG.9. The linear profile ofFIG.9is merely illustrative. Changing the over-bias voltage between low and high levels may allow the SPAD devices to be changed between low and high photon detection efficiencies. Control circuitry such as control circuitry50may change the bias voltage provided to terminal208between different sub-exposures. FIG.10is a flowchart showing illustrative steps for operating a SPAD-based semiconductor device with different sub-exposures to produce a high dynamic range depth map. First, at step302, control circuitry50may set the bias voltage (e.g., Vbiasat terminal208) to a first level. In other words, the control circuitry50may provide a first bias voltage to terminal208. The first bias voltage may be a low bias voltage with a correspondingly low photon detection efficiency. Next, at step304, the SPAD-based semiconductor device may have a first sub-exposure while the bias voltage is at the first level. The first sub-exposure may occur for half of the total pulses of the light-emitting component52(e.g., N/2 where N is the total number of pulses). Based on the data from the SPAD-based semiconductor device from the first sub-exposure, a first partial depth map may be generated at step306. At step308, control circuitry50may set the bias voltage (e.g., Vbiasat terminal208) to a second level. In other words, the control circuitry50may provide a second bias voltage to terminal208. The second bias voltage may be higher than the first bias voltage. The SPADs may have a correspondingly higher photon detection efficiency. At step310, the SPAD-based semiconductor device may have a second sub-exposure while the bias voltage is at the second level. The second sub-exposure may occur for half of the total pulses of the light-emitting component52(e.g., N/2). Based on the data from the SPAD-based semiconductor device from the second sub-exposure, a second partial depth map may be generated at step312. Finally, at step314, the first and second partial depth maps may be combined to form a single high dynamic range (HDR) depth map. Because the first partial depth map is generated using a low PDE, the first partial depth map may be optimal for high light conditions due to the low saturation point associated with the low PDE. 
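The sequence of steps 302 to 314 in FIG. 10 can be summarized in a brief control-flow sketch. This is only one possible reading of the flowchart, not the patent's implementation: the helper functions (set_bias_voltage, run_sub_exposure, build_partial_depth_map), the bias voltage values, and the confidence-based merge rule are hypothetical stand-ins for control circuitry 50 and image processing circuitry 16.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for control circuitry 50 and the depth pipeline; these
# are simulated placeholders, not real device APIs.
def set_bias_voltage(level_volts: float) -> None:
    pass  # would program the bias at supply terminal 208 of each SPAD

def run_sub_exposure(num_pulses: int) -> np.ndarray:
    return rng.random((4, 4))  # placeholder for raw time-of-flight data

def build_partial_depth_map(raw: np.ndarray):
    depth = 100.0 * raw            # placeholder depth estimate (meters)
    confidence = 1.0 - 0.5 * raw   # placeholder per-pixel confidence
    return depth, confidence

def hdr_depth_exposure(v_bias_low: float, v_bias_high: float, total_pulses: int) -> np.ndarray:
    # Steps 302-306: low over-bias, low PDE, high saturation level.
    set_bias_voltage(v_bias_low)
    depth_low, conf_low = build_partial_depth_map(run_sub_exposure(total_pulses // 2))

    # Steps 308-312: high over-bias, high PDE, high sensitivity.
    set_bias_voltage(v_bias_high)
    depth_high, conf_high = build_partial_depth_map(run_sub_exposure(total_pulses // 2))

    # Step 314: combine the partial depth maps; one simple rule keeps the
    # per-pixel result with the higher confidence.
    return np.where(conf_high > conf_low, depth_high, depth_low)

hdr_map = hdr_depth_exposure(v_bias_low=24.0, v_bias_high=27.0, total_pulses=1000)
```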
Because the second partial depth map is generated using a high PDE, the second partial depth map may be optimal for low light conditions due to the improved sensitivity associated with the high PDE. Imaging processing circuitry16may combine the partial depth maps by selecting the most useful data from each partial depth map. The resulting HDR depth map may have accurate results over a wide range of distances and ambient light conditions. The example described inFIG.10of the low-PDE sub-exposure occurring before the high-PDE sub-exposure is merely illustrative. In general, the sub-exposures may occur in any desired order (e.g., the high-PDE sub-exposure may be before the low-PDE sub-exposure). Additionally, the example of only two sub-exposures is merely illustrative. In some embodiments, three sub-exposures (with three respective over-bias voltages and corresponding PDEs) or more may be performed for each exposure. Additionally, it should be understood that the example of each sub-exposure having the same duration (e.g., half of the light pulses) is merely illustrative. In some cases, one sub-exposure may have a longer duration than another sub-exposure. For example, the first sub-exposure may occur for one third of the light pulses and the second sub-exposure may occur for two thirds of the light pulses. Any desired sub-exposure durations may be used during each exposure. The sub-exposure durations may remain constant for each exposure or may change between different exposures. FIG.11is a schematic diagram showing how the SPAD-based semiconductor device14may generate a first partial depth map (e.g., first image data) during a first sub-exposure and a second partial depth map (e.g., second image data) during a second sub-exposure. The first partial depth map may be generated while the SPAD devices in the SPAD-based semiconductor device have a low photon detection efficiency. Accordingly, the first partial depth map will have high saturation rate and optimal high light performance. The second partial depth map may be generated while the SPAD devices in the SPAD-based semiconductor device have a high photon detection efficiency. Accordingly, the second partial depth map will have high sensitivity and optimal low light performance. Image processing circuitry16may receive the first and second partial depth maps from the SPAD-based semiconductor device and may generate a single HDR depth map. The HDR depth map may be associated with a single exposure of the SPAD-based semiconductor device. Subsequent exposures may be performed to produce additional HDR depth maps, allowing objects in a scene to be tracked over time. Image processing circuitry16may generate the HDR depth map using any desired techniques. The HDR depth map may be a superposition of the first and second partial depth maps, in one illustrative example. The image processing circuitry may include memory for storing image data that is used to then form the HDR depth map. FIG.12is a graph of detection probability versus distance for a SPAD-based semiconductor device (e.g., a silicon photomultiplier) having multiple sub-exposures with different photon detection efficiencies. The graph ofFIG.12shows performance of the SPAD-based semiconductor device during high light conditions. Profile402shows the detection probability of a single-exposure SPAD-based semiconductor device with a high photon detection efficiency during high light conditions (similar to as shown inFIG.8). 
As shown, without the multiple sub-exposure high dynamic range scheme, the detection probability starts below 100% and decreases quickly. In contrast, profile404shows the detection probability of a SPAD-based semiconductor device having multiple sub-exposures with different photon detection efficiencies. As shown by profile404, the detection probability may start at approximately 100% at low distances. As the distance increases, the detection probability may remain at approximately 100% until distance D3. At distance D3, the detection probability starts to decrease (with increasing distance) as shown in the graph. Distance D3may be between 100 and 200 meters, greater than 100 meters, greater than 150 meters, etc. Therefore, the high dynamic range sub-exposure scheme allows for a high detection probability even in high light conditions. The detection probability profile404may be similar regardless of ambient light levels due to the high dynamic range afforded by the different sub-exposures with different photon detection efficiencies. The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art. The foregoing embodiments may be implemented individually or in any combination.
29,223
11943543
DETAILED DESCRIPTION This patent document provides implementations and examples of a photographing or imaging device for generating a high dynamic range (HDR) image that may be used to substantially address one or more technical or engineering issues and to mitigate limitations or disadvantages encountered in some other imaging devices. Some implementations of the disclosed technology relate to the photographing or imaging device for generating a high dynamic range (HDR) image using an optimal method that is selected from among various methods. In the context of this patent document, the word optimal that is used in conjunction with the method for generating HDR images is used to indicate methods that provide a better performance for the imaging device. In this sense, the words optimal may or may not convey the best possible performance achievable by the imaging device. The disclosed technology can be implemented in some embodiments to determine a luminance and controllable items associated with a target object to be captured and control the controllable items by reflecting hardware characteristics (e.g., response characteristics) into a process of controlling the sensitivity of the imaging device, thereby generating a high dynamic range (HDR) image having a maximum dynamic range while minimizing noise. Reference will now be made in detail to the embodiments of the disclosed technology, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings. However, the disclosure should not be construed as being limited to the embodiments set forth herein. Hereafter, various embodiments will be described with reference to the accompanying drawings. However, it should be understood that the disclosed technology is not limited to specific embodiments, but includes various modifications, equivalents and/or alternatives of the embodiments. The embodiments of the disclosed technology may provide a variety of effects capable of being directly or indirectly recognized through the disclosed technology. FIG.1is a block diagram illustrating an example of a photographing or imaging device1based on some implementations of the disclosed technology.FIG.2is a graph illustrating examples of responses that vary depending on luminance of a high-sensitivity pixel and a low-sensitivity pixel implemented based on some implementations of the disclosed technology. In some implementations, the word “sensitivity” can be used to indicate the image sensor sensitivity to light. Therefore, the high-sensitivity pixel is more sensitive to light than the low-sensitivity pixel. Referring toFIG.1, the imaging device1may include any mechanical or electronic devices that can take still or motion pictures such as a digital still camera for photographing still images or a digital video camera for photographing motion pictures. Examples of the imaging device may include a digital single lens reflex (DSLR) camera, a mirrorless camera, or a smartphone, and others. The imaging device1may include a lens and an image pickup element to capture (or photograph) light of a target object and create an image of the target object. 
The imaging device 1 may include a lens 10, an aperture 20, a lens driver 30, an aperture driver 40, an image sensing device 100, and an image signal processor 200. The lens 10 may include an optical lens or an assembly of lenses aligned with respect to an optical axis. The lens 10 may be disposed in front of the image sensing device 100 so that light rays can be focused to a point on the image sensing device 100. The location of the lens 10 can be adjusted by the lens driver 30. For example, the lens 10 may be moved along the optical axis by the lens driver 30. The aperture 20 may be disposed in front of the image sensing device 100. The aperture driver 40 may control the amount of light that reaches the image sensing device 100 by adjusting the degree of opening or closing of the aperture 20. FIG. 1 illustrates the aperture 20 as being disposed between the lens 10 and the image sensing device 100 to receive light rays having penetrated the lens 10 by way of example only. In other implementations, the aperture 20 may be disposed between lenses included in the lens 10, or may be disposed at the front end of the lens 10. The light rays having penetrated the lens 10 and the aperture 20 reach the light reception surface of the image sensing device 100, forming an image of a target object to be captured on the image sensing device 100.
The lens driver 30 may adjust the position of the lens 10 in response to a control signal (CS) received from the image signal processor 200. The lens driver 30 may perform various operations such as autofocusing, zooming in and out, and focusing by adjusting the position of the lens 10. The aperture driver 40 may adjust the degree of exposure to light by controlling the opening/closing of the aperture 20 in response to the control signal (CS) received from the image signal processor 200. In this way, the aperture driver 40 may adjust the amount of light rays that reach the image sensing device 100.
The image sensing device 100 may be a Complementary Metal Oxide Semiconductor Image Sensor (CIS) for converting incident light into electrical signals. The exposure time, conversion gain, analog gain, and other settings of the image sensing device 100 may be adjusted by the image signal processor 200. The image sensing device 100 may include a plurality of pixels having different sensitivities to light. In some implementations, the sensitivity may refer to an increase (or an increase in response) in the values of image data (IDATA) in response to an increase in the intensity of incident light. That is, as the sensitivity increases, the amount of increase in the values of the image data (IDATA) corresponding to the intensity of incident light also increases. As the sensitivity decreases, the amount of increase in the values of the image data (IDATA) corresponding to the intensity of incident light also decreases. The sensitivity may be determined based on light transmittance, a conversion gain, an exposure time, an analog gain, etc. A detailed structure and operations of the image sensing device 100 will be described later with reference to FIG. 3.
The image signal processor 200 may process image data (IDATA) received from the image sensing device 100, and may control the constituent elements of the imaging device 1 according to the result of processing the image data or an external input signal.
The image signal processor 200 may reduce noise in the image data (IDATA), and may perform various kinds of image signal processing (e.g., gamma correction, color filter array interpolation, color matrix, color correction, color enhancement, etc.) for image-quality improvement in the image data (IDATA). In addition, the image signal processor 200 may compress image data (IDATA) that has been created by the image signal processing and create an image file using the compressed image data. Alternatively, the image signal processor 200 may restore the image data (IDATA) from the image file. In this case, the format for compressing such image data (IDATA) may be a reversible format or an irreversible format. Examples of such compression formats include the Joint Photographic Experts Group (JPEG) format and the JPEG 2000 format for a still image. Motion pictures can be compressed by compressing a plurality of frames according to Moving Picture Experts Group (MPEG) standards. For example, the image files may be created according to Exchangeable image file format (Exif) standards.
In addition, the image signal processor 200 may generate an HDR image by synthesizing at least two images that are generated using image sensing pixels having different sensitivities. A configuration for generating the HDR image may be defined as an image synthesis unit that is distinct from the HDR controller 300. For example, the image sensing device 100 may output a low-sensitivity image generated by a low-sensitivity pixel with a relatively low sensitivity to light and a high-sensitivity image generated by a high-sensitivity pixel with a relatively high sensitivity to light. The image signal processor 200 may combine the low-sensitivity image and the high-sensitivity image to generate an HDR image. Although the example discussed above uses the low-sensitivity image and the high-sensitivity image to generate the HDR image, the disclosed technology can be implemented in some embodiments to use N images having N different sensitivities, where N is an integer greater than or equal to 2. In one example, the image signal processor 200 may generate the HDR image using image data (IDATA) with N different sensitivities.
The HDR image generated by the image signal processor 200 may be stored in an internal memory of the imaging device 1 or an external memory in response to a user request or in an autonomous manner to display the stored HDR image on a display device. In addition, the image signal processor 200 may perform unclearness removal processing, blur removal processing, edge emphasis processing, image analysis processing, image recognition processing, image effect processing, or others. In addition, the image signal processor 200 may perform display image signal processing for the display. For example, the image signal processor 200 may perform luminance level adjustment, color correction, contrast adjustment, outline emphasis adjustment, screen division processing, character image generation, image synthesis processing, or others.
The image signal processor 200 may control the lens driver 30, the aperture driver 40, and the image sensing device 100 according to (1) control information automatically generated from image data (IDATA) that is input in real time, or (2) control information that is manually input by a user. In particular, in some implementations, the image signal processor 200 may include an HDR controller 300. In other implementations, the HDR controller 300 may also be implemented independently of the image signal processor 200.
For example, the HDR controller300may be included in the image sensing device100. The HDR controller300may control at least one of the aperture driver40and the image sensing device100so that pixels of the image sensing device100may have an optimal dynamic range. FIG.2shows responses as a function of the intensity of light that is incident upon the high-sensitivity pixel and the low-sensitivity pixel. The high-sensitivity pixel exhibits a relatively large amount of increase in its response as the intensity of incident light increases, and the low-sensitivity pixel exhibits a relatively small amount of increase in its response as the intensity of incident light increases. The response of the high-sensitivity pixel and the response of the low-sensitivity pixel vary depending on the luminance or the intensity of incident light applied to the corresponding pixel. Here, the response may refer to image data (IDATA) or values of the image data (IDATA) of the corresponding pixel. The response may have a signal-to-noise ratio (SNR) limit (denoted by SNR limit) and a saturation level (denoted by Saturation). InFIG.2, the signal-to-noise ratio (SNR) threshold level refers to a threshold value that can satisfy a reference SNR that is predetermined. A response less than the SNR threshold level may be treated as an invalid response not satisfying the reference SNR, and a response greater than the SNR threshold level may be treated as a valid response satisfying the reference SNR. The reference SNR may be determined experimentally in consideration of characteristics of the image sensing device100. A saturation level refers to a maximum level that indicates the intensity of incident light. The saturation level may be determined based on: the capability of the pixel (e.g., capacitance of a photoelectric conversion element) for converting the intensity of incident light into photocharges; the capability (e.g., capacitance of a floating diffusion (FD) region) for converting photocharges into analog signals; and the capability (e.g., an input range of the ADC) for converting analog signals into digital signals. As the intensity of incident light increases, the response may increase in proportion to the intensity of incident light until the response reaches the saturation level. After the response reaches the saturation level, the response may not increase although the intensity of incident light increases. For example, after the response reaches the saturation level, the response remains at the same value as the saturation value and does not increase above the saturation level. The valid response of each pixel may refer to a response that can indicate the intensity of incident light while satisfying the reference SNR. The range of the intensity of incident light corresponding to the valid response of a pixel may be referred to as a dynamic range of the pixel. That is, the dynamic range of the pixel may refer to the intensity range of the incident light in which each pixel has a valid response. The response of the high-sensitivity pixel in response to an increase in the intensity of incident light is relatively large. Thus, the response graph of the high-sensitivity pixel inFIG.2has a relatively large slope until the response reaches the saturation level and has a fixed level corresponding to the saturation level regardless of the increase in the intensity of incident light after the response reaches the saturation level. 
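The piecewise-linear response behavior of FIG. 2, which applies both to the high-sensitivity pixel described above and to the low-sensitivity pixel described in the next paragraph, can be sketched as follows. The slopes, SNR threshold, and saturation level are assumed, illustrative numbers rather than values taken from the disclosure.

```python
def pixel_response(intensity: float, slope: float, saturation: float = 1023.0) -> float:
    """Idealized FIG. 2 style response: linear in light intensity, then clipped."""
    return min(slope * intensity, saturation)

def dynamic_range(slope: float, snr_threshold: float = 50.0,
                  saturation: float = 1023.0) -> tuple[float, float]:
    """Intensity range over which the response is valid: above the SNR
    threshold but not yet saturated."""
    return snr_threshold / slope, saturation / slope

# Assumed slopes: the high-sensitivity pixel responds 8x more strongly.
print("high-sensitivity dynamic range:", dynamic_range(slope=8.0))
print("low-sensitivity dynamic range:", dynamic_range(slope=1.0))
```

With these assumed slopes, both the minimum and the maximum of the high-sensitivity dynamic range fall below those of the low-sensitivity dynamic range, matching the DR_H and DR_L relationship shown in FIG. 2.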
The response of the low-sensitivity pixel in response to an increase in the intensity of incident light is relatively small. Thus, the response graph of the low-sensitivity pixel in FIG. 2 has a relatively small slope until the response reaches the saturation level and has a fixed level corresponding to the saturation level regardless of the increase in the intensity of incident light after the response reaches the saturation level.
As illustrated in FIG. 2, a minimum value of a high-sensitivity pixel dynamic range (DR_H) may be less than the minimum value of a low-sensitivity pixel dynamic range (DR_L), and a maximum value of the high-sensitivity pixel dynamic range (DR_H) may be less than the maximum value of the low-sensitivity pixel dynamic range (DR_L). Therefore, in a low-luminance range in which the intensity of incident light is relatively small, the high-sensitivity pixel may be more suitable for detecting the intensity of incident light. In a high-luminance range in which the intensity of incident light is relatively large, the low-sensitivity pixel may be more suitable for detecting the intensity of incident light.
High dynamic range (HDR) can be implemented using both the response of the high-sensitivity pixel suitable for the low-luminance range and the response of the low-sensitivity pixel suitable for the high-luminance range. In other words, as compared to using only one of the high-sensitivity pixel and the low-sensitivity pixel, using both the high-sensitivity pixel and the low-sensitivity pixel can allow the overall pixel array to have a high dynamic range (HDR) corresponding to a specific range from the minimum value of the high-sensitivity pixel dynamic range to the maximum value of the low-sensitivity pixel dynamic range. To this end, at least a portion of the high-sensitivity pixel dynamic range and at least a portion of the low-sensitivity pixel dynamic range may overlap each other. In some implementations, a high dynamic range (HDR) image corresponding to the high dynamic range (HDR) can be synthesized using the high-sensitivity pixel (HPX) and the low-sensitivity pixel (LPX) by using: a method for synthesizing the HDR image by calculating (e.g., summing) the HPX response and the LPX response; and/or a method for forming an image based on the HPX response at a low-luminance level and forming an image based on the LPX response at a high-luminance level.
As shown in FIG. 2, when the sensitivity of each pixel (i.e., a slope of response) is adjusted, the dynamic range of the corresponding pixel can be adjusted. The sensitivity of the pixel may be determined based on one or more sensitivity items. Here, the sensitivity items include at least one of: the amount of light exposure; light transmittance; exposure time; conversion gain; and analog gain, which are indicative of the pixel sensitivity. Accordingly, the HDR controller 300 can adjust the dynamic range of each pixel by adjusting controllable items from among the sensitivity items discussed above. The controllable items are the sensitivity items that can be controlled by the HDR controller 300.
As the dynamic range is widened, there is an advantage in that a valid response corresponding to incident light having a wide luminance range can be obtained. However, the widening of the dynamic range may lead to an excessive change in the exposure time and in the time point where incident light is captured by the respective pixels, causing the incident light to be captured by the respective pixels at different time points.
As a result, motion artifacts of a fast moving object (e.g., a target object moving at high speed) may increase. If the dynamic range is unnecessarily extended to the low luminance range while photographing a very bright scene, responses of the high-sensitivity pixels may be saturated, worsening the overall image quality. Therefore, the HDR controller300may control at least one of the aperture driver40and the image sensing device100based on characteristics of the scene and characteristics of the controllable items, so that the pixels of the image sensing device100can have the optimal dynamic range. In the context of this patent document, the word optimal that is used in conjunction with the dynamic range is used to indicate a dynamic range that provides a better performance for the imaging device. FIG.3is a block diagram illustrating an example of the image sensing device and the high dynamic range (HDR) controller300shown inFIG.1based on some implementations of the disclosed technology. Referring toFIG.3, the image sensing device100may include a pixel array110, a row driver120, a correlated double sampler (CDS)130, an analog-digital converter (ADC)140, an output buffer150, a column driver160and a timing controller170. The components of the image sensing device100illustrated inFIG.1are discussed by way of example only, and this patent document encompasses numerous other changes, substitutions, variations, alterations, and modifications. The pixel array110may include a plurality of unit imaging pixels arranged in rows and columns. In one example, the plurality of unit imaging pixels can be arranged in a two dimensional pixel array including rows and columns. In another example, the plurality of unit imaging pixels can be arranged in a three dimensional pixel array. The plurality of unit pixels may convert an optical signal into an electrical signal on a unit pixel basis or a pixel group basis, where unit pixels in a pixel group share at least certain internal circuitry. The pixel array110may receive driving signals, including a row selection signal, a pixel reset signal and a transmission signal, from the row driver120. Upon receiving the driving signal, corresponding imaging pixels in the pixel array110may be activated to perform the operations corresponding to the row selection signal, the pixel reset signal, and the transmission signal. The row driver120may activate the pixel array110to perform certain operations on the imaging pixels in the corresponding row based on commands and control signals provided by controller circuitry such as the timing controller170. In some implementations, the row driver120may select one or more imaging pixels arranged in one or more rows of the pixel array110. The row driver120may generate a row selection signal to select one or more rows among the plurality of rows. The row decoder120may sequentially enable the pixel reset signal for resetting imaging pixels corresponding to at least one selected row, and the transmission signal for the pixels corresponding to the at least one selected row. Thus, a reference signal and an image signal, which are analog signals generated by each of the imaging pixels of the selected row, may be sequentially transferred to the CDS130. 
The reference signal may be an electrical signal that is provided to the CDS130when a sensing node of an imaging pixel (e.g., floating diffusion node) is reset, and the image signal may be an electrical signal that is provided to the CDS130when photocharges generated by the imaging pixel are accumulated in the sensing node. The reference signal indicating unique reset noise of each pixel and the image signal indicating the intensity of incident light may be generically called a pixel signal as needed. CMOS image sensors may use the correlated double sampling (CDS) to remove undesired offset values of pixels known as the fixed pattern noise by sampling a pixel signal twice to remove the difference between these two samples. In one example, the correlated double sampling (CDS) may remove the undesired offset value of pixels by comparing pixel output voltages obtained before and after photocharges generated by incident light are accumulated in the sensing node so that only pixel output voltages based on the incident light can be measured. In some embodiments of the disclosed technology, the CDS130may sequentially sample and hold voltage levels of the reference signal and the image signal, which are provided to each of a plurality of column lines from the pixel array110. That is, the CDS130may sample and hold the voltage levels of the reference signal and the image signal which correspond to each of the columns of the pixel array110. In some implementations, the CDS130may transfer the reference signal and the image signal of each of the columns as a correlate double sampling signal to the ADC140based on control signals from the timing controller170. The ADC140is used to convert analog CDS signals into digital signals. In some implementations, the ADC140may convert the correlate double sampling signal generated by the CDS130for each of the columns into a digital signal, and output the digital signal. The ADC140may include a plurality of column counters. Each column of the pixel array110is coupled to a column counter, and image data can be generated by converting the correlate double sampling signals received from each column into digital signals using the column counter. In another embodiment of the disclosed technology, the ADC140may include a global counter to convert the correlate double sampling signals corresponding to the columns into digital signals using a global code provided from the global counter. The output buffer150may temporarily hold the column-based image data provided from the ADC140to output the image data. In one example, the image data provided to the output buffer150from the ADC140may be temporarily stored in the output buffer150based on control signals of the timing controller170. The output buffer150may provide an interface to compensate for data rate differences or transmission rate differences between the image sensing device100and other devices. The column driver160may select a column of the output buffer upon receiving a control signal from the timing controller170, and sequentially output the image data, which are temporarily stored in the selected column of the output buffer150. In some implementations, upon receiving an address signal from the timing controller170, the column driver160may generate a column selection signal based on the address signal and select a column of the output buffer150, outputting the image data as an output signal from the selected column of the output buffer150. 
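As a rough illustration of correlated double sampling, subtracting the reference (reset) sample from the image sample cancels the per-pixel offset so that only the contribution of the accumulated photocharges remains. The sketch below uses made-up numbers and is not a circuit-level model of the CDS 130.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed per-pixel offsets (fixed pattern noise) and signal level, for illustration.
fixed_offset = rng.normal(loc=100.0, scale=5.0, size=(4, 4))
signal = np.full((4, 4), 40.0)          # contribution of accumulated photocharges

reference_sample = fixed_offset          # sampled after the sensing node is reset
image_sample = fixed_offset + signal     # sampled after charge transfer

cds_output = image_sample - reference_sample  # per-pixel offset cancels
assert np.allclose(cds_output, signal)
```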
The timing controller 170 may control operations of the row driver 120, the ADC 140, the output buffer 150 and the column driver 160. The timing controller 170 may provide the row driver 120, the CDS 130, the ADC 140, the output buffer 150, and the column driver 160 with a clock signal required for the operations of the respective components of the image sensing device 100, a control signal for timing control, and address signals for selecting a row or column. In an embodiment of the disclosed technology, the timing controller 170 may include a logic control circuit, a phase lock loop (PLL) circuit, a timing control circuit, a communication interface circuit and others.
The timing controller 170 may control the sensitivity of each of the pixels included in the pixel array 110. The sensitivity of each pixel may be determined by light transmittance, exposure time, conversion gain, and analog gain. Here, the light transmittance may refer to a ratio of the intensity of light reaching a device (i.e., a photoelectric conversion element to be described later) that converts light into photocharges within the pixel with respect to the intensity of light incident upon the pixel. The exposure time may refer to a time period during which light incident upon the pixel is converted into photocharges. The conversion gain may refer to a ratio of a pixel signal (i.e., voltage) obtained by converting photocharges to the amount of photocharges generated by the pixel. The analog gain may refer to a ratio of a digital value (i.e., image data) obtained by converting the pixel signal to a level of the pixel signal that is output from the pixel.
The higher the light transmittance, the higher the pixel sensitivity. The longer the exposure time, the higher the pixel sensitivity. The higher the conversion gain, the higher the pixel sensitivity. In addition, the higher the analog gain, the higher the pixel sensitivity. Conversely, the lower the light transmittance, the lower the pixel sensitivity. The shorter the exposure time, the lower the pixel sensitivity. The lower the conversion gain, the lower the pixel sensitivity. In addition, the lower the analog gain, the lower the pixel sensitivity.
The light transmittance may have a fixed value that is predetermined for each pixel. The exposure time, the conversion gain, and the analog gain may be controllable items. In order to control the exposure time or the conversion gain of each pixel, the timing controller 170 may control the row driver for supplying a control signal to the pixel array 110. In order to control the analog gain of each pixel, the timing controller 170 may control the ADC 140 configured to perform analog-to-digital conversion (ADC). The disclosed technology can be implemented in some embodiments to control the exposure time, the conversion gain, and the analog gain and thereby control the sensitivity of each pixel by the timing controller 170, as will be discussed below with reference to FIGS. 6 to 8.
The HDR controller 300 may include a luminance acquisition unit 310, a controllable item acquisition unit 320, and a set value calculation unit 330. The luminance acquisition unit 310 may acquire luminance information associated with a target region from among a captured scene image based on image data (IDATA) generated by the image sensing device 100. In this case, the captured scene may be represented by image data (IDATA) corresponding to a frame that is photographed by the image sensing device 100.
That is, the scene may be an image captured by the entire pixel array110during a predetermined period corresponding to the frame, and pixels belonging to the target region corresponding to at least a portion of the scene image may be defined as target pixels. The target region may refer to a region designated by a user (e.g., a specific subject) as a region included in the scene image, may refer to a region (i.e., the brightest region) in which the image data (IDATA) having a high luminance from among the scene image is concentrated, or may refer to a region corresponding to the entire scene image. Further, the luminance may refer to the value of image data (IDATA). In some other implementations, the luminance acquisition unit310may also acquire luminance of the target region using a separate device (e.g., a photometric sensor, other camera devices, etc.). The controllable item acquisition unit320may acquire controllable items from among the amount of light exposure, exposure time, conversion gain, and analog gain. Here, the amount of light exposure may be a value that is determined according to the degree of opening of the aperture20under control of the aperture driver40. Each of the amount of light exposure, the exposure time, the conversion gain, and the analog gain may be a fixed value that is predetermined or a controllable value. For example, in a night photographing mode, a system (e.g., an application processor) may fix or set the amount of light exposure to a maximum amount of light exposure. In another example, in a video mode or a high-speed photographing mode in which a time allocated to each frame is very short, the system may limit the exposure time to be less than or equal to a predetermined time, or may fix the exposure time to a constant time. In order to reduce power consumption required for controlling and image processing of the image sensing device100, the system may limit the controlling of at least one of the conversion gain and the analog gain. The set value calculation unit330may calculate a set value for each controllable item based on the target-region luminance received from the luminance acquisition unit310and the controllable items received from the controllable item acquisition unit320, and may generate a control signal (CS) indicating the calculated set value. Here, the set value may refer to information for controlling the controllable items (e.g., the amount of light exposure, exposure time, conversion gain, and analog gain). A detailed operation of the set value calculation unit330will be described later with reference toFIG.10. FIG.4is a circuit diagram illustrating an example of pixels included in the pixel array110shown inFIG.3based on some implementations of the disclosed technology. Referring toFIG.4, the pixel (PX) may be any one of the plurality of pixels included in the pixel array110. AlthoughFIG.4shows only one pixel (PX) for convenience of description, it should be noted that other pixels may also have structures and perform operations that are similar or identical to those of the pixel (PX). The pixel (PX) may include a photoelectric conversion element (PD), a transfer transistor (TX), a reset transistor (RX), a floating diffusion region (FD), a conversion gain (CG) transistor (CX), first and second capacitors C1-C2, a source follower transistor (SF), and a select transistor (SX). 
AlthoughFIG.4shows that the pixel (PX) includes only one photoelectric conversion element (PD) by way of example, it should be noted that the pixel (PX) can also be a shared pixel including a plurality of photoelectric conversion elements. In this case, the plurality of transfer transistors may be provided to correspond to the photoelectric conversion elements, respectively. Each of the photoelectric conversion elements (PDs) may generate and accumulate photocharges corresponding to the intensity of incident light. For example, each of the photoelectric conversion elements (PDs) may be implemented as a photodiode, a phototransistor, a photogate, a pinned photodiode or a combination thereof. If the photoelectric conversion element (PD) is implemented as a photodiode, the photoelectric conversion element (PD) may be a region that is doped with second conductive impurities (e.g., N-type impurities) in a substrate including first conductive impurities (e.g., P-type impurities). The transfer transistor (TX) may be coupled between the photoelectric conversion element (PD) and the floating diffusion region (FD). The transfer transistor (TX) may be turned on or off in response to a transfer control signal (TG). If the transfer transistor (TX) is turned on, photocharges accumulated in the corresponding photoelectric conversion element (PD) can be transmitted to the floating diffusion region (FD). The reset transistor (RX) may be disposed between the floating diffusion region (FD) and the power-supply voltage (VDD), and the voltage of the floating diffusion region (FD) can be reset to the power-supply voltage (VDD) in response to a reset control signal (RG). The floating diffusion region (FD) may accumulate photocharges received from the transfer transistor (TX). The floating diffusion region (FD) can be coupled to the first capacitor (C1) connected to a ground terminal. For example, the floating diffusion region (FD) may be a region that is doped with second conductive impurities (e.g., N-type impurities) in a substrate (e.g., a P-type substrate) including first conductive impurities. In this case, the substrate and the impurity doped region can be modeled as the first capacitor (C1) acting as a junction capacitor. The CG transistor (CX) may be coupled between the floating diffusion region (FD) and the second capacitor (C2), and may selectively connect the second capacitor (C2) to the floating diffusion region (FD) in response to a CG control signal (CG). The second capacitor (C2) may include at least one of a Metal-Insulator-Metal (MIM) capacitor, a Metal-Insulator-Polysilicon (MIP) capacitor, a Metal-Oxide-Semiconductor (MOS) capacitor, and a junction capacitor. When the CG transistor (CX) is turned off, the floating diffusion region (FD) may have electrostatic capacity corresponding to capacitance of the first capacitor (C1). When the CG transistor (CX) is turned on, the floating diffusion region (FD) may have electrostatic capacitance corresponding to the sum of capacitance of the first capacitor (C1) and capacitance of the second capacitor (C2). That is, the CG transistor (CX) may control capacitance of the floating diffusion region (FD). AlthoughFIG.4illustrates only one CG transistor (CX) for convenience of description, it should be noted that a plurality of CG transistors can also be used. In this case, the capacitance of the floating diffusion region (FD) may vary. 
The source follower transistor (SF) may be coupled between the power-supply voltage (VDD) and the select transistor (SX), may amplify a change in electric potential or voltage of the floating diffusion region (FD) that has received photocharges accumulated in the photoelectric conversion element (PD), and may transmit the amplified result to the selection transistor (SX). The select transistor (SX) may be coupled between the source follower transistor (SF) and the output signal line, and may be turned on by a selection control signal (SEL), so that the selection transistor (SX) can output the electrical signal received from the source follower transistor (SF) as the pixel signal (PS). FIG.5is a conceptual diagram illustrating an example of a method for adjusting sensitivity of each pixel based on a difference in light transmittance. Referring toFIG.5, a high light-transmittance pixel (HPX) has a relatively high light-transmittance and a low light-transmittance pixel (LPX) has a relatively low light-transmittance. In other words, three high light-transmittance pixels (HPXs) and only one low light-transmittance pixel (LPX) may be arranged in a (2×2) matrix (e.g., a unit matrix). Each of the high light-transmittance pixel (HPX) and the low light-transmittance pixel (LPX) may have a structure corresponding to the circuit diagram ofFIG.4. In some implementations, each of the high light-transmittance pixel (HPX) and the low light-transmittance pixel (LPX) may include a photoelectric conversion element and a transfer transistor, and the remaining constituent elements other than the photoelectric conversion element and the transfer transistor may be implemented as a shared pixel structure shared by four pixels. In some implementations, the high light-transmittance pixel (HPX) and the low light-transmittance pixel (LPX) may be pixels that sense light of the same color (e.g., red, blue, or green). In this case, the pixel array may form a quad Bayer pattern structure on a (2×2) matrix basis. Referring to the cross-sectional view ofFIG.5, the high light-transmittance pixel (HPX) and the low light-transmittance pixel (LPX) taken along the line A-A′ may include a substrate510, at least one photoelectric conversion element520, at least one optical filter530, at least one microlens540, and at least one light blocking structure550. For example, the substrate510may be a P-type or N-type bulk substrate, may be a substrate formed by growing a P-type or N-type epitaxial layer on the P-type bulk substrate, or may be a substrate formed by growing a P-type or N-type epitaxial layer on the N-type bulk substrate. The photoelectric conversion element520may be formed in the substrate510, and may correspond to the photoelectric conversion element (PD) shown inFIG.4. That is, the photoelectric conversion element520may generate and accumulate photocharges corresponding to the intensity of incident light having penetrated the microlens540and the optical filter530. The optical filters530may selectively transmit light (e.g., red light, green light, blue light, magenta light, yellow light, cyan light, infrared (IR) light) having a wavelength band to be transmitted. In this case, the wavelength band may refer to a wavelength band of light to be selectively transmitted by the corresponding optical filter. For example, each of the optical filters530may include a colored photosensitive material corresponding to a specific color, or may include thin film layers that are alternately arranged. 
The optical filters included in the pixel array 110 may be arranged to correspond to the pixels arranged in a matrix array including a plurality of rows and a plurality of columns, resulting in formation of an optical filter array. Each of the microlenses 540 may be formed over each of the optical filters 530, and may increase the light gathering power of incident light, resulting in an increase in the light reception (Rx) efficiency of the photoelectric conversion element 520.
The light blocking structure 550 may be disposed between one surface of the substrate 510 and the optical filter 530, so that at least a portion of incident light that has penetrated the optical filter 530 in the low light-transmittance pixel (LPX) is blocked by the light blocking structure without being transferred to the photoelectric conversion element 520. The light blocking structure 550 may include at least one of a material (e.g., silver or aluminum) having a high light reflectivity and a material (e.g., tungsten) having a high light absorption rate.
The total area of the low light-transmittance pixel (LPX) may be defined as the sum of a blocked area, i.e., a region where the light blocking structure 550 is disposed, and an open area, i.e., a region where the light blocking structure 550 is not disposed. Light transmittance of the low light-transmittance pixel (LPX) may be determined according to a ratio between the blocked area and the open area. The high light-transmittance pixel (HPX), which does not include the light blocking structure 550, may have a higher light transmittance than the low light-transmittance pixel (LPX) including the light blocking structure 550. That is, when incident light having the same intensity is incident upon the high light-transmittance pixel (HPX) and the low light-transmittance pixel (LPX), the intensity of light transferred to the photoelectric conversion element 520 of the low light-transmittance pixel (LPX) may be less than the intensity of light transferred to the photoelectric conversion element 520 of the high light-transmittance pixel (HPX). In addition, the intensity of light transferred to the photoelectric conversion element 520 of the low light-transmittance pixel (LPX) may increase with a relatively low slope in response to the increasing intensity of incident light. The intensity of light transferred to the photoelectric conversion element 520 of the high light-transmittance pixel (HPX) may increase with a relatively high slope in response to the increasing intensity of incident light. Since each of the intensity of light transferred to the photoelectric conversion element 520 of the low light-transmittance pixel (LPX) and the intensity of light transferred to the photoelectric conversion element 520 of the high light-transmittance pixel (HPX) is converted into a pixel signal, the response of the low light-transmittance pixel (LPX) may be similar to the response of the low-sensitivity pixel shown in FIG. 2, and the response of the high light-transmittance pixel (HPX) may be similar to the response of the high-sensitivity pixel shown in FIG. 2.
Although the light blocking structure 550 shown in FIG. 5 is disposed at the edge of the low light-transmittance pixel (LPX), other implementations are also possible. For example, the light blocking structure 550 may be disposed at any location of the low light-transmittance pixel (LPX), and the light blocking structure 550 may also be disposed over the entire region of the low light-transmittance pixel (LPX) in a situation where the light blocking structure 550 only partially blocks incident light.
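The light transmittance of the low light-transmittance pixel (LPX) follows directly from the ratio of the open area to the total pixel area described above. The sketch below only restates that relationship; the area values are assumed for illustration.

```python
def lpx_light_transmittance(open_area: float, blocked_area: float) -> float:
    """Fraction of incident light reaching the photoelectric conversion element
    520 of a low light-transmittance pixel (LPX)."""
    return open_area / (open_area + blocked_area)

# Assumed geometry: the light blocking structure 550 covers three quarters of the pixel.
print(lpx_light_transmittance(open_area=0.25, blocked_area=0.75))  # -> 0.25
```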
The image sensing device100based on some implementations of the disclosed technology can generate the HDR image using only one image by implementing the low-sensitivity pixel and the high-sensitivity pixel within only one pixel array110. FIG.6is a timing diagram illustrating an example of a method for adjusting sensitivity of each pixel based on a difference in exposure time. Referring toFIG.6, it is assumed that at least one long-exposure pixel having a relatively long exposure time and at least one short-exposure pixel having a relatively short exposure time are disposed together in one row of the pixel array110for convenience of description. The pixel array110may be activated by the row driver120on a row basis, so that the long-exposure pixel and the short-exposure pixel may receive the same reset signal (RG) and the same row selection signal (SEL). However, in order to allow the long-exposure pixel and the short-exposure pixel to have different exposure times, the long-exposure pixel and the short-exposure pixel may receive different transfer signals such as a first transfer signal (TG_L) and a second transfer signal (TG_S). Although not shown in the drawings, the long-exposure pixel and the short-exposure pixel may have the same CG signal (CG) or different CG signals (CGs). Each of the reset signal (RG), the row selection signal (SEL), the first transfer signal (TG_L), and the second transfer signal (TG_S) may have a logic low level (L) and a logic high level (H). The transistor configured to receive a signal having a logic low level (L) may be turned off, and the transistor configured to receive a signal having a logic high level (H) may be turned on. An operation period (i.e., a time period for operation) of each of the long-exposure pixel and the short-exposure pixel may include a reset period (RS), an exposure period, and a readout period (RD). In other implementations, the operation period may further include the readout period (RD) for generating a reference signal after the reset period (RS). The reset period (RS) may be a time period in which photocharges that remain unused in the corresponding pixel are removed and the floating diffusion region (FD) is reset to the power-supply voltage (VDD). In the reset period (RS), each of the reset signal (RG), the row selection signal (SEL), the first transfer signal (TG_L), and the second transfer signal (TG_S) may have a logic high level (H). The exposure period EX1may be a time period in which a photoelectric conversion element (PD) of the long-exposure pixel generates and accumulates photocharges corresponding to the intensity of incident light and the accumulated photocharges are then transmitted to the floating diffusion region (FD). The exposure period EX2may be a time period in which a photoelectric conversion element (PD) of the short-exposure pixel generates and accumulates photocharges corresponding to the intensity of incident light and the accumulated photocharges are then transmitted to the floating diffusion region (FD). After the reset period (RS), the photoelectric conversion element (PD) of the long-exposure pixel may generate and accumulate photocharges corresponding to the intensity of incident light. 
Until the first transfer signal (TG_L) transitions from the logic high level (H) to the logic low level (L), the transfer transistor of the long-exposure pixel may transmit photocharges from the photoelectric conversion element (PD) of the long-exposure pixel to the floating diffusion region (FD) of the long-exposure pixel. In other words, the long-exposure pixel may accumulate photocharges that have been generated during the first exposure period (EX1) in the floating diffusion region (FD). After the reset period (RS), the photoelectric conversion element (PD) of the short-exposure pixel may generate and accumulate photocharges corresponding to the intensity of incident light. Until the second transfer signal (TG_S) transitions from the logic high level (H) to the logic low level (L), the transfer transistor of the short-exposure pixel may transmit photocharges from the photoelectric conversion element (PD) of the short-exposure pixel to the floating diffusion region (FD) of the short-exposure pixel. In other words, the short-exposure pixel may accumulate photocharges that have been generated during the second exposure period (EX2) in the floating diffusion region (FD). Each of the exposure periods EX1and EX2may refer to a time period in which photocharges are generated and accumulated in each pixel, and the transfer signal applied to each pixel may determine the length of the exposure period. The readout period (RD) may refer to a time period in which each of the long-exposure pixel and the short-exposure pixel generates electrical signals corresponding to the photocharges accumulated in the floating diffusion region (FD) and then output the electrical signals as a pixel signal (PS). A time point where the first transfer signal (TG_L) transitions from the logic high level (H) to the logic low level (L) may be later than a time point where the second transfer signal (TG_S) transitions from the logic high level (H) to the logic low level (L). Accordingly, the first exposure period (EX1) may be longer than the second exposure period (EX2). In other words, when incident light having the same intensity is incident upon the long-exposure pixel and the short-exposure pixel, the amount of photocharges accumulated in the floating diffusion region (FD) of the short-exposure pixel may be smaller than the amount of photocharges accumulated in the floating diffusion region (FD) of the long-exposure pixel. In addition, the amount of photocharges accumulated in the floating diffusion region (FD) of the short-exposure pixel may increase with a relatively low slope in response to the increasing intensity of incident light. The amount of photocharges accumulated in the floating diffusion region (FD) of the long-exposure pixel may increase with a relatively high slope in response to the increasing intensity of incident light. Since each of the amount of photocharges accumulated in the floating diffusion region (FD) of the short-exposure pixel and the amount of photocharges accumulated in the floating diffusion region (FD) of the long-exposure pixel is converted into a pixel signal, the response of the short-exposure pixel may follow the response of the low-sensitivity pixel shown inFIG.2, and the response of the long-exposure pixel may follow the response of the high-sensitivity pixel shown inFIG.2. 
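The effect of the two exposure periods EX1 and EX2 in FIG. 6 can be approximated by the photocharge accumulated during each period. The photon flux, exposure times, and full-well capacity below are assumed values; the sketch simply mirrors the statement that the long-exposure pixel produces the larger response.

```python
def accumulated_photocharge(photon_flux: float, exposure_time_s: float,
                            full_well: float = 10_000.0) -> float:
    """Photocharge collected during an exposure period, clipped at the well capacity."""
    return min(photon_flux * exposure_time_s, full_well)

flux = 2.0e6                                                   # assumed photons per second
q_long = accumulated_photocharge(flux, exposure_time_s=4e-3)   # first exposure period EX1
q_short = accumulated_photocharge(flux, exposure_time_s=1e-3)  # second exposure period EX2
print(q_long, q_short)   # the long-exposure pixel accumulates more charge
```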
The image sensing device100based on another embodiment of the disclosed technology can generate the HDR image using only one image by implementing the low-sensitivity pixel and the high-sensitivity pixel within only one pixel array110. FIG.7is a timing diagram illustrating an example of a method for adjusting sensitivity of each pixel based on a difference in conversion gain. Referring toFIG.7, it is assumed that at least one high conversion gain (CG) pixel having a relatively high conversion gain and at least one low CG pixel having a relatively low conversion gain are disposed together in one row of the pixel array110for convenience of description and better understanding of the disclosed technology. The pixel array110may be activated by the row driver120on a row basis, so that the high CG pixel and the low CG pixel may receive the same reset signal (RG) and the same row selection signal (SEL). For convenience of description, it is assumed that the high CG pixel and the low CG pixel shown inFIG.7receive the same transfer signal (TG), but the high CG pixel and the low CG pixel may also receive different transfer signals TG_L and TG_S as shown inFIG.6. In order to allow the high CG pixel and the low CG pixel to have different conversion gains, the high CG pixel may receive a first CG signal (CG_H) and the low CG pixel may receive a second CG signal (CG_L). The operation period (i.e., a time period for operation) of each of the high CG pixel and the low CG pixel may include a reset period (RS), an exposure period, and a readout period (RD). The operations in each of the reset period (RS), the exposure period, and the readout period (RD) are substantially the same as those ofFIG.6, and as such redundant description thereof will herein be omitted for convenience of description. During the reset period (RS), the exposure period, and the readout period (RD), the high CG pixel may receive a first CG signal (CG_H) kept at a logic low level (L), and the low CG pixel may receive a second CG signal (CG_L) kept at a logic high level (H). Accordingly, capacitance of the floating diffusion region (FD) of the high CG pixel may correspond to capacitance of the first capacitor (C1), and capacitance of the floating diffusion region (FD) of the low CG pixel may correspond to the sum of the capacitance of the first capacitor (C1) and the capacitance of the second capacitor (C2). In the readout period (RD), the photocharges accumulated in the floating diffusion region (FD) may generate a change in voltage of the floating diffusion region (FD), and such voltage change of the floating diffusion region (FD) may be converted into electrical signals by the source follower transistor (SF). In this case, the degree of voltage change in the floating diffusion region (FD) may be determined by capacitance of the floating diffusion region (FD). In association with the same amount of photocharges, as the capacitance of the floating diffusion region (FD) decreases, the voltage change of the floating diffusion region (FD) may increase, and as the capacitance of the floating diffusion region (FD) increases, the voltage change of the floating diffusion region (FD) may decrease. That is, when the same amount of photocharges is accumulated in the floating diffusion region (FD) of the high CG pixel and the floating diffusion region (FD) of the low CG pixel, the magnitude of a pixel signal generated by the high CG pixel may be greater than the magnitude of a pixel signal generated by the low CG pixel. 
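The conversion-gain behavior just described reduces to the relation that the floating diffusion voltage swing equals the accumulated charge divided by the floating diffusion capacitance, where turning the CG transistor on adds the second capacitor in parallel. The sketch below illustrates this relation only; the capacitance and charge values are assumptions chosen for demonstration.

    # Illustrative sketch only: FD voltage swing = charge / FD capacitance.
    # Turning the CG transistor on adds C2 to C1, lowering the conversion gain.
    Q_E = 1.602e-19   # elementary charge in coulombs
    C1 = 5.0e-15      # assumed FD capacitance with the CG transistor off (high CG pixel)
    C2 = 15.0e-15     # assumed added capacitance with the CG transistor on (low CG pixel)

    def fd_voltage_swing(n_electrons, cg_on):
        """Voltage change of the FD node for a given number of photoelectrons."""
        c_fd = C1 + C2 if cg_on else C1
        return n_electrons * Q_E / c_fd

    for n in (1000, 5000, 10000):
        v_high = fd_voltage_swing(n, cg_on=False)  # high conversion gain pixel
        v_low = fd_voltage_swing(n, cg_on=True)    # low conversion gain pixel
        print(f"{n:>6d} e-  high-CG: {v_high * 1e3:6.2f} mV   low-CG: {v_low * 1e3:6.2f} mV")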
In addition, the magnitude of the pixel signal of the low CG pixel may increase with a relatively low slope in response to the increasing amount of photocharges, and the magnitude of the pixel signal of the high CG pixel may increase with a relatively high slope in response to the increasing amount of photocharges. In addition, each of the magnitude of the pixel signal of the low CG pixel and the magnitude of the pixel signal of the high CG pixel may be converted into image data (IDATA), so that the response of the low CG pixel may follow the response of the low-sensitivity pixel shown inFIG.2and the response of the high CG pixel may follow the response of the high-sensitivity pixel shown inFIG.2. The image sensing device100based on still another embodiment of the disclosed technology can simultaneously implement the low-sensitivity pixel and the high-sensitivity pixel within only one pixel array110, and can thus form (or generate) the HDR image using only one image. FIG.8is a graph illustrating an example of a method for adjusting sensitivity of each pixel based on a difference in analog gain. Referring toFIG.8, the ADC140may be implemented as a ramp-compare type ADC. In some implementations, the ramp-compare type ADC may include a comparator for comparing a ramp signal falling over time with an analog pixel signal, and a counter for performing counting until the ramp signal matches the analog pixel signal. In addition, the ADC140may be independently provided for each column line to which pixels belonging to the same column of the pixel array110are connected. The respective ADCs140may perform analog-to-digital conversion (ADC) using the same or different ramp signals. In the graph shown inFIG.8, the X-axis may represent time, and the Y-axis may represent voltage. A first ramp signal RAMP1and a second ramp signal RAMP2are depicted inFIG.8. Each of the first and second ramp signals RAMP1and RAMP2may be kept at a constant voltage until reaching a first time point (t1), and may then linearly decrease after lapse of the first time point (t1). The slope of the first ramp signal (RAMP1) may be less than the slope of the second ramp signal (RAMP2). The slope of the first ramp signal (RAMP1) and the slope of the second ramp signal (RAMP2) can be adjusted by controlling a resistance value of a variable resistor included in a ramp circuit configured to generate such ramp signals under control of the timing controller170, but is not limited thereto. A time period from the first time point (t1) (where each of the first and second ramp signals RAMP1and RAM P2begins to linearly decrease) to the second time point (t2) located after a predetermined time from the first time point (t1) may be defined as a countable range. The countable range may refer to a maximum time period in which the counter of the ADC140can perform counting. The countable range may be determined in response to a maximum number of counts of the counter per unit time, and may represent the output range of the ADC140. The input range of the ADC140may refer to a voltage range of the pixel signal that can be effectively converted into image data (IDATA) belonging to a predetermined output range (e.g., digital number (DN) of 0-1023) of the ADC140. When the first ramp signal (RAMP1) is input to the comparator, the voltage range of the pixel signal that can be effectively converted into pixel data within a countable time or range determined by the output range of the ADC140may correspond to a first input range (IR1). 
In the example ofFIG.8, when the first ramp signal (RAMP1) and the pixel signal (PS) are input to the comparator, the counter starts counting from the first time point (t1) and continuously performs counting until reaching a third time point (t3) where the pixel signal (PS) reaches a value that is equal to or higher than the first ramp signal (RAMP1), so that the accumulated counting value can be output as image data. The pixel signal (PS) may be a signal belonging to the first input range (IR1), and may be effectively converted into image data. In this case, the operation of effectively converting the pixel signal (PS) into image data may represent that the pixel signal (PS) is converted into image data indicating the voltage of the pixel signal (PS). If the pixel signal (PS) has a voltage less than the first ramp signal (RAMP1) at the second time point (t2), the pixel signal (PS) cannot be effectively converted into pixel data. When the second ramp signal (RAMP2) is input to the comparator, the voltage range of the pixel signal that can be effectively converted into pixel data within a countable time determined by the output range of the ADC140may correspond to a second input range (IR2). In the example ofFIG.8, when the second ramp signal (RAMP2) and the pixel signal (PS) are input to the comparator, the counter starts counting from the first time point (t1) and continuously performs counting until reaching a fourth time point (t4) where the pixel signal (PS) reaches a value that is equal to or higher than the second ramp signal (RAMP2), so that the accumulated counting value can be output as image data. One pixel connected to the ADC140that performs ADC using the first ramp signal (RAMP1) will hereinafter be referred to as a high AG pixel, and one pixel connected to the ADC140that performs ADC using the second ramp signal (RAMP2) will hereinafter be referred to as a low AG pixel. In other words, for the pixel signal (PS) having the same voltage, image data that is output from the ADC140connected to the high AG pixel may be higher than image data that is output from the ADC140connected to the low AG pixel. In addition, the image data output from the ADC140connected to the low AG pixel may increase with a relatively low slope in response to an increase of the pixel signal (e.g., the increase of the pixel signal may indicate the increase of an absolute value of the pixel signal), and the image data output from the ADC140connected to the high AG pixel may increase with a relatively high slope in response to an increase of the pixel signal. Therefore, the response of the low AG pixel may follow the response of the low-sensitivity pixel shown inFIG.2, and the response of the high AG pixel may follow the response of the high-sensitivity pixel shown inFIG.2. The image sensing device100based on still another embodiment of the disclosed technology can generate the HDR image using only one image by implementing the low-sensitivity pixel and the high-sensitivity pixel within only one pixel array110. Various embodiments in which the sensitivity of each pixel is adjusted in response to a difference in light transmittance, a difference in exposure time, a difference in conversion gain, and a difference in analog gain described inFIGS.5to8can be combined with each other as needed. 
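Before turning to how these sensitivity controls can be combined, the ramp-compare conversion described for FIG.8 can be approximated by a short sketch: the counter increments until the falling ramp crosses the pixel signal, so a shallower ramp slope yields more counts per volt (a higher analog gain) but a narrower usable input range within the same countable period. The slopes, signal levels, and 10-bit count limit below are assumptions consistent with the 0-1023 output range mentioned above, not the actual ADC140.

    # Illustrative sketch only: a ramp-compare ADC counts clock cycles until the
    # falling ramp crosses the pixel signal.
    def ramp_compare_adc(pixel_drop, slope_v_per_cycle, max_counts=1023):
        """Convert a pixel signal drop (volts below the ramp start) to a digital code.

        Returns None when the ramp does not reach the pixel signal within the
        countable range, i.e. the signal lies outside the ADC input range.
        """
        counts = 0
        ramp_drop = 0.0
        while ramp_drop < pixel_drop:
            counts += 1
            ramp_drop += slope_v_per_cycle
            if counts > max_counts:
                return None  # not effectively convertible with this ramp slope
        return counts

    SLOPE_RAMP1 = 0.5e-3   # assumed shallow slope, higher analog gain
    SLOPE_RAMP2 = 2.0e-3   # assumed steep slope, lower analog gain

    for drop in (0.1, 0.4, 1.5):  # assumed pixel signal drops in volts
        hi = ramp_compare_adc(drop, SLOPE_RAMP1)
        lo = ramp_compare_adc(drop, SLOPE_RAMP2)
        print(f"signal drop {drop:4.2f} V  high-AG code: {hi}  low-AG code: {lo}")

For the 1.5 V case the shallow ramp returns None, which mirrors the statement above that increasing the analog gain narrows the input range of the ADC140.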
For example, the CG transistor (CG) of the low light-transmittance pixel (LPX) having a relatively low light-transmittance may be turned on to reduce the slope of the response, or the CG transistor (CG) of the low light-transmittance pixel (LPX) may be turned off to increase the slope of the response. In addition, the sensitivity of each pixel can be adjusted using at least two selected from light transmittance, exposure time, conversion gain, and analog gain. Although each of the light transmittance, the exposure time, the conversion gain, and the analog gain described inFIGS.5to8has been described as having only two kinds of pixels (e.g., a high light-transmittance pixel HPX and a low light-transmittance pixel LPX) for convenience of description, it should be noted that each of the light transmittance, the exposure time, the conversion gain, and the analog gain described inFIGS.5to8may have three or more types (e.g., a high light-transmittance pixel (HPX), a low light-transmittance pixel (LPX), and a medium-transmittance pixel). FIG.9is a flowchart illustrating an example of a method for forming an HDR image by the imaging device based on some implementations of the disclosed technology.FIG.10is a flowchart illustrating an example of the operation S30shown inFIG.9based on some implementations of the disclosed technology. Referring toFIG.9, the luminance acquisition unit310may obtain luminance of the target region from among the scene image based on the image data (IDATA) generated by the image sensing device100(S10). The controllable item acquisition unit320may obtain at least one controllable item from among the sensitivity items (e.g., the amount of light exposure, light transmittance, exposure time, conversion gain, and analog gain) (S20). Here, while the light transmittance item belongs to the sensitivity items, light transmittance of each pixel is inevitably fixed in hardware, so that the controllable item acquisition unit320may not consider the light transmittance item as a controllable item. In response to luminance of the target region received from the luminance acquisition unit310and the controllable item received from the controllable item acquisition unit320, the set value calculation unit330may calculate the set value for each controllable item, and may generate a control signal (CS) indicating the calculated set value (S30). The set value calculation unit330may calculate the set value for each controllable item such that each of the pixels of the image sensing device100has an optimal dynamic range. Here, the optimal dynamic range may refer to a dynamic range that can minimize noise while being as wide as possible. In order to maximize the dynamic range, the amount of photocharges generated by the photoelectric conversion element (PD) should be increased to a maximum amount of photocharges unless saturated in the photoelectric conversion element (PD), and image data should be created with a maximum gain (e.g., a conversion gain and an analog gain) unless saturated in the ADC140. In this case, the saturation within the photoelectric conversion element (PD) may represent that the amount of photocharges has increased to exceed a full well capacity (FWC) indicating the amount of photocharges that can be maximally generated and accumulated by the photoelectric conversion element (PD). In addition, the saturation within the ADC140may represent that image data corresponding to the upper limit of the output range of the ADC140was generated. 
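The saturation test described above, treating a pixel as saturated when its image data reaches the upper limit of the ADC output range (the 0-1023 digital number range mentioned earlier), can be written out as a small sketch. The sample target-region data below is assumed for demonstration.

    # Illustrative sketch only: a pixel is treated as saturated when its digital
    # number reaches the ADC output ceiling.
    ADC_MAX = 1023  # upper limit of the 10-bit ADC output range (0-1023)

    def is_saturated(dn, adc_max=ADC_MAX):
        """True when the image data of a pixel corresponds to the ADC upper limit."""
        return dn >= adc_max

    target_region = [512, 1023, 870, 1023, 45, 990]  # assumed image data (DN) of a target region
    saturated = sum(is_saturated(dn) for dn in target_region)
    print(f"{saturated} of {len(target_region)} pixels in the target region are saturated")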
The method for increasing the amount of photocharges generated by the photoelectric conversion element (PD) may be implemented as a method of increasing the degree of opening of the aperture20by the aperture driver40so that the amount of light exposure can increase, or may be implemented as a method of allowing the row driver120to increase a time period in which the transfer signal has a logic high level so that the exposure time can increase. The method for increasing a gain required to generate image data may be implemented as a method for turning on the CG transistor by the row driver120so that the conversion gain can increase, or may be implemented as a method for reducing the slope of a ramp signal by the timing controller170so that the analog gain can increase. The set value calculation unit330may determine whether each pixel included in the target region has been saturated based on the image data of the target region received from the luminance acquisition unit310. The saturation of each pixel may conceptually include saturation in the photoelectric conversion element (PD) and saturation in the ADC140. In some implementations, it is assumed that, when image data of each pixel corresponds to the upper limit of the output range of the ADC140, this means that the corresponding pixel has been saturated. In addition, the set value calculation unit330may determine whether each pixel can be saturated through control of controllable items. To this end, the set value calculation unit330may prestore response characteristics indicating that the response of the corresponding pixel is changed in response to controlling each of the controllable items (e.g., light transmittance, exposure time, conversion gain, and analog gain) to a specific set value. Such response characteristics can be experimentally determined by measuring the response of each pixel while changing the set value of each controllable item. For example, based on current image data of the high light-transmittance pixel of the target region and response characteristics of the high light-transmittance pixel of the target region, the set value calculation unit330can determine whether the high light-transmittance pixel can be saturated in a situation where the set value for each controllable item (e.g., light transmittance, exposure time, conversion gain, and/or analog gain) is controlled to maximize the sensitivity of the high light-transmittance pixel. If the high light-transmittance pixel cannot be saturated, this means that current luminance is very low. As a result, although the exposure time is very short like the short-exposure pixel, it is impossible to obtain the effect of extension of the dynamic range, thereby deteriorating the signal-to-noise ratio (SNR). Therefore, the set value calculation unit330may exclude the exposure time from the controllable items, and may forcibly set the exposure time to the longest exposure time. In another embodiment of the disclosed technology, when either an average value (i.e., average luminance) or a maximum value (i.e., maximum luminance) of current image data of the high light-transmittance pixel (HPX) is lower than a predetermined value (e.g., for use in a night photographing mode or a moving-image photographing mode, etc.), the set value calculation unit330may determine the presence of the environment having a very low luminance, may exclude the exposure time from the controllable items, and may set the exposure time to the longest time serving as the only one value. 
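The low-luminance rule just described, excluding the exposure time from the controllable items and pinning it to the longest value when the average or maximum image data of the high light-transmittance pixels is below a threshold, can be sketched as follows. The threshold, the longest exposure time, and the sample data are assumptions; the item names are hypothetical labels, not identifiers used by the set value calculation unit330.

    # Illustrative sketch only: drop the exposure time from the controllable
    # items for very dark scenes and force the longest exposure.
    LOW_LUMINANCE_DN = 64          # assumed threshold in digital numbers
    LONGEST_EXPOSURE_S = 1 / 30.0  # assumed longest selectable exposure time

    def plan_exposure_time(hpx_data, controllable_items):
        """Return (controllable_items, forced_exposure_time_or_None)."""
        avg_dn = sum(hpx_data) / len(hpx_data)
        if avg_dn < LOW_LUMINANCE_DN or max(hpx_data) < LOW_LUMINANCE_DN:
            items = [item for item in controllable_items if item != "exposure_time"]
            return items, LONGEST_EXPOSURE_S   # very dark scene: fix the longest exposure
        return controllable_items, None        # exposure time stays controllable

    items, forced = plan_exposure_time(
        hpx_data=[20, 35, 18, 41],  # assumed image data of high light-transmittance pixels
        controllable_items=["exposure_time", "conversion_gain", "analog_gain"],
    )
    print(items, forced)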
The method for adjusting the sensitivity of each pixel by controlling light transmittance may spatially adjust the amount of light incident upon the pixel so that the low light-transmittance pixel (LPX) is not saturated even at a high luminance where the high light-transmittance pixel (HPX) is saturated, resulting in occurrence of a valid response. As a result, the above-described method for adjusting the sensitivity of each pixel by controlling light transmittance can significantly extend the dynamic range of each pixel by increasing the substantial full well capacity (FWC). However, the light transmittance may be fixed in advance in the process of manufacturing the image sensing device100, and the light transmittance cannot be dynamically controlled so that the light transmittance may be considered to be an uncontrollable item. The method for adjusting the sensitivity of each pixel by controlling the exposure time may temporally adjust the amount of light incident upon the pixel, so that the low-exposure pixel is not saturated even at a high luminance where the long-exposure pixel is saturated, resulting in occurrence of a valid response. As a result, the above-described method for adjusting the sensitivity of each pixel by controlling the exposure time can significantly extend the dynamic range of each pixel by increasing the substantial full well capacity (FWC). However, when the exposure time is excessively adjusted, a time point where light incident upon each of the pixels is captured is greatly changed. As a result, motion artifacts for the fast moving object to be captured increase, resulting in reduction in image quality. The method for adjusting the sensitivity of each pixel by controlling the conversion gain may control the conversion gain by which photocharges generated by the pixel are converted into a pixel signal (i.e., voltage), and may obtain the amplified voltage in response to a constant gain even at a low luminance where the amount of generated photocharges is considered insufficient, thereby preventing occurrence of noise. However, since a valid response cannot be obtained even when the conversion gain is controlled for the pixel in which the photoelectric conversion element (PD) is saturated, there is a limitation in extending the dynamic range of the pixel. The method for adjusting the sensitivity of each pixel by controlling the analog gain may control the gain by which the pixel signal is converted into image data (e.g., a digital value), and may obtain the amplified image data in response to a constant gain even at a low luminance where the amount of generated photocharges is considered insufficient, thereby preventing occurrence of noise. However, since a valid response cannot be obtained even when the analog gain is controlled for the pixel in which the photoelectric conversion element (PD) is saturated, there is a limitation in extending the dynamic range of the pixel. In addition, when the analog gain increases, the input range corresponding to a voltage range of the pixel signal that can be effectively converted into image data by the ADC140is unavoidably reduced as illustrated inFIG.8, so that there may occur side effects in which the dynamic range is rather reduced depending on luminance. The imaging device1can extend the dynamic range of the image data based on characteristics of the controllable items that can be used to adjust the sensitivity of each pixel. 
In an embodiment, the imaging device1may sequentially adjust the sensitivity of each pixel in the order of the light transmittance, sensitivity control using the conversion gain, sensitivity control using the exposure time, and sensitivity control using the analog gain, so that the imaging device1can extend the dynamic range through such sensitivity control. The order or priority of sensitivity control discussed above may be determined based on the dynamic range extension and the side effects. In another embodiment, the above-mentioned sensitivity control priority may also be changed as needed. In addition, the imaging device1may calculate the set value for each controllable item so that the intensity of light incident upon each pixel can be maximized when controlling the sensitivity of each pixel. In some implementations, the set value for each controllable item can be calculated by the set value calculation unit330as will be discussed below with reference toFIG.10. For convenience of description, it is assumed that the conversion gain and the analog gain can always be controlled and each of the amount of light exposure and the exposure time is controllable or uncontrollable. That is, the controllable item acquisition unit320may acquire any controllable item from among the amount of light exposure and the exposure time. It is also assumed that the arrangement and structure of the low light-transmittance pixel and the high light-transmittance pixel are the same as what is shown inFIG.5. That is, it is assumed that a single unit matrix includes one low light-transmittance pixel and three high light-transmittance pixels for convenience of description and better understanding of the disclosed technology. The set value calculation unit330may determine whether the amount of light exposure is a controllable item (S300). If the amount of light exposure is an uncontrollable item (No, S300), the set value calculation unit330may determine whether the exposure time is a controllable item (Yes, S310). If the exposure time is a controllable item (Yes, S310), the set value calculation unit330may perform first setting control (S320). Here, the first setting control (S320) may refer to a method for controlling the controllable items in a situation where the amount of light exposure is an uncontrollable item and each of the exposure time, the conversion gain, and the analog gain is a controllable item. Specifically, the set value calculation unit330may determine whether a requested dynamic range can be obtained from the target region by controlling a conversion gain of the high light-transmittance pixel and a conversion gain of the low light-transmittance pixel. In this case, the requested dynamic range may refer to a dynamic range suitable for photographing the target region. In some implementations, when the ratio of high light-transmittance pixels, each of which has a response of a predetermined value or less in the target region through control of the conversion gain for each of the high light-transmittance pixel and the low light-transmittance pixel, is less than a first ratio, and when the ratio of low light-transmittance pixels, each of which is saturated in the target region, is equal to or less than a second ratio, the set value calculation unit330may determine that the requested dynamic range can be obtained from the target region. 
Conversely, regardless of control of the conversion gain for each of the high light-transmittance pixel and the low light-transmittance pixel, when the ratio of high light-transmittance pixels, each of which has a response of a predetermined value or less in the target region, is equal to or higher than the first ratio, and when the ratio of low light-transmittance pixels, each of which is saturated in the target region, is higher than a second ratio, the set value calculation unit330may determine that the requested dynamic range cannot be obtained from the target region. Here, the first ratio may refer to the ratio of the number of high light-transmittance pixels each having a predetermined value or less to the total number of high light-transmittance pixels within the target region. The second ratio may refer to the ratio of the number of low light-transmittance pixels, each of which is saturated in the target region, to the total number of low light-transmittance pixels in the target region. The first ratio and the second ratio may be experimentally determined in advance in response to the requested dynamic range. For example, the predetermined value may refer to a lower limit of the dynamic range (DR_H) of the high-sensitivity pixel described inFIG.2. In addition, a first condition is that the ratio of high light-transmittance pixels each having a predetermined value in the target region is less than the first ratio, and a second condition is that the ratio of low light-transmittance pixels each of which is saturated in the target region is equal to or less than the second ratio. In another embodiment, the set value calculation unit330may receive information about whether the requested dynamic range can be obtained from another structure (e.g., a structure for synthesizing the HDR image) of the image signal processor200. In the first setting control, if it is assumed that the requested dynamic range can be obtained from the target region through control of the conversion gain for each of the high light-transmittance pixel and the low light-transmittance pixel, the set value calculation unit330may set or determine the longest exposure time and the optimal analog gain for each of the high light-transmittance pixel and the low light-transmittance pixel. Here, when the ratio of high light-transmittance pixels each having a predetermined value in the target region is less than the first ratio, and when the ratio of low light-transmittance pixels, each of which is saturated in the target region, is equal to or less than the second ratio, the longest exposure time may be set to a maximum exposure time. In addition, the lowest analog gain may be a minimum analog gain that can be set or determined by the ADC140. As the analog gain decreases, the input range of the ADC140can increase, so that the dynamic range can be prevented from being unexpectedly restricted. If the requested dynamic range cannot be obtained from the target region through control of the conversion gain for each of the high light-transmittance pixel and the low light-transmittance pixel, the set value calculation unit330can extend the dynamic range by controlling the exposure time. As can be seen fromFIG.6, the length of the first exposure period (EX1) of the long-exposure pixel will hereinafter be referred to as a first exposure time, and the length of the second exposure period (EX2) of the short-exposure pixel will hereinafter be referred to as a second exposure time. 
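The first-ratio and second-ratio test described above can be written out concretely before the exposure times are put to use. In this sketch, a response of a predetermined value or less is modeled as image data at or below an assumed lower limit, saturation as image data at the ADC ceiling, and the two ratio thresholds are assumed values standing in for the experimentally determined ones.

    # Illustrative sketch only: the requested-dynamic-range test using the
    # first and second ratios.  All constants are assumptions.
    LOW_RESPONSE_DN = 16   # assumed lower limit of the high-sensitivity dynamic range
    ADC_MAX = 1023         # ADC output ceiling (10-bit digital number)
    FIRST_RATIO = 0.10     # assumed threshold for dark high light-transmittance pixels
    SECOND_RATIO = 0.05    # assumed threshold for saturated low light-transmittance pixels

    def requested_dr_obtainable(hpx_data, lpx_data):
        """True when both ratio conditions for the target region are satisfied."""
        dark_hpx_ratio = sum(dn <= LOW_RESPONSE_DN for dn in hpx_data) / len(hpx_data)
        saturated_lpx_ratio = sum(dn >= ADC_MAX for dn in lpx_data) / len(lpx_data)
        return dark_hpx_ratio < FIRST_RATIO and saturated_lpx_ratio <= SECOND_RATIO

    print(requested_dr_obtainable(hpx_data=[230, 512, 48, 900], lpx_data=[120, 1023]))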
The set value calculation unit330may determine a first exposure time such that the ratio of high light-transmittance pixels each having a predetermined value or less in the target region is less than the first ratio. In addition, the set value calculation unit330may determine a second exposure time such that the ratio of low light-transmittance pixels each of which is saturated in the target region is equal to or less than the second ratio. However, a time difference between the first exposure time and the second exposure time may be minimized to reduce motion artifacts. In order to maximize the dynamic range by maximizing a difference in sensitivity between the low light-transmittance pixel and the high light-transmittance pixel using the first exposure time and the second exposure time having a minimum difference therebetween, the second exposure time may be configured in the low light-transmittance pixel and the first exposure time may be configured in the high light-transmittance pixel. As can be seen fromFIG.7, the conversion gain of the high CG pixel will hereinafter be referred to as a first conversion gain, and the conversion gain of the low CG pixel will hereinafter be referred to as a second conversion gain. Further, as depicted inFIG.8, the analog gain of the high AG pixel will hereinafter be referred to as a first analog gain, and the analog gain of the low AG pixel will hereinafter be referred to as a second analog gain. For convenience of description, it is assumed that the second analog gain is equal to the lowest analog gain described above. The set value calculation unit330may set or determine a second conversion gain, a second exposure time, and a second analog gain for the low light-transmittance pixel. In this case, the reason why the second conversion gain and the second exposure time are configured for the low light-transmittance pixel is to extend the dynamic range of each low light-transmittance pixel. The reason why the second analog gain is configured for the low light-transmittance pixel is to prevent restriction of the dynamic range. For the high light-transmittance pixels, the set value calculation unit330may set a plurality of conversion gains including the first conversion gain, a plurality of exposure times including the first exposure time, and a second analog gain. That is, the plurality of conversion gains may include the first conversion gain, and the plurality of exposure times may include the first exposure time, so that the dynamic range of the high light-transmittance pixel can be extended, and the reason why the second analog gain is configured for the high light-transmittance pixel is to prevent restriction of the dynamic range. In addition, the plurality of conversion gains and the plurality of exposure times for the high light-transmittance pixels are combined with each other and the result of combination is set or configured for the high light-transmittance pixels, so that the high light-transmittance pixels may have various dynamic ranges. When the high light-transmittance pixels having various dynamic ranges are synthesized as described above, only image data having excellent SNR is synthesized to generate the HDR image, resulting in improvement of the image quality. 
As an example of the unit matrix, the set value calculation unit330may set the first conversion gain and the first exposure time for only one high light-transmittance pixel, may set the second conversion gain and the second exposure time for another high light-transmittance pixel, and may set the second conversion gain and the first exposure time for the remaining high light-transmittance pixels other than the above two high light-transmittance pixels. If the exposure time is an uncontrollable item (No, S310), the set value calculation unit330may determine whether the ratio of the low light-transmittance pixels saturated in the target region exceeds the second ratio (S330). If the ratio of the low light-transmittance pixels saturated in the target region exceeds the second ratio (Yes, S330), the set value calculation unit330may perform second setting control (S340). In this case, the second setting control may refer to a method for controlling the controllable items, under the condition that the amount of light exposure and the exposure time are uncontrollable items, the conversion gain and the analog gain are controllable items, and the ratio of low light-transmittance pixels saturated in the target region exceeds the second ratio. The above-described condition may refer to a relatively high luminance condition in which even the low light-transmittance pixels corresponding to a predetermined ratio or higher are saturated. The set value calculation unit330may set a first conversion gain or a second conversion gain in a predetermined pattern for the low light-transmittance pixel and the high light-transmittance pixel. The set value calculation unit330may set or configure a first analog gain for each pixel having the first conversion gain. The set value calculation unit330may set or configure a second analog gain for each pixel having the second conversion gain. As a result, as large a sensitivity difference as possible can be implemented using the conversion gain and the analog gain. However, when the pixel having the first conversion gain is set to have the first analog gain, there is a possibility that the dynamic range is rather limited. In another embodiment, the set value calculation unit330may set a second analog gain for the pixel having the first conversion gain, or may selectively set the first analog gain or the second analog gain for the pixel having the first conversion gain as needed. If the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio (No, S330), the set value calculation unit330may determine whether the ratio of high light-transmittance pixels, each of which has a response less than a predetermined value in the target region, is less than the first ratio (S350). If the ratio of the high light-transmittance pixels, each of which has a response of a predetermined value or less in the target region is less than the first ratio (Yes, S350), the set value calculation unit330may perform third setting control (S360). 
In this case, the third setting control may refer to a method for controlling the controllable items, under the condition that the amount of light exposure and the exposure time are uncontrollable items, the conversion gain and the analog gain are controllable items, the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio, and the ratio of high light-transmittance pixels each having a predetermined value or less in the target region is less than the first ratio. The above-described condition may refer to an appropriate luminance condition in which the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio and the ratio of high light-transmittance pixels each having a response of a predetermined value or less is less than the first ratio. When the first conversion gain is set for the low light-transmittance pixel, the set value calculation unit330may determine whether the ratio of low light-transmittance pixels saturated in the target region can be kept at the second ratio or less. If the saturated low light-transmittance pixel can be kept at the second ratio or less, the set value calculation unit330may set the first conversion gain for the low light-transmittance pixel. If the ratio of the saturated low light-transmittance pixel exceeds the second ratio, the set value calculation unit330may set the second conversion gain for the low light-transmittance pixel. On the other hand, the set value calculation unit330may set the first conversion gain or the second conversion gain in a predetermined pattern for the high light-transmittance pixels. In this case, the predetermined pattern may set the same conversion gain as the low light-transmittance pixel for the high light-transmittance pixels belonging to the same row as the low light-transmittance pixels, and may set another conversion gain different from that of the low light-transmittance pixel for the high light-transmittance pixels belonging to another row different from that of the low light-transmittance pixels, but is not limited thereto. Under the condition that the conversion gain is set for each low light-transmittance pixel and the conversion gain is set for each high light-transmittance pixel as described above, the set value calculation unit330may set a maximum analog gain for each of the low light-transmittance pixel and the high light-transmittance pixel within the range in which the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio and the ratio of high light-transmittance pixels each having a response of a predetermined value or less is less than the first ratio. As a result, as large a dynamic range as possible can be obtained and occurrence of noise can also be prevented. If the ratio of high light-transmittance pixels each having a response of a predetermined value or less in the target region is equal to or higher than the first ratio (No, S350), the set value calculation unit330can perform fourth setting control (S370). 
In this case, the fourth setting control may refer to a method for controlling the controllable items, under the condition that the amount of light exposure and the exposure time are uncontrollable items, the conversion gain and the analog gain are controllable items, the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio, and the ratio of high light-transmittance pixels each having a predetermined value or less in the target region is equal to or higher than the first ratio. The above-described condition may refer to a relatively low luminance condition in which the ratio of high light-transmittance pixels each having a response of a predetermined value or less is equal to or higher than the first ratio. The set value calculation unit330may set the first conversion gain for the low light-transmittance pixel. On the other hand, the set value calculation unit330may set the first conversion gain or the second conversion gain in a predetermined pattern for the high light-transmittance pixels. In this case, when each of the high light-transmittance pixels is set to the first conversion gain and the ratio of high light-transmittance pixels each having a response of a predetermined value or less is equal to or higher than the first ratio, the set value calculation unit330may set the first conversion gain rather than a predetermined pattern for the high light-transmittance pixels. This is because extension of the dynamic range cannot be expected regardless of the second conversion gain that is set for the high light-transmittance pixels. In a state in which the conversion gain is set for the low light-transmittance pixels and the conversion gain is set the high light-transmittance pixels as described above, the set value calculation unit330may set as high an analog gain as possible for the low light-transmittance pixels and the high light-transmittance pixels while being kept in a specific condition where the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio. As a result, as large a dynamic range as possible can be obtained and occurrence of noise can also be prevented. If the amount of light exposure is a controllable item (Yes, S300), the set value calculation unit330may set the amount of light exposure such that the aperture20can be maximally opened within the range in which the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio (S380). In a state where the amount of light exposure is fixed to a specific set value, the set value calculation unit330may determine whether the exposure time is a controllable item (S390). If the exposure time is a controllable item (Yes, S390), the set value calculation unit330can perform first setting control (S320). The reason why the first setting control is performed is that, in a condition where the amount of light exposure is fixed and uncontrollable, the exposure time is considered to be a controllable item. If the exposure time is an uncontrollable item (No, S390), the set value calculation unit330can perform third setting control (S360). The reason why the third setting control is performed is as follows. 
In this case, the fourth setting control may refer to a method for controlling the controllable items, under the condition that the amount of light exposure and the exposure time are uncontrollable items, the conversion gain and the analog gain are controllable items, the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio, and the ratio of high light-transmittance pixels each having a response of a predetermined value or less in the target region is equal to or higher than the first ratio. The above-described condition may refer to a relatively low luminance condition in which the ratio of high light-transmittance pixels each having a response of a predetermined value or less is equal to or higher than the first ratio. The set value calculation unit330may set the first conversion gain for the low light-transmittance pixel. On the other hand, the set value calculation unit330may set the first conversion gain or the second conversion gain in a predetermined pattern for the high light-transmittance pixels. In this case, when the ratio of high light-transmittance pixels each having a response of a predetermined value or less remains equal to or higher than the first ratio even though each of the high light-transmittance pixels is set to the first conversion gain, the set value calculation unit330may set the first conversion gain for all of the high light-transmittance pixels rather than a predetermined pattern. This is because extension of the dynamic range cannot be expected even when the second conversion gain is set for some of the high light-transmittance pixels. In a state in which the conversion gain is set for the low light-transmittance pixels and the conversion gain is set for the high light-transmittance pixels as described above, the set value calculation unit330may set as high an analog gain as possible for the low light-transmittance pixels and the high light-transmittance pixels while keeping the ratio of low light-transmittance pixels saturated in the target region equal to or less than the second ratio. As a result, as large a dynamic range as possible can be obtained and occurrence of noise can also be prevented. If the amount of light exposure is a controllable item (Yes, S300), the set value calculation unit330may set the amount of light exposure such that the aperture20can be maximally opened within the range in which the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio (S380). In a state where the amount of light exposure is fixed to a specific set value, the set value calculation unit330may determine whether the exposure time is a controllable item (S390). If the exposure time is a controllable item (Yes, S390), the set value calculation unit330can perform first setting control (S320). The reason why the first setting control is performed is that, in a condition where the amount of light exposure is fixed and uncontrollable, the exposure time is considered to be a controllable item. If the exposure time is an uncontrollable item (No, S390), the set value calculation unit330can perform third setting control (S360). The reason why the third setting control is performed is as follows.
In a state where the amount of light exposure is fixed and uncontrollable and the exposure time is a controllable item, the aperture20may be set to be maximally opened within the range in which the ratio of low light-transmittance pixels saturated in the target region is equal to or less than the second ratio, so that there is a high possibility that the ratio of high light-transmittance pixels each having a response of a predetermined value or less in the target region is less than the first ratio. As a result, the set value calculation unit330can perform the third setting control (S360). Referring back toFIG.9, the set value calculation unit330may transmit the control signal (CS) indicating a set value for each controllable item to the aperture driver40and the image sensing device100. The aperture driver40may control the degree of opening of the aperture20in a manner that the aperture driver40has the amount of light exposure corresponding to the control signal (CS). In addition, the timing controller170of the image sensing device100may control the row driver120and the ADC140in a manner that each pixel has exposure time, conversion gain, and analog gain corresponding to the control signal (CS). In a state where control of the control signal (CS) is completed, the image sensing device100may capture an image of a scene to form image data (IDATA), and may transmit the image data (IDATA) to the image signal processor200. The image signal processor200may synthesize at least two images having different sensitivities, and may thus form the HDR image using the result of image synthesis (S40). The types of sensitivity of image data (IDATA) may be determined in various ways by combination of light transmittance, exposure time, conversion gain, and analog gain. For example, the pixels included in the pixel array110may have any one of five sensitivities, and each of two pixels having different sensitivities may correspond to the low-sensitivity pixel or the high-sensitivity pixel according to the magnitude relationship in sensitivity between the two pixels. The imaging device1implemented based on some embodiments of the disclosed technology may determine the luminance of the subject to be captured and one or more controllable items to control the controllable items by reflecting hardware characteristics (e.g., response characteristics) when controlling the sensitivity, thereby generating the HDR image having a maximum dynamic range while minimizing noise. In various implementations, the imaging device based on some implementations of the disclosed technology can determine a luminance and controllable items of a target object to be captured (or photographed) and control the controllable items by reflecting hardware characteristics (e.g., response characteristics) into a process of controlling the sensitivity of the imaging device, generating a high dynamic range (HDR) image having a maximum dynamic range while minimizing noise. Although a number of illustrative embodiments have been described, it should be understood that modifications and enhancements to the disclosed embodiments and other embodiments can be devised based on what is described and/or illustrated in this patent document.
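The branch structure of the setting-control flow walked through above (steps S300 through S390 of FIG.10) can be summarized as a single dispatch sketch. The function below only returns labels for the selected setting control; the inputs, the ratio thresholds, and the return strings are stand-ins for the behavior described in the text, not an implementation of the actual set value calculation unit330.

    # Illustrative sketch only: the decision branches of FIG.10.
    def select_setting_control(exposure_amount_controllable,
                               exposure_time_controllable,
                               saturated_lpx_ratio, dark_hpx_ratio,
                               first_ratio, second_ratio):
        if exposure_amount_controllable:                       # S300: Yes
            # S380: the aperture is opened as far as the saturated-LPX ratio allows,
            # after which the amount of light exposure is treated as fixed.
            if exposure_time_controllable:                     # S390: Yes
                return "first setting control (S320)"
            return "third setting control (S360)"              # S390: No
        if exposure_time_controllable:                         # S310: Yes
            return "first setting control (S320)"
        if saturated_lpx_ratio > second_ratio:                 # S330: Yes
            return "second setting control (S340)"
        if dark_hpx_ratio < first_ratio:                       # S350: Yes
            return "third setting control (S360)"
        return "fourth setting control (S370)"                 # S350: No

    # Example call with assumed ratios: a dark scene with nothing controllable
    # except the gains falls through to the fourth setting control.
    print(select_setting_control(False, False, 0.02, 0.30,
                                 first_ratio=0.10, second_ratio=0.05))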
DESCRIPTION OF THE EMBODIMENTS Basic Configuration of Image Capturing Apparatus Embodiments of the present invention will be described below with reference to the attached drawings. A first embodiment will initially be described.FIG.1is a block diagram illustrating a configuration of a camera main body100that is an image capturing apparatus according to an embodiment of the present invention, a lens unit200, and a light emitting device300. One or more of the functional blocks illustrated inFIG.1may be implemented by hardware such as an application specific integrated circuit (ASIC) and a programmable logic array (PLA). One or more of the functional blocks may be implemented by a programmable processor (microprocessor or microcomputer) such as a central processing unit (CPU) and a micro processing unit (MPU) executing software. One or more of the functional blocks may be implemented by a combination of software and hardware. In the following description, the same piece of hardware can perform operations even if different functional blocks are described to perform the operations. Components of the camera main body100will be described. The camera main body100includes a frame memory (not illustrated) and functions as a storage unit that can temporarily store a signal (video signal) and from which the signal can be read as appropriate. Frame memories are typically also referred to as random access memories (RAMs). Dual Data Rate 3 Synchronous Dynamic RAMs (DDR3-SDRAMs) have often been used in recent years. The use of such a frame memory enables various types of processing. An image sensor101is an imaging unit using a solid image sensor of charge accumulation type, such as a complementary metal-oxide-semiconductor (CMOS) sensor and a charge-coupled device (CCD) sensor. The image sensor101can receive a light flux guided from an object into the camera main body100via the lens unit200and convert the light flux into an electrical image signal. An image (signal) obtained using the image sensor101under driving control by a CPU103to be described below is handled as various image signals for a live view display, for flicker detection, and as a captured image for recording. Since the electrical signal obtained by the image sensor101has an analog value, the image sensor101also has a function of converting the analog value into a digital value. An evaluation value (photometric value) related to the brightness of the object can be detected based on the image signal output from the image sensor101. An exposure time of the image sensor101can be controlled based on a shutter speed that can be set as an exposure control value related to the image sensor101. A mechanical shutter104is a light shielding unit that can run in a direction parallel to a signal scanning direction of the image sensor101. The mechanical shutter104can control the exposure time of the image sensor101by adjusting an exposure aperture formed by a plurality of shutter blades included in the mechanical shutter104based on the foregoing shutter speed. The adjustment of the exposure time according to the present embodiment can be implemented using an electronic shutter, the mechanical shutter104, or both the electronic shutter and the mechanical shutter104. The electronic shutter is implemented by adjusting signal reset and read timing of the image sensor101. A display unit102is a display device that a user can visually observe. The user can check an operation status of the camera main body100on the display unit102. 
For example, the display unit102displays a video image to which image processing has been applied based on the image signal of the object, and a setting menu. A thin-film transistor (TFT) liquid crystal display is used for a display element of the display unit102. A liquid crystal display (LCD) or an organic electroluminescence (EL) display may be used as the display unit102. Images obtained by the image sensor101and setting conditions, such as exposure control values, can be displayed on the display unit102in real time while capturing the images of the object, i.e., a live view display can be provided. The display unit102according to the present embodiment includes a resistive or capacitive thin film device called touch panel, and also serves as an operation unit on which the user can make touch operations. The CPU103is a control unit that can control the camera main body100and accessory units attached to the camera main body100in a centralized manner. A read-only memory (ROM) and a RAM are connected to the CPU103. The ROM (not illustrated) is a nonvolatile recording device, and records programs for operating the CPU103and various adjustment parameters. The programs read from the ROM are loaded into the volatile RAM (not illustrated) and executed. The RAM is typically a low-speed low-capacity device compared to the frame memory (not illustrated). Next, details of the lens unit200will be described. The lens unit200is an accessory attachable to and detachable from the camera main body100. The lens unit200is an interchangeable lens that includes a lens group201including a focus lens, a zoom lens, and a shift lens. For example, the focus lens included in the lens group201can make a focus adjustment to the object by adjusting the lens position in an optical axis direction of the lens. A diaphragm202is a light amount adjustment member for adjusting the light amount of the light flux guided from the object into the camera main body100via the lens unit200. In the present embodiment, the light amount can be adjusted by adjusting an aperture diameter of the diaphragm202. The light amount is adjusted by changing an aperture value serving as an exposure control value related to the aperture diameter of the diaphragm202. A lens processing unit (LPU)203is a control unit that controls various components of the lens unit200. For example, the LPU203can control driving of the lens group201and the diaphragm202. The LPU203is connected to the CPU103of the camera main body100via a not-illustrated terminal group, and can drive the components of the lens unit200based on control instructions from the CPU103. Next, details of the light emitting device300will be described. The light emitting device300is an external light emitting device attachable to and detachable from the camera main body100via a not-illustrated connection section on the camera main body100. A strobe processing unit (SPU)301is a control unit that controls various components of the light emitting device300, and capable mainly of light emission control and communication control with the camera man body100. The SPU301is connected to the CPU103of the camera main body100via a not-illustrated contact group, and can drive various components of the light emitting device300based on control instructions from the CPU103. While the components of the image capturing apparatus according to the first embodiment of the present invention have been described above, the present invention is not limited to the foregoing configuration. 
For example, the display unit102displays a video image to which image processing has been applied based on the image signal of the object, and a setting menu. A thin-film transistor (TFT) liquid crystal display is used for a display element of the display unit102. A liquid crystal display (LCD) or an organic electroluminescence (EL) display may be used as the display unit102. Images obtained by the image sensor101and setting conditions, such as exposure control values, can be displayed on the display unit102in real time while capturing the images of the object, i.e., a live view display can be provided. The display unit102according to the present embodiment includes a resistive or capacitive thin film device called a touch panel, and also serves as an operation unit on which the user can make touch operations. The CPU103is a control unit that can control the camera main body100and accessory units attached to the camera main body100in a centralized manner. A read-only memory (ROM) and a RAM are connected to the CPU103. The ROM (not illustrated) is a nonvolatile recording device, and records programs for operating the CPU103and various adjustment parameters. The programs read from the ROM are loaded into the volatile RAM (not illustrated) and executed. The RAM is typically a low-speed low-capacity device compared to the frame memory (not illustrated). Next, details of the lens unit200will be described. The lens unit200is an accessory attachable to and detachable from the camera main body100. The lens unit200is an interchangeable lens that includes a lens group201including a focus lens, a zoom lens, and a shift lens. For example, the focus lens included in the lens group201can make a focus adjustment to the object by adjusting the lens position in an optical axis direction of the lens. A diaphragm202is a light amount adjustment member for adjusting the light amount of the light flux guided from the object into the camera main body100via the lens unit200. In the present embodiment, the light amount can be adjusted by adjusting an aperture diameter of the diaphragm202. The light amount is adjusted by changing an aperture value serving as an exposure control value related to the aperture diameter of the diaphragm202. A lens processing unit (LPU)203is a control unit that controls various components of the lens unit200. For example, the LPU203can control driving of the lens group201and the diaphragm202. The LPU203is connected to the CPU103of the camera main body100via a not-illustrated terminal group, and can drive the components of the lens unit200based on control instructions from the CPU103. Next, details of the light emitting device300will be described. The light emitting device300is an external light emitting device attachable to and detachable from the camera main body100via a not-illustrated connection section on the camera main body100. A strobe processing unit (SPU)301is a control unit that controls various components of the light emitting device300, and is capable mainly of light emission control and communication control with the camera main body100. The SPU301is connected to the CPU103of the camera main body100via a not-illustrated contact group, and can drive various components of the light emitting device300based on control instructions from the CPU103. While the components of the image capturing apparatus according to the first embodiment of the present invention have been described above, the present invention is not limited to the foregoing configuration.
For example, the camera main body100may include built-in devices corresponding to the lens unit200and the light emitting device300. Method for Setting Shutter Speed Next, a method for setting the shutter speed that is an exposure control value for controlling the exposure time of the image sensor101according to the present embodiment will be specifically described with reference toFIG.2.FIG.2is a diagram illustrating a shutter speed setting (index) table according to the present embodiment. Shutter speed is typically known to be changeable in units of ½ or ⅓ steps of light amount. In the present embodiment, to cope with flicker occurring under light-emitting diode (LED) light sources that blink periodically at various frequencies, the shutter speed is adjustable in finer steps. Specifically, in the present embodiment, shutter speeds of 1/8192.0 to 1/4871.0 sec can be adjusted in units of ¼ steps, and 1/4096.0 to 1/2233.4 sec in units of ⅛ steps. Shutter speeds of 1/2048.0 to 1/1069.3 sec can be adjusted in units of 1/16 steps, and 1/1024.0 to 1/523.2 sec in units of 1/32 steps. Shutter speeds of 1/512.0 to 1/258.8 sec can be adjusted in units of 1/64 steps, 1/256.0 to 1/128.7 sec in units of 1/128 steps, and 1/128.0 to 1/50.0 sec in units of 1/256 steps. In the shutter speed setting table illustrated inFIG.2, some of the shutter speeds are omitted for the sake of viewability. The numerical index values in the shutter speed setting table illustrated inFIG.2are used in shutter speed selection processing for reducing flicker to be described below. The camera main body100according to the present embodiment preferentially uses the electronic shutter to allow free setting of the shutter speed, ranging from the foregoing high shutter speed shorter than 1/8000 sec to not-illustrated low shutter speeds longer than 1/50 sec. The user can change the shutter system setting (to singly use the electronic shutter or the mechanical shutter104or use the electronic shutter and the mechanical shutter104in combination) anytime, for example, by making manual operations via a menu screen displayed on the display unit102. Flicker Reduction Processing Next, flicker reduction processing according to the present embodiment will be described with reference to the flowchart illustrated inFIG.3.FIG.3is a flowchart illustrating the flicker reduction processing according to the first embodiment of the present invention. The flicker reduction processing is started in response to a predetermined operation, such as the user's manual operation based on a menu display displayed on the display unit102. The flicker reduction processing according to the present embodiment is processing for controlling the occurrence of flicker-based variations in a moving image such as a live view display by setting a shutter speed (i.e., exposure time) that reduces the effect of detected flicker. The flicker reduction processing according to the present embodiment is however not limited thereto. For example, the camera main body100may be configured to reduce flicker by applying a gain to reduce variations to the image aside from adjusting the shutter speed. When the flicker reduction processing is started, the CPU103initially repeats the processing of step S301until flicker detection processing is started. In step S301, in a case where it is determined that the flicker detection processing is started (YES in step S301), the processing proceeds to step S302. In step S302, the CPU103performs the flicker detection processing. 
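As a brief aside on the shutter speed setting described above, the quoted step rules (1/4 steps from 1/8192.0 sec down to 1/4871.0 sec, 1/8 steps from 1/4096.0 sec, and so on down to 1/256 steps between 1/128.0 sec and 1/50.0 sec) can be reproduced with the sketch below. The index numbering of the actual table in FIG.2 is not reproduced; only the candidate speeds implied by the ranges are generated, and the exact handling of the 1/50.0 sec endpoint is an assumption.

    # Illustrative sketch only: generate candidate shutter speeds from the
    # fractional-stop step sizes quoted for each range.
    import math

    RANGES = [  # (start denominator, end-of-range denominator, step size in stops)
        (8192.0, 4096.0, 1 / 4),
        (4096.0, 2048.0, 1 / 8),
        (2048.0, 1024.0, 1 / 16),
        (1024.0, 512.0, 1 / 32),
        (512.0, 256.0, 1 / 64),
        (256.0, 128.0, 1 / 128),
        (128.0, 50.0, 1 / 256),
    ]

    def candidate_shutter_speeds():
        speeds = []
        for start, limit, step in RANGES:
            n_steps = round(math.log2(start / limit) / step)  # steps inside this range
            for k in range(n_steps):
                speeds.append(start / 2 ** (k * step))        # one fractional stop per step
        speeds.append(50.0)  # the slowest listed speed, 1/50 sec, appended explicitly
        return speeds

    speeds = candidate_shutter_speeds()
    print(f"{len(speeds)} candidate speeds, from 1/{speeds[0]:.1f} s to 1/{speeds[-1]:.1f} s")

The first range, for example, yields 1/8192.0, 1/6888.6, 1/5792.6, and 1/4871.0 sec, matching the boundaries quoted above.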
Details of the flicker detection processing will be described below. In step S303, the CPU103determines whether flicker is detected based on the result of the processing of step S302. In step S303, in a case where it is determined that flicker is detected (YES in step S303), the processing proceeds to step S304. In a case where it is determined that no flicker is detected (NO in step S303), the processing proceeds to step S306. It is determined that flicker is detected if flicker of predetermined level or higher has occurred. A method for calculating the flicker level will be described below. In step S304, the CPU103determines an exposure time (shutter speed) that allows the effect of the detected flicker to be reduced (flicker reduction exposure time determination processing). Details of the flicker reduction exposure time determination processing will be described below. In step S305, the CPU103performs shutter speed selection processing for selecting a shutter speed that can reduce the effect of the flicker based on information about the exposure time suitable for flicker reduction determined in step S304. Details of the shutter speed selection processing will be described below. In step S306, the CPU103performs display processing for displaying the result of the flicker detection (i.e., whether flicker is detected) and a value selectable as the shutter speed that can reduce the effect of the flicker as a result of the processing of steps S304and S305. Details of the display processing will be described below. By such flicker reduction processing, an image with the effect of flicker reduced can be obtained, and image display and recording can be performed based on the image, regardless of the frequency of the flicker. Flicker Detection Processing Next, the flicker detection processing according to the present embodiment will be described with reference toFIG.4. As described above, LED light sources, unlike light sources such as a fluorescent lamp, cause changes (blinking) in the light amount, i.e., flicker, at a frequency different from the power supply frequency for driving the light sources since the driving current thereof is controlled by a rectifier circuit. In detecting flicker caused by light sources such as an LED light source, the frequencies targeted for detection are therefore unable to be narrowed down to certain numerical values like the driving power supply frequency. The occurrence of flicker is therefore to be analyzed over a wide range of frequencies. If the light amount change frequency of flicker (blinking cycle of the light source) is the same as or an integer multiple of the imaging cycle in successively capturing images of an object (hereinafter, such a state will be referred to as synchronization), changes (blinking) in the light amount between the successively obtained images are small. In such a case, for example, a live view display of successively displaying the images is free of a drop in image quality like flicker-based variations. However, a still image obtained by imaging at a given shutter speed can suffer exposure unevenness due to flicker. Moreover, even if the imaging frame rate of the images for the live view display is the same as the light amount change frequency of flicker, a moving image for recording obtained at a different frame rate can suffer exposure unevenness or luminance unevenness due to the flicker. 
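The synchronization problem just described can be checked with a small sketch: with a frame-difference approach, flicker whose light amount change frequency is an integer multiple of the imaging frame rate looks the same in every frame and cannot be detected at that frame rate. The candidate frequencies below are assumptions used for demonstration; the 100 fps and 120 fps cycles match the example discussed later for FIG.4.

    # Illustrative sketch only: flicker frequencies that are integer multiples of
    # the frame rate are invisible to a frame-difference method at that rate.
    def is_synchronous(flicker_hz, frame_rate_fps, tol=1e-6):
        """True when the flicker frequency is (nearly) an integer multiple of the frame rate."""
        ratio = flicker_hz / frame_rate_fps
        return abs(ratio - round(ratio)) < tol and round(ratio) >= 1

    candidates_hz = [90, 100, 110, 120, 200, 240, 300, 360]  # assumed candidate frequencies
    for fps in (100, 120):
        blind = [f for f in candidates_hz if is_synchronous(f, fps)]
        print(f"{fps} fps cannot distinguish flicker at: {blind} Hz")

Because the two frame rates have different blind frequencies, analyzing the flicker at both imaging cycles covers the full candidate set, which is the rationale for switching imaging cycles in the detection processing.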
There is a known method for identifying the light amount change frequency of flicker by detecting and comparing differences in the light amount (brightness level) between the images obtained by successive imaging. If such a method is used to identify the light amount change frequency of flicker, the light amount change frequency of the flicker and the imaging cycle (frame rate) are desirably adjusted to not be synchronous. In the present embodiment, the occurrence of flicker is thus detected by analyzing the light amount change frequency of flicker at a plurality of frequencies (imaging cycles). Such a method can prevent the light amount change frequency of flicker and the imaging cycle from being fully synchronous and enables effective flicker detection processing over a wide range of frequencies by analyzing the light amount change frequency of the flicker at a plurality of imaging cycles. FIG.4is a flowchart related to the flicker detection processing according to the first embodiment of the present invention. As illustrated inFIG.4, in step S401, the CPU103performs photometric operations on the object (object light measurement) to determine exposure in capturing images of the object for the flicker detection processing. Any method may be used for the photometric operations. For example, in the present embodiment, the CPU103obtains an evaluation value based on an average of image signals obtained by performing charge accumulation for photometric operations using the image sensor101. The CPU103then determines a representative luminance (photometric value) of the object as the result of the light measurement based on the obtained evaluation value. To calculate the photometric value, the angle of view corresponding to the image signals is divided into a plurality of blocks. Signals output from pixels corresponding to each block are averaged, and the averages determined in the respective blocks are arithmetically averaged to calculate the photometric value (representative luminance). The photometric value is in units such that Bv=1 in the Additive System of Photographic Exposure (APEX) system corresponds to one step of the luminance value. However, other units may be used. In step S402, the CPU103adjusts the imaging cycle to an imaging cycle for flicker detection (non-frame rate). Details of a method for adjusting the imaging cycle for flicker detection will be described below. In step S403, the CPU103determines exposure control values (changes exposure) based on the determined photometric value. In the present embodiment, the exposure control values refer to parameters capable of adjusting the brightness of the captured image of the object. The exposure control values include the shutter speed (i.e., accumulation time), an aperture value, and imaging sensitivity (International Organization for Standardization (ISO) speed). The determined exposure control values are stored in the RAM described above. The camera main body100changes its exposure and starts to obtain images for flicker detection. In step S404, the CPU103determines whether the luminance of the obtained images changes (i.e., whether flicker occurs). The CPU103determines the presence or absence of a change in the luminance based on the obtained images, since flicker is unable to be correctly detected if the blinking cycle of the light source and the imaging cycle of the object are synchronous as described above. 
In a case where it is determined that no change in luminance is detected in the obtained images (NO in step S404), the processing proceeds to step S406. In other words, flicker detection operations at the current frame rate (imaging cycle) are skipped based on the determination that the imaging cycle and the light amount change frequency of the flicker related to the object are synchronous or there is no flicker. In a case where a change in luminance is detected in the obtained images (YES in step S404), the processing proceeds to step S405. In step S405, the CPU103analyzes (detects) the occurrence of flicker at a plurality of different frequencies. Details of the method of detecting flicker at a plurality of frequencies in step S405will be described below. In step S406, the CPU103determines whether flicker detection has been completed with a predetermined number (n) of imaging cycles. In a case where flicker detection is determined to not have been completed with the predetermined number of imaging cycles (NO in step S406), the processing returns to step S402. In step S402, the CPU103changes the imaging cycle (frame rate). The processing of step S403and the subsequent steps is repeated. In a case where flicker detection is determined to have been completed with the predetermined number of imaging cycles (YES in step S406), the processing proceeds to step S407. In step S407, the CPU103identifies the frequency of the flicker of the object based on the determination results in step S405. In the processing of step S407, the occurrence of flicker has been detected at a plurality of different frequencies with a plurality of imaging cycles (frame rates). The CPU103compares the levels of flicker detected at respective frequencies, and identifies the flicker of the frequency where the level is the highest as the final detection result of the currently occurring flicker of the object. In the present embodiment, the magnitudes of changes in the light amount (the amplitudes of curves representing regular changes in the light amount) are compared as the flicker levels. However, this is not restrictive. For example, the camera main body100may be configured to compare the degrees of stability of changes in the light amount aside from the flicker levels. Now, the imaging cycles (frame rates) for flicker detection mentioned above will be specifically described. As described above, the camera main body100according to the present embodiment performs the flicker detection processing at a plurality of imaging cycles. Suppose, for example, that the camera main body100detects the light amount change frequency of flicker by switching the imaging cycle between 100 fps and 120 fps. Initially, take the case of detecting the light amount change frequency of flicker at an imaging cycle of 100 fps. In such a case, flicker changing in light amount at a frequency of k×100 Hz (k is a natural number), like 100 Hz, 200 Hz, and 300 Hz that are integer multiples of the imaging cycle of 100 fps, are unable to be correctly detected because of synchronization between the imaging cycle and the light amount change frequency of the flicker. Now, take the case of detecting the light amount change frequency of flicker at an imaging cycle of 120 fps. 
In such a case, flicker changing in light amount at a frequency of m×120 Hz (m is a natural number), such as 120 Hz, 240 Hz, and 360 Hz that are integer multiples of the imaging cycle of 120 fps, are unable to be correctly detected because of synchronization between the imaging cycle and the light amount change frequency of the flicker. Frequencies of 600 Hz and 1200 Hz that satisfy both the conditions k×100 Hz (k is a natural number) and m×120 Hz (m is a natural number) are common multiples of 100 Hz and 120 Hz. Flicker changing in light amount at such frequencies is unable to be correctly detected by using images obtained with either of the imaging cycles of 100 fps and 120 fps since the light amount change frequency of the flicker is synchronous with both the imaging cycles of 100 fps and 120 fps. For example, light sources including a rectifier circuit, like an LED light source, typically have an adjusted power supply frequency in the range of 50 Hz to 1000 Hz. Some LED light sources can thus flicker at the foregoing light amount change frequency of 600 Hz, and the flicker cannot be correctly detected depending on the imaging cycle. In other words, even if flicker detection is performed using images obtained with two respective imaging cycles, flicker cannot be detected correctly at some frequencies in a wide range of frequencies at which LED light sources can flicker. In the foregoing example, flicker changing in light amount at exactly the same frequency at an integer multiple of an imaging cycle (frame rate) has been described. However, the accuracy of flicker detection can drop even if the flicker frequency is not the same as an integer multiple of the imaging cycle. Suppose, for example, that flicker changes in light amount at a frequency near an integer multiple of the imaging cycle in obtaining images for flicker detection. Such flicker sometimes takes time to be detected or is unable to be correctly detected since exposure unevenness and other effects on the images are small. In the present embodiment, to effectively detect flicker that can occur under an LED light source in a wide range of frequencies, the number n of imaging cycles (frame rates) used during flicker detection is adjusted to satisfy a condition that “n≥3 (n is a natural number)”. In other words, the flicker detection is performed at n imaging cycles, where n is a natural number of 3 or more. The higher the light amount change frequency of flicker to be detected, the more accurately the light amount change frequency of the flicker can be detected by increasing the number n of imaging cycles to be used for flicker detection. However, increasing the number of imaging cycles to be used for flicker detection can increase the duration of the flicker detection, causing issues of a release time lag and a drop in the display frame rate of the live view image. In the present embodiment, the number n of imaging cycles to be used for flicker detection is set to 3 as the number of samples enabling effective detection of flicker that can occur under light sources considered to be often used like an LED light source. Next, a method of selecting specific numerical values of the respective n imaging cycles will be described. In the present embodiment, a reference imaging cycle is initially set. For example, a reference imaging cycle is assumed to be 100 fps. Light amount change frequencies of flicker synchronous with the imaging cycle of 100 fps are integer multiples of 100 Hz. 
Flicker occurring at such light amount change frequencies is unable to be correctly detected. If images are sampled at an imaging cycle of 200 fps that is twice the reference imaging cycle of 100 fps, the same issue as during sampling at the reference imaging cycle of 100 fps occurs. In other words, if an integer multiple of the imaging cycle for sampling images for flicker detection and an integer multiple of the light amount change frequency of flicker are the same, the flicker is unable to be correctly detected based on the sampled images because of synchronization between the imaging cycle and the light amount change frequency. In the present embodiment, the n (in the present embodiment, n=3) imaging cycles are set so that the remaining (n−1) imaging cycles (in the present embodiment, two) fall between the reference imaging cycle and an imaging cycle that is the immediate integer multiple of the reference imaging cycle. Take, for example, the case of detecting flicker at three imaging cycles with a reference imaging cycle of 100 fps. In such a case, the plurality of imaging cycles for flicker detection is set so that the remaining imaging cycles other than the reference imaging cycle of 100 fps fall within the range from 100 fps to 200 fps. In the present embodiment, the imaging cycles (frequencies) are set so that the least common multiple of the n imaging cycles is greater than or equal to a predetermined frequency. For example, the imaging cycles (frequencies) are set so that the least common multiple of the n imaging cycles (frame rates) is greater than or equal to a predetermined frequency of 10000 Hz, since LED light sources typically have a blinking frequency of 10000 Hz or less. Moreover, in order for the camera main body 100 to be able to reduce the effect of flicker, the imaging cycles (frequencies) are set so that the least common multiple of the n imaging cycles is greater than a predetermined frequency that is the reciprocal of the highest settable shutter speed of the camera main body 100. With such a configuration, the camera main body 100 can effectively detect flicker occurring under a light source that changes in light amount at a high frequency (for example, 200 Hz or more) like an LED light source, and reduce the effect of the detected flicker by adjusting the shutter speed. FIGS. 5A and 5B are diagrams illustrating a method of selecting a plurality of imaging cycles in detecting flicker according to the first embodiment of the present invention as an example. To accurately detect the light amount change frequency of flicker, the imaging cycles are set to values as far from each other as possible so that one of the imaging cycles and the light amount change frequency of the flicker to be detected (blinking cycle of the light source) can have a difference that enables favorable flicker detection. In the present embodiment, as illustrated in FIG. 5A, flicker is detected at imaging cycles set to be separated in steps of 2 to the one-third power so that the range of imaging cycles for detection (100 fps to 200 fps) is divided at predetermined intervals. Specifically, in the present embodiment, as illustrated in FIG. 5A, the three imaging cycles are: the reference imaging cycle of 100 fps; 100 fps×2^(1/3)=125.99 fps≈126 fps; and 100 fps×2^(2/3)=158.74 fps≈159 fps. The three imaging cycles thus differ from each other in steps of 2^(1/3)=1.2599≈1.26 times, or approximately 26%.
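As a quick check of this selection rule, the following sketch (not part of the embodiment) computes the three geometrically spaced imaging cycles from the 100 fps reference and confirms the least-common-multiple condition mentioned above; rounding to integer frame rates is done only for illustration.

```python
from math import lcm

base_fps = 100
n = 3
# Imaging cycles spaced in steps of 2 to the one-nth power within one octave
cycles = [round(base_fps * 2 ** (k / n)) for k in range(n)]
print(cycles)                   # [100, 126, 159]
print(round(2 ** (1 / n), 4))   # 1.2599: adjacent cycles differ by about 26%
print(lcm(*cycles) >= 10000)    # True: the least common multiple exceeds 10000 Hz
```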
With such a configuration, even in a case where a wide frequency range of 50 to 1000 Hz or more is divided into a plurality of ranges for flicker detection, the ranges do not deviate greatly from the detection target frequencies. One of the imaging cycles has a sufficiently large difference from the light amount change frequency of flicker to be detected. In other words, in setting n imaging cycles and detecting flicker at each of the imaging cycles, a drop in detection accuracy at each detection target frequency can be prevented by setting the imaging cycles in steps of 2 to the one-nth power. FIG. 5B is a diagram illustrating the correspondence of the light amount change frequencies of flicker to be detected with the n imaging cycles as an example. In the present embodiment, flicker is detected based on images obtained at one of the n imaging cycles that corresponds to the farthest frequency from the light amount change frequency of the flicker to be detected. Specifically, in the present embodiment, flicker is detected based on the data table illustrated in FIG. 5B, where light amount change frequencies of flicker from 50 Hz to 1008 Hz are divided into ranges (A) to (P) for the three imaging cycles illustrated in FIG. 5A. In the present embodiment, the effect of flicker is reduced by capturing images of the object at a shutter speed that is the reciprocal of the light amount change frequency of the flicker and setting an imaging period synchronous with the light amount change frequency of the flicker. If there is a deviation between the ideal shutter speed synchronous with the light amount change frequency of the flicker and the actual shutter speed, the effect of the flicker (such as exposure unevenness) on the images becomes more significant when the actual shutter speed is low than when the actual shutter speed is high. Suppose, for example, that shutter speeds of 1/101 sec and 1/1001 sec are set for flicker having a light amount change frequency of 100 Hz and 1000 Hz, respectively, with a deviation as much as 1 Hz from the respective ideal shutter speeds for reducing the effect of the flicker. In either case, the deviation between the ideal shutter speed that can reduce the effect of the flicker and the actual shutter speed is as much as 1 Hz, whereas the deviation is equivalent to 1% of the shutter speed of 1/100 sec and 0.1% of the shutter speed of 1/1000 sec. In other words, the higher the shutter speed, the smaller the effect of the flicker on the images with respect to a change of 1 Hz in the shutter speed. Meanwhile, a low shutter speed increases the period for which flicker-based changes in light amount are captured, and thus images with smoothened changes in light amount are more likely to be obtained. The detection ranges of the flicker at low frequencies may therefore be adjusted to be wider as appropriate if the flicker to be detected has a light amount change frequency such that the flicker is reduced at shutter speeds of a predetermined value or less (for example, exposure times of 1/25 sec or longer). In the present embodiment, as illustrated in FIG. 5B, detection target ranges are set so that the range of light amount change frequencies of flicker to be detected is divided into a plurality of ranges and the frequencies of these successive ranges vary in steps of 2^(1/3)=1.26 times.
For example, the range (N) illustrated in FIG. 5B is a detection target range intended to detect flicker from 159 to 200 Hz, and the next range (C) is from 200 to 252 Hz, that is, approximately 1.26 times the frequencies of the range (N). As illustrated in FIG. 5B, adjoining ones of the ranges of the light amount change frequencies of flicker to be detected at the same imaging cycle differ by approximately a factor of two. For example, the ranges (A), (B), and (C) illustrated in FIG. 5B that correspond to the imaging cycle of 159 fps have detection target frequencies of 50 Hz to 63 Hz, 100 Hz to 126 Hz, and 200 Hz to 252 Hz, respectively. This takes into account the fact that changes in light amount due to flicker at integer multiples of a frequency are the same. With such a configuration, the image capturing apparatus according to the present embodiment can detect flicker over a wide range of frequencies with stable accuracy. In the present embodiment, the imaging cycles in detecting flicker are described as differing in steps of m to the one-nth power (m and n are natural numbers). In the foregoing description, m is two (m=2). However, this is not restrictive. For example, the imaging cycles may be set with m=3. In such a case, differences between the imaging cycles are greater, and the detection accuracy with respect to the light amount change frequency of flicker to be detected can be lower than with m=2. However, compared to the case where m=2, the imaging cycles set with m=3 can reduce detection time if the range of detection target frequencies is the same. Such a setting is thus suitable in the case of detecting flicker over a wider range of light amount change frequencies. Now, a method (modification) of selecting n imaging cycles different from the foregoing will be described with reference to FIGS. 6A and 6B. FIGS. 6A and 6B are diagrams illustrating the modification of the method of selecting a plurality of imaging cycles in detecting flicker according to the first embodiment of the present invention as an example. A difference between this modification and the foregoing example described with reference to FIGS. 5A and 5B lies in the method of setting n imaging cycles in the range of imaging cycles for detection. In this modification, as illustrated in FIG. 6A, the range of imaging cycles for detection is equally divided to set a plurality of imaging cycles. More specifically, with the range of imaging cycles for flicker detection (100 fps to 200 fps) as 100%, n imaging cycles are set to include imaging cycles 33% and 66% different from the reference imaging cycle of 100 fps. Specifically, the three imaging cycles are: the reference imaging cycle of 100 fps; 100 fps×1.333=133.33 fps≈133 fps; and 100 fps×1.666=166.66 fps≈167 fps. The ratios between the foregoing three imaging cycles are 133.33/100=1.3333, 166.66/133.33=1.25, and 200/166.66=1.2. The imaging cycles are thus 20% or more different from each other. FIG. 6B is a diagram illustrating the correspondence of the light amount change frequencies of flicker to be detected with the n imaging cycles illustrated in FIG. 6A as an example. As illustrated in FIG. 6B, in this modification, flicker is also detected based on images obtained at one of the n imaging cycles that corresponds to the farthest frequency from the light amount change frequency of the flicker to be detected, as in FIG. 5B described above. Differences between the imaging cycles for flicker detection will now be described.
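Before turning to those differences, the correspondence just described, namely detecting each flicker frequency with the imaging cycle farthest from it, can be illustrated with the following sketch. The relative-distance criterion and the function name pick_imaging_cycle are assumptions made for illustration; the embodiment's actual assignment follows the tables of FIGS. 5B and 6B.

```python
def pick_imaging_cycle(flicker_hz, cycles=(100.0, 126.0, 159.0)):
    """Choose the imaging cycle whose integer multiples lie farthest, in a
    relative sense, from the flicker frequency to be detected, so that the
    sampling is as far from synchronization as possible."""
    def relative_distance(fps):
        nearest_multiple = max(1, round(flicker_hz / fps)) * fps
        return abs(flicker_hz - nearest_multiple) / fps
    return max(cycles, key=relative_distance)

# Flicker in the 50 Hz to 63 Hz band is assigned to the 159 fps cycle,
# consistent with range (A) of FIG. 5B.
print(pick_imaging_cycle(55.0))   # 159.0
```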
As described above, there is a relationship such that the greater the number of imaging cycles for flicker detection, the smaller the differences between the imaging cycles and the longer the sampling time. To accurately detect flicker in a short time, the differences between the imaging cycles are desirably as large as possible and the number of imaging cycles for sampling as small as possible within an extent where a wide range of light amount change frequencies of flicker can be detected. A case where the range between the reference imaging cycle and a cycle twice the reference imaging cycle is divided in steps of 2 to the one-nth power as described with reference to FIGS. 5A and 5B will be described with the range as 100%. In such a case, the imaging cycles for flicker detection vary at intervals expressed by the following Eq. 1: {2^(1/n)−1}×100 [%].  (Eq. 1) Now, suppose that the range between the reference imaging cycle and the cycle twice the reference imaging cycle is divided in steps of 100/n [%] as described with reference to FIGS. 6A and 6B, with the range as 100%. As in the calculation for n=3 above, the difference becomes the smallest between the imaging cycle that is 100×(n−1)/n [%] greater than the reference imaging cycle and the imaging cycle that is twice the reference imaging cycle. The difference is given by the following Eq. 2: [200/{100+100×(n−1)/n}−1]×100 [%] = {200n/(200n−100)−1}×100 [%] = {2n/(2n−1)−1}×100 [%] = {1/(2n−1)}×100 [%].  (Eq. 2) In other words, if the imaging cycles vary in steps of 100/n [%], the imaging cycles (frame rates) used for flicker detection differ from each other at a ratio of {2n/(2n−1)−1}×100% or more. In the camera main body 100 according to the first embodiment of the present invention, the imaging cycles (frame rates) used for flicker detection differ from each other by at least {2n/(2n−1)−1}×100%. This also applies to the foregoing case of dividing the range from the reference imaging cycle to the cycle twice the reference imaging cycle in steps of 2 to the one-nth power with the range as 100%. FIG. 7 provides a graphic representation of the relationship of the methods of selecting imaging cycles and the number of imaging cycles with differences between the imaging cycles based on the foregoing Eqs. 1 and 2. FIG. 7 is a chart (graph) illustrating the relationship of the methods of selecting imaging cycles for flicker detection according to the present embodiment and the number of imaging cycles with differences between the imaging cycles as an example. As illustrated in FIG. 7, differences between the imaging cycles depending on the number n of imaging cycles are smaller in Eq. 2, illustrated by a solid line, than in Eq. 1, illustrated by a broken line. Such a condition also applies to even greater numbers n of imaging cycles not illustrated in FIG. 7. In other words, while two different methods for selecting imaging cycles have been described above, it can be seen that differences between the imaging cycles are greater than or equal to the values given by Eq. 2 regardless of which method is used. Next, details of the processing for analyzing (detecting) the occurrence of flicker at a plurality of different frequencies in the foregoing step S405 will be described. The image capturing apparatus according to the present embodiment extracts changes in luminance over time based on the luminance of successively obtained images, and analyzes the periodicity of the changes in luminance to detect the light amount change frequency of flicker.
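Before moving on to how the luminance transition is obtained, the two spacing formulas above (Eqs. 1 and 2) can be tabulated as a quick check that reproduces the trend of FIG. 7; the function names are illustrative and not part of the embodiment.

```python
def spacing_geometric(n):
    """Step between adjacent cycles when the octave is split in steps of 2^(1/n) (Eq. 1), in %."""
    return (2 ** (1 / n) - 1) * 100

def spacing_equal_division(n):
    """Smallest step between cycles when the octave is split in steps of 100/n % (Eq. 2), in %."""
    return (2 * n / (2 * n - 1) - 1) * 100

for n in range(3, 7):
    print(n, round(spacing_geometric(n), 1), round(spacing_equal_division(n), 1))
# n=3 gives 26.0 % versus 20.0 %; the equal-division value (Eq. 2) is always the
# smaller of the two, so either selection method keeps the imaging cycles at
# least {1/(2n-1)}*100 % apart, as stated above.
```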
The change in luminance in the images differs depending on the method for obtaining the images to be used for detection. For example, when images of an object are captured by a global shutter method using a CCD sensor and when the images are captured by a rolling shutter method using a CMOS sensor, the change in luminance in the images differs between the two cases. The ways the luminance changes when the images are obtained by the respective methods described above will be described below. A change in the luminance of images obtained by the global shutter method will initially be described with reference to FIG. 8. FIG. 8 is a diagram illustrating a change in luminance based on images successively obtained by the global shutter method as an example. If an image of an object affected by the blinking of a light source due to flicker is captured, a captured image affected by the intensity of the blinking of the light source is obtained. Measuring the luminance of the entire captured image provides a photometric value affected by the intensity of the blinking of the light source. As employed herein, the luminance may refer to a luminance signal calculated by multiplying R, G1, G2, and B color signals in a raw image of Bayer arrangement by specific coefficients, or to the R, G1, G2, and B color signals themselves. A color signal or luminance signal obtained from a sensor array other than the Bayer arrangement may also be used. Differences or ratios in luminance (photometric value) between successively captured images obtained by the foregoing method are then calculated. Alternatively, an average image of a plurality of images may be set as a reference image, and differences or ratios in the luminance of the respective images with respect to the reference image may be calculated. By plotting changes in the luminance of the images obtained by such a method, transition of the changes in the luminance of the images as illustrated in FIG. 8 can be detected. Next, a change in luminance of images obtained by the rolling shutter method will be described with reference to FIG. 9. FIG. 9 is a diagram illustrating a change in luminance based on images successively obtained by the rolling shutter method as an example. If a sensor is driven by the rolling shutter method, the exposure and reading timing varies from one sensor line to another. The effect of the blinking of the light source due to flicker thus varies line by line, and changes in luminance occur differently in the vertical direction of the image. If the sensor (in the present embodiment, the image sensor 101) is driven by the rolling shutter method, changes in luminance due to the blinking of the light source can thus be extracted by obtaining integral values of the captured images line by line. Specifically, as illustrated in FIG. 9, changes in the luminance of the same lines in successive (N−1)th and Nth frame images obtained by successively capturing images of the object are extracted. Here, the integral values of the captured images corresponding to the Nth and (N−1)th frames are calculated line by line. As described above regarding the global shutter method, the integral values may be those of luminance signals obtained by multiplying the color signals by specific coefficients, or the integral values of the color signals themselves. The integral values of the Nth frame and those of the (N−1)th frame are compared line by line to calculate differences or ratios.
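A minimal sketch of this line-by-line comparison is given below, using two synthetic frames whose rows are modulated by a 100 Hz intensity change; the array sizes, modulation depth, phase shift, and 10 ms scan time are arbitrary illustration values rather than parameters of the embodiment.

```python
import numpy as np

def line_luminance_ratios(frame_n, frame_prev):
    """Integrate each sensor line of two frames and return the per-line ratio.
    With a rolling shutter, every line is exposed at a slightly different time,
    so this vertical profile follows the blinking of the light source."""
    line_n = frame_n.astype(np.float64).sum(axis=1)         # line-by-line integral, Nth frame
    line_prev = frame_prev.astype(np.float64).sum(axis=1)   # line-by-line integral, (N-1)th frame
    return line_n / np.maximum(line_prev, 1e-9)

# Synthetic example: 480 lines scanned over 10 ms under a 100 Hz modulation.
lines = np.arange(480)
t = lines * (0.010 / 480)
modulation_a = 1 + 0.3 * np.sin(2 * np.pi * 100 * t)
modulation_b = 1 + 0.3 * np.sin(2 * np.pi * 100 * t + 1.0)   # next frame, shifted phase
frame_a = modulation_a[:, None] * np.ones((480, 640))
frame_b = modulation_b[:, None] * np.ones((480, 640))
profile = line_luminance_ratios(frame_b, frame_a)
print(profile.shape)   # (480,): one ratio per line, tracing the light-source blinking
```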
Changes in luminance in the vertical direction of the captured images (i.e., in the scanning direction of the sensor) as illustrated in FIG. 9 can thereby be detected. The frames to be compared do not need to be two successive ones. For example, an average image may be obtained by averaging the signal values of a plurality of captured images, and the average image is set as a reference image. Changes in luminance in the vertical direction of the images may be calculated by comparing the integral values of the respective lines of the reference image with those of the respective lines of the Nth frame. By analyzing the captured images obtained by the rolling shutter method in the foregoing manner, transition of the changes in luminance in the vertical direction of the captured images as described above can be detected. The changes in the luminance represent the blinking of the light source (i.e., changes in the light amount of the flicker). Next, a technique for analyzing the frequency of changes in luminance from the transition of changes in the luminance of the images will be described. Among typical techniques for converting a signal changing in a temporal direction into frequency components, one is the Fourier transform. Here, a signal f(t) changing in a temporal direction is converted into a frequency-based function F(ω): F(ω)=∫f(t)e^(−iωt)dt, where the integral is taken over t from −∞ to ∞.  (Eq. 3) Focus attention on the exponential function in Eq. 3. The exponential function is known to be expandable into trigonometric functions representing the real part and the imaginary part, based on the relationship between the Maclaurin expansion and the nth-order derivatives of the trigonometric functions (see the following Eq. 4): F(ω)=∫f(t)·cos(ωt)dt−j×∫f(t)·sin(ωt)dt.  (Eq. 4) Since the integration can be performed by taking f(t) to be the transition of changes in the image signals and dt to be the sampling interval of that transition, Eq. 4 can be expressed by the following Eq. 5: F(ω)=A(ω)+j×B(ω).  (Eq. 5) This is a complex-valued function of the frequency ω, and its magnitude is calculated as |F(ω)|. |F(ω)| has a large value if luminance change components of the frequency ω are included in the transition of changes in the luminance of the images. |F(ω)| has a small value if luminance change components of the frequency ω are not included in the transition of changes in the luminance of the images. In other words, |F(ω)| can be regarded as a flicker level with respect to each frequency. The presence or absence of changes in luminance due to the blinking of the light source (i.e., the light amount change frequency of flicker) can thus be detected in a wide range of frequencies by calculating frequency components over a wide range of frequencies targeted for detection, using the foregoing Eq. 5. If the transition of changes in luminance does not cover one or more periods of the blinking of the light source (one or more periods of a change in the light amount of the flicker), the target frequency is unable to be favorably detected and may be erroneously detected as another frequency. It is therefore desirable to continue capturing images of the object for one or more periods of the target frequency, and detect the foregoing frequencies (i.e., the light amount change frequency of flicker) based on the captured images. Next, the exposure operation during flicker detection in the foregoing step S403 will be specifically described.
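Before the exposure operation is described, the evaluation of |F(ω)| over a set of candidate frequencies can be sketched as follows, assuming the luminance transition has already been extracted as a sampled sequence; the 1 kHz sampling interval, the 120 Hz test signal, and the function name flicker_level_spectrum are illustrative assumptions, not values taken from the embodiment.

```python
import numpy as np

def flicker_level_spectrum(luminance, dt, freqs_hz):
    """Evaluate |F(w)| of Eq. 5 for a sampled luminance transition.
    luminance: 1-D sequence of luminance values, dt: sampling interval in
    seconds, freqs_hz: candidate flicker frequencies. A large |F(w)| means the
    transition contains a luminance change at that frequency (flicker level)."""
    f = np.asarray(luminance, dtype=np.float64)
    f = f - f.mean()                           # remove the constant (DC) component
    t = np.arange(f.size) * dt
    levels = []
    for freq in freqs_hz:
        w = 2 * np.pi * freq
        a = np.sum(f * np.cos(w * t)) * dt     # real part A(w) of Eq. 4
        b = -np.sum(f * np.sin(w * t)) * dt    # imaginary part B(w) of Eq. 4
        levels.append(np.hypot(a, b))          # |F(w)| = sqrt(A(w)^2 + B(w)^2)
    return levels

# Example: a 120 Hz modulation sampled at 1 kHz for 50 ms (six full periods);
# among the tested candidates, the largest level falls on 120 Hz.
dt = 1e-3
samples = 1 + 0.2 * np.sin(2 * np.pi * 120 * np.arange(50) * dt)
candidates = [100, 110, 120, 130]
levels = flicker_level_spectrum(samples, dt, candidates)
print(max(zip(levels, candidates))[1])   # 120
```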
As described above, if the imaging cycle in detecting flicker is synchronous with the blinking frequency of the light source (the light amount change frequency of the flicker), the flicker is difficult to effectively detect based on the sampled images. Aside from the imaging cycle, flicker is also difficult to detect if the exposure time in capturing the object (i.e., shutter speed) is synchronous with the blinking frequency of the light source, since the images obtained in such a state do not produce effective changes in luminance. In the present embodiment, the exposure time (shutter speed) is thus set to be synchronous with the imaging cycle in performing the flicker detection operation so that the exposure time will not be synchronous with a frequency other than the imaging cycle. Specifically, in detecting flicker, images of the object are desirably captured at an exposure time (shutter speed) 1/N (N is an integer) of the imaging cycle (frame rate) for detection. FIG.10is a diagram illustrating the setting values of the exposure time (shutter speed) for a first pattern of a plurality of imaging cycles for flicker detection according to the first embodiment of the present invention as an example. If, for example, the plurality of imaging cycles for flicker detection is 100 fps, 126 fps, and 159 fps as described above, images of the object are captured by setting the exposure time as illustrated inFIG.10. FIG.11is a diagram illustrating the setting values of the exposure time (shutter speed) for a second pattern of the plurality of imaging cycles for flicker detection according to the first embodiment of the present invention as an example. If, for example, the plurality of imaging cycles for flicker detection is 100 fps, 133 fps, and 167 fps as described above, images of the object are captured by setting the exposure time as illustrated inFIG.11. As illustrated inFIGS.10and11, images for flicker detection are obtained at exposure times 1/N (N is an integer) of the reciprocals of the imaging cycles (frame rates) for flicker detection. This can prevent synchronization between the exposure times and the light amount change frequency of the flicker. If the exposure condition varies depending on the light amount change frequency of flicker, the detected flicker levels differ from each other, thereby reducing the detection accuracy. In the present embodiment, the exposure operation is thus performed at the foregoing plurality of imaging cycles based on the result of the light measurement in step S401. This can reduce variations in the exposure amount from one exposure cycle to another, and enables stable detection of flicker levels. With such a configuration, the image capturing apparatus according to the present embodiment can stably and effectively detect flicker over a wide range of frequencies that can be the light amount change frequency of flicker. Next, details of the flicker reduction exposure time determination processing performed in the foregoing step S304will be described with reference toFIG.12.FIG.12is a flowchart related to the flicker reduction exposure time determination processing according to the first embodiment of the present invention. In step S1201, the CPU103initially reads the light amount change frequency of the flicker detected by the flicker detection processing performed in the foregoing step S302from the RAM. 
In step S1202, the CPU 103 calculates an ideal exposure time (ideal flicker reduction exposure time) IdealFlkExpTime for reducing the effect of the detected flicker based on the reciprocal of the light amount change frequency of the flicker read in step S1201. For example, if the light amount change frequency of the detected flicker is 540.0 Hz, IdealFlkExpTime=1/540.0. The shutter speed that is the reciprocal of the light amount change frequency of the detected flicker is a shutter speed that most significantly reduces the effect of the detected flicker. In step S1203, the CPU 103 obtains a currently set shutter speed (current shutter speed) CurTv. An example of the current shutter speed CurTv is the shutter speed set by the user's manual operation. In the present embodiment, suppose that the imaging mode of the camera main body 100 is set to a manual mode in advance, and a plurality of exposure control values (parameters) are all manually set by the user. In step S1204, the CPU 103 performs initialization processing for integer multiplication of the ideal flicker reduction exposure time IdealFlkExpTime. Specifically, in step S1204, the CPU 103 sets an integer N=1, and stores information about the ideal flicker reduction exposure time IdealFlkExpTime before integer multiplication as the previously set ideal flicker reduction exposure time PreIdealFlkExpTime. In step S1205, the CPU 103 compares the currently set shutter speed CurTv obtained in step S1203 with the ideal flicker reduction exposure time IdealFlkExpTime. If the value of CurTv is less than or equal to that of IdealFlkExpTime (i.e., the exposure time is shorter) (YES in step S1205), the processing proceeds to step S1207. On the other hand, if the value of CurTv is greater than that of IdealFlkExpTime (the exposure time is longer) (NO in step S1205), the processing proceeds to step S1206. In step S1206, the CPU 103 stores the current ideal flicker reduction exposure time IdealFlkExpTime as the previously set ideal flicker reduction exposure time PreIdealFlkExpTime, increments the integer N by one, and multiplies the ideal flicker reduction exposure time before integer multiplication (the reciprocal of the flicker frequency) by the integer N to obtain the new IdealFlkExpTime. Specifically, in step S1206, the CPU 103 substitutes IdealFlkExpTime into PreIdealFlkExpTime, increments N to N+1, and sets IdealFlkExpTime to N times the reciprocal of the flicker frequency. The processing of step S1206 (ideal flicker reduction exposure time integer multiplication processing) is repeated until the currently set shutter speed becomes less than or equal to the ideal flicker reduction exposure time (CurTv≤IdealFlkExpTime) in step S1205. In other words, the processing of step S1206 is processing for bringing the ideal flicker reduction exposure time IdealFlkExpTime as close to the currently set shutter speed CurTv as possible. By such processing, the exposure time for reducing flicker can be narrowed down to exposure times close to the user-set shutter speed, for example, since CurTv falls between IdealFlkExpTime and PreIdealFlkExpTime. In step S1207, the CPU 103 compares the absolute value of a difference between IdealFlkExpTime and CurTv with the absolute value of a difference between PreIdealFlkExpTime and CurTv. If the absolute value of the difference between IdealFlkExpTime and CurTv is less than or equal to the absolute value of the difference between PreIdealFlkExpTime and CurTv (NO in step S1207), this flicker reduction exposure time determination processing ends.
The reason is that the value of the currently set ideal flicker reduction exposure time IdealFlkExpTime can be determined to be closer to the current shutter speed CurTv than the value of PreIdealFlkExpTime is. On the other hand, if the absolute value of the difference between IdealFlkExpTime and CurTv is greater than the absolute value of the difference between PreIdealFlkExpTime and CurTv (YES in step S1207), the processing proceeds to step S1208. The reason is that the previously set ideal flicker reduction exposure time PreIdealFlkExpTime can be determined to be closer to the current shutter speed CurTv than the currently set ideal flicker reduction exposure time IdealFlkExpTime is. In step S1208, the CPU103substitutes the previously set ideal flicker reduction exposure time PreIdealFlkExpTime into the ideal flicker reduction exposure time IdealFlkExpTime. This flicker reduction exposure time determination processing ends. By the flicker reduction exposure time determination processing according to the present embodiment described above, the exposure time (shutter speed) for reducing flicker can be determined to be a value close to the user-set shutter speed, for example. With such a configuration, images with reduced flicker effect can be obtained while reducing differences from the intended imaging effect due to the user adjusting the shutter speed, for example. FIGS.13A and13Bare diagrams illustrating a method for setting the ideal flicker reduction exposure time IdealFlkExpTime in the presence of flicker changing in light amount at a predetermined light amount change frequency according to the present embodiment as an example.FIG.13Aillustrates an example where the shutter speed is set to 1/5792.6 sec (CurTv=1/5792.6) by the user.FIG.13Billustrates an example where the shutter speed is set to 1/250.5 sec (CurTv=1/250.5) by the user. Suppose, for example, that the light amount change frequency of the detected flicker is 540.0 Hz. In the example illustrated inFIG.13A, the ideal flicker reduction exposure time IdealFlkExpTime is 1/540.0. In the example ofFIG.13B, the ideal flicker reduction exposure time IdealFlkExpTime is 1/270.0 at the same flicker light amount change frequency. Changes in the light amount of the flicker at integer multiple frequencies are the same. The effect of the flicker can thus be reduced as well if images of the object are captured at a shutter speed that is lower than the reciprocal of the light amount change frequency of the flicker and is the reciprocal of an integer multiple of the flicker frequency. If the user-set shutter speed is lower than or equal to the reciprocal of the light amount change frequency of the detected flicker, a value having a smallest difference from the user-set shutter speed among the reciprocals of integer multiples of the flicker frequency can thus be determined to be the ideal flicker reduction exposure time IdealFlkExpTime. With such a configuration, a shutter speed that most significantly reduces the effect of the detected flicker can be set while making a deviation from the user-intended shutter speed as small as possible. Next, details of the shutter speed selection processing performed in the foregoing step S305will be described with reference toFIG.14.FIG.14is a flowchart related to the shutter speed selection processing according to the first embodiment of the present invention. 
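Before the shutter speed selection processing is detailed, steps S1201 to S1208 can be condensed into the following sketch. It treats CurTv and the ideal exposure time as times in seconds and assumes, consistently with the examples of FIGS. 13A and 13B, that the candidate values are the reciprocals of integer multiples of the flicker frequency; the function name is hypothetical.

```python
def flicker_reduction_exposure_time(flicker_hz, cur_tv):
    """Return the exposure time that reduces the detected flicker while staying
    as close as possible to the currently set exposure time cur_tv (seconds),
    following the flow of steps S1201-S1208."""
    ideal = 1.0 / flicker_hz            # S1202: reciprocal of the flicker frequency
    prev_ideal = ideal                  # S1204: initialization (N = 1)
    n = 1
    while cur_tv > ideal:               # S1205/S1206: step through integer multiples
        prev_ideal = ideal
        n += 1
        ideal = n / flicker_hz
    # S1207/S1208: keep whichever bracketing value is closer to cur_tv
    if abs(ideal - cur_tv) > abs(prev_ideal - cur_tv):
        ideal = prev_ideal
    return ideal

print(round(1 / flicker_reduction_exposure_time(540.0, 1 / 5792.6), 1))  # 540.0 (FIG. 13A)
print(round(1 / flicker_reduction_exposure_time(540.0, 1 / 250.5), 1))   # 270.0 (FIG. 13B)
```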
In step S1401, the CPU103initially performs initialization processing for selecting a shutter speed from the shutter speed setting (index) table described above with reference toFIG.2. Specifically, in step S1401, the CPU103sets a settable flicker reduction shutter speed SetPosFlkTv based on the shutter speed setting table, with an index i=1. In the present embodiment, as illustrated inFIG.2, the settable flicker reduction shutter speed SetPosFlkTv for the index i=1 is 1/8192.0 sec. In step S1402, the CPU103increment the index i of the shutter speed setting table by one. In step S1403, the CPU103compares the absolute value of a difference between SetPosFlkTv and the foregoing ideal flicker reduction exposure time IdealFlkExpTime with the absolute value of a difference between the shutter speed corresponding to the index i (hereinafter, referred to as shutter speed [i]) in the shutter speed setting table and the ideal flicker reduction exposure time IdealFlkExpTime. If the absolute value of the difference between SetPosFlkTv and IdealFlkExpTime is less than or equal to the absolute value of the difference between the shutter speed [i] and IdealFlkExpTime (NO in step S1403), the processing proceeds to step S1405. On the other hand, if the absolute value of the difference between SetPosFlkTv and IdealFlkExpTime is determined to be greater than the absolute value of the difference between the shutter speed [i] and IdealFlkExpTime (YES in step S1403), the processing proceeds to step S1404. In step S1404, the CPU103selects the settable flicker reduction shutter speed SetPosFlkTv based on the result of the determination made in step S1403. Specifically, in step S1404, the CPU103sets the settable flicker reduction shutter speed SetPosFlkTv to the shutter speed [i] corresponding to the current index i of the shutter speed setting table. The processing proceeds to step S1405. In step S1405, the CPU103determines whether the index i of the shutter speed setting table is greater than or equal to a maximum index. If the current index i is less than the maximum index (NO in step S1405), the processing returns to step S1402. The CPU103then repeats the processing of steps S1402to S1405. In the present embodiment, the maximum index is 600 as illustrated inFIG.2. If, in step S1405, the current index i is determined to have reached the maximum index (YES in step S1405), the current SetPosFlkTv is selected as the settable flicker reduction shutter speed, and the shutter speed selection processing ends. In the foregoing example, the shutter speed selection processing is performed for all the indexes that can be referred to in the shutter speed setting table. However, this is not restrictive. For example, if the currently set shutter speed CurTv is obtained by the flicker reduction exposure time determination processing, the settable flicker reduction shutter speed SetPosFlkTv may be determined from the vicinity of the currently set shutter speed CurTv. Specifically, if a specific value is recorded as the currently set shutter speed CurTv, the CPU103identifies the index corresponding to a shutter speed closest to CurTv. The CPU103can then determine differences of the shutter speed corresponding to the index and the shutter speeds corresponding to other indexes adjoining to the index from the ideal flicker reduction exposure time IdealFlkExpTime, and determine the shutter speed that minimizes the difference as the settable flicker reduction shutter speed SetPosFlkTv. 
This configuration is particularly effective in a case where a specific shutter speed is set by the user. The use of such a configuration can reduce processing time and processing load related to the shutter speed selection processing since deviations from the user-intended shutter speed are small and the indexes to be compared are significantly reduced. By performing the foregoing shutter speed selection processing, a shutter speed that can effectively reduce the effect of the flicker detected in advance can be selected from among the settable shutter speeds of the camera main body100. In other words, the camera main body100according to the present embodiment can select (set) one of the settable shutter speeds that is closest to the ideal shutter speed IdealFlkExpTime for reducing the effect of the detected flicker. FIGS.15A and15Bare diagrams illustrating a relative relationship between the shutter speed selected by the shutter speed selection processing according to the first embodiment of the present invention and the ideal shutter speed for reducing the effect of flicker as an example. InFIGS.15A and15B, the light amount change frequency of the flicker is assumed to be 540.0 Hz, and the ideal flicker reduction exposure time IdealFlkExpTime 1/540.0.FIG.15Aillustrates a case where the shutter speed CurTv currently set by the user is 1/5792.6.FIG.15Billustrates a case where the shutter speed CurTv currently set by the user is 1/250.5. InFIG.15A, a difference between Tv=1/546.4 indicated by an index of 58 in the shutter speed setting table and Tv=1/540.0 that is IdealFlkExpTime is denoted by Δ58. InFIG.15A, a difference between Tv=1/534.7 indicated by an index of 59 in the shutter speed setting table and Tv=1/540.0 that is IdealFlkExpTime is denoted by Δ59. In the case illustrated inFIG.15A, Tv=1/534.7 is selected as SetPosFlkTv by the foregoing shutter speed selection processing since Δ59<Δ58. InFIG.15B, a difference between Tv=1/273.2 indicated by an index of 119 in the shutter speed setting table and Tv=1/270.0 that is IdealFlkExpTime is denoted by Δ119. InFIG.15B, a difference between Tv=1/270.2 indicated by an index of 120 in the shutter speed setting table and Tv=1/270.0 that is IdealFlkExpTime is denoted by Δ120. In the case illustrated inFIG.15B, Tv=1/270.2 is selected as SetPosFlkTv by the foregoing shutter speed selection processing since Δ120<Δ119. As described above, the camera main body100according to the present embodiment can effectively detect the light amount change frequency of flicker occurring in the current imaging environment and the ideal shutter speed (exposure time) for reducing the effect of the detected flicker in as short a time as possible. The camera main body100according to the present embodiment can set, as the ideal shutter speed for reducing the effect of the flicker, a shutter speed with the shutter speed currently set by the user taken into account. The camera main body100according to the present embodiment can thus detect the shutter speed that can reduce the effect of the flicker while preventing a change from the user-intended exposure condition and imaging effect as much as possible. Moreover, the camera main body100according to the present embodiment can automatically select (set) a shutter speed closest to the ideal shutter speed that can reduce the effect of the flicker from among the settable shutter speeds of the camera main body100. 
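As a compact illustration of steps S1401 to S1405, the sketch below walks an index table and keeps the entry whose exposure time is closest to the ideal flicker reduction exposure time. Only the two entries of FIG. 2 surrounding 1/540 sec are included, the comparison is made on exposure times, and the function name is hypothetical; the result reproduces the Δ59<Δ58 outcome of FIG. 15A.

```python
def select_flicker_reduction_tv(table, ideal_time):
    """Pick the table entry whose exposure time is closest to ideal_time.
    `table` maps an index to a shutter-speed denominator d (shutter speed 1/d sec)."""
    best = None
    for index in sorted(table):                  # S1402: advance the index
        denom = table[index]
        diff = abs(1.0 / denom - ideal_time)     # S1403: difference from IdealFlkExpTime
        if best is None or diff < best[0]:
            best = (diff, denom)                 # S1404: keep the closer entry
    return best[1]

# Indexes 58 and 59 of FIG. 2 correspond to 1/546.4 sec and 1/534.7 sec; for a
# 540.0 Hz flicker the closer entry is 1/534.7 sec, as selected in FIG. 15A.
print(select_flicker_reduction_tv({58: 546.4, 59: 534.7}, 1 / 540.0))   # 534.7
```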
The camera main body100according to the present embodiment can thus automatically select (set) a shutter speed that can reduce the effect of the flicker without a need for the user to manually adjust the shutter speed. Next, details of the display processing in the foregoing step S306according to the first embodiment of the present invention will be described with reference toFIGS.16A,16B, and17.FIGS.16A and16Bare diagrams illustrating a notification screen displayed on the display unit102by the display processing according to the first embodiment of the present invention as an example. FIG.16Aillustrates a case where flicker at 540.0 Hz is detected, CurTv is 1/5792.6, and SetPosFlkTv is 1/534.7.FIG.16Billustrates a case where flicker at 540.0 Hz is detected, CurTv is 1/250.5, and SetPosFlkTv is 1/270.2.FIG.17is a diagram illustrating a notification screen displayed by the display processing according to the first embodiment of the present invention in a case where no flicker is detected. A detected flicker area1601displays information indicating the light amount change frequency of the flicker detected based on the foregoing method (in the illustrated example, 540.0 Hz). A selectable shutter speed area1602displays the settable flicker reduction shutter speed SetPosFlkTv determined based on the foregoing method (inFIG.16A, 1/534.7; inFIG.16B, 1/270.2). A current shutter speed area1603displays the shutter speed of the camera main body100currently set by the user's manual setting (inFIG.16A, 1/5792.6; inFIG.16B, 1/250.5). A first user selection icon1604displays an option to not consent to change the shutter speed to the settable flicker reduction shutter speed SetPosFlkTv displayed on the notification screen. A second user selection icon1605displays an option to consent to change the shutter speed to the settable flicker reduction shutter speed SetPosFlkTv displayed on the notification screen. If no flicker at a predetermined level or more is detected by the flicker detection processing, a message1701indicating that no flicker is detected and an icon1702by using which the user can make a confirmation input are displayed on the display unit102as illustrated inFIG.17. As described above, if flicker having a predetermined light amount change frequency is detected by the flicker detection processing, various icons and messages such as illustrated inFIGS.16A and16Bare displayed on the display unit102to prompt the user to change the shutter speed. Such a configuration can facilitate setting a shutter speed that can reduce the effect of the flicker while reducing user's labor of adjusting the shutter speed by a manual operation such that the effect of the flicker can be reduced. The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. The method for notifying the user of the light amount change frequency of flicker and the shutter speed that can reduce the effect of the flicker and the method for changing the shutter speed are not limited to the foregoing. In the foregoing example, the notification screen is described to be displayed on the display unit102. However, the notification screen may be displayed on other display devices or an external device connected to the camera main body100. The notification method is not limited to image display, either. 
Various notification units may be used instead to issue a notification using voice guidance, changing the lighting state of a lamp (not illustrated) provided on the camera main body100, or changing the light color. The camera main body100according to the present embodiment uses the method for inquiring of the user whether to change the shutter speed to the settable flicker reduction shutter speed SetPosFlkTv. However, this is not restrictive. For example, the camera main body100may be configured to automatically change the shutter speed to the settable flicker reduction shutter speed SetPosFlkTv without the user's consent. The camera main body100may be configured to switch whether to inquire of the user about the change to the settable flicker reduction shutter speed SetPosFlkTv based on the imaging mode. If the imaging mode is an auto mode where the camera main body100automatically determines parameters related to exposure control, the camera main body100desirably automatically sets the settable flicker reduction shutter speed SetPosFlkTv. By contrast, if the imaging mode is a manual mode where the user manually sets the parameters (exposure control values) related to exposure control, the method for inquiring of the user whether to change the shutter speed is desirably used as in the foregoing example. The camera main body100according to the present embodiment has been described to use the electronic shutter preferentially. However, this is not restrictive. For example, the camera main body100may be configured to control the exposure time of the image sensor101based on a given shutter speed using the mechanical shutter104. In capturing an image of an object using the mechanical shutter104with the shutter speed set high, the running timing of the mechanical shutter104can deviate from the ideal exposure time depending on variations in the physical characteristics of the mechanical shutter104and environmental differences. In other words, if the shutter speed set as the settable flicker reduction shutter speed SetPosFlkTv is high, the camera main body100is sometimes unable to capture an image of the object with the exposure time that can properly reduce the flicker effect. In the case of adjusting the exposure time using the mechanical shutter104, the camera main body100may therefore be configured to limit the settable flicker reduction shutter speed SetPosFlkTv so that the shutter speed becomes shorter than or equal to a predetermined speed. The predetermined speed (shutter speed) may have a value such that the amount of deviation (i.e., error) between the ideal exposure time and the timing of exposure and light-shielding of the image sensor101due to the driving of the mechanical shutter104falls within a predetermined range. In the present embodiment, the shutter speed that is the predetermined speed is set to 1/4000 sec as an example. In such a case, the settable flicker reduction shutter speed SetPosFlkTv can be determine using the foregoing shutter speed setting table within the range excluding the indices corresponding to the shutter speed of 1/4000 sec or less, or using new table data. The camera main body100according to the present embodiment may be configured to make a dynamic adjustment regarding whether to use the electronic shutter or the mechanical shutter104based on the value of the settable flicker reduction shutter speed SetPosFlkTv. For example, if the shutter speed is higher than 1/4000 sec, only the electronic shutter may be made usable. 
At other shutter speeds, both the electronic shutter and the mechanical shutter104may be made usable. In the foregoing first embodiment, a description is given of a configuration where only one settable flicker reduction shutter speed SetPosFlkTv is notified to the user. In a second embodiment, a configuration for notifying the user of a plurality of options for the settable flicker reduction shutter speed SetPosFlkTv will be described with reference toFIG.18. A configuration of a camera main body100that is an image capturing apparatus according to the present embodiment, a lens unit200, and a light emitting device300, and a basic driving method thereof are similar to those of the foregoing first embodiment. The components will thus be denoted by the same reference numerals, and a description thereof will be omitted. The present embodiment is different from the foregoing first embodiment in the display processing of step S306. FIGS.18A and18Bare diagrams each illustrating a notification screen displayed on the display unit102by the display processing according to the second embodiment of the present invention as an example.FIG.18Aillustrates a case where flicker at 540.0 Hz is detected, CurTv is 1/5792.6, and SetPosFlkTv is 1/534.7.FIG.18Billustrates a case where flicker at 540.0 Hz is detected, CurTv is 1/250.5, and SetPosFlkTv is 1/270.2. A detected flicker area1801displays information indicating the light amount change frequency of the flicker detected. A current shutter speed area1802displays the shutter speed CurTv of the camera main body100that is currently set by the user's manual setting (inFIG.18A, 1/5792.6; inFIG.18B, 1/250.5). A selectable shutter speed first candidate area1803displays the settable flicker reduction shutter speed SetPosFlkTv determined based on the method described in the first embodiment as a first candidate shutter speed selectable by the user.FIG.18Aillustrates a case where the selectable shutter speed first candidate area1803displays 1/534.7, andFIG.18B1/270.2. A selectable shutter speed second candidate area1804displays the shutter speed corresponding to an index at which the difference from IdealFlkExpTime is the second smallest after SetPosFlkTv as a second candidate shutter speed selectable by the user.FIG.18Aillustrates a case where the selectable shutter speed second candidate area1804displays 1/546.4, andFIG.18B1/273.2. A selectable shutter speed alternative candidate area1805displays a shutter speed, if any, that provides a higher effect of reducing the effect of the flicker regardless of the difference from CurTv as another candidate shutter speed selectable by the user.FIG.18Aillustrates an example where the selectable shutter speed alternative candidate area1805displays 1/270.2 that is close to Tv=1/270.0, i.e., twice Tv=1/540.0 that is IdealFlkExpTime. In the case where the flicker at 540.0 Hz is detected, Tv=1/270.2 has a greater difference from CurTv but has a higher effect of reducing the effect of the flicker than SetPosFlkTv (1/534.7). Shutter speed selection icons1806are displayed for the user to select a selectable candidate shutter speed. A white arrow icon indicates the absence of a candidate shutter speed. A black arrow icon indicates the presence of a candidate shutter speed. InFIG.18A, there is no other SetPosFlkTv candidate for the selectable shutter speed first candidate area1803, and a white arrow icon is thus displayed beside the selectable shutter speed first candidate area1803. 
The same applies to the example illustrated inFIG.18B. InFIG.18A, there is another candidate shutter speed (1/180.0) having a high effect of reducing the effect of the flicker for the selectable shutter speed alternative candidate area1805, and a black arrow icon is thus displayed beside the selectable shutter speed alternative candidate area1805. InFIG.18B, there also is another candidate shutter speed (1/135.0) having a high effect of reducing the effect of the flicker for the selectable shutter speed alternative candidate area1805, and a black arrow icon is thus displayed beside the selectable shutter speed alternative candidate area1805. As described above, the camera main body100according to the present embodiment can notify the user of a plurality of candidates for the shutter speed that can reduce the effect of flicker, aside from SetPosFlkTv. Such a configuration can facilitate setting a user-desired shutter speed among a plurality of candidates that can reduce the effect of flicker while reducing user's labor of adjusting the shutter speed by a manual operation so that the effect of the flicker can be reduced. The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. In the foregoing first embodiment, a description is given of an example where the specific notification screen is displayed on the display unit102. In a third embodiment, a configuration for performing flicker detection processing during a live view display for successively displaying captured images will be described with reference toFIG.19. A configuration of a camera main body100that is an image capturing apparatus according to the present embodiment, a lens unit200, and a light emitting device300, and a basic driving method thereof are similar to those of the foregoing first embodiment. The components will thus be denoted by the same reference numerals, and a description thereof will be omitted. FIG.19is a diagram illustrating a screen for transitioning to the flicker reduction processing during a live view display according to the third embodiment of the present invention as an example. While the present embodiment deals with a configuration for providing the live view display on the display unit102, the camera main body100may be configured to provide the live view display on a not-illustrated electronic viewfinder. During the live view display, the image sensor101performs sampling (charge accumulation) for flicker detection at timing different from the charge accumulation timing for obtaining captured images for use in the live view display. As illustrated inFIG.19, a flicker detection icon1901is an icon for displaying the detection of flicker when the flicker is detected by the flicker detection processing described above in the foregoing first embodiment. If flicker detection processing different from the foregoing flicker detection processing can be performed, the flicker detection icon1901may be used to provide a similar display. Alternatively, the camera main body100may be configured to use an icon different from the flicker detection icon1901for the purpose. An example of the different flicker detection processing may be processing for detecting specific flicker (100 Hz or 120 Hz) occurring due to a change in the period of the commercial power source. 
The flicker detection icon1901may be configured to be displayed only when flicker is detected. The flicker detection icon1901may be constantly displayed and the display content may be changed (updated) depending on whether flicker is detected. Moreover, the camera main body100may be configured so that the CPU103controls execution of the flicker detection processing if the flicker detection icon1901is pressed by the user. A flicker reduction menu transition icon1902is an icon for causing the display content of the display unit102to transition to the notification screen described in the first and second embodiments if the user makes a pressing operation (including a touch operation) on the flicker reduction menu transition icon1902. In other words, the camera main body100according to the present embodiment can transition directly to the notification screen during a live view display without the user going through another user interface such as a menu screen. As described above, the camera main body100according to the present embodiment can implement transition to detection of flicker changing in light amount over a wide range of frequency and capturing of images with reduced effect of the flicker even in a state of capturing images of an object, like during a live view display, using the user's simple operation. Such a configuration can facilitate setting a user-desired shutter speed among a plurality of candidates that can reduce the effect of the flicker while reducing the number of user's manual operations related to the flicker detection. The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. In the foregoing first embodiment, the flicker reduction exposure time determination processing performed in a case where the current shutter speed CurTv is set in advance is described. In a fourth embodiment, flicker reduction exposure time determination processing performed in a case where a specific shutter speed (CurTv) is not set by, e.g., the user's manual operations will be described. A configuration of a camera main body100that is an image capturing apparatus according to the present embodiment, a lens unit200, and a light emitting device300, and a basic driving method thereof are similar to those of the foregoing first embodiment. The components will thus be denoted by the same reference numerals, and a description thereof will be omitted. Aside from the foregoing auto mode and manual mode, the settable imaging modes of the camera main body100include priority modes where the user manually sets an exposure control value and the other exposure control values are automatically set. Among examples of settable priority modes of the camera main body100according to the present embodiment is a shutter speed priority mode where the user can manually set the shutter speed. For example, in an automatic exposure control state where the imaging mode of the camera main body100is set to the auto mode, the shutter speed is not freely set by the user. In the flicker reduction exposure time determination processing according to the foregoing first embodiment, determination of the ideal flicker reduction exposure time IdealFlkExpTime in consideration of the current shutter speed CurTv is not particularly needed. 
In the present embodiment, the ideal flicker reduction exposure time IdealFlkExpTime is therefore determined based on a result of determination as to whether the current shutter speed CurTv is a shutter speed CurUserTv manually set by the user. Specifically, the CPU103of the camera main body100according to the present embodiment determines whether CurTv≠CurUserTv. If CurTv≠CurUserTv, the CPU103sets a shutter speed that minimizes the difference from the ideal flicker reduction exposure time IdealFlkExpTime in the shutter speed setting table as the settable flicker reduction shutter speed SetPosFlkTv. If such a configuration is applied to the foregoing flicker reduction exposure time determination processing, processing of step S1203, step S1205, and the subsequent steps is not needed. Here, the ideal flicker reduction exposure time IdealFlkExpTime is set to the exposure time that is the reciprocal of the light amount change frequency of the flicker detected. However, this is not restrictive. For example, as described above in the second embodiment, the camera main body100may be configured to set the settable flicker reduction shutter speed SetPosFlkTv so that a difference from the value obtained by multiplying the ideal flicker reduction exposure time IdealFlkExpTime by an integer N is minimized to increase the effect of reducing the flicker effect. In such a case, the CPU103repeats the comparison between a shutter speed settable based on the shutter speed setting table and the value of an integer multiple of the ideal flicker reduction exposure time IdealFlkExpTime. The CPU103then selects the shutter speed that minimizes the difference as the settable flicker reduction shutter speed SetPosFlkTv. In the foregoing first and second embodiments assuming that CurTv is set in advance, the value of the settable flicker reduction shutter speed SetPosFlkTv is determined by taking into account differences from CurTv. However, this is not restrictive. For example, the camera main body100may compare differences of the reciprocals of the light amount change frequency of the flicker and the integer multiples thereof from a shutter speed corresponding to each index, and set the value that provides the minimum difference as the settable flicker reduction shutter speed SetPosFlkTv. In such a case, a range of light amount change frequencies of flicker that can be reduced by the settable shutter speeds of the camera main body100may be defined, and only the reciprocals of frequencies within the range may be compared. The determination regarding whether CurTv≠CurUserTv in the present embodiment may be made depending on the currently set imaging mode of the camera main body100. As described above, the camera main body100according to the present embodiment can calculate an optimum shutter speed that can effectively reduce the effect of flicker changing in light amount over a wide range of frequencies even if a specific shutter speed is not set by the user. Such a configuration can facilitate setting a shutter speed that can most effectively reduce the effect of the flicker without needing the user's complicated operations regardless of the imaging condition of the camera main body100. The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. 
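The selection described for this embodiment, including the optional comparison against integer multiples of IdealFlkExpTime, can be illustrated with the following sketch. The table contents, the range of multiples, and the helper names are placeholders assumed only for this example, not the CPU103's actual processing.

    # Minimal sketch of SetPosFlkTv selection when CurTv was not set by the user.
    # Exposure times are in seconds; the table and multiple range are assumed.

    def select_without_user_tv(table, ideal_exp_time, max_multiple=4):
        """Compare each settable exposure time against integer multiples of
        IdealFlkExpTime and keep the one with the smallest difference."""
        best_tv, best_err = None, float("inf")
        for n in range(1, max_multiple + 1):
            target = n * ideal_exp_time            # e.g. 2 x (1/540) = 1/270
            for tv in table:
                err = abs(tv - target)
                if err < best_err:
                    best_tv, best_err = tv, err
        return best_tv

    def settable_flicker_tv(table, ideal_exp_time, cur_tv, cur_user_tv):
        # If CurTv != CurUserTv, the shutter speed is under automatic control,
        # so the difference from CurTv does not need to be considered.
        if cur_tv != cur_user_tv:
            return select_without_user_tv(table, ideal_exp_time)
        # Otherwise fall back to the CurTv-based selection of the first
        # embodiment (not reproduced in this sketch).
        raise NotImplementedError("CurTv-based selection of the first embodiment")

    if __name__ == "__main__":
        table = [1.0 / d for d in (5792.6, 546.4, 534.7, 273.2, 270.2, 135.0)]
        print(settable_flicker_tv(table, 1.0 / 540.0, cur_tv=1.0 / 250.5, cur_user_tv=None))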
In the foregoing first embodiment, the flicker reduction processing during imaging of an object in obtaining a still image is described. In a fifth embodiment, flicker reduction processing during imaging of an object in obtaining a moving image will be described. A configuration of a camera main body100that is an image capturing apparatus according to the present embodiment, a lens unit200, and a light emitting device300, and a basic driving method thereof are similar to those of the foregoing first embodiment. The components will thus be denoted by the same reference numerals, and a description thereof will be omitted. In the case of obtaining a moving image, settable shutter speeds are limited by the update cycle of the frames constituting the moving image. In other words, some shutter speeds may not be set depending on the recording frame rate of the moving image. Moreover, some settable shutter speeds are not desirable as a shutter speed in obtaining a moving image. For example, high shutter speed results in a short exposure time in one frame. Since temporal differences between the frames constituting the moving image increase, the motion of the object in the moving image does not look smooth. In the present embodiment, the flicker reduction processing in obtaining a moving image is thus configured so that a longest exposure time settable at the set frame rate of the moving image is determined to be the ideal flicker reduction exposure time IdealFlkExpTime. In some cases, the ideal flicker reduction exposure time IdealFlkExpTime is not the same as a settable flicker reduction shutter speed. If the settable flicker reduction shutter speed SetPosFlkTv selected based on the newly determined ideal flicker reduction exposure time IdealFlkExpTime has a value not settable at the current frame rate of the moving image, the settable flicker reduction shutter speed SetPosFlkTv is adjusted accordingly. Specifically, the settable flicker reduction shutter speed SetPosFlkTv is set to a shutter speed closest to the newly determined ideal flicker reduction exposure time IdealFlkExpTime among the shutter speeds not limited by the frame rate of the moving image. In the present embodiment, the processing related to the comparison with CurTv in the foregoing flicker reduction exposure time determination processing can be omitted. However, the camera main body100may be configured to use the longest of the integer multiples of the ideal flicker reduction exposure time IdealFlkExpTime of which differences from the current shutter speed CurTv fall within a predetermined time as the final ideal flicker reduction exposure time IdealFlkExpTime. As described above, even in capturing images of an object to obtain a moving image, the camera main body100according to the present embodiment can detect flicker changing in light amount over a wide range of frequencies and capture the images with reduced effect of the flicker while preventing a drop in the quality of the moving image. With such a configuration, the camera main body100according to the present embodiment can facilitate setting a shutter speed that can reduce the effect of flicker both in obtaining a still image and in obtaining a moving image without needing the user's additional operation. 
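The frame-rate limitation described above can be illustrated with a short sketch. The readout margin, table values, and function names below are assumptions made only for the illustration and do not reproduce the optional CurTv-based adjustment.

    # Minimal sketch for moving-image capture (assumed names and margins).

    def ideal_exposure_for_movie(frame_rate, readout_margin=0.0005):
        """Use the longest exposure time settable at the recording frame rate as
        IdealFlkExpTime; the margin stands in for sensor readout overhead."""
        return 1.0 / frame_rate - readout_margin

    def clamp_to_frame_rate(table, ideal_exp_time, frame_rate):
        """Among the speeds not limited by the frame rate (the exposure must fit
        within one frame period), choose the one closest to the ideal time."""
        frame_period = 1.0 / frame_rate
        usable = [t for t in table if t <= frame_period]
        return min(usable, key=lambda t: abs(t - ideal_exp_time))

    if __name__ == "__main__":
        table = [1.0 / d for d in (1000.0, 270.2, 135.0, 60.0, 30.0)]
        fps = 60.0
        ideal = ideal_exposure_for_movie(fps)
        print(clamp_to_frame_rate(table, ideal, fps))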
The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. In the foregoing first embodiment, a configuration for setting the ideal flicker reduction exposure time IdealFlkExpTime to reduce a difference from the current shutter speed CurTv is described. In a sixth embodiment, a method for setting an ideal flicker reduction exposure time IdealFlkExpTime that can reduce the effects of camera shakes and object motion will be described. A configuration of a camera main body100that is an image capturing apparatus according to the present embodiment, a lens unit200, and a light emitting device300, and a basic driving method thereof are similar to those of the foregoing first embodiment. The components will thus be denoted by the same reference numerals, and a description thereof will be omitted. In general, the lower the shutter speed (the longer the exposure time), the more likely an image including a blurred object is to be obtained due to the effects of camera shakes and object motion during imaging. In other words, to reduce such motion blurs occurring in an image, the shutter speed is desirably as high as possible. The camera main body100according to the present embodiment determines the ideal flicker reduction exposure time IdealFlkExpTime to be shorter than a predetermined exposure time by the flicker reduction exposure time determination processing according to the foregoing first embodiment. The predetermined exposure time may have any value that can reduce the effect of object motion in the image. In the present embodiment, the predetermined exposure time is 1/125 sec, for example. In the present embodiment, the processing related to the comparison with CurTv in the foregoing flicker reduction exposure time determination processing can be omitted. However, the camera main body100may be configured to determine the ideal flicker reduction exposure time IdealFlkExpTime to be an exposure time that is one of integer multiples of the ideal flicker reduction exposure time IdealFlkExpTime of which the difference from the current shutter speed CurTv falls within a predetermined range and is shorter than the predetermined exposure time. The camera main body100may also be configured to set an ideal flicker reduction exposure time IdealFlkExpTime that reduces the effect of object motion if a blur-reducing condition (such as a specific imaging scene (sport scene)) is set as the imaging condition of the camera main body100. As described above, the camera main body100according to the present embodiment can detect flicker changing in light amount over a wide range of frequencies and capture images with reduced effect of flicker while reducing the effect of object motion in the images. With such a configuration, the camera main body100according to the present embodiment can facilitate setting a shutter speed that can reduce the effect of flicker without needing the user's additional operation even if a specific imaging condition intended to reduce blur is set. The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. 
In a seventh embodiment, flicker reduction processing during light emission imaging using a light emitting device300will be described. A configuration of a camera main body100that is an image capturing apparatus according to the present embodiment, a lens unit200, and the light emitting device300, and a basic driving method thereof are similar to those of the foregoing first embodiment. The components will thus be denoted by the same reference numerals, and a description thereof will be omitted. In the light emission imaging using the light emitting device300, settable flicker reduction shutter speeds are limited by synchronization speed determined based on the timing of exposure of the image sensor101and the timing of light emission from the light emitting device300. In other words, the camera main body100according to the present embodiment sets a settable flicker reduction shutter speed SetPosFlkTv from among candidate shutter speeds lower than the synchronization speed of the light emitting device300. Specifically, the CPU103determines whether to perform the light emission imaging using the light emitting device300. If the light emission imaging is determined to be performed, the CPU103limits shutter speeds selectable in the shutter speed setting table to a range lower than the synchronization speed of the light emitting device300. In the present embodiment, the processing related to the comparison with CurTv in the foregoing flicker reduction exposure time determination processing can be omitted. However, the camera main body100may be configured to use one of integer multiples of the ideal flicker reduction exposure time IdealFlkExpTime of which the difference from the current shutter speed CurTv is the smallest and that is less than the synchronization speed of the light emitting device300as the final ideal flicker reduction exposure time IdealFlkExpTime. As described above, the camera main body100according to the present embodiment can detect flicker changing in light amount over a wide range of frequencies and capture images with reduced effect of flicker while maintaining a state where the object is appropriately illuminated even during the light emission imaging using the light emitting device300. With such a configuration, the camera main body100according to the present embodiment can facilitate setting a shutter speed that can reduce the effect of flicker during the light emission imaging without needing the user's additional operation. The camera main body100according to the present embodiment can thus capture images with reduced effect of flicker over a wide range of light amount change frequencies without needing complicated operations and reduce image unevenness due to flicker regardless of light sources. While the embodiments of the present invention have been described above, the present invention is not limited to such embodiments, and various modifications and changes may be made within the gist thereof. For example, in the foregoing embodiments, digital cameras are described as examples of the image capturing apparatuses in which the present invention is implemented. However, this is not restrictive. For example, image capturing apparatuses other than digital cameras may be employed. Examples include a digital video camera, a portable device such as a smartphone, a wearable terminal, an on-vehicle camera, and a security camera. 
In the foregoing embodiments, configurations that can detect and reduce flicker over a wide range of frequencies regardless of light sources have been described. However, this is not restrictive. For example, a specific light source may be specified in advance, and the image capturing apparatus may be configured to detect flicker in a frequency range where the flicker is likely to occur. For example, like the shutter speed setting table illustrated inFIG.2, table data may be prepared for each light source (or each group of similar light sources). The image capturing apparatus then may be configured to limit the shutter speed to those likely to be set in each piece of table data by referring to the light amount change frequency of the light source. With such a configuration, a shutter speed that can reduce the effect of flicker can be efficiently set based on flicker likely to occur from each light source. This can reduce the data amount of the table data as much as possible while effectively reducing the effect of flicker. In the foregoing embodiments, the operation of the entire image capturing apparatus is controlled by the components of the image capturing apparatus cooperating with each other, with the CPU103playing the central role. However, this is not restrictive. For example, a (computer) program based on the procedures illustrated in the foregoing diagrams may be stored in the ROM of the camera main body100in advance. A microprocessor such as the CPU103may be configured to execute the program to control the operation of the entire image capturing apparatus. The program is not limited to any particular form as long as the functions of the program are provided. Examples include object code, a program to be executed by an interpreter, and script data to be supplied to an operating system (OS). Examples of a recording medium for supplying the program may include magnetic recording media such as a hard disk and a magnetic tape, and optical/magneto-optical recording media. In the foregoing embodiments, digital cameras are described as examples of the image capturing apparatuses in which the present invention is implemented. However, this is not restrictive. For example, various image capturing apparatuses may be employed, including a digital video camera, a portable device such as a smartphone, a wearable terminal, and a security camera. OTHER EMBODIMENTS An embodiment of the present invention can be implemented by supplying a program for implementing one or more functions of the foregoing embodiments to a system or an apparatus via a network or a recording medium, and reading and executing the program by one or more processors in a computer of the system or apparatus. A circuit for implementing the one or more functions (such as an ASIC) may be used for implementation. 
OTHER EMBODIMENTS Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™, a flash memory device, a memory card, and the like. While the present invention has been described with reference to embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but is defined by the scope of the following claims. This application claims the benefit of Japanese Patent Application No. 2021-028809, filed Feb. 25, 2021, which is hereby incorporated by reference herein in its entirety.
98,371
11943545
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention focuses on a circuit that starts to be activated all at once according to a synchronization signal. In the following description of the embodiment of the present invention, an imaging part (imaging block) that receives data from an imaging element will be described, but the same technique is applicable to a display part that is activated in synchronization with a display device such as a television or a liquid crystal panel. That is, the present invention includes the contents of the display part. That is, in the following description, the input data from the imaging element will be used as an example, but the same technique can be applied to the output data to the display device. FIG.1is a block diagram showing an example of the overall configuration of the image processing device according to the embodiment of the present invention. The image processing device includes an image sensor (imaging element)1, an imaging part (imaging block)2, an image processing part3, a display part4, a display device5, a DRAM6, and a data bus7. The imaging part2includes an imaging interface part (imaging IF part)21and an imaging processing part22. The display part4includes a display processing part41and a display interface part (display IF part)42. The imaging part2operates by receiving input data (image signal) from the imaging element1. The imaging IF part21reads out the data (image signal) obtained by the imaging element1and generates an imaging signal. The imaging processing part22processes the image signal from the imaging IF part21and transmits it to the image processing part3via the data bus7. The image processing part3performs image processing A, B, C, or the like on the data (image processed data) from the imaging part2. The data image-processed by the image processing part3is transmitted to the display processing part41via the data bus7, processed, and transmitted to the display device5via the display IF part42. FIG.2is a diagram showing the relationship between the effective area and the control area and the voltage fluctuation in the imaging block of the image processing device according to the embodiment of the present invention. As shown inFIG.2, a two-dimensional image (effective area) is read in the horizontal direction from left to right by raster scanning, which is repeated by moving from top to bottom so that the image is read. When the reading position enters the control area of the control signal of the imaging element, the control circuit normally operates the arithmetic circuit (performs arithmetic processing). The dummy operation period described below is when the reading position enters the horizontal blanking period before and after the control area. As shown in the voltage fluctuation ofFIG.2, a sudden change in the operating power is suppressed during the dummy operation period, whereby the power supply current fluctuation can be suppressed. In the above description, in the imaging block of the image processing device, an example in which the horizontal blanking period before or after the control area of the control signal of the imaging element is the dummy operation period has been described. However, the vertical blanking period before or after the control area of the control signal of the imaging element may be the dummy operation period.
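For illustration only, the decision of when to run the dummy operation relative to the reading position can be sketched as follows. The position ranges and mode names are placeholders chosen for this sketch and do not correspond to actual timing settings of the device.

    # Minimal sketch: choose normal or dummy operation from the read position.
    # Position ranges are arbitrary illustration values, not actual settings.

    CONTROL_AREA   = range(100, 1180)   # positions in which the arithmetic circuit runs
    DUMMY_LEAD_IN  = range(90, 100)     # blanking positions just before the control area
    DUMMY_LEAD_OUT = range(1180, 1190)  # blanking positions just after the control area

    def operation_mode(read_position):
        """Return 'normal' inside the control area, 'dummy' in the blanking
        period immediately before or after it, and 'idle' elsewhere."""
        if read_position in CONTROL_AREA:
            return "normal"
        if read_position in DUMMY_LEAD_IN or read_position in DUMMY_LEAD_OUT:
            return "dummy"
        return "idle"

    if __name__ == "__main__":
        for pos in (50, 95, 500, 1185, 1500):
            print(pos, operation_mode(pos))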
Further, the present invention can be applied to other than the imaging block of the image processing device, in which case, the circuit block may be any circuit block in which the operation period is predetermined and the operation is intermittently performed according to the operation period. FIG.3is a diagram showing a functional block in an imaging block (giant circuit block) of the image processing device according to the embodiment of the present invention. As shown inFIG.3, the imaging block (imaging part) is configured by combining various functional blocks divided for each function. In the example ofFIG.3, functional blocks A to H (2A to2H) are present. The image signal from the imaging element1is processed by each functional block in the imaging block and finally transmitted to the DRAM6. There are many SRAMs in each functional block, but not all SRAMs are running (used). There are multiple functional blocks in the huge circuit block, but unused SRAM (non-functional SRAM or non-working SRAM because data has not arrived yet) exists in these functional blocks. That is, in the huge circuit block, there are functional blocks separated for each function, and when a certain function (functional block) is not used, the SRAM in the functional block is unused and can be freely used. Therefore, these unused SRAMs are used to control power consumption and realize gradual fluctuation of operating power. FIG.4is a diagram for explaining the control of SRAM in each functional block in the imaging block of the image processing device according to the embodiment of the present invention. As shown inFIG.4, the SRAM23is controlled by the normal control circuit24or the dummy control circuit25. The normal control circuit24causes the SRAM23to operate normally. The dummy control circuit25causes the SRAM23to perform a dummy operation. As described above, as a control circuit for operating the SRAM23, a dummy control circuit25is provided separately from the normal control circuit24. As the control signal of the SRAM23, a general one can be used. Specifically, the control signal includes each control signal of the address ADR, the data WD to be written, the write enable NWD, and the clock CK. The mode switching part26switches the operation mode (normal operation, dummy operation) of the SRAM23by selecting the normal control circuit24and the dummy control circuit25. That is, the mode switching part26can change the timing of dummy operation of the SRAM23. At the start of operation, the control of the SRAM23is switched from the dummy control circuit25to the normal control circuit24, and the operation is started slowly. At the time of stop, the control of the SRAM23is switched from the normal control circuit24to the dummy control circuit25, and the operation is stopped gently. The strength switching part27switches the strength of the dummy operation of the SRAM23, that is, whether the dummy operation is strengthened or weakened. The strength of the dummy operation corresponds to the magnitude of power consumption and the strength of rising (gradient) in the SRAM23. To change the intensity of the dummy operation, the number of SRAMs to be dummy-operated or the control signal of the SRAM is changed. The control signal is a signal that controls one or more of an address, data, an enable signal, a clock, and the like. That is, the amount of power consumed by the SRAM is adjusted (switched) by the address, data, enable signal, clock, and the like.
By switching the strength of the dummy operation (power consumption of SRAM), the fluctuation of the power supply voltage is adjusted so as to draw a gentle curve. For example, the strength of the dummy operation can be changed by changing the number of SRAMs to be operated as a dummy. Also, by changing the frequency of access to (one or more) SRAM(s), the operating speed of the SRAM can be changed and the intensity of the dummy operation can be changed. Specifically, the power consumption (power) of the SRAM can be changed by inputting the address signal of the SRAM and the data to be written (for example, the toggled data) one after another, or by changing the cycle of the enable signal. By changing the address signal of the SRAM, the address of the SRAM to be read or written can be changed, and the strength of the dummy operation can be changed. FIG.5is a timing chart showing the relationship between each control signal, dummy operation, and voltage fluctuation in the imaging block of the image processing device according to the embodiment of the present invention. When the ENABLE signal is turned on, it operates normally, and when it is turned off, it stops. A clock (CK (main)) for normal operation is input in accordance with this. The dummy ENABLE signal (Dummy_En) is turned on before and after the normal operation period to perform a dummy operation. Further, a clock for dummy operation (CK (sram)) is prepared in a system different from the clock for normal operation (CK (main)). "Strength" inFIG.5indicates the strength of the dummy operation. When the dummy operation is turned on before the normal operation period, the strength of the dummy operation is gradually increased by changing the number of SRAMs that perform dummy operations and the number of toggles that are input to the SRAMs. Then, when the normal operation period is entered, the strength of the dummy operation is reduced. Similarly, when the dummy operation is turned on after the normal operation period, the strength of the dummy operation is gradually weakened by changing the number of SRAMs that perform dummy operations and the number of toggles that are input to the SRAMs. "Ivdd" inFIG.5indicates a transient current. Before the normal operation period, the transient current rises as the dummy operation turns ON, and the normal state is reached. Then, after the normal operation period, the transient current drops from the normal state as the dummy operation turns ON. By gradually changing the transient current before and after the normal operation period in this way, it is possible to suppress a sudden fluctuation of VDD (power supply voltage fluctuation). As described above, in the present invention, in the intermittent operation, the unused SRAM is subjected to a dummy operation, and the power consumption is gradually changed to stop or start. This makes it possible to reduce power consumption during intermittent operation. That is, in order to suppress power fluctuations by using an unused SRAM, a place different from the normally used main path is intentionally activated. Since SRAM is used, control and output masking are easy. In addition, the area increase can be suppressed by using SRAM-Bist (built-in test circuit). Although the input data from the imaging element to the imaging part in the image processing device has been described above as an example, the same technique can be applied to the output data from the display part to the display device.
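Before turning to the display side, the strength schedule described above for FIG.5 can be illustrated with a short sketch. The cycle counts and SRAM counts are arbitrary values chosen only for this example.

    # Minimal sketch of the dummy-operation strength around a normal operation
    # period (cycle counts and SRAM counts are illustrative values only).

    def dummy_strength_schedule(ramp_cycles=8, normal_cycles=16, max_srams=8):
        """Return the number of SRAMs dummy-operated in each cycle.

        The strength is raised step by step before the normal operation period,
        dropped while the main path runs (the main path then draws the current),
        and lowered step by step again after the normal operation period, so the
        supply current never changes abruptly.
        """
        ramp_up = [round(max_srams * (i + 1) / ramp_cycles) for i in range(ramp_cycles)]
        normal = [0] * normal_cycles          # dummy operation off; main path active
        ramp_down = list(reversed(ramp_up))   # full strength, then gradually weakened
        return ramp_up + normal + ramp_down

    if __name__ == "__main__":
        print(dummy_strength_schedule())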
In this case, the above-described embodiment is applied to the huge arithmetic circuit in the display processing part41to perform a dummy operation. The image processing device according to the embodiment of the present invention includes a circuit block having a plurality of circuits, an SRAM provided in the circuit, and a dummy control circuit. The operation period of the circuit block is predetermined, and the circuit performs intermittent operation according to (in synchronization with) the operation period. The dummy control circuit dummy-operates the unused SRAM before or after the operation period of the circuit block. Here, the circuit block may refer to the entire imaging block, or may refer to one or more functional blocks within the imaging block. The SRAM to be operated as a dummy is not limited to the SRAM in the same circuit block, and may be an SRAM in the same chip using the same power supply. In the present invention, an unused SRAM may be subjected to an intentional dummy operation before or after the intermittent operation (at the time of starting or stopping the operation). In addition, the mode switching part can change the timing at which the SRAM is operated as a dummy. Further, the intensity switching part can switch the intensity of the dummy operation and adjust the magnitude of the power consumption of the SRAM by changing the number of SRAMs operated by the dummy or the control signal of the SRAM. The present invention is not limited to the image processing device and the display device in the imaging device, and can be applied to any device having a circuit that operates in synchronization with the synchronization signal. Although one embodiment of the present invention has been described above, the technical scope of the present invention is not limited to the above-described embodiment. The combination of components can be changed, various changes can be made to each component, and the components can be deleted without departing from the scope of the present invention. Each component is for explaining the function and processing related to each component. One configuration (circuit) may simultaneously realize functions and processes related to a plurality of components. Each component, individually or as a whole, may be implemented in a computer consisting of one or more processors, logic circuits, memory, input/output interfaces, computer-readable recording media, and the like. In that case, the various functions and processes described above may be realized by recording a program for realizing each component or the entire function on a recording medium, loading the recorded program into a computer system, and executing the program. In this case, for example, the processor is at least one of a CPU, a DSP (Digital Signal Processor), and a GPU (Graphics Processing Unit). For example, the logic circuit is at least one of ASIC (Application Specific Integrated Circuit) and FPGA (Field-Programmable Gate Array). Further, the "computer system" referred to here may include hardware such as an OS and peripheral devices. Further, the "computer system" includes a homepage providing environment (or a display environment) if a WWW system is used. The "computer-readable recording medium" refers to a writable non-volatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory, a portable medium such as a CD-ROM, and a storage device such as a hard disk built in a computer system.
Further, the “computer-readable recording medium” also includes those that hold the program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) inside a computer system that serves as a server or client when a program is transmitted via a network such as the Internet or a communication line such as a telephone line. Further, the program may be transmitted from a computer system in which this program is stored in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium. Here, the “transmission medium” for transmitting a program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line. Further, the above program may be for realizing a part of the above-mentioned functions. Further, it may be a so-called difference file (difference program) that realizes the above-mentioned function in combination with a program already recorded in the computer system. In the present specification, terms indicating directions such as “front, back, top, bottom, right, left, vertical, horizontal, vertical, horizontal, row and column” are used to describe these directions in the device of the present invention. Therefore, these terms used to describe the specification of the present invention should be interpreted relative to each other in the device of the present invention. The present invention can be widely applied to any device having a circuit that operates in synchronization with a synchronization signal, and the power supply fluctuation can be moderated by operating the arithmetic circuit in a dummy manner (dummy operation) before and after the normal operation period.
15,775
11943546
Various features of the technologies described herein will become more apparent to those skilled in the art from a study of the Detailed Description in conjunction with the drawings. Embodiments are illustrated by way of example and not limitation in the drawings, in which like references may indicate similar elements. While the drawings depict various embodiments for the purpose of illustration, those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technologies. Accordingly, while specific embodiments are shown in the drawings, the technologies are amenable to various modifications. DETAILED DESCRIPTION Not only is manually correcting the white balance during post-processing a time consuming process, but the results tend to be inconsistent—even when performed by professionals. Some professionals tend to favor warmer digital images, while other professionals tend to favor cooler digital images. To address these issues, modern digital cameras have been designed to automatically perform white balancing. For instance, a digital camera may have an automatic white balance (AWB) setting that, when selected, causes the scene to be assessed by algorithms designed to identify the brightest part as the white point and then attempt to balance the color of digital images based on that reference. In color-critical situations, a neutral reference may be introduced to the scene to ensure that the AWB algorithms do not encounter problems. For example, professional photographers/videographers may carry portable references that can be easily added to scenes. One example of a portable reference is a flat surface of a neutral gray color that derives from a flat reflectance spectrum. Another example of a portable reference is a flat surface with a series of different colors corresponding to different reflectance spectrums. The former is normally referred to as a "gray card," while the latter is normally referred to as a "color checker." Regardless of its form, a portable reference will introduce a known reflectance spectrum into a scene that can be used by the algorithms as a reference for automatic white balancing. When a scene does not include a known reflectance spectrum, the AWB algorithms can (and often do) produce digital images that are visibly incorrect in terms of color. For example, if the scene is predominantly red, the AWB algorithms may mistake this for a color cast induced by a warm illuminant and then try to compensate by making the average color closer to neutral. But this introduces a bluish color cast that may be quite noticeable.FIGS.10-16include some examples of images in which the AWB algorithms failed to properly scale the color values. Introduced here, therefore, are computer programs and associated computer-implemented techniques for achieving high-fidelity color reproduction without portable references. To accomplish this, a new reference spectrum—the "reference illuminant spectrum"—is introduced into scenes to be imaged by image sensors. The reference illuminant spectrum is created by an illuminant whose spectral properties are known. As further discussed below, a single reference illuminant may be inadequate to properly render the colors in a scene. This is because, for any single illuminant spectrum, there are instances of color metamerism where pixels corresponding to objects with different reflectance properties have the same color values as measured by the image sensor.
That is, these pixels will appear the same to the image sensor despite not actually being the same color. Accordingly, multiple reference illuminant spectrums may be differentially introduced to mitigate the impact of color metamerism. Embodiments may be described with reference to particular electronic devices, light sources, or image sensors. For example, the technology may be described in the context of mobile phones that include multi-channel light source with LEDs of several different colors and a multi-channel image sensor having red, green, and blue channels. However, those skilled in the art will recognize that these features are equally applicable to other types of electronic devices, light sources, and image sensors. For instance, the same features may be applied by multi-channel light sources configured to produce non-visible light (e.g., ultraviolet light and/or infrared light) instead of, or in addition to, visible light. Accordingly, while embodiments may be described in the context of light sources with multiple “color channels,” the features may be equally applicable to non-color channels (i.e., channels having one or more illuminants that produce non-visible light). Embodiments may also be described with reference to “flash events.” Generally, flash events are performed by an electronic device to flood a scene with visible light for a short interval of time while a digital image of the scene is captured. However, the features described herein are similarly applicable to other illumination events. For example, an electronic device could strobe through the color channels of a multi-channel light source, determine an effect of each color channel, and then simultaneously drive at least some of the color channels to produce visible light that floods the scene for an extended duration. Accordingly, while embodiments may be described in the context of capturing and then processing digital images, those skilled in the art will recognize that the features are equally applicable to capturing and then processing a series of digital images that represent the frames of a video. The technology can be embodied using special-purpose hardware (e.g., circuitry), programmable circuitry appropriately programmed with software and/or firmware, or a combination of special-purpose hardware and programmable circuitry. Accordingly, embodiments may include a machine-readable medium having instructions that, when executed, cause an electronic device to introduce a series of illuminant spectrums into a scene, capture a series of images in conjunction with the series of illuminant spectrums, and then establish spectral information on a per-pixel basis based on an analysis of the series of images. Terminology References in this description to “an embodiment” or “one embodiment” means that the feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another. Unless the context clearly requires otherwise, the words “comprise” and “comprising” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”). The terms “connected,” “coupled,” or any variant thereof is intended to include any connection or coupling between two or more elements, either direct or indirect. 
The connection/coupling can be physical, logical, or a combination thereof. For example, components may be electrically or communicatively coupled to one another despite not sharing a physical connection. The term “based on” is also to be construed in an inclusive sense rather than an exclusive or exhaustive sense. Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.” The term “module” refers broadly to software components, firmware components, and/or hardware components. Modules are typically functional components that can generate useful data or other output(s) based on specified input(s). A module may be self-contained. A computer program may include one or more modules. For instance, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks. When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list. The sequences of steps performed in any of the processes described herein are exemplary. However, unless contrary to physical possibility, the steps may be performed in various sequences and combinations. For example, steps could be added to, or removed from, the processes described herein. Similarly, steps could be replaced or reordered. Thus, descriptions of any processes are intended to be open-ended. Overview of Light Source FIG.1Adepicts a top view of a multi-channel light source100that includes multiple color channels able to produce different colors. Each color channel can include one or more illuminants102designed to produce light of a substantially similar color. For example, the multi-channel light source100may include a single illuminant configured to produce a first color, multiple illuminants configured to produce a second color, etc. Note that, for the purpose of simplification, a color channel may be said to have “an illuminant” regardless of how many separate illuminants the color channel includes. One example of an illuminant is an LED. An LED is a two-lead illuminant that is generally comprised of an inorganic semiconductor material. While embodiments may be described in the context of LEDs, the technology is equally applicable to other types of illuminant. Table I includes several examples of available colors of LEDs, as well as the corresponding wavelength range and representative materials. 
TABLE I. Range (in nanometers) in which the dominant wavelength resides and representative materials for available colors of LEDs.
Infrared (λ > 760): Gallium arsenide; and Aluminum gallium arsenide.
Red (610 < λ < 760): Aluminum gallium arsenide; Gallium arsenide phosphide; Aluminum gallium indium phosphide; and Gallium(III) phosphide.
Orange (590 < λ < 610): Gallium arsenide phosphide; Aluminum gallium indium phosphide; and Gallium(III) phosphide.
Yellow (570 < λ < 590): Gallium arsenide phosphide; Aluminum gallium indium phosphide; and Gallium(III) phosphide.
Green (500 < λ < 570): Aluminum gallium phosphide; Aluminum gallium indium phosphide; Gallium(III) phosphide; Indium gallium nitride; and Gallium(III) nitride.
Blue (450 < λ < 500): Zinc selenide; and Indium gallium nitride.
Violet (400 < λ < 450): Indium gallium nitride.
Ultraviolet (λ < 400): Indium gallium nitride; Diamond; Boron nitride; Aluminum nitride; Aluminum gallium nitride; and Aluminum gallium indium nitride.

Other colors not shown in Table I may also be incorporated into the light source100. Examples of such colors include cyan (490<λ<515), lime (560<λ<575), amber (580<λ<590), and indigo (425<λ<450). Those skilled in the art will recognize that these wavelength ranges are simply included for the purpose of illustration. As noted above, a multi-channel light source100may include multiple color channels able to produce different colors. For example, the light source100may include three separate color channels configured to produce blue light, green light, and red light. Such light sources may be referred to as "RGB light sources." As another example, the light source100may include four separate color channels configured to produce blue light, green light, red light, and either amber light or white light. Such light sources may be referred to as "RGBA light sources" or "RGBW light sources." As another example, the light source100may include five separate color channels configured to produce blue light, cyan light, lime light, amber light, and red light. As another example, the light source100may include seven separate color channels configured to produce blue light, cyan light, green light, amber light, red light, violet light, and white light. Thus, the light source100could include three channels, four channels, five channels, seven channels, etc. While three- and four-channel light sources improve upon conventional flash technologies, they may have a lumpy spectral distribution or narrow range of high fidelity. Consequently, the multi-channel light source100will often include at least five different color channels. As the number of color channels increases, the light quality, CCT range, quality over range, and spectral sampling will also generally increase. For example, a five-channel light source having properly selected illuminants can be designed to deliver full-spectrum white light over a broad CCT range (e.g., from 1650K to over 10000K) at ΔuV of ±0.002. Moreover, by employing the five-channel light source, the spectral distribution can be sampled in a substantially continuous (i.e., non-lumpy) manner. Due to their low heat production, LEDs can be located close together. Accordingly, if the illuminants102of the multi-channel light source are LEDs, then the light source100may include an array comprised of multiple dies placed arbitrarily close together. Note, however, that the placement may be limited by "whitewall" space between adjacent dies.
The whitewall space is generally on the order of approximately 0.1 millimeters (mm), though it may be limited (e.g., to no more than 0.2 mm) based on the desired diameter of the light source100as a whole. InFIG.2, for example, the array includes eight dies associated with five different color channels. Such an array may be sized to fit within similar dimensions as conventional flash technology. The array may also be based on standard production dies requiring, for example, a 2-1-1-0.5-0.5 area ratio of lime-amber-cyan-red-blue. The array may be driven by one or more linear field-effect transistor-based (FET-based) current-regulated drivers110. In some embodiments, each color channel is driven by a corresponding driver. These drivers110may be affixed to, or embedded within, a substrate104arranged beneath the illuminants102. By independently driving each color channel, the multi-channel light source100can produce white light at different CCTs. For example, the multi-channel light source100may emit a flash of light that illuminates a scene in conjunction with the capture of an image by an electronic device. Examples of electronic devices include mobile phones, tablet computers, digital cameras (e.g., single-lens reflex (SLR) cameras, digital SLR (DSLR) cameras, and light-field cameras, which may also be referred to as "plenoptic cameras"), etc. Light produced by the multi-channel light source100can improve the quality of images taken in the context of consumer photography, prosumer photography, professional photography, etc. Controlling the multi-channel light source in such a manner enables better precision/accuracy of spectral control across various operating states in comparison to traditional lighting technologies. All traditional lighting technologies are designed to emit light in a desired segment of the electromagnetic spectrum. However, the light (and thus the segment of the electromagnetic spectrum) will vary based on factors such as temperature, age, and brightness. Unlike traditional lighting technologies, the multi-channel light source can be handled such that the output of each channel is known at all times. Using this information, a controller112can compensate for the above-mentioned factors by (i) adjusting the current provided to each channel and/or (ii) adjusting the ratios of the channels to compensate for spectral shifts and maintain the desired segment of the electromagnetic spectrum. One example of a controller112is a central processing unit (also referred to as a "processor"). In some embodiments, the multi-channel light source100is able to produce colored light by separately driving the appropriate color channel(s). For example, a controller112may cause the multi-channel light source100to produce a colored light by driving a single color channel (e.g., a red color channel to produce red light) or multiple color channels (e.g., a red color channel and an amber color channel to produce orange light). As noted above, the controller112may also cause the multi-channel light source100to produce white light having a desired CCT by simultaneously driving each color channel. In particular, the controller112may determine, based on a color mixing model, operating parameters required to achieve the desired CCT. The operating parameters may specify, for example, the driving current to be provided to each color channel. By varying the operating parameters, the controller can tune the CCT of the white light as necessary.
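How a color mixing model might map a target white point to per-channel drive levels can be sketched as follows, using NumPy for the least-squares fit. The per-channel tristimulus values and the target are invented calibration numbers used purely for illustration, and a real controller would also fold in the temperature and aging compensation mentioned above.

    # Minimal sketch of a color mixing model (illustrative numbers only).
    import numpy as np

    # Columns: XYZ contribution of each channel at full drive
    # (blue, cyan, lime, amber, red) -- made-up calibration values.
    CHANNEL_XYZ = np.array([
        [0.14, 0.20, 0.35, 0.45, 0.40],   # X
        [0.06, 0.25, 0.60, 0.35, 0.18],   # Y
        [0.70, 0.45, 0.05, 0.01, 0.00],   # Z
    ])

    def channel_weights_for_target(target_xyz):
        """Solve for relative drive levels (clipped to 0..1) that best reproduce
        the target XYZ with a least-squares fit to the mixing model."""
        w, *_ = np.linalg.lstsq(CHANNEL_XYZ, np.asarray(target_xyz, dtype=float),
                                rcond=None)
        return np.clip(w, 0.0, 1.0)

    if __name__ == "__main__":
        # Target roughly corresponding to a warm white point (illustrative).
        print(channel_weights_for_target([1.0, 1.0, 0.6]))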
Although the illuminants102are illustrated as an array of LEDs positioned on a substrate104, other arrangements are also possible. In some cases, a different arrangement may be preferred due to thermal constraints, size constraints, color mixing constraints, etc. For example, the multi-channel light source100may include a circular arrangement, grid arrangement, or cluster arrangement of LEDs. FIG.1Bdepicts a side view of the multi-channel light source100illustrating how, in some embodiments, the illuminants102reside within a housing. The housing can include a base plate106that surrounds the illuminants102and/or a protective surface108that covers the illuminants102. While the protective surface108shown here is in the form of a dome, those skilled in the art will recognize that other designs are possible. For example, the protective surface108may instead be arranged in parallel relation to the substrate104. Moreover, the protective surface108may be designed such that, when the multi-channel light source100is secured within an electronic device, the upper surface of the protective surface108is substantially co-planar with the exterior surface of the electronic device. The protective substrate108can be comprised of a material that is substantially transparent, such as glass, plastic, etc. The substrate104can be comprised of any material able to suitably dissipate heat generated by the illuminants102. A non-metal substrate, such as one comprised of woven fiberglass cloth with an epoxy resin binder (e.g., FR4), may be used rather than a metal substrate. For example, a substrate104composed of FR4 may more efficiently dissipate the heat generated by multiple color channels without experiencing the retention issues typically encountered by metal substrates. Note, however, that some non-metal substrates cannot be used in combination with high-power illuminants that are commonly used for photography and videography, so the substrate104may be comprised of metal, ceramic, etc. The processing components necessary for operating the illuminants102may be physically decoupled from the light source100. For example, the processing components may be connected to the illuminants102via conductive wires running through the substrate104. Examples of processing components include drivers110, controllers112, power sources114(e.g., batteries), etc. Consequently, the processing components need not be located within the light source100. Instead, the processing components may be located elsewhere within the electronic device in which the light source100is installed. As further discussed below, the multi-channel light source100is designed to operate in conjunction with an image sensor. Accordingly, the multi-channel light source100could be configured to emit light responsive to determining that an image sensor has received an instruction to capture an image of a scene. The instruction may be created responsive to receiving input indicative of a request that the image be captured. As shown inFIG.1C, an image sensor (here, a camera152) may be housed within the same electronic device as a multi-channel light source. The request may be provided in the form of tactile input along the surface of a touch-sensitive display or a mechanical button accessible along the exterior of the electronic device. 
In some embodiments, the multi-channel light source is designed such that it can be readily installed within the housing of an electronic device.FIG.1Cdepicts an electronic device150that includes a rear-facing camera152and a multi-channel light source154configured to illuminate the ambient environment. The multi-channel light source154may be, for example, the multi-channel light source100of FIGS.1A-B. The rear-facing camera152is one example of an image sensor that may be configured to capture images in conjunction with light produced by the light source100. Here, the electronic device150is a mobile phone. However, those skilled in the art will recognize that the technology described herein could be readily adapted for other types of electronic devices, such as tablet computers and digital cameras. The camera152is typically one of multiple image sensors included in the electronic device150. For example, the electronic device100may include a front-facing camera that allows an individual to capture still images or video while looking at the display. The rear- and front-facing cameras can be, and often are, different types of image sensors that are intended for different uses. For example, the image sensors may be capable of capturing images having different resolutions. As another example, the image sensors could be paired with different light sources (e.g., the rear-facing camera may be associated with a stronger flash than the front-facing camera, or the rear-facing camera may be disposed in proximity to a multi-channel light source while the front-facing camera is disposed in proximity to a single-channel light source). FIG.2depicts an example of an array200of illuminants202. If the illuminants202are LEDs, the array200may be produced using standard dies (also referred to as “chips”). A die is a small block of semiconducting material on which the diode located. Typically, diodes corresponding to a given color are produced in large batches on a single wafer (e.g., comprised of electronic-grade silicon, gallium arsenide, etc.), and the wafer is then cut (“diced”) into many pieces, each of which includes a single diode. Each of these pieces may be referred to as a “die.” As shown inFIG.2, the array200includes multiple color channels configured to produce light of different colors. Here, for example, the array200includes five color channels—blue, cyan, lime, amber, and red. Each color channel can include one or more illuminants. Here, for example, three color channels (i.e., blue, lime, and red) include multiple illuminants, while two color channels (i.e., cyan and amber) include a single illuminant. The number of illuminants in each color channel, as well as the arrangement of these illuminants within the array200, may vary based on the desired output characteristics, such as maximum CCT, minimum CCT, maximum temperature, etc. The array200is generally capable of producing light greater than 1,000 lumens, though some embodiments are designed to produce light less than 1,000 lumens (e.g., 700-800 lumens during a flash event). In some embodiments, the illuminants202are positioned in the array200in a highly symmetrical pattern to improve spatial color uniformity. For example, when the array200is designed to produce white light through simultaneous driving of the multiple color channels, the illuminants corresponding to those color channels may be arranged symmetrically to facilitate mixing of the colored light. 
The array200may be designed such that it can be installed within the housing of an electronic device (e.g., electronic device150ofFIG.10) in addition to, or instead of, a conventional flash component. For example, some arrays designed for installation within mobile phones are less than 4 mm in diameter, while other arrays designed for installation within mobile phones are less than 3 mm in diameter. The array200may also be less than 1 mm in height. In some embodiments, the total estimated area necessary for the array may be less than 3 mm2prior to installation and less than 6 mm2after installation. Such a design enables the array200to be positioned within a mobile phone without requiring significant repositioning of components within the mobile phone. One advantage of a compact array of dies is that it can achieve good color mixing and adequate field of view (FOV) without the use of a collimator, diffuser, or lens. However, a collimator204(also referred to as a “mixing pipe”) designed to ensure proper spatial color uniformity of light produced by the illuminants202could be placed around the array200. At a high level, the collimator204may promote more uniform color mixing and better control of the FOV of light emitted by the illuminants202. The collimator204can be comprised of an inflexible material (e.g., glass) or a flexible material (e.g., silicone). The collimator204may be in the form of a tubular body. In some embodiments the egress aperture of the tubular body is narrower than the array (e.g., the egress aperture may have a diameter of 2.5 mm, 3 mm, or 3.5 mm), while in other embodiments the egress aperture of the tubular body is wider than the array (e.g., the egress aperture may have a diameter of 4.5 mm, 5 mm, or 5.5 mm). Thus, the tubular body may have a sloped inner surface that either focuses or disperses light produced by the illuminants202. The array200may be used instead of, or in addition to, conventional flash technologies that are configured to generate a flash in conjunction with the capture of an image. Thus, an electronic device (e.g., electronic device150ofFIG.10) may include a single-channel light source and/or a multi-channel light source. While embodiments may be described in terms of LEDs, those skilled in the art will recognize that other types of illuminants could be used instead of, or in addition to, LEDs. For example, embodiments of the technology may employ lasers, quantum dots (“QDs”), organic LEDs (“OLEDs”), resonant-cavity LEDs (“RCLEDs”), vertical-cavity surface-emitting lasers (“VCSELs”), superluminescent diodes (“SLDs” or “SLEDs”), blue “pump” LEDs under phosphor layers, up-conversion phosphors (e.g., microscopic ceramic particles that provide a response when excited by infrared radiation), nitride phosphors (e.g., CaAlSiN, SrSiN, KSiF), down-conversion phosphors (e.g., KSF:Mn4+, LiAlN), rubidium zinc phosphate, yttrium-aluminum-garnet (YAG) phosphors, lutetium-aluminum-garnet (LAG) phosphors, SiAlON phosphors, SiON phosphors, or any combination thereof. For example, the array200may include phosphor-converted colors such as a lime color that is created by a YAG phosphor coating on a blue LED. In such an embodiment, the highly efficient blue LED pumps the YAG phosphor coating with photons that are nearly entirely absorbed and then reemitted in the broader yellow-green band. This could also be done to create other colors such as red, amber, green, cyan, etc. 
As another example, multiple VCSELs and/or multiple QDs could be arranged in a given pattern on a substrate such that, when the substrate is installed within the housing of an electronic device, the VCSELs and/or QDs emit electromagnetic radiation outward.

The type of illuminants used to illuminate a scene may impact the schedule of illumination events. Said another way, some illuminants may need to be accommodated from a timing perspective. For example, phosphor-based illuminants generally exhibit delayed excitation and delayed de-excitation, so phosphor-based illuminants may be activated (e.g., strobed) in an early-on, early-off manner to avoid overlaps (i.e., where a first phosphor-based illuminant is still emitting some light when a second phosphor-based illuminant is activated).

Overview of Image Sensor

An image sensor is a sensor that detects information that constitutes an image. Generally, an image sensor accomplishes this by converting the variable attenuation of light waves (e.g., as they pass through or reflect off objects) into electrical signals, which represent small bursts of current that convey the information. Examples of image sensors include semiconductor charge-coupled devices (CCDs) and complementary metal-oxide-semiconductor (CMOS) sensors. Both types of image sensor accomplish the same task, namely, converting captured light into electrical signals. However, because CMOS sensors are generally cheaper, smaller, and less power-hungry than CCDs, many electronic devices use CMOS sensors for image capture.

Image sensors can also differ in their separation mechanism. One of the most common separation mechanisms is a filter array that passes light of different colors to selected pixel sensors. For example, each individual sensor element may be made sensitive to either red light, green light, or blue light by means of a color gel made of chemical dye. Because image sensors separate incoming light based on color, they may be said to have multiple sensor channels or multiple color channels. Thus, an image sensor that includes multiple sensor channels corresponding to different colors may be referred to as a "multi-channel image sensor."

FIG. 3 depicts an example of a separation mechanism 302 arranged over an image sensor 304. Here, the separation mechanism 302 is a Bayer filter array that includes three different types of color filters designed to separate incoming light into red light, green light, or blue light on a per-pixel basis. The image sensor 304, meanwhile, may be a CMOS sensor. Rather than being captured on photochemical film, images are recorded as electronic signals generated by the image sensor 304 and stored in a memory for subsequent analysis. After a recording function is initiated (e.g., responsive to receiving input indicative of a request to capture an image), a lens focuses light through the separation mechanism 302 onto the image sensor 304.

As shown in FIG. 3, the image sensor 304 may be arranged in a grid pattern of separate imaging elements. Generally, the image sensor 304 determines the intensity of incoming light rather than the color of the incoming light. Color is instead determined through the use of the separation mechanism 302, which only allows a single color of light into each imaging element.
For example, a Bayer filter array includes three different types of color filters that can be used to separate incoming light into three different colors (i.e., red, green, and blue), and then average these different colors within a two-by-two arrangement of imaging elements. Each pixel in a given image may be associated with such an arrangement of imaging elements. Thus, each pixel could be assigned separate values for red light, green light, and blue light. Another method of color identification employs separate image sensors that are each dedicated to capturing part of the image (e.g., a single color), and the results can be combined to generate the full color image.

Overview of Characterization Module

FIG. 4 depicts an example of a communication environment 400 that includes a characterization module 402 programmed to improve the fidelity of colors in images. The term "module" refers broadly to software components, firmware components, and/or hardware components. Accordingly, aspects of the processes described below could be implemented in software, firmware, and/or hardware. For example, these processes could be executed by a software program (e.g., a mobile application) executing on the electronic device (e.g., the mobile phone) that includes a multi-channel image sensor and a multi-channel light source, or these processes could be executed by an integrated circuit that is part of the multi-channel image sensor.

As shown in FIG. 4, the characterization module 402 may obtain data from different sources. Here, for example, the characterization module 402 obtains first data 404 generated by a multi-channel image sensor 408 (e.g., camera 152 of FIG. 1C) and second data 406 generated by a multi-channel light source 410 (e.g., light source 154 of FIG. 1C). The first data 404 can specify, on a per-pixel basis, an appropriate value for each sensor channel. For example, if the multi-channel image sensor 408 includes three sensor channels (e.g., red, green, and blue), then each pixel will be associated with at least three distinct values (e.g., a red value, a green value, and a blue value). The second data 406 can specify characteristics of each channel of the multi-channel light source 410. For example, the second data 406 may specify the driving current for each color channel during an illumination event (also referred to as a "lighting event"), the dominant wavelength of each color channel, the illuminance profile of each color channel, etc.

In some embodiments, the multi-channel image sensor 408 and the multi-channel light source 410 are housed within the same electronic device. In other embodiments, the multi-channel image sensor 408 and the multi-channel light source 410 reside within separate housings. For example, in the context of professional photography or videography, multiple multi-channel image sensors and multiple multi-channel light sources may be positioned in various arrangements to capture/illuminate different parts of a scene.

FIG. 5 illustrates a network environment 500 that includes a characterization module 502. Individuals can interface with the characterization module 502 via an interface 504. As further discussed below, the characterization module 502 may be responsible for improving the fidelity of colors in images generated by a multi-channel image sensor. The characterization module 502 may also be responsible for creating and/or supporting the interfaces through which an individual can view the improved images, initiate post-processing operations, manage preferences, etc.
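To make the two inputs of FIG. 4 concrete, the sketch below shows one way the first data (per-pixel sensor-channel values) and the second data (per-channel characteristics of the light source) might be represented. The class and field names are hypothetical and are not taken from the disclosure.

```python
# Minimal sketch of the two inputs consumed by a characterization module:
# per-pixel sensor values from the multi-channel image sensor, and per-channel
# characteristics of the multi-channel light source. Names are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorFrame:                 # "first data": one value per pixel per sensor channel
    pixel_values: np.ndarray       # shape (height, width, num_sensor_channels), e.g. RGB

@dataclass
class SourceChannelState:          # "second data": characteristics of one color channel
    dominant_wavelength_nm: float
    driving_current_ma: float      # current used during the illumination event

@dataclass
class CharacterizationInput:
    frame: SensorFrame
    source_channels: list[SourceChannelState]

example = CharacterizationInput(
    frame=SensorFrame(pixel_values=np.zeros((1080, 1920, 3))),
    source_channels=[SourceChannelState(450.0, 350.0),
                     SourceChannelState(505.0, 350.0),
                     SourceChannelState(630.0, 350.0)],
)
print(len(example.source_channels), example.frame.pixel_values.shape)
```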
The characterization module502may reside in a network environment500as shown inFIG.5. Thus, the characterization module502may be connected to one or more networks506a-b. The networks506a-bcan include personal area networks (PANs), local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), cellular networks, the Internet, etc. Additionally or alternatively, the characterization module502can be communicatively coupled to electronic device(s) over a short-range communication protocol, such as Bluetooth® or Near Field Communication (NFC). In some embodiments, the characterization module502resides on the same electronic device as the multi-channel image sensor and the multi-channel light source. For example, the characterization module502may be part of a mobile application through which a multi-channel image sensor of a mobile phone can be operated. In other embodiments, the characterization module502is communicatively coupled to the multi-channel image sensor and/or the multi-channel light source across a network. For example, the characterization module502may be executed by a network-accessible platform (also referred to as a “cloud platform”) residing on a computer server. The interface504is preferably accessible via a web browser, desktop application, mobile application, or over-the-top (OTT) application. Accordingly, the interface504may be viewed on a mobile phone, tablet computer, personal computer, game console, music player, wearable electronic device (e.g., a watch or fitness accessory), network-connected (“smart”) electronic device, (e.g., a television or home assistant device), virtual/augmented reality system (e.g., a head-mounted display), or some other electronic device. Some embodiments of the characterization module502are hosted locally. That is, the characterization module502may reside on the same electronic device as the multi-channel image sensor or the multi-channel light source. For example, the characterization module502may be part of a mobile application through which a multi-channel image sensor of a mobile phone can be operated. Other embodiments of the characterization module502are executed by a cloud computing service operated by Amazon Web Services® (AWS), Google Cloud Platform™, Microsoft Azure®, or a similar technology. In such embodiments, the characterization module502may reside on a host computer server that is communicatively coupled to one or more content computer servers508. The content computer server(s)508can include color mixing models, items necessary for post-processing such as heuristics and algorithms, and other assets. While embodiments may be described in the context of network-connected electronic devices, the characterization module502need not necessarily be continuously accessible via a network. For example, an electronic device may be configured to execute a self-contained computer program that only accesses a network-accessible platform while completing a pre-deployment configuration procedure. In such embodiments, the self-contained computer program may download information from, or upload information to, the network-accessible platform at a single point in time. Following deployment of the electronic device (e.g., after the electronic device has been packaged for sale), the self-contained computer program may not communicate with the network-accessible platform. 
White Balance with Reference Illuminant Spectrums

Modern digital cameras have been designed to automatically perform white balancing when images are generated. For instance, an electronic device may have an AWB setting that, when selected, causes the scene to be assessed by algorithms designed to identify the brightest part as the white point and then attempt to balance the color of digital images based on that reference. Such an approach tends to be fairly effective in producing high-fidelity colors when the scene includes a known reflectance spectrum that can be used as a reference. For example, professional photographers/videographers will normally add gray cards or color checkers to scenes that can serve as a reference for the AWB algorithms.

When a scene does not include a known reflectance spectrum, the AWB algorithms can (and often do) produce digital images that are visibly incorrect in terms of color. For example, if the scene is predominantly red, the AWB algorithms may mistake this for a color cast induced by a warm illuminant and then try to compensate by making the average color closer to neutral. But this will create a bluish color cast that may be quite noticeable. Since adding a known reflectance spectrum is simply not practical in many scenarios, a better approach to producing images with high-fidelity color is necessary.

Introduced here are computer programs and associated computer-implemented techniques for achieving high-fidelity color reproduction in the absence of any known reflectance spectrums. That is, high-fidelity color reproduction can be achieved without portable references, such as gray cards and color checkers. To accomplish this, a new reference spectrum—the "reference illuminant spectrum"—is introduced into scenes to be imaged by image sensors. The reference illuminant spectrum is created by a multi-channel light source whose spectral properties are known.

FIG. 6 depicts a flow diagram of a process 600 for calibrating an electronic device that includes a multi-channel image sensor and a multi-channel light source prior to deployment. The process 600 may be initiated by the manufacturer of the electronic device before packaging it for sale. Initially, the manufacturer selects a scene that includes at least one known reflectance spectrum (step 601). Normally, the manufacturer accomplishes this by choosing and/or creating a scene that includes one or more portable references. One example of a portable reference is a flat surface of a neutral gray color that derives from a flat reflectance spectrum—referred to as a "gray card." Another example of a portable reference is a flat surface with a series of different colors corresponding to different reflectance spectrums—referred to as a "color checker." Generally, the manufacturer selects the scene such that a variety of different reflectance spectra are included. These different reflectance spectra could be provided by a single portable reference (e.g., a color checker) or multiple portable references (e.g., a color checker and gray card).

Then, a set of images of the scene is captured in rapid succession. For example, the electronic device may capture a first image of the scene over a first exposure interval with an automatic white balance (AWB) setting (step 602) and a second image of the scene over a second exposure interval with a fixed white balance (FWB) setting (step 603).
Neither the first image nor the second image is taken in conjunction with an illumination event performed by the multi-channel light source. Said another way, the first and second images are captured in conjunction with the same ambient light but different white balance settings. Normally, the AWB setting is applied by the electronic device by default to automatically correct color casts. The FWB setting, meanwhile, may be either a custom white balance or one of the preset white balances offered by the electronic device that correspond to different color temperatures. Examples of preset white balances include tungsten, fluorescent, daylight, flash, cloudy, and shade. Generally, the first exposure interval is different than the second exposure interval. For example, the second exposure interval may be 10, 20, 30, or 50 percent of the first exposure interval. The electronic device may also capture a series of differentially illuminated images of the scene with the FWB setting (step604). That is, the electronic device may capture a series of images in conjunction with a series of different illuminant spectrums. As further discussed below with respect toFIG.9, the series of different illuminant spectrums can be produced by addressing each color channel of a multi-channel light source such that a series of flashes are produced in which all color channels are illuminated with a single color channel at a higher intensity. Thus, the number of differentially illuminated images may correspond to the number of color channels that the multi-channel light source has. Thereafter, a characterization module can compute a calibration matrix based on analysis of the set of images, namely, the first image, the second image, and the series of differentially illuminated images (step605). As further discussed below with respect toFIG.7, each entry in the calibration matrix may include a vector of coefficients calculated from the corresponding pixel in the set of images. The characterization module can then store the calibration matrix in a memory accessible to the electronic device (step606). Normally, the calibration matrix is stored in the local memory of the electronic device for quicker callback during future imaging operations. However, the calibration matrix could be stored in a remote memory accessible to the electronic device via a network instead of, or in addition to, the local memory. FIG.7depicts a flow diagram of a process700for computing a calibration matrix based on a first image captured with an AWB setting, a second image captured with an FWB setting, and a series of differentially illuminated images captured with the FWB setting. At a high level, a characterization module can generate, for each pixel, an ambient-subtracted chromaticity fingerprint (or simply “chromaticity fingerprint”) that can be used to populate the corresponding entry in the calibration matrix. Initially, a characterization module can create a series of altered images by subtracting the red, green, and blue values of each pixel in the second image from the red, green, and blue values of the corresponding pixels in each of the series of differentially illuminated images (step701). This will result in altered images in which each pixel has had the red, green, and blue channels reduced by the amount in the non-illuminated second image. 
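The subtraction just described (step 701) is straightforward to express in code. The following is a minimal sketch rather than the disclosed implementation; it assumes the captured images are arrays of linear RGB values with identical dimensions, and the clamp at zero is an added assumption so that sensor noise cannot produce negative channel values.

```python
# Minimal sketch of the ambient-subtraction step described above: each
# differentially illuminated image has the non-illuminated (FWB, ambient-only)
# image subtracted pixel-by-pixel and channel-by-channel. Clamping at zero is an
# assumption made here so that noise cannot produce negative channel values.
import numpy as np

def ambient_subtract(differential_images: list[np.ndarray],
                     ambient_image: np.ndarray) -> list[np.ndarray]:
    """Each image is an (H, W, 3) array of linear RGB values."""
    ambient = ambient_image.astype(np.float64)
    return [np.clip(img.astype(np.float64) - ambient, 0.0, None)
            for img in differential_images]

# Example: five differential captures (one per color channel of the source).
ambient = np.full((4, 4, 3), 10.0)
flashes = [np.full((4, 4, 3), 10.0 + 5.0 * (i + 1)) for i in range(5)]
altered = ambient_subtract(flashes, ambient)
print([a.mean() for a in altered])   # 5.0, 10.0, 15.0, 20.0, 25.0
```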
Then, the characterization module converts the series of altered images into the CIELAB color space so that each pixel is represented by a series of a* values and a series of b* values (step 702). In the CIELAB color space (also referred to as the "CIE L*a*b* color space" or "Lab color space"), color is expressed as three values: L* for the lightness from black (0) to white (100), a* from green (−) to red (+), and b* from blue (−) to yellow (+). The chromaticity fingerprint is comprised of these a* and b* values.

Assume, for example, that the multi-channel light source includes five color channels. In such a scenario, the series of altered images will include five images, and the chromaticity fingerprint (F) for each pixel can be represented in vector form as follows:

F = [a*1 b*1 a*2 b*2 a*3 b*3 a*4 b*4 a*5 b*5],

where each value pairing (a*i b*i) is associated with the corresponding pixel in one of the altered images. Similarly, the characterization module can convert the first image into the CIELAB color space so that each pixel is represented as a reference a* value and a reference b* value (step 703). The reference a* and b* values, which represent the ground truth answer, can be represented in vector form as follows: [a*r b*r].

For each pixel, the characterization module can form a system of linear equations for a* and b* with a vector of coefficients (C) as follows:

C · [a*1 b*1 a*2 b*2 a*3 b*3 a*4 b*4 a*5 b*5]xy = a*r,xy
C · [a*1 b*1 a*2 b*2 a*3 b*3 a*4 b*4 a*5 b*5]xy = b*r,xy

where C = [c1 c2 c3 c4 c5] and xy denotes the coordinates of the pixel. Thus, the color value of a given pixel in the first image is defined as the dot product between the vector of coefficients and the chromaticity fingerprint of that pixel as determined from the series of altered images. At a high level, each system of linear equations represents (i) a first linear equation based on the reference a* value for a given pixel and the series of a* values for the given pixel (step 704) and (ii) a second linear equation based on the reference b* value for the given pixel and the series of b* values for the given pixel (step 705).

Thereafter, the characterization module can perform, for each pixel, a least squares optimization on the system of linear equations to produce the vector of coefficients (step 706). Said another way, the characterization module can perform a least squares optimization to establish the coefficients. Then, the characterization module can populate a data structure representative of the calibration matrix with the vectors of coefficients (step 707). Each entry in the calibration matrix may include the vector of coefficients established for the corresponding pixel.

FIG. 8 depicts a flow diagram of a process 800 for employing a calibration matrix produced for an electronic device during a pre-deployment calibration process (e.g., by completing process 600 of FIG. 6). By employing the calibration matrix, the electronic device can achieve high-fidelity color reproduction without portable references needing to be in the scene.

Initially, a set of images of a scene is captured in rapid succession. For example, the electronic device may capture a first image of the scene over a first exposure interval with an AWB setting (step 801) and a second image of the scene over a second exposure interval with an FWB setting (step 802). Neither the first image nor the second image is taken in conjunction with an illumination event performed by the multi-channel light source. Generally, the first exposure interval is different than the second exposure interval.
For example, the second exposure interval may be 10, 20, 30, or 50 percent of the first exposure interval. The electronic device may also capture a series of differentially illuminated images of the scene with the FWB setting (step803). That is, the electronic device may capture a series of images in conjunction with a series of different illuminant spectrums. As further discussed below with respect toFIG.9, the series of different illuminant spectrums can be produced by addressing each color channel of a multi-channel light source such that a series of flashes are produced in which all color channels are illuminated with a single color channel at a higher intensity. Thus, the number of differentially illuminated images may correspond to the number of color channels that the multi-channel light source has. A characterization module can then create a series of altered images by subtracting the red, green, and blue values of each pixel in the second image from the red, green, and blue values of the corresponding pixels in each of the series of differentially illuminated images (step804). This will result in altered images in which each pixel has had the red, green, and blue channels reduced by the amount in the non-illuminated second image. Then, the characterization module can generate, for each pixel, a chromaticity fingerprint based on the series of altered images (step805). Step805ofFIG.8is substantially similar to step702ofFIG.7. These chromaticity fingerprints can be multiplied by a calibration matrix to obtain a calibrated a* value and a calibrated b* value for each pixel (step806). More specifically, each chromaticity fingerprint can be multiplied by the corresponding entry in the calibration matrix to obtain the calibrated a* and b* values. Generally, the calibration matrix is stored in a local memory of the electronic device. However, the calibration matrix could be retrieved from a remote memory to which the electronic device is connected across a network. Thereafter, the characterization module can convert the first image into the CIELAB color space so that each pixel is represented as an L* value, a* value, and b* value (step807). Rather than use these a* and b* values, however, the characterization module can produce a calibrated image by replacing them with the calibrated a* and b* values obtained in step806. This calibrated image may express colors using the L* values from the first image and the calibrated a* and b* values derived from the series of altered images. Unless contrary to physical possibility, it is envisioned that the steps described above may be performed in various sequences and combinations. For example, multiple instances of the process600ofFIG.6could be executed on multiple sets of images. Other steps may also be included in some embodiments. For example, the electronic device may cause display of the calibrated image on an interface for review. The electronic device could display the calibrated image near the first image so that an individual is able to review the differences, or the electronic device could enable the individual to alternate between the calibrated image and the first image. FIG.9illustrates how a multi-channel light source (or simply “light source”) may perform a series of illumination events to differentially illuminate a scene. Differential illumination may be key to the calibration process described with respect toFIG.6and the optimization process described with respect toFIG.8. 
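The calibration flow of FIG. 7 and the correction flow of FIG. 8 can be sketched together. The following is a minimal illustration under one plausible reading of those flows, not the disclosed implementation: a separate coefficient vector is fitted for a* and for b* at each pixel, the per-pixel fingerprints are the flattened (a*, b*) pairs from the ambient-subtracted images, and because a single calibration capture yields fewer equations than unknowns, numpy.linalg.lstsq returns the minimum-norm least-squares solution. All function and variable names are illustrative.

```python
# Minimal sketch, under one plausible reading of the FIG. 7 / FIG. 8 flows, of
# fitting per-pixel coefficient vectors from chromaticity fingerprints and then
# applying them. Fingerprints here are the per-pixel (a*, b*) pairs from the N
# ambient-subtracted images, flattened into a length-2N vector.
import numpy as np

def fit_calibration(fingerprints: np.ndarray, reference_ab: np.ndarray) -> np.ndarray:
    """fingerprints: (H, W, 2N) per-pixel fingerprints from the altered images.
    reference_ab: (H, W, 2) ground-truth a*, b* from the AWB reference image.
    Returns a calibration matrix of shape (H, W, 2, 2N): one coefficient vector
    for a* and one for b* at every pixel."""
    h, w, k = fingerprints.shape
    calib = np.zeros((h, w, 2, k))
    for y in range(h):
        for x in range(w):
            f = fingerprints[y, x][None, :]             # 1 x 2N system per pixel
            for c in range(2):                          # 0 -> a*, 1 -> b*
                coeffs, *_ = np.linalg.lstsq(f, reference_ab[y, x, c:c + 1], rcond=None)
                calib[y, x, c] = coeffs                 # minimum-norm solution
    return calib

def apply_calibration(fingerprints: np.ndarray, calib: np.ndarray) -> np.ndarray:
    """Returns calibrated (a*, b*) per pixel via the per-pixel dot products."""
    return np.einsum('hwck,hwk->hwc', calib, fingerprints)

# Tiny example: 2x2 image, five-channel source (N=5, so fingerprints have 10 entries).
rng = np.random.default_rng(1)
fp = rng.normal(size=(2, 2, 10))
ref = rng.normal(size=(2, 2, 2))
calibration = fit_calibration(fp, ref)
print(np.allclose(apply_calibration(fp, calibration), ref))   # True for this exactly-fitted case
```

In the correction flow of FIG. 8, the calibrated a* and b* values produced this way would replace those of the AWB image while its L* values are retained.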
Assume, for example, that the light source includes five different color channels yielding five different illuminant spectra. Each color channel may be separately driven with current. More specifically, each color channel may be addressable at a number of intermediate states between an off state (e.g., where the output level is 0) and a full intensity state (e.g., where the output level is 255). As an example, the full intensity state may correspond to an output current of roughly one amp while every intermediate state may correspond to some fraction of an amp. To generate different illuminant spectra, the driving currents of the different color channels can be varied.FIG.9includes examples of five different combinations of channel spectra that are shown along with the corresponding composite illuminant spectrum. Here, each composite illuminant spectrum is produced by driving four color channels at an output level of 100 and one color channel at an output level of 150. However, those skilled in the art will recognize that these numbers have been provided solely for the purpose of illustration. Generally, the output levels themselves are not important as long as one output level is higher than the others, though it may be desirable to drive all color channels with enough current to produce a white light. As can be seen inFIG.9, each composite illuminant spectrum has a different spectral power distribution (SPD). Evidence of Need for High-Fidelity Color Reproduction FIGS.10-16illustrate why the reference illuminant white balance (RIWB) approach described herein for reproducing colors with high fidelity is necessary. In each drawing, the first image in the upper-left corner was taken of a scene having no known reflectance spectrums with an AWB setting, while the second image in the upper-right corner was taken of the same scene having known reflectance spectrums (here, provided by a color checker) with the AWB setting. At a high level, the second image can be thought of as the “ground truth” for what the first image should actually look like. Beneath these images, there are two rows. The first row includes a comparison of the average color of pixels within a segmented portion of each image, and the second row visually includes a comparison of the average color of those pixels following a lightness scaling operation. These average color values provide an indication of how closely pixels in the first image corresponds to the second image. Note that these average color values can be quite far apart. In fact, these average color values are readily distinguishable from one another in most cases as indicated by the delta E (dE) values which correspond to color error. Many of these delta E values exceed 5.0, which means the visual difference between the first and second images should be (and is) quite noticeable. By employing the RIWB approach described herein, delta E values of less than 1.0 can be achieved. In most instances, a pair of colors whose delta E value is less than 1.0 will be visually indistinguishable from one another. The second row is largely identical to the first row except that that the segmented portions are adjusted to have the same lightness. That is, the pixels in the segmented portions are scaled to have the same L* value in the CIELAB color space. 
This is done because, when making the known reflectance spectrum versus no known reflectance spectrum comparison, it may be that the introduction of the color checker affected the exposure, thereby causing the second image to be either lighter or darker. As can be seen in the second row, the color is still significantly off despite having made the brightness level equal between the segmented portions of the first and second images (i.e., by brightening the darker of these images and/or darkening the brighter of these images). Computing System FIG.17is a block diagram illustrating an example of a computing system1700in which at least some operations described herein can be implemented. For example, some components of the computing system1700may be part of an electronic device (e.g., electronic device150ofFIG.1) that includes a multi-channel light source and/or a multi-channel image sensor. The computing system1700may include one or more central processing units (also referred to as “processors”)1702, main memory1706, non-volatile memory1710, network adapter1712(e.g., network interface), video display1718, input/output devices1720, control device1722(e.g., keyboard and pointing devices), drive unit1724including a storage medium1726, and signal generation device1730that are communicatively connected to a bus1716. The bus1716is illustrated as an abstraction that represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus1716, therefore, can include a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as “Firewire”). The computing system1700may share a similar computer processor architecture as that of a personal computer, tablet computer, mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness tracker), network-connected (“smart”) device (e.g., a television or home assistant device), virtual/augmented reality systems (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the computing system1700. While the main memory1706, non-volatile memory1710, and storage medium1726(also called a “machine-readable medium”) are shown to be a single medium, the term “machine-readable medium” and “storage medium” should be taken to include a single medium or multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions1728. The term “machine-readable medium” and “storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computing system1700. In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions1704,1708,1728) set at various times in various memory and storage devices in a computing device. 
When read and executed by the one or more processors1702, the instruction(s) cause the computing system1700to perform operations to execute elements involving the various aspects of the disclosure. Moreover, while embodiments have been described in the context of fully functioning computing devices, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms. The disclosure applies regardless of the particular type of machine or computer-readable media used to actually effect the distribution. Further examples of machine-readable storage media, machine-readable media, or computer-readable media include recordable-type media such as volatile and non-volatile memory devices1710, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMS), Digital Versatile Disks (DVDs)), and transmission-type media such as digital and analog communication links. The network adapter1712enables the computing system1700to mediate data in a network1714with an entity that is external to the computing system1700through any communication protocol supported by the computing system1700and the external entity. The network adapter1712can include a network adaptor card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, bridge router, a hub, a digital media receiver, and/or a repeater. The network adapter1712may include a firewall that governs and/or manages permission to access/proxy data in a computer network and tracks varying levels of trust between different machines and/or applications. The firewall can be any number of modules having any combination of hardware and/or software components able to enforce a predetermined set of access rights between a particular set of machines and applications, machines and machines, and/or applications and applications (e.g., to regulate the flow of traffic and resource sharing between these entities). The firewall may additionally manage and/or have access to an access control list that details permissions including the access and operation rights of an object by an individual, a machine, and/or an application, and the circumstances under which the permission rights stand. The techniques introduced here can be implemented by programmable circuitry (e.g., one or more microprocessors), software and/or firmware, special-purpose hardwired (i.e., non-programmable) circuitry, or a combination of such forms. Special-purpose circuitry can be in the form of one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc. REMARKS The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated. 
Although the Detailed Description describes certain embodiments and the best mode contemplated, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details, while still being encompassed by the specification. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments. The language used in the specification has been principally selected for readability and instructional purposes. It may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of the technology be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the technology as set forth in the following claims.
62,679
11943547
DESCRIPTION OF THE EMBODIMENTS Hereinafter, embodiments of the invention will be described with reference to the drawings. First Embodiment An information processing system according to a first embodiment of the present invention includes a multiplexing encoding processing unit that embeds additional information in image information and a multiplexing decoding processing unit that extracts the additional information from a captured image. Hereinafter, the basic configuration of the information processing system and the characteristic configuration of the information processing system (particularly, the configuration of the multiplexing decoding processing unit) will be separately described. (1) Basic Configuration (1-1) Hardware of Multiplexing Encoding Processing Unit FIGS.1A and1Bare diagrams illustrating examples of the configuration of multiplexing encoding hardware (multiplexing encoding processing unit) that embeds additional information (also referred to as “multiplexed information” and “embedded information”) in image information in the information processing system. The hardware obtains image data (image information) A and additional data (additional information) B and generates a printed matter C in which the information items A and B have been embedded. The hardware illustrated inFIG.1Ais configured such that a process of embedding the additional information B in the image data A is performed by an apparatus different from a printer (printing apparatus). The hardware illustrated inFIG.1Bis configured such that the process of embedding the additional information B in the image data A is performed in the printer (printing apparatus). In the configuration illustrated inFIG.1A, the image data A input from an input terminal100is multi-gradation image data including a color component. The additional information B input from an input terminal101is, for example, text document data, audio data, moving image data, data obtained by compressing text document information, audio information, an image, and moving image information, and other binary data. An additional information multiplexing apparatus102performs a process of embedding the additional information B in the image data A (also referred to as a “multiplexing process” and an “embedment process”), which will be described below. A printer (printing apparatus)103performs a printing operation based on the image data A having the additional information B embedded therein to generate the printed matter C. In the configuration illustrated inFIG.1B, an additional information multiplexing unit105corresponding to the additional information multiplexing apparatus102illustrated inFIG.1Ais included in the printer103. Similarly to the configuration illustrated inFIG.1A, the image data A is input from the input terminal100and the additional information B is input from the input terminal101. The additional information multiplexing unit105in the printer103performs a process of embedding the additional information B in the image data A. In addition, a printing unit106provided in the printer103performs a printing operation based on the image data A having the additional information B embedded therein to generate the printed matter C. As such, the process of generating the printed matter C on the basis of the image data A having the additional information B embedded therein is referred to as a “multiplexing encoding process”. 
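The encoding-side data flow of FIGS. 1A and 1B can be summarized as a short structural sketch. The embedding function below is a labeled placeholder rather than the disclosed multiplexing process, which is described later in this document; the names are illustrative.

```python
# Minimal structural sketch of the multiplexing-encoding flow of FIGS. 1A and 1B:
# additional information B is embedded in image data A and the result is printed
# as printed matter C. The embedding step here is a labeled placeholder; the
# mask-based multiplexing actually used is described in section (1-2-10).
import numpy as np

def multiplex(image_a: np.ndarray, additional_b: bytes) -> np.ndarray:
    """Stands in for the additional information multiplexing apparatus 102
    (FIG. 1A) or the multiplexing unit 105 inside the printer (FIG. 1B)."""
    embedded = image_a.copy()
    # ...embed the bits of additional_b into `embedded` (see section (1-2-10))...
    return embedded

def print_engine(embedded_image: np.ndarray) -> None:
    """Stands in for printer 103 / printing unit 106 producing printed matter C."""
    print("printing image of shape", embedded_image.shape)

image_a = np.zeros((600, 800, 3), dtype=np.uint8)   # multi-gradation color image data A
additional_b = "hello".encode("shift_jis")          # additional information B
print_engine(multiplex(image_a, additional_b))
```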
FIG.2is a diagram illustrating an example of the configuration of multiplexing decoding hardware (multiplexing decoding processing unit) in the information processing system for extracting the additional information B from the image information printed on the printed matter C. The hardware captures an image of the printed matter C subjected to the multiplexing encoding process, using an image capturing apparatus such as a camera, and analyzes the captured image to extract the additional information B embedded in the image (also referred to as a “reading process”, a “separation process”, and an “extraction process”). InFIG.2, a camera-equipped mobile terminal (information processing apparatus)201having an image sensor202has a function of capturing (imaging) the image of the printed matter C. An additional information separation device203analyzes the image captured by the image sensor202to extract the additional information B as described later. A central processing unit (CPU)204executes an information processing method according to a program and a ROM205stores programs executed by the CPU204. A RAM206functions as a memory for temporarily storing various kinds of information in a case where the CPU204executes a program. A secondary storage device207, such as a hard disk, stores, for example, a database including image files and the analysis results of images. A display208presents, for example, the processing results of the CPU204to a user. A key input device209receives a process command and characters input by a touch panel operation using the display208with a touch panel function. A wireless local area network (LAN)210is connected to an Internet and accesses a site connected to the Internet such that a screen of the site is displayed on the display208. The wireless LAN210is also used to transmit and receive data. A speaker211outputs a sound in a case where the extracted additional information is audio data or moving image data with a sound. In addition, in a case where the connection destination of the Internet has moving image data, a sound is output at the time of the reproduction of the moving image data. The camera-equipped mobile terminal201is not limited to the configuration including the image sensor202. For example, an apparatus different from the camera-equipped mobile terminal201may control the image sensor202such that the captured image is transmitted to the additional information separation device203. For example, a digital camera and a video camera may be used as the image sensor202. For example, a personal computer and a smart phone may be used as the additional information separation device203to extract the additional information B from the printed matter C. Hereinafter, a method for extracting the additional information B from the printed matter C is referred to as a “multiplexing decoding process”. (1-2) Firmware Configuration for Multiplexing Encoding Process FIG.3is a block diagram illustrating a basic firmware configuration for the multiplexing encoding process. Image data is converted by the following process into data with resolution and a gradation value that can be received by a print engine connected to a print head, and is transmitted to the print engine. (1-2-1) Accessory Information Obtaining Unit An accessory information obtaining unit301obtains various parameters which are used to compress image data. The obtained various parameters are transmitted to an image data recovery unit302and are used to extract the image data from a compressed image. 
In addition, the obtained various parameters are used in a process for calculating the degree of compression. For example, an input image is irreversible image data obtained by compressing document data in a JPEG format and is printed on a print medium. The irreversible image data includes a quantization table and an image data size used during compression. The obtained image data size information and quantization table are transmitted to the image data recovery unit 302.

(1-2-2) Image Data Recovery Unit

The image data recovery unit 302 decodes the encoded image data to extract the image data. In the following description, it is assumed that the input image is a JPEG image.

FIG. 4 is a flowchart illustrating a process for decoding the encoded image data. For example, it is assumed that an image compressed in a JPEG data format is divided into N 8-pixel square blocks (blocks of 8×8 pixels). First, Huffman decoding is performed for a first block in units of 8-pixel square blocks (S41 and S42) and inverse quantization is performed for the first block using the obtained quantization table (S43). Then, inverse DCT is performed for the first block (S44). This process is performed for all of the N blocks of a target screen (S45 and S46). Since a JPEG decoding process uses a known method, Huffman coding, inverse quantization, and inverse DCT will not be described in detail below, but will be described in brief.

Huffman coding is a compression method which allocates a code with a small number of bits to data with a high frequency of occurrence to reduce the total number of bits. Huffman decoding defines the Huffman code in the specifications in advance and decodes the data into the original data. In inverse quantization, the data is expanded back into image data by inverse quantization using the quantization table (the quantization table used to compress the image data) obtained by the accessory information obtaining unit 301. Inverse DCT is a process which performs an inverse transformation for returning the image data, which has been transformed into a direct-current component (DC component) and alternating-current components (AC components) by DCT, to data with the original image density component.

JPEG compression is generally performed in a YCbCr (Y: brightness, Cb and Cr: color difference) format. In this case, data processed by inverse DCT is also in the YCbCr format. A value in the YCbCr format is converted into an image signal value in an RGB format by the following Expression 1:

R = Y + 1.402 × Cr;
G = Y − 0.344 × Cb − 0.714 × Cr; and
B = Y + 1.772 × Cb. [Expression 1]
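Expression 1 maps directly to code. The sketch below assumes full-range JPEG YCbCr in which Cb and Cr are stored with a +128 offset, so they are re-centered before the expression is applied; the re-centering and the final clip to the 0-255 range are conventional JPEG-handling assumptions rather than details stated above.

```python
# Minimal sketch of the YCbCr -> RGB conversion of Expression 1. JPEG stores
# Cb and Cr with a +128 offset, so they are re-centered here before applying the
# expression; the re-centering and the final clip to 0..255 are assumptions
# typical of JPEG handling, not details stated in the passage above.
import numpy as np

def ycbcr_to_rgb(ycbcr: np.ndarray) -> np.ndarray:
    """ycbcr: (H, W, 3) array of 8-bit Y, Cb, Cr values after inverse DCT."""
    y = ycbcr[..., 0].astype(np.float64)
    cb = ycbcr[..., 1].astype(np.float64) - 128.0
    cr = ycbcr[..., 2].astype(np.float64) - 128.0
    r = y + 1.402 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.772 * cb
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(np.round(rgb), 0, 255).astype(np.uint8)

sample = np.array([[[128, 128, 128], [200, 100, 160]]], dtype=np.uint8)
print(ycbcr_to_rgb(sample))   # a mid-gray pixel and a warmer pixel
```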
(1-2-3) Image Correction Unit

An image correction unit 303 performs an image correction process for the RGB data recovered by the image data recovery unit 302. Examples of the image correction include brightness adjustment that increases or decreases the brightness of all colors, contrast adjustment, color balance adjustment, and backlight correction and red-eye correction on the assumption that a photograph is printed. The image correction unit 303 performs these correction processes in an integrated manner to achieve a process that does not depend on the printing apparatus.

(1-2-4) Resolution Conversion Unit

A resolution conversion unit 304 converts the image data into a resolution corresponding to the printing apparatus. The resolution conversion unit 304 performs a reduction or enlargement process on the basis of a variable magnification ratio that is calculated according to the input image data and the resolution of the printing apparatus. For example, a nearest neighbor interpolation method, a bilinear interpolation method, and a bicubic interpolation method are used as the magnification change process. The magnification change process may be appropriately selected considering the characteristics of the process and the processing speed.

(1-2-5) Color Correction Unit

A color correction unit 305 performs a conversion process for the image data such that an image printed by the printing apparatus has appropriate colors. For example, in a case where the image displayed on the display apparatus is printed, the color reproduction ranges of the display apparatus and the printing apparatus are not necessarily matched with each other. For a given color, the reproduction range of the printing apparatus is narrower than that of the display apparatus. For another color, the reproduction range of the printing apparatus is wider than that of the display apparatus. Therefore, it is necessary to minimize the deterioration of image quality and to appropriately compress and decompress colors.

In this example, these processes are performed in an RGB format. That is, an RGB value input to the color correction unit 305 is converted into an RGB value for the printing apparatus (hereinafter, referred to as "printing apparatus RGB") in consideration of the reproducibility of the printing apparatus. This conversion may be performed by, for example, matrix calculation. In general, a three-dimensional color correction table 311 is used. In a case where an input RGB value is 8 bits for each color (256 grayscale levels), it is not practical to store all combinations in terms of storage capacity. For this reason, a table thinned at a predetermined interval is used as the color correction table 311.

FIG. 5 is a diagram illustrating an example of the color correction table 311. In this example, in the color correction table 311, the 256 grayscale levels of each color are represented by 17 grid points and printing apparatus RGB values corresponding to the grid points are described (17×17×17 = 4913 grid points). A value between the grid points is calculated by an interpolation process. The interpolation method can be selected from several methods; in this example, a tetrahedron interpolation method is used. The tetrahedron interpolation method is linear interpolation which uses a tetrahedron as the division unit of a three-dimensional space and uses four grid points.

In the tetrahedron interpolation method, first, as illustrated in FIG. 6A, the three-dimensional space is divided into tetrahedrons and the tetrahedron to which a target point p belongs is determined from among the divided tetrahedrons. The four vertices of that tetrahedron are p0, p1, p2, and p3, and the tetrahedron is further divided into small tetrahedrons as illustrated in FIG. 6B. In a case in which the conversion values for the vertices are f(p0), f(p1), f(p2), and f(p3), an interpolation value f(p) can be calculated by the following Expression 2:

f(p) = Σ(i=0 to 3) wi × f(pi) = [w0, w1, w2, w3] · [f(p0), f(p1), f(p2), f(p3)] [Expression 2]

Here, w0, w1, w2, and w3 are the volume ratios of the small tetrahedrons at positions opposite to each vertex pi. As such, a printing apparatus RGB value corresponding to a target RGB value is calculated. In this case, the output may be equal to or greater than 8 bits in consideration of gradation. In addition, the color correction table depends on the color reproduction range of the printing apparatus. For example, in a case where different print sheets (print media) are used for printing, it is preferable to prepare tables corresponding to the different print sheets.
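The tetrahedron interpolation of Expression 2 can be sketched as follows. The weights w0 through w3 are computed here as barycentric coordinates of the target point with respect to the enclosing tetrahedron, which equal the volume ratios described above; the sketch assumes the enclosing tetrahedron has already been identified, as in FIG. 6A, and the names are illustrative.

```python
# Minimal sketch of the tetrahedron interpolation of Expression 2: the weights
# w0..w3 are the barycentric coordinates of the target point p with respect to
# the tetrahedron's vertices (equal to the volume ratios described above), and
# the interpolated value is the weighted sum of the vertex values f(pi).
import numpy as np

def tetrahedral_interpolate(vertices: np.ndarray, values: np.ndarray,
                            p: np.ndarray) -> np.ndarray:
    """vertices: (4, 3) RGB grid points p0..p3 enclosing p.
    values: (4, ...) conversion values f(p0)..f(p3) (e.g., printer RGB or CMYK).
    p: (3,) target RGB point. Returns the interpolated conversion value f(p)."""
    t = (vertices[1:] - vertices[0]).T                 # 3x3 matrix of edge vectors
    w123 = np.linalg.solve(t, p - vertices[0])         # barycentric w1, w2, w3
    weights = np.concatenate(([1.0 - w123.sum()], w123))
    return weights @ values                            # Expression 2: sum of wi * f(pi)

# Example: interpolate a printer-RGB value inside one tetrahedron of the table.
verts = np.array([[0, 0, 0], [16, 0, 0], [16, 16, 0], [16, 16, 16]], dtype=float)
vals = np.array([[0, 0, 0], [20, 2, 1], [22, 21, 3], [25, 24, 22]], dtype=float)
print(tetrahedral_interpolate(verts, vals, np.array([8.0, 4.0, 2.0])))
```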
For example, in a case where different print sheets (print media) are used for printing, it is preferable to prepare the tables corresponding to different print sheets. (1-2-6) Ink Color Conversion Unit An ink color conversion unit306converts the printing apparatus RGB value converted by the color correction unit305into an ink color value. A color separation table312in which a combination of the printing apparatus RGB values and ink color values are associated with each other in advance is used for this conversion. Similarly to the color correction unit305, a table of 17 grid points is used in the ink color conversion unit306. FIG.7is a diagram illustrating an example of the color separation table312. In this example, four colors, that is, cyan (C), magenta (M), yellow (Y), and black (K) are assumed as ink colors and the values of four colors corresponding to each grid point are described in the color separation table312. These values are determined such that ink does not overflow on a printing surface of a print sheet (print medium) and blur does not occur in a case where ink droplets are adjacent to each other. Therefore, in a case where different print sheets (print media) are used for printing, it is preferable to prepare the color separation tables312corresponding to different print sheets. In addition, similarly to the color correction unit305, the ink color values corresponding to the printing apparatus RGB values can be interpolated by the tetrahedron interpolation process. (1-2-7) Density Correction Unit307 In an ink-jet printing apparatus, as the amount of ink that is given in order to form dots on a print sheet (print medium) increases, the overlap between the dots increases. As a result, the print density of an image is less likely to increase. The density correction unit307corrects the density in order to uniformize a density response. The density correction makes it easy to ensure the accuracy of creating the color correction table311and the color separation table312. In a printing apparatus using C (cyan), M (magenta), Y (yellow), and K (black) inks, density correction is performed for the ink colors. In this example, a one-dimensional density correction table313is used. A table corresponding to an 8-bit (256 grayscale levels) input for each ink color may be prepared as the density correction table313. In particular, it is possible to use a table in which an input signal value and a corrected output signal value are associated with each other, without using a thinning process. (1-2-8) Gradation Conversion Unit308 A gradation conversion unit308converts multi-bit data which has been converted for each ink color and then subjected to density correction into the number of grayscale levels that can be printed by the printing apparatus. In this example, data is converted into two grayscale levels (1 bit), that is, printing “1” and non-printing “0”. An error diffusion method that excludes a low-frequency component of an image and can reproduce gradation suitable for visual perception is used as a gradation conversion method. In addition, 8-bit data from 0 to 255 is assumed as an input signal. FIG.8is a diagram illustrating an error distribution method in the error diffusion method. A signal value L of a target pixel is compared with a threshold value TH. In this example, a threshold value is set to 127 in order to binarize 0 to 255 and it is determined whether the target pixel is “1” (printing) or “0” (non-printing) as follows.L>TH . . . 1 (printing)L≤TH . . . 
0 (non-printing)
A quantization representative value V is determined on the basis of the determination result as follows.
1 (printing) . . . 255
0 (non-printing) . . . 0
In a case where the quantization representative value V is set in this way, an error E (=L−V) that occurs is distributed to neighboring pixels on the basis of distribution coefficients illustrated inFIG.8. A value La obtained by adding the distributed error Ea to the signal value L of the next target pixel is compared with a threshold value TH and it is determined whether the target pixel is "1" (printing) or "0" (non-printing) as follows.
La>TH . . . 1 (printing)
La≤TH . . . 0 (non-printing)
The above-mentioned process is performed for all of the pixels and all of the ink colors C, M, Y, and K. In this way, printable 1-bit print data for each ink color is obtained.
(1-2-9) Additional Information
Additional information309is the additional information B that is embedded in the image data A in the additional information multiplexing apparatus102illustrated inFIG.1Aor the additional information multiplexing unit105illustrated inFIGS.1B and1C, for example, text document data. The text document data is, for example, numerical data obtained by allocating numbers and characters to values using known character codes. The numerical data is transmitted as the additional information309to an additional information multiplexing unit310. Specifically, text document data corresponding to the characters "hello" will be described. It is assumed that the text document data is numerical data, for example, so-called binary data. The binary data is information "0" or "1" and a sequence of the information items "0" or "1" has a specific meaning. The correspondence between binary data and a character is defined by a "character code". In the case of "shift JIS" which is one of the character codes, "h" corresponds to binary data "01101000". Similarly, "e" corresponds to binary data "01100101", "l" corresponds to binary data "01101100", and "o" corresponds to binary data "01101111". Therefore, the characters "hello" can be represented by the binary data "0110100001100101011011000110110001101111". Conversely, in a case in which the binary data "0110100001100101011011000110110001101111" can be obtained, the characters "hello" can be obtained. The additional information309corresponds to numerical data converted into the binary data.
(1-2-10) Additional Information Multiplexing Unit
The additional information multiplexing unit310receives the image data converted by the resolution conversion unit304and the additional information309and embeds the additional information309in the image data. In this embedment process (multiplexing process), the additional information309is embedded in the image data such that the additional information309can be read from a print image of the image data in which the additional information309(for example, a text document converted into binary data "0" and "1") has been embedded. For example, a masking process is performed for the image data to embed the information items "0" and "1" corresponding to the binary data such that the binary data "0" and "1" of the additional information309can be read. In this example, the masking process is performed for the image data to give different periodicities corresponding to the binary data "0" and "1" to image data in a predetermined area.
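As a concrete illustration of the quantification described in (1-2-9), the following minimal sketch converts a character string into a sequence of "0" and "1"; it assumes single-byte character codes whose values match the "hello" example above (two-byte shift JIS characters are not handled), and the function name is illustrative.

#include <iostream>
#include <string>

/* Converts each character to 8 bits, most significant bit first
   (upper 4 bits followed by lower 4 bits). */
std::string textToBits(const std::string& text) {
    std::string bits;
    for (unsigned char c : text) {
        for (int b = 7; b >= 0; --b) {
            bits.push_back(((c >> b) & 1) ? '1' : '0');
        }
    }
    return bits;
}

int main() {
    /* Prints 0110100001100101011011000110110001101111, the sequence given above. */
    std::cout << textToBits("hello") << std::endl;
    return 0;
}

This bit sequence is what the additional information multiplexing unit310embeds in the image data.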
FIGS.9A and9Bare diagrams illustrating mask data corresponding to the binary data "0" and "1". The mask data has a size corresponding to an area with a size of 5 px (pixels)×5 px (pixels). By the mask data, the patterns with different periodicities illustrated inFIG.9AorFIG.9Bare combined with the image to embed the binary data "0" and "1" in the image data corresponding to an area of 5 px (pixels)×5 px (pixels). In a case where a print image is read, the periodicity corresponding to the binary data "0" and "1" is recognized by, for example, frequency analysis of the read data to read the binary data "0" and "1". The additional information multiplexing unit310gives the periodicity corresponding to the binary data "0" and "1" to the image data on the basis of the binary data (numerical data) of the additional information309to embed the additional information309. As an example of a method for embedding the additional information309in the image data, a method will be described which regards the image data as gray image data of one color and embeds the binary data "0" and "1" in the entire image data. It is assumed that the size of the multiplexed image is a vertical width of 640 px and a horizontal width of 480 px and that the mask data has a size of 5 px×5 px as illustrated inFIGS.9A and9B. It is assumed that the binary data "0" is represented by the mask data illustrated inFIG.9Aand the binary data "1" is represented by the mask data illustrated inFIG.9B. In the mask data illustrated inFIGS.9A and9B, the 5×5-pixel block is classified into black blocks1101, white blocks1102, and hatched blocks1103. The black block1101corresponds to a value "+2", the white block1102corresponds to a value "0", and the hatched block1103corresponds to a value "−1". In a case in which "isMaskA" is "true", the mask data illustrated inFIG.9Acorresponding to "0" is used. In a case in which "isMaskA" is "false", the mask data illustrated inFIG.9Bcorresponding to "1" is used. Pseudo code for applying the values corresponding to the black, white, and hatched blocks illustrated inFIGS.9A and9Bto the entire image data is as follows:

int i, j, k, l;
int width = 640, height = 480;
unsigned char *data = image data;   /* image data regarded as single-color (gray) data */
int **maskA = mask data;            /* 5 x 5 mask values of FIG. 9A */
bool isMaskA = true;                /* true: embed "0" using the mask of FIG. 9A */
for (j = 0; j < height; j += 5) {
    for (i = 0; i < width; i += 5) {
        for (k = 0; k < 5; k++) {
            for (l = 0; l < 5; l++) {
                if (isMaskA == true) {
                    data[(i + k) + (j + l) * width] += maskA[k][l];
                }
            }
        }
    }
}

As represented by the pseudo code, the entire image is divided into blocks of 5 px×5 px and the data of "maskA" is added to each block to form the patterns illustrated inFIGS.9A and9B. In some cases, it is desirable that the pattern corresponding to the binary data (numerical data) of the additional information309be a pattern that is less likely to be visually recognized by the eyes. In a case where the image data is gray image data of one color as in this example, the pattern needs to be formed by a brightness component. Therefore, the pattern is likely to be visually recognized by the eyes. A change in a color component tends to be less likely to be visually recognized than a change in a brightness component, although this also depends on the shape of the pattern and its frequency component. For example, a color image having RGB components is converted into a color space, such as YCbCr, Lab, or Yuv, and is divided into a brightness component and a color component. Then, the mask data is not applied to the brightness component, but is applied to the color component to form the pattern that is not likely to be visually recognized.
In addition, for example, in a case where a red color is dominant in an area of 5 px×5 px to which the pattern is given in a color image having RGB components, it is preferable that the pattern is given by the red component. In this example, the mask data is divided into blocks of 5 px×5 px and is then added to the image data. However, the unit of the block size and the shape of the mask are not particularly limited. A method for embedding the mask data in the image data may be combinations of, for example, addition, subtraction, multiplication, and division. In a case where an image of a printed matter having the additional information309embedded therein is captured, any method may be used as long as it can distinguish the pattern of the mask data. Therefore, the additional information multiplexing unit310is a processing unit for embedding the additional information309in the image data such that the additional information309can be extracted in a case where the image of a printed matter having the additional information309embedded therein is captured. (1-3) Multiplexing Encoding Process FIG.10is a flowchart illustrating a multiplexing encoding process in this example. First, the accessory information obtaining unit301and the image data recovery unit302illustrated inFIG.3obtain the image data A for printing (Step S11). For example, the image data A is data whose image has been captured by a camera-equipped mobile terminal (smart phone) in advance and then stored in a JPEG format in a memory of the mobile terminal. The obtained JPEG image data is decompressed to generate a still image of 8-bit RGB image data of three colors. In addition, the image correction unit303illustrated inFIG.3performs correction and processing for the obtained image data if necessary. Then, the additional information309to be embedded in the image data A is obtained (Step S12). For example, text document data which has been input through keys of the smart phone is obtained. It is assumed that the text document data is, for example, numerical data obtained by allocating numbers and characters to values using known character code shift JIS. The numerical data is transmitted as the additional information309to the additional information multiplexing unit310. Then, the resolution conversion process is performed for the obtained image data A on the basis of a selected paper size (the size of a print medium) and the resolution of the printing apparatus (Step S13). For example, in a case where the selected paper size is 2L, the resolution of the image data A is converted according to the number of pixels of the input resolution in the printing apparatus. Specifically, in a case where the input resolution in the printing apparatus is 600 dots per inch (dpi), the number of pixels of a paper size of 2L is set to 3000 pixels×4000 pixels. In this case, resolution conversion is performed for the image data A in which the number of pixels is 1500 pixels×2000 pixels such that the numbers of pixels in the vertical direction and the horizontal direction are doubled. In a case where the aspect ratio of an input image is not desired to be changed, resolution conversion is performed such that reduction and enlargement ratios in the vertical direction and the horizontal direction are equal to each other. 
Then, the additional information multiplexing unit310illustrated inFIG.3performs the additional information multiplexing process for embedding the additional information309in the image data A (Step S14).FIG.11is a block diagram illustrating the firmware configuration of the additional information multiplexing unit310in this example. Next, each processing unit of the additional information multiplexing unit310will be described. (1-3-1) Color Space Conversion Unit A color space conversion unit501is a processing unit that converts a color space of the image data whose size has been changed by the resolution conversion unit304into a color space for information multiplexing. For example, the color space for information multiplexing is U of YUV and an RGB color space of the image data is converted into a YUV color space by the following Expression 3. Y=0.299×R+0.587×G+0.114×B U=−0.169×R−0.331×G+0.500×B V=0.500×R−0.419×G−0.081×B[Expression 3] (1-3-2) Block Position Setting Unit In this example, the image data is divided into a plurality of block areas and the density of pixels in each block is modulated to form the patterns corresponding to the mask data illustrated inFIGS.9A and9B, thereby embedding the additional information309. A block position setting unit502obtains the image data subjected to the color space conversion and sets the positional coordinates of the blocks in a plane image of a designated color according to the size of a designated block. For example, it is assumed that the size of a plane image of U of YUV is a vertical width of 640 px and a horizontal width of 480 px and the block size is a vertical width of 5 px and a horizontal width of 5 px. In this case, the number of vertical blocks is 128 (=640÷5), the number of horizontal blocks is 96 (=480÷5), and the total number of blocks is 12288 (=128×96). For example, the coordinates of the upper left corner of each block can be set as the position of the block. (1-3-3) Quantification Unit A quantification unit503converts the received additional information309into quantified data. For example, it is assumed that the additional information309is a shift JIS character string. In this case, a conversion map in which characters and numbers are associated with each other in a shift JIS format is stored in advance and a character string is converted into a sequence of numbers using the conversion map. For example, in the case of a character string “hello”, the character string is converted into a sequence of numbers “0110100001100101011011000110110001101111”. (1-3-4) Pattern Selection Unit The mask patterns for modulating the density of each pixel in each block are registered in a pattern selection unit504. The pattern selection unit504selects a mask pattern to be applied to the additional information309quantified by the quantification unit503. FIGS.12A and12Bare diagrams illustrating patterns obtained by quantifying the patterns with different frequency characteristics illustrated inFIG.9AandFIG.9B. As described above, the patterns illustrated inFIGS.9A and12Acorrespond to the binary data “0” in the additional information309and the patterns illustrated inFIGS.9B and12Bcorrespond to the binary data “1” in the additional information309. (1-3-5) Information Multiplexing Unit An information multiplexing unit505obtains the image data subjected to the color space conversion by the color space conversion unit501, the position of each block set by the block position setting unit502, and the mask pattern selected by the pattern selection unit504. 
The information multiplexing unit505applies the mask pattern to the image data on the basis of the obtained information to generate the multiplexed image data. As described above, in a case where an image size is a vertical width of 640 px and a horizontal width of 480 px and the size of one block is a vertical width of 5 px and a horizontal width of 5 px, the total number of blocks is 12288. In a case where a print image of the printed matter C is captured, the entire image is not necessarily captured. Therefore, the same additional information is embedded at a plurality of positions of the print image such that the additional information can be extracted by capturing only a portion of the print image of the printed matter C. For example, in a case where 96 blocks form one additional information item, the same additional information is embedded in 128 (=12288÷96) areas with respect to a total of 12288 blocks. Therefore, the image data is divided into 128 areas and the additional information formed by 96 blocks, each of which has a vertical width of 5 px and a horizontal width of 5 px, is embedded in each of the 128 areas. Since 96 blocks are treated as one additional information item, 96-bit additional information can be set. However, 8-bit data "11111111", which is not represented as a character in shift JIS, is included at the head such that the start position of the 96 bits is known. Therefore, 88-bit (=96−8) data is the additional information. The data included in the 96 bits is a sequence of the numbers "0" and "1" of the additional information quantified by the quantification unit503. A value is defined for each block of 5 px×5 px and a mask pattern corresponding to the value is selected. A mask pattern corresponding to the additional information is embedded in a block of 5 px×5 px in the image data. For example, it is assumed that the image data is a U plane of YUV, each block (5 px×5 px) of the image data is processed, and the values of the mask patterns illustrated inFIGS.12A and12Bare applied to the value of the U plane of YUV. For example, the value (U value) of the U plane of YUV is added or subtracted according to the values of the mask patterns and it is assumed that a reference value for the addition or subtraction process is 10, as illustrated in the following Expression 4.
(U value after application) = (U value of YUV) + (reference value) × (value of the mask pattern). [Expression 4]
For example, in a case where the U value of one pixel in one block is "20" and the value of the mask pattern to be applied is "0", the U value is processed as illustrated in the following Expression 5.
(U value after application) = 20 + 10 × 0 = 20. [Expression 5]
In a case where the U value of one pixel in one block is "30" and the value of the mask pattern to be applied is "2", the U value is processed as illustrated in the following Expression 6.
(U value after application) = 30 + 10 × 2 = 50. [Expression 6]
As such, in this example, the product of the value of the mask pattern applied to each pixel and the reference value is added to achieve multiplexing. A method for applying the mask pattern is not limited to the method according to this example as long as it can embed the mask pattern in the U plane. For example, the U value of YUV may be multiplied by the value of the mask pattern. Such a multiplexing encoding process is performed by the additional information multiplexing apparatus102illustrated inFIG.1Aor the additional information multiplexing unit105illustrated inFIG.1B.
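A minimal sketch of the per-block application of Expression 4 follows; the mask contents are placeholders becauseFIGS.12A and12Bare not reproduced here, the function name is illustrative, and the simple cyclic reuse of the bit sequence merely stands in for embedding the same additional information in each of the 128 areas described above.

#include <cstddef>
#include <vector>

const int kBlock = 5;        /* block size of 5 px x 5 px */
const int kReference = 10;   /* reference value of Expression 4 */

/* Adds (reference value) x (mask pattern value) to every U value, block by
   block, choosing the mask pattern from the bit to be embedded. */
void embedAdditionalInfo(std::vector<int>& uPlane, int width, int height,
                         const std::vector<int>& bits,
                         const int mask0[kBlock][kBlock],   /* pattern for bit "0" (FIG. 12A) */
                         const int mask1[kBlock][kBlock]) { /* pattern for bit "1" (FIG. 12B) */
    std::size_t bitIndex = 0;
    for (int by = 0; by + kBlock <= height; by += kBlock) {
        for (int bx = 0; bx + kBlock <= width; bx += kBlock) {
            const int (*m)[kBlock] = (bits[bitIndex % bits.size()] == 0) ? mask0 : mask1;
            for (int y = 0; y < kBlock; ++y) {
                for (int x = 0; x < kBlock; ++x) {
                    /* Expression 4: (U value after application) = (U value) + 10 x (mask value) */
                    uPlane[(by + y) * width + (bx + x)] += kReference * m[y][x];
                }
            }
            ++bitIndex;
        }
    }
}

In practice the result may also need to be clipped to the valid signal range, although such clipping is not part of Expression 4 itself.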
The multiplexing encoding process may not be included in the printer103or may be included in the printer103. The image data subjected to the multiplexing encoding process which has been generated in the additional information multiplexing apparatus102or the additional information multiplexing unit105is transmitted to the printer103or the printing unit106. (1-4) Image Data Printing Process FIG.13is a flowchart illustrating an image data printing process after the multiplexing encoding process. First, the additional information multiplexing unit310illustrated inFIG.3obtains the image data having the additional information embedded therein (multiplexed image data) (Step S31). Then, the color correction unit305illustrated inFIG.3performs appropriate color correction for the multiplexed image data (Step S32). Then, the ink color conversion unit306, the density correction unit307, and the gradation conversion unit308illustrated inFIG.3convert the color-corrected image data into an ink color value, correct the density of the image data, and convert the density-corrected image data into a gradation value to generate print data (Step S33). The print data is transmitted to the print engine illustrated inFIG.3and the print engine gives each color ink to a print medium on the basis of the print data to generate the printed matter C. (1-5) Basic Firmware of Multiplexing Decoding Process FIG.14Ais a diagram illustrating the basic firmware configuration of the multiplexing decoding process according to this example and the multiplexing decoding process extracts the additional information embedded in the print image of the printed matter C. The image sensor202(seeFIG.2) according to this example includes an image capturing unit801and a color adjustment unit802. The additional information separation device203(seeFIG.2) according to this example includes a multiplexing position detection unit803, an additional information separation unit804, and an extracted data analysis unit805. In this example, quantified additional information data, such as text document data, audio data, or moving image data, is embedded in the print image of the printed matter C. In the following description, it is assumed that the same additional information is repeatedly embedded at each predetermined area in the entire print image of the printed matter C. (1-5-1) Image Capturing Unit The image capturing unit801captures the print image of the printed matter C using an imaging element of the image sensor202and converts the image into image data. FIG.14Bis a diagram illustrating a case in which the camera-equipped mobile terminal201captures the print image of the printed matter C. The image subjected to the multiplexing encoding process is printed in a print area902of a print medium901corresponding to the printed matter C. An area904is an area whose image is captured by an apparatus903corresponding to the camera-equipped mobile terminal201illustrated inFIG.2. The image capturing unit801captures the image of the area904in the print area902of the print medium901using the apparatus903. A CCD can be used as the imaging element of the image capturing unit801. The CCD senses light using a photodiode (light receiving element) and converts the light into a voltage. At that time, light can be converted into color data by, for example, an RGB or CMY color filter that is provided for each imaging element. A signal detected by the photodiode is transmitted to the color adjustment unit802. 
(1-5-2) Color Adjustment Unit In the color adjustment unit802, the output data of the photodiode in the image capturing unit801is converted into image data with RGB 8-bit per one pixel. Before the output data is converted into the image data, for example, an RGB color interpolation process is performed for the output data of the photodiode according to a light source during capturing. In a case where capturing is performed by, for example, a digital camera and the camera-equipped mobile terminal201, the interpolation process performs adjustment such that a captured image of a white object looks white. Since the image capturing unit801detects light which has been emitted from a light source, such as the sun or a light, and then reflected from the object using the photodiode, the color of the image varies depending on the light source. Therefore, the color adjustment unit802performs the interpolation process corresponding to the light source. As a general interpolation method, there is a method using Kelvin (K) which is the unit of quantification of the color of light represented by a color temperature indicating the color of a light source. In general, sunlight in the daytime is 5500 K and an incandescent lamp is 3000 K. In a case the color temperature is high, light looks blue. In a case the color temperature is low, light looks red. Therefore, the color of a captured image varies depending on the light source. In general, a digital camera, the camera-equipped mobile terminal201, and the like have a so-called auto white balance adjustment function which detects a color temperature using a sensor during capturing and automatically adjusts a white balance such that a captured image of a white object looks white. In addition, it is possible to manually adjust the white balance according to the light source such as sunlight or an incandescent lamp. The color adjustment unit802adjusts the white balance of the output data of the photodiode to generate image data. The image data is transmitted to the additional information separation device203. (1-5-3) Multiplexing Position Detection Unit The multiplexing position detection unit803receives the image data whose color has been adjusted by the color adjustment unit802and determines the frequency characteristics of the image data to detect the position (multiplexing position) where the additional information is embedded. FIG.15Ais a diagram illustrating a difference in frequency characteristics in a two-dimensional frequency domain. InFIG.15A, the horizontal axis indicates a frequency in the horizontal direction, the vertical axis indicates a frequency in the vertical direction, and the origin as the center indicates a direct-current component. As the distance from the origin increases, the frequency increases. In this example, the frequency characteristics are changed by the multiplexing process. For example, as described above, a large power spectrum is generated on a straight line1201illustrated inFIG.15Aby a change in the frequency characteristics in a case where the mask pattern illustrated inFIG.9Ais applied. In addition, a large power spectrum is generated on a straight line1202illustrated inFIG.15Aby a change in the frequency characteristics in a case where the mask pattern illustrated inFIG.9Bis applied. In a case where the additional information is separated, a frequency vector in which the large power spectrum is generated is detected in order to determine a multiplexed signal. 
Therefore, it is necessary to individually enhance and extract each frequency vector. To do so, it is possible to use high-pass filters (HPF) having the same frequency characteristics as the mask patterns illustrated inFIGS.12A and12B. A space filter corresponding to the mask pattern illustrated inFIG.12Acan enhance the frequency vector on the straight line1201illustrated inFIG.15A. A space filter corresponding to the mask pattern illustrated inFIG.12Bcan enhance the frequency vector on the straight line1202illustrated inFIG.15A. For example, it is assumed that a large power spectrum is generated on the frequency vector on the straight line1201illustrated inFIG.15Aby the quantization condition in which the mask pattern illustrated inFIG.12Ais applied. In this case, the amount of change in the power spectrum is amplified by the space filter corresponding to the mask pattern illustrated inFIG.12A, but is hardly amplified by the space filter corresponding to the mask pattern illustrated inFIG.12B. That is, in a case where filtering is performed by a plurality of space filters arranged in parallel, the power spectrum is amplified only by the space filter whose frequency vector is matched with that of the embedded pattern and is hardly amplified by the other filters. Therefore, the frequency vector on which a large power spectrum is generated can be determined by specifying the space filter amplifying the power spectrum. As such, the determination of the frequency characteristics makes it possible to extract the additional information. At that time, in a case where the extraction position of the additional information deviates, it is difficult to accurately extract the additional information. FIG.15Bis a diagram illustrating the print area of the printed matter C. A print medium1501as the printed matter C includes an area1502in which multiplexing is performed in each of a plurality of blocks and the additional information is embedded in the area1502divided into the blocks such that the area has specific frequency characteristics. FIGS.16A and16Bare diagrams illustrating the relationship between the multiplexed block and a frequency characteristic determination area. The print medium1501is multiplexed in four blocks. InFIG.16A, a determination area602for determining the frequency characteristics of each block deviates from the position of the block. InFIG.16B, a determination area603for determining the frequency characteristics of each block is aligned with the position of the block. InFIG.16B, it is possible to accurately determine the frequency characteristics in the determination area603. In contrast, inFIG.16A, in the determination area602, the power spectrum of a specific frequency vector is reduced and it is difficult to accurately determine the frequency characteristics. The multiplexing position detection unit803determines the frequency characteristics of each block in which the additional information has been multiplexed, using the space filter. At that time, it is necessary to specify the position of the block in which the additional information has been multiplexed. The position of the block in which the additional information has been multiplexed can be specified on the basis of the intensity of the power spectrum of a specific frequency vector.
Therefore, the multiplexing position detection unit803detects frequency characteristics in a captured image while shifting the frequency characteristic determination area for each block and determines the frequency characteristics to specify the position of the block in which the additional information has been multiplexed. (1-5-4) Additional Information Separation Unit The frequency characteristics of each block is determined on the basis of the position of the block detected by the multiplexing position detection unit803and the additional information separation unit804extracts the multiplexed additional information on the basis of the determination result of the frequency characteristics of each block. As illustrated inFIG.15B, in a case where the total number of blocks in which the additional information has been multiplexed is 96 blocks (8 blocks in the horizontal direction×12 blocks in the vertical direction), the additional information items “0” and “1” are embedded in each block by the multiplexing encoding process. The additional information to be embedded in each block is determined on the basis of the frequency vector of each block. That is, the additional information to be embedded in a block in which the frequency vector of the straight line1201illustrated inFIG.15Ais greater than a predetermined threshold value is determined to be “0”. In addition, the additional information to be embedded in a block in which the frequency vector of the straight line1202illustrated inFIG.15Ais greater than the predetermined threshold value is determined to be “1”. The frequency characteristic determination area is shifted in the unit of blocks on the basis of the position of the block detected by the multiplexing position detection unit803to determine the frequency characteristics of a total of 96 blocks inFIG.15B. Therefore, it is possible to extract the additional information embedded in each block. In this case, since 1-bit additional information “0” or “1” can be extracted from each block, it is possible to extract a total of 96 bits of data from a total of 96 blocks. As such, it is possible to extract the multiplexed additional information from a plurality of blocks by determining the frequency characteristics while shifting the frequency characteristics determination area. (1-5-5) Extracted Data Analysis Unit The extracted data analysis unit805analyzes the sequence of numbers which has been separated as the additional information by the additional information separation unit804and converts the sequence of numbers into the original format of the additional information before embedment. For example, a character code of text document data which is the additional information to be multiplexed is quantified into a “shift JIS” code in advance. For a 1-byte shift JIS code (half-width character), conversion (quantification) corresponding to a number or a character can be performed by a combination of upper 4 bits and lower 4 bits. For example, in a case where the upper 4 bits are “0100” and the lower 4 bits are “0001”, the sequence of numbers is determined to be “A”. As such, a conversion map is stored in advance and the conversion map and the sequence of numbers are associated with each other to convert the sequence of numbers into a character. For example, the sequence of numbers separated as the additional information is temporarily stored in the RAM206illustrated inFIG.2and a “shift JIS” conversion map can be stored in the secondary storage device207in advance such that it can be referred. 
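A minimal sketch of this conversion is shown below; it assumes single-byte codes, as in the "hello" example, so that each group of 8 bits (the upper 4 bits followed by the lower 4 bits) maps directly to one character, and it does not handle two-byte shift JIS characters. The function name is illustrative.

#include <cstddef>
#include <iostream>
#include <string>

/* Converts a sequence of '0' and '1' characters into text, 8 bits per character,
   most significant bit first (upper 4 bits, then lower 4 bits). */
std::string bitsToText(const std::string& bits) {
    std::string out;
    for (std::size_t i = 0; i + 8 <= bits.size(); i += 8) {
        int value = 0;
        for (int b = 0; b < 8; ++b) {
            value = (value << 1) | (bits[i + b] - '0');
        }
        out.push_back(static_cast<char>(value));
    }
    return out;
}

int main() {
    /* Prints "hello" for the sequence used in the example below. */
    std::cout << bitsToText("0110100001100101011011000110110001101111") << std::endl;
    return 0;
}

The worked example that follows applies exactly this correspondence.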
It is assumed that the sequence of numbers extracted as the additional information by the additional information separation unit804is "0110100001100101011011000110110001101111". In this case, the sequence of numbers is converted by the conversion map as follows. A combination of upper 4 bits "0110" and lower 4 bits "1000" is converted into a character "h". A combination of upper 4 bits "0110" and lower 4 bits "0101" is converted into a character "e". A combination of upper 4 bits "0110" and lower 4 bits "1100" is converted into a character "l". A combination of upper 4 bits "0110" and lower 4 bits "1100" is converted into a character "l". A combination of upper 4 bits "0110" and lower 4 bits "1111" is converted into a character "o". Therefore, the sequence of numbers is converted into the character string "hello". For example, the character string extracted as the additional information can be displayed on the display208illustrated inFIG.2. In addition, in a case where the extracted character string is a uniform resource locator (URL), the information processing apparatus may be connected to a network by the wireless LAN210illustrated inFIG.2and display a screen of the URL destination on the display208using a browser. In a case where the URL is a moving image site, a moving image may be displayed on the display208and a sound may be output from the speaker211.
(1-6) Multiplexing Decoding Process
FIG.17is a flowchart illustrating the multiplexing decoding process according to this example. First, the image sensor of the image capturing unit801illustrated inFIG.14Ain the camera-equipped mobile terminal201(seeFIG.2) captures the print image of the printed matter C (Step S81). The captured light is converted into color data and is then transmitted to the color adjustment unit802illustrated inFIG.14A. The color adjustment unit802adjusts the white balance of the output data from the photodiode to generate image data (Step S82). The generated image data is transmitted to the additional information separation device203illustrated inFIGS.2and8or is stored in the secondary storage device207illustrated inFIG.2. The multiplexing position detection unit803illustrated inFIG.14Adetects the multiplexing position on the basis of the image data whose white balance has been adjusted (Step S83), as described above. In Step S84, it is determined whether the position of the block in which the additional information has been multiplexed has been detected by the multiplexing position detection unit803. In a case where the position has been detected, the process proceeds to the next process for separating the additional information (Step S85). In a case where the position has not been detected, the process returns to Step S81. In Step S85, the additional information separation unit804illustrated inFIG.14Adetermines the frequency characteristics of each block on the basis of the image data generated by the color adjustment unit802and the position of the block detected by the multiplexing position detection unit803. Then, the additional information separation unit804extracts the multiplexed additional information as numerical data on the basis of the determination result. The extracted numerical data is transmitted to the extracted data analysis unit805illustrated inFIG.14A. Alternatively, the extracted numerical data is temporarily stored in the RAM206illustrated inFIG.2and is then notified to the extracted data analysis unit805illustrated inFIG.14A.
Then, the extracted data analysis unit805illustrated inFIG.14Aanalyzes the numerical data extracted as the additional information and converts the numerical data into the additional information such as characters (Step S86), as described above. In Step S87, it is determined whether or not the conversion of all of the extracted numerical data into the additional information by the extracted data analysis unit805has been completed. In a case where the conversion has been completed, the multiplexing decoding process illustrated inFIG.17ends. In a case where the conversion has not been completed, the process returns to Step S81. The additional information from which, for example, a character has been extracted can be displayed on the display208illustrated inFIG.2. In addition, it is possible to access the network on the basis of the additional information. In a case where the additional information has not been completely extracted from the printed matter C, it is considered that this is because only a portion of the area in which the additional information has been embedded is included in the captured area of the printed matter C. In this case, since only a portion of the additional information can be extracted, it is necessary to capture the image of the printed matter C again. For example, in order to determine whether the additional information can be extracted, a value indicating the amount of data of the additional information may be included in the additional information in advance and the amount of data of the additional information may be determined from the value. In order to determine whether data as the additional information relates to the amount of data or character data, for example, combinations of the sequences of numbers are determined in advance and several bits before and after the sequence of numbers is used as data related to the amount of data. In addition, in a case where only a portion of the additional information can be extracted, for example, only the extracted content may be stored in the secondary storage device207illustrated inFIG.2and the extracted additional information may be combined with a portion of the stored additional information by the subsequent process. As such, the additional information may be extracted a plurality of numbers of times. In addition, the additional information extracted a plurality of numbers of times may be sequentially displayed on, for example, the display208illustrated inFIG.2. (2) Characteristic Configuration In this embodiment, a characteristic configuration is added to the above-mentioned basic configuration of the multiplexing decoding processing unit. FIG.18is a diagram illustrating the multiplexing decoding processing unit characterized by this embodiment. In this configuration, the multiplexing decoding processing unit with the basic configuration illustrated inFIG.14Afurther includes a white balance setting unit1701. Specifically, the white balance setting unit1701is connected to the color adjustment unit802of the image sensor202. Next, the process of the white balance setting unit1701and the color adjustment unit802will be described. In order to capture an image of the printed matter C on which the image subjected to the multiplexing encoding process has been printed, a multiplexing decoding processing application installed in the camera-equipped mobile terminal201(seeFIG.2) is run to operate the image sensor202. 
In a case where an application is run in a general camera, the white balance is automatically adjusted by the auto white balance adjustment function. In this embodiment, the multiplexing decoding processing application adjusts the white balance, without depending on the auto white balance adjustment function. Hereinafter, white balance adjustment will be described and other processes will not be described since they have the same basic configuration as described above. (2-1) White Balance Setting Unit The white balance setting unit1701is a processing unit that is provided in the camera-equipped mobile terminal201and sets a white balance adjustment value in a case where an image of the printed matter C is captured. In a case where the printed matter C in which an image, in which a specific color is dominant, has been printed is captured and an “auto” (auto white balance adjustment) mode for automatically adjusting the white balance is set, there is a concern that the erroneous determination that “color fogging” has occurred will be made. In this case, the color of the captured image is changed. For example, a case in which a color component of a YUV-U component is modulated by mask data and the like, and the image of the printed matter C subjected to the multiplexing encoding process is captured is assumed. The RGB values of two pixels A1 and A2 in the captured image are as follows.Pixel A1 (R, G, B)=(100, 39, 254)Pixel A2 (R, G, B)=(100, 60, 148) It is assumed that pixels obtained by converting the RGB value of the two pixels A1 and A2 into YUV values are the following pixels B1 and B2.Pixel B1 (Y, U, V)=(82, 97, 13)Pixel B2 (Y, U, V)=(82, 37, 13) A difference in YUV-U between the pixel B1 and the pixel B2 is 60 (=97-37) and the additional information is determined on the basis of the difference between the U components. In a case where the “auto” mode for automatically adjusting the white balance (auto white balance adjustment) is set, it is determined that “red fogging” has occurred. As a result, the R values of the pixels A1 and A2 are decreased and the G and B values of the pixels A1 and A2 are increased. It is assumed that the pixels whose R, G, and B values have been changed are pixels C1 and C2. The RGB values of the pixels C1 and C2 are as follows.Pixel C1 (R, G, B)=(68, 58, 255)Pixel C2 (R, G, B)=(68, 90, 255) It is assumed that pixels obtained by converting the RGB values of the two pixels C1 and C2 into YUV values are the following pixels D1 and D2.Pixel D1 (Y, U, V)=(84, 96, −11)Pixel D2 (Y, U, V)=(102, 86, −24) A difference in YUV-U between the pixel D1 and the pixel D2 is 10 (=96-86) and is less than 60 that is the difference in YUV-U between the pixel B1 and the pixel B2. As such, in a case where the white balance is automatically adjusted (auto white balance adjustment), there is a concern that the amount of modulation of a necessary color will not be obtained in some pixels during the multiplexing decoding process. For this reason, in this embodiment, the white balance is adjusted during the multiplexing decoding process, on the basis of an adjustment value corresponding to the light source during the multiplexing decoding process which is assumed in advance during the multiplexing encoding process, so as to correspond to the light source. The adjustment value corresponding to the assumed light source may be set as a default value. 
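The U-value argument above can be checked numerically with the RGB-to-YUV conversion of Expression 3. The sketch below only reproduces the arithmetic of the example; because the quoted YUV values are rounded to integers, the printed differences may deviate from the quoted ones by about one.

#include <cstdio>

struct Yuv { double y, u, v; };

/* Expression 3: RGB to YUV. */
Yuv toYuv(double r, double g, double b) {
    Yuv out;
    out.y =  0.299 * r + 0.587 * g + 0.114 * b;
    out.u = -0.169 * r - 0.331 * g + 0.500 * b;
    out.v =  0.500 * r - 0.419 * g - 0.081 * b;
    return out;
}

int main() {
    const Yuv b1 = toYuv(100.0, 39.0, 254.0);  /* pixel B1, computed from pixel A1 */
    const Yuv b2 = toYuv(100.0, 60.0, 148.0);  /* pixel B2, computed from pixel A2 */
    const Yuv d1 = toYuv(68.0, 58.0, 255.0);   /* pixel D1, computed from pixel C1 */
    const Yuv d2 = toYuv(68.0, 90.0, 255.0);   /* pixel D2, computed from pixel C2 */
    std::printf("U difference before auto white balance: %.0f\n", b1.u - b2.u);  /* about 60 */
    std::printf("U difference after auto white balance:  %.0f\n", d1.u - d2.u);  /* about 10 to 11 */
    return 0;
}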
In general, in a case where an image is captured by a camera, the type of light source is automatically recognized and the white balance is automatically adjusted (auto white balance adjustment) according to the type of light source. In this embodiment, the auto white balance adjustment function is not used during white balance adjustment in the multiplexing decoding process. In general white balance adjustment, a color temperature which represents the color of light with temperature is used and a Kelvin value is used as the color temperature. For example, in a case where the light source is sunlight under the clear sky, the color temperature is set to 6500 Kelvin (K). In a case where the light source is sunlight at sunset, the color temperature is set to 3500 K. In general auto white balance adjustment, a light source is automatically estimated and the Kelvin value as the color temperature is set according to the light source. In white balance adjustment, the RGB values are corrected on the basis of the amount of gain corresponding to the set Kelvin value. In this embodiment, for example, it is assumed that the color temperature of the light source in a case where the image of the printed matter C is captured is 5500 Kelvin (K). When the printed matter C is subjected to the multiplexing encoding process, the white balance setting unit1701sets the color temperature to 5500 Kelvin (K). That is, the white balance is adjusted on the basis of the color temperature corresponding to the light source during the multiplexing decoding process which is assumed during the multiplexing encoding process. In general, the printed matter C subjected to the multiplexing encoding process is used in association with an application for performing the multiplexing decoding process for the printed matter C. In addition, light source information related to the light source used during the capture of the image of the printed matter C may be stored in the camera-equipped mobile terminal201performing the multiplexing decoding process in advance, and the white balance setting unit1701may set the color temperature using the stored light source information. The color temperature set by the white balance setting unit1701is transmitted to the color adjustment unit802illustrated inFIG.18. (2-2) Color Adjustment Unit The color adjustment unit802illustrated inFIG.18adjusts the gain of the RGB values of the captured image of the printed matter C on the basis of the color temperature received from the white balance setting unit1701. In recent years, as the camera-equipped mobile terminal201, there has been a mobile terminal having a function which enables the user to manually set the type of light source and adjusts the gain of the RGB values of a captured image according to the set type of light source to adjust the white balance. In this embodiment, in a case where the multiplexing decoding process is performed using the function of the camera-equipped mobile terminal201, it is possible to adjust the white balance on the basis of the color temperature set by the white balance setting unit1701. (2-3) Effect of This Embodiment In a case where the auto white balance adjustment function is used during the multiplexing decoding process, it may be difficult to read the additional information. 
In this embodiment, it is possible to reliably read the additional information by adjusting the white balance during the multiplexing decoding process, on the basis of the color temperature corresponding to the light source during the multiplexing decoding process which is assumed during the multiplexing encoding process. In the case of the camera-equipped mobile terminal201with a function of automatically adjusting the white balance (auto white balance adjustment function), the auto white balance adjustment function is turned off during the multiplexing decoding process. In a case where the image of the printed matter is captured with the auto white balance adjustment function turned off, it is possible to prevent a change in the color of the captured image and to stabilize an additional information reading operation. Second Embodiment In the first embodiment, it is possible to reliably read the additional information by adjusting the white balance during the capture of the image of the printed matter C so as to correspond to the light source during the multiplexing decoding process which is assumed during the multiplexing encoding process. In the first embodiment, a case where an image, in which a specific color is dominant, has been printed in the printed matter C is assumed. In this case, the white balance is not automatically adjusted (auto white balance adjustment is not performed). However, when the image of the printed matter C is captured, the capture of the image may be affected by both the color of the print image and a capturing environment. In this case, it is necessary to adjust the white balance according to the light source in the capturing environment. For example, in a case where the image of the printed matter C is captured under a light source that emits red light, the entire captured image looks red. In this case, there is a concern that the entire print image will remain red even in a case where the white balance is adjusted. For this reason, in this embodiment, in a case where the image of the printed matter C is captured in the multiplexing decoding process, the printed matter C is actively illuminated. In recent years, as the camera-equipped mobile terminal201, there has been a mobile terminal which has an LED light as a light source212illuminating an object, as illustrated inFIG.2. In general, the color temperature of the light source212is about 5500 Kelvin (K) and is set to a value between the color temperature of light from a white fluorescent lamp and the color temperature of sunlight under the clear sky. In addition, in recent years, as the camera-equipped mobile terminal201, there has been a mobile terminal which includes an LED light of a plurality of colors as the light source212. In this case, it is possible to control illumination light. Such a light source which is controlled by a light control function is assumed in the multiplexing encoding process in advance. In the multiplexing decoding process, it is possible to capture the image of the printed matter C under the assumed light source and to adjust the white balance according to the light source. In this case, as the color temperature corresponding to the assumed light source212, for example, 5500 Kelvin (K) may be set as the default value. In addition, the light source212provided in the camera-equipped mobile terminal201can be operated by the user at any time. When a camera mechanism in the camera-equipped mobile terminal201is used, it is possible automatically illuminate the object. 
As such, in this embodiment, during the multiplexing decoding process, the image of the printed matter C is captured while the printed matter C is actively illuminated by a predetermined light source. The predetermined light source is assumed in advance and the white balance is adjusted so as to correspond to the light source. In this way, it is possible to reduce the influence of a capturing environment. Therefore, it is possible to appropriately adjust the white balance and to reliably read the additional information. Other Embodiments In the above-described embodiments, the white balance is adjusted on the basis of the set value of the color temperature corresponding to the assumed light source. However, the invention is not limited thereto. For example, in a case where a light source assumed during the multiplexing encoding process has a given color temperature range, it is not necessary to uniquely set the color temperature of the light source. For example, the color temperature of the light source may be set in the range of 5000 Kelvin (K) to 6000 K. In recent years, as the camera-equipped mobile terminal201, there has been a mobile terminal which can designate the color temperature range of the light source in order to adjust the white balance during capturing. The white balance may be adjusted in the designated color temperature range of the light source. In addition, the version of software for performing the multiplexing encoding process is likely to be different. In this case, the configuration in which the color temperature of the light source is uniquely set as a white balance adjustment value does not have versatility. Therefore, the white balance adjustment value may be switched under a predetermined condition. For example, the color temperature as the adjustment value may be switched according to the version of the software for performing the multiplexing encoding process. In addition, in a case where the image of the printed matter C is captured, the white balance adjustment value may be switched for each version of the software for performing the multiplexing decoding process. Furthermore, for example, a marker may be read from the printed matter C, the version of the software for performing the multiplexing decoding process may be determined on the basis of the reading result, and the white balance adjustment value may be switched according to the version. In addition, the white balance adjustment value may be switched using the analysis information of the characteristics of the print image of the printed matter C. In the present invention, in order to capture an image in which the additional information has been embedded by color modulation and to extract the additional information from the captured image with high accuracy, the white balance of the captured image may be adjusted on the basis of an adjustment value which is associated with the image having the additional information embedded therein. The adjustment value may be a predetermined default value for the image having the additional information embedded therein (additional-information-embedded image) or may vary depending on the modulation conditions of a color component in the additional-information-embedded image. 
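As one possible realization of switching the adjustment value, a small lookup keyed by the version of the software for performing the multiplexing encoding process could be used; the version strings and color temperatures below are purely hypothetical and only illustrate the idea.

#include <map>
#include <string>

/* Returns the color temperature (in Kelvin) used for white balance adjustment.
   Unknown versions fall back to an assumed default of 5500 K. */
int adjustmentColorTemperature(const std::string& encoderVersion) {
    static const std::map<std::string, int> kTemperatureByVersion = {
        {"1.0", 5500},  /* hypothetical: version 1.0 assumes a 5500 K light source */
        {"2.0", 6000},  /* hypothetical: version 2.0 assumes a 6000 K light source */
    };
    const auto it = kTemperatureByVersion.find(encoderVersion);
    return (it != kTemperatureByVersion.end()) ? it->second : 5500;
}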
In addition, an allowable adjustment range of the white balance of the additional-information-embedded image may be set on the basis of the adjustment value associated with the image having the additional information embedded therein and the white balance may be automatically adjusted by an auto white balance adjustment function in the allowable adjustment range. Furthermore, the white balance of the additional-information-embedded image may be adjusted on the basis of a plurality of adjustment values that can be selected. In this case, the adjustment value can be selected on the basis of at least one of the version of an application for generating the image data in which at least a color component has been modulated according to the additional information and the analysis result of the characteristics of the additional-information-embedded image. The version of the application can be determined on the basis of, for example, information printed as a marker in the printed matter together with the additional-information-embedded image. The above-described embodiments are examples of the configuration for obtaining the effect of the present invention and structures that can obtain the same effect as described above using other similar methods or different parameters are also included in the scope of the present invention. In addition, the present invention can be applied to a system including a plurality of apparatuses (for example, a host computer, an interface device, a reader, and a printer) and an apparatus including one device (for example, a printer, a copier, and a facsimile). The object of the present invention can be achieved by the following configuration. First, a storage medium (or a printing medium) in which a program code of software for implementing the functions of the above-described embodiments has been stored is provided to a system or an apparatus. Then, a computer (a CPU or an MPU) of the system or the apparatus reads and executes the program code stored in the storage medium. In this case, the program code read from the storage medium implements the functions of the above-described embodiments, and the program code and the storage medium storing the program code form the present invention. In addition, the present invention is not limited to the configuration in which the computer executes the read program code to implement the functions of the above-described embodiments. For example, the present invention includes a case in which an operating system (OS) that operates in the computer executes some or all of the actual processes on the basis of an instruction from the program code and the functions of the above-described embodiments are implemented by the processes. In addition, the object of the present invention can be achieved by the following configuration. First, the program code read from the storage medium is written to a memory that is provided in a function expansion card inserted into the computer or a memory that is provided in a function expansion unit connected to the computer. Then, for example, a CPU that is provided in the function expansion card or the function expansion unit executes some or all of the actual processes on the basis of an instruction from the program code, and the functions of the above-described embodiments are implemented by the processes. 
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2017-126764, filed Jun. 28, 2017, which is hereby incorporated by reference herein in its entirety.
DETAILED DESCRIPTION The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations. The terms “component” and “element” or their plural form may be used interchangeably where combined with other words (e.g. “sensor”) and refer to the same element(s). The symbol “Ag/Cl” is intended to mean “Silver Chloride”. The acronym “CDS” is intended to mean “Correlated Double Sampling”. The acronym “DC” is intended to mean “Direct Current”. The acronym “FDMA” is intended to mean “Frequency-Division Multiple Access”. The acronym “GFET” is intended to mean “Graphene-based solution Gate Field Effect Transistor”. The acronym “OpAmp” is intended to mean “Operational Amplifier”. The acronym “TDMA” is intended to mean “Time Division Multiple Access”. The term “mobile device” may be used interchangeably with “portable device” and “device with wireless capabilities”. The following terms have the following meanings when used herein and in the appended claims. Terms not specifically defined herein have their art-recognized meaning. “SU-8” is a commonly used epoxy-based negative photoresist. A “lock-in amplifier” is a type of amplifier that can extract a signal with a known carrier wave from an extremely noisy environment. “Modulation” is the process of varying one or more properties of a periodic waveform, called the carrier signal, with a modulating signal that typically contains information to be transmitted. “Demodulation” is the process of extracting the original information-bearing signal from a carrier wave. “Multiplexing” is a method by which multiple analog or digital signals are combined into one signal over a shared medium. “Demultiplexing” is a method by which the original analog or digital signals are extracted from a combined (or multiplexed) signal containing the original signals. “Lift-off process” in microstructuring/microelectronics technology is a method of creating structures (i.e. patterning) of a target material on the surface of a substrate (e.g. a wafer) using a sacrificial material (e.g. Photoresist). It is an additive technique, as opposed to a more traditional subtractive technique such as etching. The scale of the structures can vary from the nanoscale up to the centimeter scale or further, but the structures are typically of micrometric dimensions. The disclosures and/or solutions hereof allow the multiplexing of sensors in sensor arrays while eliminating the need to use switching elements (as in Time Division Multiple Access (TDMA) solutions), which increase the complexity and cost of design and fabrication of sensor arrays, while introducing noise and artifacts that can seriously affect the accuracy of the sensor signals especially where very low voltage signals need to be multiplexed. As a consequence, the disclosures and/or solutions also allow the simplification of sensor array design by eliminating the need to add complex auxiliary circuits (e.g. for de-noising or for handling larger voltages), and can be used in modern portable devices where low energy consumption is a pre-requisite. FIG.2illustrates an exemplary sensor array with logic for multiplexing and demultiplexing sensor signals. Sensor array100is a bi-dimensional array of sensors101, where the electrical resistance of each sensor in the array is modulated by the signals to be measured102. 
By example, the sensor array may be an image sensing array like the one used in a digital camera. An artisan skilled in related art understands that the sensor array is not limited to this example but may be of any type, any shape, and contain sensor elements of any type making use of any technology available. In one aspect the sensing elements in the sensor array may all be of identical type and characteristics, while in another aspect the sensing elements may be of various types and characteristics. Each sensing element101has two output terminals, which terminals are connected to a row line103and a column line104, respectively. The arrangement of sensors101inside array100ensures that there is a unique combination of columns and rows associated with each individual sensor101. For example, the sensor highlighted with the numeral101inFIG.2is associated with the 1st column (counting from the left) and with the 2nd row (counting from the top). The same applies to all the other sensors in array100. Independent voltage sources105are connected to each column104of array100in order to stimulate continuous-time harmonic voltage waveforms106(Vi,j, where j is the column number in array100). The amplitudes, frequencies and phases of voltage waveforms Vi,j106can be selected independently so as to enable supplying a voltage waveform of precisely known characteristics to each column of array100. In one aspect each column is fed with a continuous-time harmonic voltage waveform106which has at least one of its three characteristics (amplitude, frequency and phase) different from the characteristics of the continuous-time harmonic voltage waveform106supplied to any other column of array100. Row lines103of array100are connected to the corresponding analog inputs of frontend circuits107. As a result, frontend circuits107can read the summation of currents108(Io,i, where i is the row number in array100) collected by each row line103in parallel. The principle of operation for the multiplexing and read-out of each signal to be measured102is as follows. For each sensor101of array100, the corresponding column harmonic voltage waveform Vi,j106is locally mixed at sensor101with the signal to be measured102by the variable-resistance behavior of sensor101. As a result, a modulated current waveform is permanently conducted by each sensor101, which modulated current waveform incorporates the harmonic components of the corresponding column voltage waveform Vi,j106modulated in amplitude by its particular signal to be measured102. All the individually modulated currents of each sensor101connected to the same row line103are summed in continuous time to generate the corresponding row output current Io,i108. Indeed, each row current waveform Io,i108contains the entire collection of harmonics used in every column voltage waveform Vi,j106, where each column voltage waveform Vi,j is modulated in amplitude by the individual signal to be measured102of the corresponding sensor101. The multiplexing mechanism of sensor array100is, in essence, a Frequency-Division Multiple Access (FDMA) mechanism. Frontend circuits107read in parallel all row output FDMA current waveforms Io,i108and de-multiplex each individual sensor signal to be measured102by lock-in demodulation according to the amplitudes, frequencies and phases employed in harmonic voltage waveform Vi,j106of each column line104of the entire array100. 
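The column-carrier mixing and lock-in demultiplexing described above can be illustrated numerically. The following Python/NumPy sketch is an idealized model of one row: each column carrier is amplitude-modulated by its sensor's slowly varying measured signal, the row current is their continuous-time sum, and each measured signal is recovered by multiplying with the known carrier and low-pass filtering. The sample rate, carrier frequencies, signal shapes, and the crude moving-average filter are illustrative assumptions, not values from the specification.

import numpy as np

fs, duration = 100_000, 1.0                      # assumed sample rate (Hz) and duration (s)
t = np.arange(0, duration, 1 / fs)
column_freqs = [1000, 1500, 2000]                # one carrier frequency per column (Hz), assumed spacing
measured = [0.10 * np.sin(2 * np.pi * 2 * t),    # slow "signals to be measured", one per sensor in the row
            0.20 * np.sin(2 * np.pi * 3 * t),
            0.05 * np.sin(2 * np.pi * 5 * t)]

# Each sensor mixes its column carrier with its own measured signal (amplitude modulation);
# the row line sums all modulated currents in continuous time (the FDMA row output Io,i).
row_current = sum(np.cos(2 * np.pi * f * t) * (1.0 + s)
                  for f, s in zip(column_freqs, measured))

def lock_in(row, f_carrier):
    """Recover one sensor signal: mix with the known carrier, low-pass, remove the DC term."""
    mixed = 2.0 * row * np.cos(2 * np.pi * f_carrier * t)
    window = int(fs / 50)                        # crude ~50 Hz moving-average low-pass filter
    kernel = np.ones(window) / window
    return np.convolve(mixed, kernel, mode="same") - 1.0

recovered = [lock_in(row_current, f) for f in column_freqs]   # approximates the measured signals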
FIG.3illustrates an exemplary implementation of a sensor array with logic subtracting common signal components from row output currents for demultiplexing measured signals. Sensor array100, sensors101, and voltage sources105used for signal modulation inFIG.3are identical and identically interconnected as those inFIG.2. In the alternative exemplary implementation ofFIG.3, the amplitudes of the signals to be measured102by sensors101are very weak compared to the amplitudes of column harmonic voltage waveforms Vi,j106. For this reason frontend circuits107are designed to subtract201all the common signal components Ic202(that have not been modulated by individual signals102at each sensor101) from every row output FDMA current Io,i108before the de-multiplexing process. This subtraction operation prior to the demultiplexing of row output FDMA current Io,i108allows the removal of noise from Io,i and alleviates the need for complex circuits (e.g. bandpass filters, etc.) inside frontend circuits107. The parallel read-out of the row output FDMA current waveforms Io,i108can be performed in different ways by frontend circuits107. FIG.4illustrates an exemplary implementation of a sensor array with logic using constant voltages for demultiplexing measured signals. Sensor array100, sensors101, and voltage sources105used for signal modulation inFIG.4are identical and identically interconnected as those inFIG.2. In the exemplary implementation ofFIG.4, a constant voltage301is applied at each row of array100. For any row line103, current Io,I108demanded by all sensors101in row line103is drained or sourced through frontend circuits107. In one aspect, constant voltage301applied to each row of103of array100is the same for all rows103. In another aspect, a first constant voltage is applied to the first row, a second constant voltage is applied to the second row, and so on. FIG.5illustrates an exemplary implementation of a sensor array with logic using capacitors for demultiplexing measured signals. Sensor array100, sensors101, and voltage sources105used for signal modulation inFIG.5are identical and identically interconnected as those inFIG.2. In the exemplary implementation ofFIG.5a capacitor401is connected to each row of array100, so that the FDMA current summation Io,I108is integrated and converted into an output row voltage signal Vo,I402that is measured by frontend circuits107. In one aspect, capacitors401of the same type and characteristics are connected to each row103of array100. In another aspect, a first capacitor of a first type and first characteristics is connected to the first row, a second capacitor of a second type and second characteristics is connected to the second row, and so on. Concerning column harmonic voltage waveforms106, several configuration profiles can be chosen for their amplitude, frequency and phase. In one exemplary implementation, regarding the amplitude of column harmonic voltage waveforms106, high-amplitude levels are selected for allowing more output signal gain at row FDMA output current Io,I108for the same input signal to be measured102. However, high amplitude voltages bring the penalty of having more power consumption dissipated in array100. This is a serious problem as battery levels in portable devices may drop faster than is acceptable, electronics in the devices hosting array100and in array100, itself, get heated up resulting in damage over time and risk of burnout. 
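Continuing the previous sketch (and reusing its variables, which remain illustrative), the subtraction ofFIG.3can be modeled by removing the unmodulated carrier sum, i.e. the common components Ic, from the row current before lock-in demodulation; only the weak, signal-dependent part then remains.

# Ic: the row current that would flow if every measured signal were zero.
common_component = sum(np.cos(2 * np.pi * f * t) for f in column_freqs)
residual = row_current - common_component        # only the modulated (signal-dependent) part remains

window = int(fs / 50)
kernel = np.ones(window) / window
recovered_weak = [np.convolve(2.0 * residual * np.cos(2 * np.pi * f * t), kernel, mode="same")
                  for f in column_freqs]         # no DC term to remove after the subtraction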
If high amplitude voltages are used, electronics in array100must be designed to withstand higher voltage, resulting in higher complexity and design and manufacturing costs, while at the same time noise may be introduced in the multiplexed modulated currents Io,I108, which, in turn, necessitates the use of additional denoising logic, which increases complexity, and costs, and limits the amount of sensor elements101that can be packed per cm2of silicon chip real estate. In another exemplary implementation, the amplitude parameter may be used for disabling all sensors101connected to a given column line104by simply keeping that column voltage Vi,j106at a fixed Direct Current (DC) value. Different frequency values may be allocated for each column harmonic voltage waveform Vi,j106in order to ensure proper FDMA operation, with enough channel frequency spacing to avoid crosstalk between the spectra of the sensor signals to be measured102. As for the phase of each column harmonic voltage waveform Vi,j106, 90-degree orthogonal configurations with in-phase (I) and quadrature (Q) separated in two column waveforms106may be selected for reusing the same frequency voltage waveform. In yet another exemplary implementation, a special case of the above configuration profiles for column harmonic voltage waveforms Vi,j106may be used. This exemplary implementation uses the differential reading between two signals to be measured102of the same row103by using the same amplitude, the same frequency and inverting the phase of one voltage waveform Vi,j106when stimulating two respective columns104. Sensor Element Design and Manufacturing Sensor array100may be designed to contain any type of sensor elements according to the desired sensing application and to the choice of sensing principle of operation and sensing technology. By example and without limiting the scope of use or of implementation of the present exemplary developments, a Graphene-based solution Gate Field Effect Transistor (GFET) device may be used. GFETs may be used to obtain a current waveform modulated by a signal applied at the gate terminal (i.e. an electrode immersed in an electrolyte), thereby, allowing the GFET to operate as mixer. FIG.6shows a schematic representation of a GFET device. Such a GFET device500may be fabricated by a series of steps, commencing with depositing a flexible polyimide substrate polymer layer510on a sacrificial silicon wafer505(i.e. a silicon wafer that will be removed, e.g. by an etching or other process). To improve the resistivity of the contacts of sensor500, a first layer of metallic tracks523,533is defined by a lift-off technique. Then a single graphene layer540is transferred from a growth substrate onto polyimide substrate layer510. In order to define the shape of the transistor channel, graphene layer540is etched by oxygen plasma. A second metal layer is deposited and defined into tracks525,535by lift-off techniques. Metal tracks523and525are in direct contact with each other and form the transistor's source520, while metal tracks533,535are in direct contact with each other and form the transistor's drain530. External source527and drain537electrodes may also be added in exemplary implementations. The active area on the transistor channel is defined by an SU-8 insulation layer defined into two insulator parts550,560deposited on the transistor's source520and drain530. 
GFET fabrication is finished by defining polyimide substrate510by reactive ion etching, depositing an electrolyte570on the transistor's side opposite substrate510, and positioning an Ag/Cl electrode580inside electrolyte570. Electrode580forms the base of transistor500. FIG.7shows an exploded view of the fabrication layers of a GFET device. GFET device600is made of a polymer substrate layer610, a layer of metal contacts620forming source520and drain530, and an insulation layer640. In between metal contacts620is positioned graphene channel630. Example GFET Connection for Lock-in Demodulation FIG.8shows an electronic circuit diagram example for demodulating modulated signals from a GFET transistor device. The demodulation circuit contains, among other components, GFET device710, and lock-in amplifier720. Lock-in amplifier720is composed of an Operational Amplifier (OpAmp)730whose positive input733is grounded, and negative feedback is achieved by connecting OpAmp's730output740to OpAmp's730negative input736via a resistor745. OpAmp's730output740is also connected to a mixer/multiplier device750. Voltage carrier source device770produces a carrier signal. Voltage carrier source device770is grounded at one end and connected at the other end to GFET's710source terminal717and to a phase shifter device760. Phase shifter device760outputs an in-phase (I) signal763(i.e. the carrier signal) and a quadrature (Q) signal766(i.e. the carrier signal shifted by 90°). Both (I) and (Q) signals are fed to mixer/multiplier device750, which mixer/multiplier device750produces an “X” signal (at 0°)756and a “Y” signal (at 90°)753. The “X” signal (at 0°)756is fed to a LowPass Filter (LPF)780which LPF780rejects high frequencies from its input signal and produces output signal790. GFET device710has its source terminal717fed with the carrier signal from voltage carrier source device770, and its electrolytic gate715fed with the signal to be measured719through an Ag/Cl electrode. GFET710modulates the carrier signal (that is applied to its source terminal717) with the signal to be measured (that is applied to its electrolytic gate terminal715) and outputs the modulated signal (via its drain terminal713, which is virtually grounded) to OpAmp's730negative input736. Output signal790is substantially identical to the signal to be measured719. In an example and in no way limiting the scope of the present disclosures, developments and/or solutions, a voltage carrier signal may be 50 mVrms at 100 kHz and a signal to be measured may be 100 μVrms at 10 Hz. In an alternative exemplary implementation, parts of the multiplexing and/or demultiplexing logic may be implemented in software in combination with Analogue-to-Digital (A/D) and Digital-to-Analogue (D/A) circuit elements. The software implementing these functions may be written in any high-level, low-level, or intermediate programming language. Compared to known sensor array signal demultiplexing solutions, the present developments do not require any switching elements in array100to perform sensor multiplexing. Also, the generation of artifacts is strongly reduced by its continuous-time waveform operation compared to discrete-time TDMA scanning techniques. Moreover, the use of frequency and phase lock-in demodulation during FDMA de-multiplexing strongly reduces the equivalent noise bandwidth of the overall read-out system. 
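Using the example values quoted above (a 50 mVrms carrier at 100 kHz and a 100 μVrms signal at 10 Hz), a single-channel lock-in can be sketched as follows. The modulation-depth normalization, sample rate, and low-pass cutoff are assumptions made only to keep the toy model simple; the GFET is idealized here as a perfect multiplier.

import numpy as np

fs, duration = 1_000_000, 0.2                    # assumed sample rate (Hz) and duration (s)
t = np.arange(0, duration, 1 / fs)
f_carrier, f_signal = 100e3, 10.0                # example frequencies from the text
carrier = np.sqrt(2) * 50e-3 * np.cos(2 * np.pi * f_carrier * t)    # 50 mVrms at 100 kHz
signal = np.sqrt(2) * 100e-6 * np.cos(2 * np.pi * f_signal * t)     # 100 uVrms at 10 Hz

# Idealized GFET mixer: drain current proportional to the carrier modulated by the gate signal.
drain = carrier * (1.0 + signal / 50e-3)         # hypothetical modulation-depth normalization

i_ref = np.cos(2 * np.pi * f_carrier * t)        # in-phase (I) reference
q_ref = np.sin(2 * np.pi * f_carrier * t)        # quadrature (Q) reference
window = int(fs / 1000)                          # ~1 kHz moving-average low-pass filter
kernel = np.ones(window) / window
X = np.convolve(2.0 * drain * i_ref, kernel, mode="same")   # follows the 10 Hz signal plus a constant offset
Y = np.convolve(2.0 * drain * q_ref, kernel, mode="same")   # ~0 when the reference is phase-aligned with the carrier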
Furthermore, the low-frequency noise contributions added by read-out circuits107can be filtered out through the appropriate selection of frequencies for each column line104of array100. The result of these improvements are reflected onto lower complexity, improved performance, lower design and manufacturing costs, and higher density of sensor elements per cm2of silicon chip. The above exemplary implementation descriptions are simplified and do not include hardware and software elements that are used in the implementations but are not part of the current developments, are not needed for the understanding of the implementations and/or developments, and are obvious to any user of ordinary skill in related art. Furthermore, variations of the described method, system architecture, and software architecture are possible, where, for instance, method steps, and hardware and software elements may be rearranged, omitted, or added. Various implementations of the developments are described above in the Detailed Description. While these descriptions directly describe the above implementations, it is understood that those skilled in the art may conceive modifications and/or variations to the specific implementations shown and described herein. Any such modifications or variations that fall within the purview of this description are intended to be included therein as well. Unless specifically noted, the words and phrases in the specification and claims are to be given the ordinary and accustomed meanings to those of ordinary skill in the applicable art(s). The foregoing description of preferred implementations and best mode of the developments known to the applicant at this time of filing the application have been presented and are intended for the purposes of illustration and description. It is not intended to be exhaustive or limit the developments to the precise form disclosed and many modifications and variations are possible in the light of the above teachings. The implementations were chosen and described in order to best explain the principles of the developments hereof and relative practical applications and to enable others skilled in the art to best utilize the developments hereof in various implementations and with various modifications as are suited to the particular use contemplated. Therefore, it is intended that the developments not be limited to the particular implementations disclosed for carrying out these developments, but that the developments hereof will include all implementations falling within the scope of the appended claims. In one or more exemplary implementations, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer or any other device or apparatus operating as a computer. 
Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The previous description of the disclosed exemplary implementations is provided to enable any person skilled in the art to make or use the present developments. Various modifications to these exemplary implementations will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the developments. Thus, the present developments are not intended to be limited to the implementations shown herein but are to be accorded the widest scope consistent with the principles and novel features disclosed herein.
MODE FOR CARRYING OUT THE INVENTION Embodiments of the present technology are described below with reference to the drawings. Note that the description is given in the order below.1. First Embodiment: Pixel structure and drive control thereof2. Second embodiment: Another drive control3. Third embodiment: Another drive control4. Fourth embodiment: Another drive control5. Fifth embodiment: Configuration including correction processing6. Sixth embodiment: Read function of pixel7. Seventh embodiment: Another pixel structure and read function thereof8. Variation9. Configuration of electronic equipment10. Example of use of solid-state imaging element11. Application examples to mobile objects12. Application example to endoscopic surgery system 1. First Embodiment (Configuration Example of Solid-State Imaging Element) FIG.1is a block diagram illustrating an example of a configuration of an embodiment of a solid-state imaging element to which the present technology has been applied. A solid-state imaging element10ofFIG.1is configured as, for example, a CMOS image sensor using complementary metal oxide semiconductor (CMOS). The solid-state imaging element10takes in incident light (image light) from a subject through an optical lens system (not illustrated), converts the light amount of the incident light formed on an imaging surface into an electric signal on a pixel-by-pixel basis, and outputs the electric signal as a pixel signal. InFIG.1, the solid-state imaging element10includes a pixel array unit11, a vertical drive circuit12, a column signal processing circuit13, a horizontal drive circuit14, an output circuit15, a control circuit16, and an input/output terminal17. In the pixel array unit11, a plurality of pixels100is arranged in a two-dimensional form (matrix form). The vertical drive circuit12is configured by, for example, a shift register, selects a predetermined pixel drive line21, supplies a drive signal (pulse) for driving the pixels100to the selected pixel drive line21, and drives the pixels100in the unit of rows. That is, the vertical drive circuit12sequentially selectively scans each pixel100of the pixel array unit11in the unit of rows in the vertical direction, and supplies a pixel signal based on electric charges (signal charges) generated corresponding to the received light amount in the photodiode (photoelectric conversion unit) of each pixel100to the column signal processing circuit13through a vertical signal line22. The column signal processing circuit13is arranged for each column of the pixels100, and performs signal processing such as noise removal on the signals output from the pixels100of one row with respect to each pixel column. For example, the column signal processing circuit13performs signal processing such as correlated double sampling (CDS) for removing fixed pattern noise peculiar to pixels and analog digital (AD) conversion. The horizontal drive circuit14includes, for example, a shift register, sequentially outputs horizontal scanning pulses to sequentially select each of the column signal processing circuits13, and causes each of the column signal processing circuits13to output a pixel signal to a horizontal signal line23. The output circuit15performs signal processing on the signals sequentially supplied from each of the column signal processing circuits13through the horizontal signal line23, and outputs the processed signals. 
Note that the output circuit15can be, for example, only buffered, or can be subjected to black level adjustment, column variation correction, various digital signal processing, and the like. The control circuit16controls the operation of each unit of the solid-state imaging element10. Furthermore, the control circuit16generates a clock signal or a control signal that serves as a reference for operations of the vertical drive circuit12, the column signal processing circuit13, the horizontal drive circuit14, and the like on the basis of a vertical synchronization signal, a horizontal synchronization signal, and a master clock signal. The control circuit16outputs the generated clock signal or control signal to the vertical drive circuit12, the column signal processing circuit13, the horizontal drive circuit14, and the like. The input/output terminal17exchanges signals with the outside. The solid-state imaging element10configured as described above is a CMOS image sensor that employs a system called a column AD system in which the column signal processing circuit13that performs the CDS processing and the AD conversion processing is arranged for each pixel column. Furthermore, the solid-state imaging element10can be, for example, a backside illumination-type CMOS image sensor. (Example of Pixel Structure) FIG.2is a view illustrating an example of a structure of a dual PD-type pixel. A ofFIG.2illustrates a plan view of pixels100in two rows and two columns (2×2) arranged in a predetermined imaging region when viewed from the light incident side among the plurality of pixels100arranged in the pixel array unit11. Furthermore, B ofFIG.2illustrates a part of the X-X′ cross-section of the pixel100illustrated in A ofFIG.2. As illustrated inFIG.2, the pixel100includes a structure in which a photodiode112A and a photodiode112B are provided in one on-chip lens111(hereinafter, also referred to as a dual PD-type structure). Note that the dual PD-type pixel100can be said to be a pixel portion including a left pixel100A having a left photodiode112A and a right pixel100B having a right photodiode112B (first pixel portion or second pixel portion). Furthermore, the on-chip lens is also called a microlens. In the dual PD-type pixel100, a pixel signal (A+B signal) generated by summing the electric charges accumulated in the photodiodes112A and112B is used as a signal for image acquisition and the pixel signal (A signal) obtained from the electric charges accumulated in the photodiode112A and the pixel signal (B signal) obtained from the electric charges accumulated in the photodiode112B can be independently read and used as a signal for phase difference detection. As described above, the pixel100has a dual PD-type structure, and can be used for both purposes: a pixel for image acquisition (hereinafter referred to as an image acquisition pixel) and a pixel for phase difference detection (hereinafter referred to as a phase difference detection pixel). Note that, although details will be described later, even a pixel signal obtained from a phase difference detection pixel can be used as a signal for image acquisition by being subjected to correction processing. Furthermore, as illustrated in the cross-sectional view of B ofFIG.2, the pixel100includes a color filter113below the on-chip lens111, and is configured as an R pixel100, a G pixel100, or a B pixel100depending on the wavelength component transmitted by the color filter113. 
Note that the R pixel100is a pixel that generates an electric charge corresponding to a red (R) component light from the light that has passed through an R color filter113that transmits a red (R: Red) wavelength component. Furthermore, the G pixel100is a pixel that generates an electric charge corresponding to a green (G) component light from the light that has passed through a G color filter113that transmits a green (G: Green) wavelength component. Moreover, the B pixel100is a pixel that generates an electric charge corresponding to a blue (B) component light from the light that has passed through a B color filter113that transmits a blue (B: Blue) wavelength component. In the pixel array unit11, the R pixels100, the G pixels100, and the B pixels100can be arranged in an arrangement pattern such as a Bayer arrangement. For example, in the plan view of A ofFIG.2, among the 2×2 pixels100, the upper left and lower right are G pixels100, the lower left is the R pixel100, and the upper right is the B pixel100. By the way, as a structure of the phase difference detection pixel, there is a shield-type structure. The shield-type pixel includes a structure in which a light-shielding portion including a metal such as tungsten (W) or aluminium (Al) is provided under the on-chip lens and this light-shielding portion shields the light for the left side region or the right side region when viewed from the light incident side. Then, in the pixel array unit, by disposing the left light-shielding pixels and the right light-shielding pixels having such a structure in a scattered manner, a left light-shielding pixel signal and a right light-shielding pixel signal are obtained as signals for phase difference detection. Here, in the dual PD-type pixel illustrated inFIG.2, in a case where either one of the electric charge accumulated in the photodiode112A and the electric charge accumulated in the photodiode112B is independently read, a pixel signal similar to the shield-type pixel can be obtained. That is, the pixel signal (A signal) corresponding to the right light-shielding pixel signal is obtained from the electric charge generated by the photodiode112A, and the pixel signal (B signal) corresponding to the left light-shielding pixel signal is obtained from the electric charge generated by the photodiode112B. In the present technology, by utilizing the feature of such a dual PD-type pixel, some of the pixels100arranged in the pixel array unit11are configured to independently read at least one electric charge of the electric charge accumulated in the photodiode112A or the electric charge accumulated in the photodiode112B, and thus, as compared with the case where the reading configuration disclosed in Patent Document 1 described above is adopted, lower power consumption can be achieved. Specifically, with the configuration disclosed in Patent Document 1 described above, in order to obtain a signal for phase difference detection, in part, the A signal and the B signal from the photodiodes A and B need to be separately read (reading twice). However, with the configuration of the present technology, one-time reading of the photodiodes112A and112B suffices, and thus it is possible to achieve lower power consumption because of the reduced number of times of reading. However, the dual PD-type pixel has improved performance in low illuminance as compared with the shield-type pixel, but the accuracy of phase difference detection is lower than that in high illuminance. 
Therefore, for example, it is necessary to enable the signals for phase difference detection to be eventually obtained in all the pixels arranged in the pixel array unit11in association with a gain set in the solid-state imaging element10. For example, as illustrated inFIG.3, in a case where the gain set in the solid-state imaging element10is smaller than a first threshold value Th1, a drive control A is selected as a drive control method, and among all the pixels arranged in the pixel array unit11, one of the photodiode112A and the photodiode112B is independently read in 3% of the pixels100. At this time, 3% of the pixels100are used as the phase difference detection pixels, and in the remaining 97% of the pixels100, both the photodiodes112A and112B are read and used as the image acquisition pixels. Furthermore, for example, as illustrated inFIG.3, in a case where the gain is larger than the first threshold value Th1 and smaller than a second threshold value Th2, a drive control B is selected, and among all the pixels arranged in the pixel array unit11, one of the photodiode112A and the photodiode112B is independently read in 6% of the pixels100. At this time, 6% of the pixels100are used as the phase difference detection pixels, and the remaining 94% of the pixels100are used as the image acquisition pixels. Moreover, for example, as illustrated inFIG.3, in a case where the gain is larger than the second threshold value Th2, a drive control C is selected, and in all the pixels arranged in the pixel array unit11, the photodiodes112A and112B are read and signals for phase difference detection and signals for image acquisition are obtained. At this time, all the pixels (100% of the pixels) are used as the phase difference detection pixels and the image acquisition pixels. Here, the gain set in the solid-state imaging element10is determined by detecting the average luminance on the basis of the output signal output from the solid-state imaging element10. That is, the gain has a smaller value as the illuminance in the imaging region of the pixel array unit11increases. For example, when the illuminance is 1.25, 10, 20, 80 lux (1×), the gains are set to 60, 42, 36, and 24 dB, respectively, and when the illuminance exceeds 1280 lux (1×), the gain is set to 0 dB. As described above, in the present technology, threshold value determination with respect to the gain set in the solid-state imaging element10is performed, and on the basis of results of the determination, among the pixels100arranged in the pixel array unit11, in a predetermined density (for example, 3%, 6%, or the like) of the pixels100, one of the photodiode112A and the photodiode112B is set to an independently read target. Note that, for example, in a case where the gain is larger than the second threshold value Th2, the photodiodes112A and112B are read in all the pixels (100% of the pixels), but at normal illuminance, the gain will not be larger than the second threshold value Th2 (the second threshold value Th2 is set accordingly), the photodiodes112A and112B will not be read in all the pixels. That is, under normal illuminance conditions (for example, illuminance of 10 lux or more or 20 lux or more), the photodiode112only needs to be read once to obtain a signal for phase difference detection, and thus lower power consumption can be achieved. 
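A compact way to express the threshold determination described above is sketched below in Python. The specific threshold values in decibels are placeholders (the specification does not tie the thresholds to particular numbers), and the densities correspond to the 3%, 6%, and 100% examples.

def select_drive_control(gain_db, th1_db=30.0, th2_db=48.0):
    """Return (drive control, phase-difference-detection pixel density) for a given gain.

    th1_db and th2_db are hypothetical values for the first and second threshold values.
    """
    if gain_db < th1_db:
        return "A", 0.03       # 3% of pixels read one photodiode independently
    if gain_db < th2_db:
        return "B", 0.06       # 6% of pixels used as phase difference detection pixels
    return "C", 1.00           # all pixels read both photodiodes 112A and 112B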
Furthermore, as illustrated inFIG.3, for example, comparing the drive control C with the drive control A and B, the power consumption is increased by 50%, but under normal illuminance conditions (e.g., illuminance of 10 lux or more or 20 or more), lower power consumption can be achieved. Moreover, as illustrated inFIG.3, maximum frame rates in a case where driving by the drive control A, the drive control B, and the drive control C is performed are 1000 fps, 1000 fps, and 500 fps, respectively. However, for example, in a case where driving by the drive control A and the drive control B is performed, as compared with the case where driving by the drive control C is performed, the frame rate can be increased and thus slow motion of 1000 fps or the like can be achieved. Note that, inFIG.3, an example is illustrated in which two threshold values (first threshold value Th1 and second threshold value Th2) are set and the pixels100arranged in the pixel array unit11are driven by the drive control methods in three stages (drive control A, B, C), but the number of threshold values is not limited thereto, and, for example, one threshold value may be provided and drive control may be performed in two stages or three or more threshold values may be provided and drive control may be performed in four or more stages. Moreover, the density of the phase difference detection pixels such as 3% and 6% is an example, and an arbitrary density (for example, a density that increases according to the gain) can be set for each drive control method at each stage. Furthermore, in the above description, an example is illustrated in which in a case where the pixel100is used as a phase difference detection pixel, one of the photodiode112A and the photodiode112B is independently read, but the photodiode112A and the photodiode112B may be independently read. For example, in the drive control B illustrated inFIG.3, in a case where the density of the phase difference detection pixels is low such as when the density of the pixels100used as the phase difference detection pixels is 6%, the photodiode112A and the photodiode112B may be independently read and the pixel signals (A signal and B signal) may be obtained. However, these reading methods can be changed for each drive control method at each stage. Furthermore, in a case where imaging is performed a plurality of times, a hysteresis may be provided for the threshold value, and for example, in comparing the set gain and the threshold value, even when the threshold value is changed a bit at a stage where the gain first exceeds the threshold value and the gain obtained thereafter varies to some extent, the gain may be kept above the threshold value. By providing such hysteresis, it is possible to prevent excessive switching of the distance measurement methods. (Performance Difference Between Types of Pixel) FIG.4illustrates a graph illustrating evaluation results of the performance for each type of pixel. InFIG.4, the horizontal axis represents gain (unit: dB), and the value increases from the left side to the right side in the drawing. Furthermore, the vertical axis represents variation (σ) (unit: μm) in phase difference detection, meaning that the variation increases toward the upper side in the drawing. InFIG.4, a curve C1 illustrates the characteristics of a dual PD-type pixel, and a curve C2 illustrates the characteristics of a shield-type pixel. 
Here, for example, focusing on a high illuminance region where the illuminance is greater than 80 lux (1×), under such high illuminance conditions, in the dual PD-type pixel and the shield-type pixel, the values of the variation (σ) for phase difference detection are almost the same, and almost the same performance can be obtained. On the other hand, for example, focusing on a low illuminance region near 10 lux (1×), under such low illuminance conditions, the value of the variation (σ) for phase difference detection is larger in the shield-type pixel than in the dual PD-type pixel, and therefore the performance is low. As described above, under high illuminance conditions, AF performance (performance of phase difference detection) does not deteriorate even with a pixel structure having a low density such as a shield-type pixel. Then, in the present technology, by utilizing the characteristics of such a pixel, under a high illuminance condition, some of the pixels100arranged in the pixel array unit11are configured to independently read at least one electric charge of the electric charge accumulated in the photodiode112A or the electric charge accumulated in the photodiode112B. Therefore, with the configuration of the present technology, even in a case where the dual PD-type pixel is used, the photodiodes112A and112B need only be read once to obtain the signal for phase difference detection. Therefore, it is possible to reduce power consumption under high illuminance without reducing the distance measurement performance. Furthermore, since it is not necessary to separately read the pixel signals (A signal and B signal) from the photodiodes112A and112B (read twice), as compared with the case where the reading configuration disclosed in Patent Document 1 described above is adopted, it is possible to achieve higher speed. In other words, with the configuration of the present technology, in the case of low illuminance, signals for phase difference detection are detected in all the pixels100, and in the case of high illuminance, signals for phase difference detection are detected in the discrete pixels100. Therefore, even if the illuminance is low, the distance measurement performance can be maintained, and in the case of the high illuminance, the power consumption can be suppressed without deteriorating the distance measurement performance as compared with the case where the signals for phase difference detection are obtained in all the pixels. 2. Second Embodiment By the way, since the accuracy of the phase difference detection depends on shot noise, the drive control of the pixel100may be linked with not only the gain, but also the luminance level. Therefore, in the second embodiment, the driving of the pixels100arranged in the pixel array unit11is controlled on the basis of the gain set in the solid-state imaging element10and the luminance level. (Configuration Example of the Imaging Apparatus) FIG.5is a block diagram illustrating an example of a configuration of the imaging apparatus according to the second embodiment. InFIG.5, an imaging apparatus1A includes the solid-state imaging element10(FIG.1) and a control unit200A. The control unit200A includes, for example, a control circuit such as a microcontroller. The control unit200A includes a drive control unit211, an AE unit212, and a luminance level detection unit213. The AE unit212performs processing related to the auto exposure (AE) function on the basis of the output signal output from the solid-state imaging element10. 
For example, the AE unit212detects the average luminance on the basis of the output signal from the solid-state imaging element10, and determines the gain according to the detection result. The AE unit212supplies the determined gain to the solid-state imaging element10and the drive control unit211. Note that this gain is, for example, for controlling the shutter speed of the solid-state imaging element10and can be said to be exposure information. The luminance level detection unit213detects the luminance level in a screen on the basis of the output signal output from the solid-state imaging element10, and supplies the detection result to the drive control unit211. Note that the luminance level in the screen is, for example, the luminance level of a captured image displayed in the screen in a case where the imaging apparatus1A has a display screen, that is, the luminance level of a target region (local region) in a target image frame. The drive control unit211is supplied with the gain from the AE unit212and the luminance level from the luminance level detection unit213. The drive control unit211generates a drive control signal for controlling the drive of the pixels100arranged in the pixel array unit11of the solid-state imaging element10on the basis of the gain and the luminance level supplied thereto, and supplies the drive control signal to the solid-state imaging element10. Here, in the AE unit212, gain control is performed on the basis of the detected average luminance (entire luminance), that is, the exposure amount obtained from an image frame preceding the target image frame. For example, in this gain control, the control for increasing the gain is performed in the case of being dark (low illuminance), and the control for reducing the gain is performed in the case of being bright (high illuminance). Therefore, it can be said that the AE unit212corresponds to an illuminance detection unit that detects the illuminance in the imaging region of the pixel array unit11on the basis of the exposure amount obtained from the previous image frame. Furthermore, the luminance level detection unit213obtains the luminance level in a target region (local region). Then, when the screen (in the target image frame) is captured in a local region, because there are bright regions and dark regions, the drive control unit211can control the drive of the pixels100in association with not only the illuminance used for gain control, but also the luminance level in the screen. For example, in the screen, a white subject has a high luminance level, while a black subject has a low luminance level, but shot noise increases when the luminance level is low (variation in phase difference detection is large). Therefore, the drive control unit211causes more phase difference detection pixels to be used. The solid-state imaging element10controls the shutter speed on the basis of the gain supplied from the AE unit212of the control unit200A. Furthermore, the solid-state imaging element10drives the pixels100arranged in the pixel array unit11on the basis of the drive control signal supplied from the drive control unit211of the control unit200A. (Example of Driving Pixels) FIG.6illustrates an example of pixel drive control according to the second embodiment. For example, the drive control unit211calculates the following formula (1) on the basis of the gain and the luminance level supplied thereto, and performs a threshold value determination on the calculation result. 
Then, the drive control unit211controls the drive of the pixels100arranged in the pixel array unit11of the solid-state imaging element10on the basis of the determination result. Gain+1/Luminance level  (1) Here, in Formula (1), the “Gain” of the first term becomes a smaller value as the illuminance in the imaging region of the pixel array unit11becomes larger, and the value of the calculation result also becomes smaller, and the gain becomes a larger value as the illuminance in the imaging region becomes smaller, and the value of the calculation result also becomes larger. Furthermore, in Formula (1), the second term is expressed by “+1/Luminance level”. Therefore, the value of the calculation result becomes larger as the luminance level in the target region (local region) in the screen (in the target image frame) is lower. For example, in a case where the calculation result of Formula (1) is smaller than the first threshold value Th1, the drive control unit211follows the drive control A to control the drive of the pixels100such that, among all the pixels arranged in the pixel array unit11, 3% of the pixels100operate as phase difference detection pixels and the remaining 97% of the pixels100operate as image acquisition pixels. At this time, in the solid-state imaging element10, in the pixels100(3% of the pixels) that operate as phase difference detection pixels, the electric charges accumulated in one of the photodiode112A of the left pixel100A and the photodiode112B of the right pixel100B are read independently. Furthermore, for example, when the calculation result of Formula (1) is larger than the first threshold value Th1 and smaller than the second threshold value Th2, the drive control unit211follows the drive control B to control the drive of the pixels100such that, among all the pixels arranged in the pixel array unit11, 6% of the pixels100operate as phase difference detection pixels and the remaining 94% of the pixels100operate as image acquisition pixels. At this time, in the solid-state imaging element10, in the pixels100(6% of the pixels) that operate as phase difference detection pixels, the electric charges accumulated in one of the photodiode112A of the left pixel100A and the photodiode112B of the right pixel100B are read independently. Moreover, for example, in a case where the calculation result of Formula (1) is larger than the second threshold value Th2, the drive control unit211follows the drive control C to control the drive of the pixels100such that all the pixels (100% of the pixels) arranged in the pixel array unit11operate as both pixels: phase difference detection pixels and image acquisition pixels. At this time, in the solid-state imaging element10, in all the pixels (100% of the pixels), the electric charges accumulated in the photodiode112A of the left pixel100A and the photodiode112B of the right pixel100B are read. As described above, in the second embodiment, threshold value determination with respect to the calculation result of Formula (1) using the gain and the luminance level is performed, and on the basis of results of the determination, among the pixels100arranged in the pixel array unit11, in a predetermined density (for example, 3%, 6%, or the like) of the pixels100, one of the photodiode112A and the photodiode112B is set to an independently read target. 
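The decision based on Formula (1) can be written directly as code; the sketch below takes the two threshold values as parameters (their numeric values are not given in the specification) and simply evaluates Gain + 1/Luminance level.

def drive_control_formula_1(gain_db, luminance_level, th1, th2):
    """Select a drive control from Formula (1): Gain + 1/Luminance level."""
    metric = gain_db + 1.0 / luminance_level      # larger at low illuminance and/or when the region is dim
    if metric < th1:
        return "A"       # 3% phase difference detection pixels
    if metric < th2:
        return "B"       # 6% phase difference detection pixels
    return "C"           # all pixels used for phase difference detection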
That is, in a case where the dual PD-type pixels100are arranged in the pixel array unit11, in a case where the calculation result of Formula (1) is larger than a predetermined threshold value (for example, the second threshold value Th2), all the pixels100operate as phase difference detection pixels, but in a case where the calculation result of Formula (1) is smaller than a predetermined threshold value (for example, the first threshold value Th1 or the second threshold value Th2), only the specific pixels100arranged in a scattered manner (in a repeating pattern) operate as phase difference detection pixels. When such driving is performed, in a case where the accuracy of phase difference detection is low, for example, at the time of low illuminance or due to low luminance level of the target region, signals for phase difference detection are detected in more number of pixels100, and, in a case where the accuracy of phase difference detection is high, for example, at the time of high illuminance or due to high luminance level of the target region, signals for phase difference detection are detected in the discrete pixels100. Therefore, it is possible to achieve lower power consumption at high illuminance and high speed without reducing the distance measurement performance. Note that, also inFIG.6, the number of threshold values used in the threshold value determination is arbitrary, and furthermore a hysteresis may be provided for the threshold value. Furthermore, the above Formula (1) is an example of an arithmetic expression using the gain and the luminance level, and another arithmetic expression to which a function such as logarithm is applied may be used, for example. Furthermore, in the imaging apparatus1A of the second embodiment illustrated inFIG.5, the configuration excluding the luminance level detection unit213corresponds to the configuration described in the first embodiment described above, that is, the configuration for controlling the drive of the pixels100arranged in the pixel array unit11on the basis of results of the threshold value determination using the gain. Furthermore, inFIG.6, similarly toFIG.3described above, for example, in the drive control B, in a case where the density of the pixels100used as the phase difference detection pixels is 6%, the photodiode112A and the photodiode112B are independently read so that pixel signals (A signal and B signal) can be obtained and used as signals for phase difference detection. 3. Third Embodiment Furthermore, as described above, since the accuracy of the phase difference detection depends on shot noise, the drive control of the pixel100may be linked with the number of pixels100that have operated as phase difference detection pixels in addition to the gain and the luminance level. Therefore, in the third embodiment, the driving of the pixels100arranged in the pixel array unit11is controlled on the basis of the gain set in the solid-state imaging element10, the luminance level, and the number of phase difference detection pixels. (Configuration Example of the Imaging Apparatus) FIG.7is a block diagram illustrating an example of a configuration of the imaging apparatus according to the third embodiment. InFIG.7, an imaging apparatus1B includes the solid-state imaging element10(FIG.1) and a control unit200B. 
In comparison to the control unit200A (FIG.5), the control unit200B further includes a phase difference detection unit214and a counting unit215in addition to the drive control unit211, the AE unit212, and the luminance level detection unit213. The phase difference detection unit214detects the phase difference on the basis of the output signal (signal for phase difference detection) output from the solid-state imaging element10, and outputs the detection result to a circuit (not illustrated) in a subsequent stage. Furthermore, the phase difference detection unit214supplies the information associated with the effective phase difference detection pixel obtained at the time of the phase difference detection (hereinafter, referred to as effective phase difference pixel information) to the counting unit215. The counting unit215, on the basis of the effective phase difference pixel information supplied from the phase difference detection unit214, among the pixels100that have operated as the phase difference detection pixels, counts the number of effective phase difference detection pixels, and supplies the count result (the number of effective phase difference detection pixels) to the drive control unit211. The drive control unit211is supplied with the count result from the counting unit215in addition to the gain from the AE unit212and the luminance level from the luminance level detection unit213. The drive control unit211generates a drive control signal for controlling the drive of the pixels100on the basis of the gain, the luminance level, and the number of effective phase difference detection pixels supplied thereto, and supplies the drive control signal to the solid-state imaging element10. Here, since the phase difference detection unit214cannot effectively detect the phase difference unless, for example, the edge of a subject image can be discriminated, the counting unit215counts the number of phase difference detection pixels used for effective phase difference detection on the basis of the effective phase difference pixel information. Then, the drive control unit211can control the drive of the pixels100in association with not only the illuminance used for gain control and the luminance level in the screen, but also the number of effective phase difference detection pixels. For example, when the number of effective phase difference detection pixels is small, the variation in phase difference detection increases, and therefore the drive control unit211uses more phase difference detection pixels. (Example of Driving Pixels) FIG.8illustrates an example of pixel drive control according to the third embodiment. For example, the drive control unit211calculates the following formula (2) on the basis of the gain, the luminance level, and the number of effective phase difference detection pixels supplied thereto, and performs a threshold value determination on the calculation result. Then, the drive control unit211controls the drive of the pixels100arranged in the pixel array unit11of the solid-state imaging element10on the basis of the determination result. Gain+1/Luminance level+1/Number of effective phase difference detection pixels  (2) Here, in Formula (2), the first term and the second term are similar to Formula (1) described above, and the third term is represented by “1/Number of effective phase difference detection pixels”, and therefore the smaller the number of effective phase difference detection pixels, the larger the value of the calculation result. 
For example, in a case where the calculation result of Formula (2) is smaller than the first threshold value Th1, the drive control unit211follows the drive control A to control the drive of the pixels100such that, among all the pixels arranged in the pixel array unit11, 3% of the pixels100operate as phase difference detection pixels. Furthermore, for example, in a case where the calculation result of Formula (2) is larger than the first threshold value Th1 and smaller than the second threshold value Th2, the drive control unit211follows the drive control B to control the drive of the pixels100such that, among all the pixels arranged in the pixel array unit11, 6% of the pixels100operate as phase difference detection pixels. Moreover, for example, in a case where the calculation result of Formula (2) is larger than the second threshold value Th2, the drive control unit211follows the drive control C to control the drive of the pixels100such that all the pixels (100% of the pixels) arranged in the pixel array unit11operate as both pixels: phase difference detection pixels and image acquisition pixels. As described above, in the third embodiment, threshold value determination with respect to the calculation result of Formula (2) using the gain, the luminance level, and the number of effective phase difference detection pixels is performed, and on the basis of results of the determination, among the pixels100arranged in the pixel array unit11, in a predetermined density (for example, 3%, 6%, or the like) of the pixels100, one of the photodiode112A and the photodiode112B is set to an independently read target. That is, in a case where the dual PD-type pixels100are arranged in the pixel array unit11, in a case where the calculation result of Formula (2) is larger than a predetermined threshold value (for example, the second threshold value Th2), all the pixels100operate as phase difference detection pixels, but in a case where the calculation result of Formula (2) is smaller than a predetermined threshold value (for example, the first threshold value Th1 or the second threshold value Th2), only the specific pixels100arranged in a scattered manner (in a repeating pattern) operate as phase difference detection pixels. When such driving is performed, in a case where the accuracy of phase difference detection is low, for example, due to a small number of effective phase difference detection pixels, signals for phase difference detection are detected in more number of pixels100, and, in a case where the accuracy of phase difference detection is high, for example, due to a large number of effective phase difference detection pixels, signals for phase difference detection are detected in the discrete pixels100. Therefore, it is possible to achieve lower power consumption at high illuminance and high speed without reducing the distance measurement performance. Note that, also inFIG.8, the number of threshold values used in the threshold value determination is arbitrary, and furthermore a hysteresis can be provided for the threshold value. Furthermore, the above Formula (2) is an example of an arithmetic expression using the gain, the luminance level, and the number of phase difference detection pixels, and another arithmetic expression to which a function such as logarithm is applied may be used, for example. 
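As a concrete illustration of this threshold value determination, the following minimal sketch, written in Python with hypothetical threshold values and function names (the embodiments do not prescribe any particular implementation), evaluates Formula (2) and selects one of the drive controls A, B, and C together with the corresponding density of phase difference detection pixels. The same structure applies to Formula (1); only the terms of the arithmetic expression change.

    # Illustrative sketch (assumption): evaluation of Formula (2) and selection of a drive
    # control method. TH1 and TH2 stand for the first and second threshold values Th1 < Th2.
    TH1 = 1.0   # hypothetical value
    TH2 = 4.0   # hypothetical value

    def formula_2(gain, luminance_level, num_effective_pd_pixels):
        # Gain + 1/Luminance level + 1/Number of effective phase difference detection pixels
        return (gain
                + 1.0 / max(luminance_level, 1e-6)
                + 1.0 / max(num_effective_pd_pixels, 1))

    def select_drive_control(metric):
        # A smaller calculation result means higher expected accuracy, so fewer pixels 100
        # are driven as phase difference detection pixels.
        if metric < TH1:
            return ("drive control A", 0.03)   # 3% of the pixels 100
        if metric < TH2:
            return ("drive control B", 0.06)   # 6% of the pixels 100
        return ("drive control C", 1.00)       # all pixels 100 (100%)

    # Example: low gain, bright target region, many effective pixels -> sparse driving.
    metric = formula_2(gain=0.5, luminance_level=10.0, num_effective_pd_pixels=200)
    print(select_drive_control(metric))        # -> ('drive control A', 0.03)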
Moreover, in the above Formula (2), it is described that the calculation using the gain, the luminance level, and the number of phase difference detection pixels is performed, but calculation using at least one calculation target among these calculation targets may be performed. Furthermore, inFIG.8, similarly toFIG.3described above, for example, in the drive control B, in a case where the density of the pixels100used as the phase difference detection pixels is 6%, the photodiode112A and the photodiode112B are independently read so that pixel signals (A signal and B signal) can be obtained and used as signals for phase difference detection. 4. Fourth Embodiment Furthermore, as described above, since the accuracy of the phase difference detection depends on shot noise, the drive control of the pixel100may be linked with (the number of pixels included in) a ROI area corresponding to the AF area corresponding to the phase difference detection pixels in addition to the gain and the luminance level. Therefore, in the fourth embodiment, the driving of the pixels100arranged in the pixel array unit11is controlled on the basis of the gain set in the solid-state imaging element10, the luminance level, and the ROI area. (Configuration Example of the Imaging Apparatus) FIG.9is a block diagram illustrating an example of a configuration of the imaging apparatus according to the fourth embodiment. InFIG.9, an imaging apparatus1C includes the solid-state imaging element10(FIG.1) and a control unit200C. In comparison to the control unit200A (FIG.5), the control unit200C further includes a ROI setting unit216in addition to the drive control unit211, the AE unit212, and the luminance level detection unit213. The ROI setting unit216sets a region of interest (ROI). The ROI setting unit216acquires information associated with the ROI area (hereinafter referred to as ROI area information) on the basis of the setting information of the ROI, and supplies it to the drive control unit211. Note that the ROI area is the size of a region of interest (ROI) in the target image frame. The drive control unit211is supplied with the ROI area information from the ROI setting unit216in addition to the gain from the AE unit212and the luminance level from the luminance level detection unit213. The drive control unit211generates a drive control signal for controlling the drive of the pixels100on the basis of the gain, the luminance level, and the ROI area information supplied thereto, and supplies the drive control signal to the solid-state imaging element10. Here, for example, in a case where the imaging apparatus1C has a function (touch AF function) for a user to touch a screen with a finger to select a subject to be focused, the ROI setting unit216acquires the ROI area corresponding to the area of the AF area for the subject selected by the user. Then, the drive control unit211can control the drive of the pixels100in association with not only the illuminance used for gain control and the luminance level in the screen, but also the ROI area. For example, when the ROI area is small, the variation in phase difference detection increases, and therefore the drive control unit211uses more phase difference detection pixels. (Example of Driving Pixels) FIG.10illustrates an example of pixel drive control according to the fourth embodiment. 
For example, the drive control unit211calculates the following formula (3) on the basis of the gain, the luminance level, and the ROI area information supplied thereto, and performs a threshold value determination on the calculation result. Then, the drive control unit211controls the drive of the pixels100arranged in the pixel array unit11of the solid-state imaging element10on the basis of the determination result. Gain+1/Luminance level+1/ROIarea  (3) Here, in Formula (3), the first term and the second term are similar to Formula (1) described above, and the third term is represented by “1/ROI area”, and therefore the smaller the (size of) the ROI area, the larger the value of the calculation result. For example, in a case where the calculation result of Formula (3) is smaller than the first threshold value Th1, the drive control unit211follows the drive control A to control the drive of the pixels100such that, among all the pixels arranged in the pixel array unit11, 3% of the pixels100operate as phase difference detection pixels. Furthermore, for example, in a case where the calculation result of Formula (3) is larger than the first threshold value Th1 and smaller than the second threshold value Th2, the drive control unit211follows the drive control B to control the drive of the pixels100such that, among all the pixels arranged in the pixel array unit11, 6% of the pixels100operate as phase difference detection pixels. Moreover, for example, in a case where the calculation result of Formula (3) is larger than the second threshold value Th2, the drive control unit211follows the drive control C to control the drive of the pixels100such that all the pixels (100% of the pixels) arranged in the pixel array unit11operate as both pixels: phase difference detection pixels and image acquisition pixels. As described above, in the fourth embodiment, threshold value determination with respect to the calculation result of Formula (3) using the gain, the luminance level, and the ROI area is performed, and on the basis of results of the determination, among the pixels100arranged in the pixel array unit11, in a predetermined density (for example, 3%, 6%, or the like) of the pixels100, one of the photodiode112A and the photodiode112B is set to an independently read target. That is, in a case where the dual PD-type pixels100are arranged in the pixel array unit11, in a case where the calculation result of Formula (3) is larger than a predetermined threshold value (for example, the second threshold value Th2), all the pixels100operate as phase difference detection pixels, but in a case where the calculation result of Formula (3) is smaller than a predetermined threshold value (for example, the first threshold value Th1 or the second threshold value Th2), only the specific pixels100arranged in a scattered manner (in a repeating pattern) operate as phase difference detection pixels. When such driving is performed, in a case where the accuracy of phase difference detection is low, for example, due to a small ROI area, signals for phase difference detection are detected in more number of pixels100, and, in a case where the accuracy of phase difference detection is high, for example, due to a large ROI area, signals for phase difference detection are detected in the discrete pixels100. Therefore, it is possible to achieve lower power consumption at high illuminance and high speed without reducing the distance measurement performance. 
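Formula (3) can be evaluated in the same manner. The sketch below, again in Python with hypothetical values, also shows one possible way a hysteresis could be provided for the threshold values so that the selected drive control does not oscillate between frames when the calculation result stays near the first threshold value Th1 or the second threshold value Th2; the margin-based scheme is an assumption, since the embodiments only state that a hysteresis may be provided.

    # Illustrative sketch (assumption): Formula (3) with a margin-based hysteresis.
    TH1, TH2 = 1.0, 4.0     # hypothetical first and second threshold values
    HYSTERESIS = 0.2        # hypothetical margin required before changing the drive control

    def formula_3(gain, luminance_level, roi_area):
        # Gain + 1/Luminance level + 1/ROI area: a smaller ROI area increases the result.
        return gain + 1.0 / max(luminance_level, 1e-6) + 1.0 / max(roi_area, 1e-6)

    def select_with_hysteresis(metric, previous):
        # previous is the drive control used for the preceding frame ("A", "B", or "C").
        if previous == "A":
            if metric > TH1 + HYSTERESIS:
                return "B" if metric <= TH2 else "C"
            return "A"
        if previous == "B":
            if metric > TH2 + HYSTERESIS:
                return "C"
            if metric < TH1 - HYSTERESIS:
                return "A"
            return "B"
        # previous == "C"
        if metric < TH2 - HYSTERESIS:
            return "B" if metric >= TH1 else "A"
        return "C"

    # Example: a small ROI area pushes the calculation result up, so a denser drive control
    # (here drive control C) is selected even though the previous frame used drive control A.
    print(select_with_hysteresis(formula_3(gain=0.5, luminance_level=5.0, roi_area=0.02),
                                 previous="A"))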
Furthermore, by using the ROI area for the threshold value determination, for example, it becomes possible to control the drive according to the illuminance of the region to be focused on within the entire screen, and therefore distance measurement performance can be further increased. Note that, also inFIG.10, the number of threshold values used in the threshold value determination is arbitrary, and furthermore a hysteresis can be provided for the threshold value. Furthermore, the above Formula (3) is an example of an arithmetic expression using the gain, the luminance level, and the ROI area, and another arithmetic expression to which a function such as logarithm is applied may be used, for example. Moreover, in the above Formula (3), it is described that the calculation using the luminance level, the gain, and the ROI area is performed, but calculation using at least one calculation target among these calculation targets may be performed. Furthermore, the number of effective phase difference detection pixels may be used together with the ROI area in addition to the gain and the luminance level by combining Formulae (2) and (3). Furthermore, inFIG.10, similarly toFIG.3described above, for example, in the drive control B, in a case where the density of the pixels100used as the phase difference detection pixels is 6%, the photodiode112A and the photodiode112B are independently read so that pixel signals (A signal and B signal) can be obtained and used as signals for phase difference detection. 5. Fifth Embodiment By the way, in a case where the drive control of the pixels100is performed on the basis of the drive control A and the drive control B described above, and partially, the photodiode112A of the left pixel100A or the photodiode112B of the right pixel100B is independently read, the pixel signals independently read from one of the photodiode112A and the photodiode112B cannot be used as they are for a captured image, and thus correction is needed. Therefore, in the fifth embodiment, a configuration of the case where correction processing is performed on an output signal output from the solid-state imaging element10will be described. (Configuration Example of the Imaging Apparatus) FIG.11is a block diagram illustrating an example of a configuration of the imaging apparatus according to the fifth embodiment. InFIG.11, an imaging apparatus1A includes the solid-state imaging element10(FIG.1), the control unit200A (FIG.5), and a signal processing unit300. In comparison to the configuration illustrated inFIG.5, the imaging apparatus1A ofFIG.11further includes the signal processing unit300in addition to the solid-state imaging element10and the control unit200A. The signal processing unit300includes a pixel correction unit311, a selector312, and an image signal processing unit313. The output signal (pixel signal) output from the solid-state imaging element10is supplied to each of the pixel correction unit311and the selector312. The pixel correction unit311performs pixel correction processing on the pixel signal from the solid-state imaging element10, and supplies the resultant corrected pixel signal (corrected pixel signal) to the selector312. 
For example, in this pixel correction processing, in a case where the pixel signal (A signal) from the photodiode112A of the left pixel100A constituting the pixel100as the phase difference detection pixel is supplied, the correction processing for obtaining a signal corresponding to the pixel signal (B signal) from the photodiode112B of the corresponding right pixel100B is performed, and a pixel signal that can be used for a captured image is obtained. To the selector312, the pixel signal output from the solid-state imaging element10and the corrected pixel signal supplied from the pixel correction unit311are input as input signals, and the drive control signal output from the drive control unit211of the control unit200A is input as a selection control signal. The selector312selects one of the pixel signals from the pixel signal from the solid-state imaging element10and the corrected pixel signal from the pixel correction unit311on the basis of the drive control signal from the drive control unit211, and supplies the pixel signal to the image signal processing unit313. Here, the drive control signal is based on the drive control method (drive control A, B, C) determined by the threshold value determination on the calculation result of Formula (1), and the position and density of the phase difference detection pixels are linked with the gain (illuminance) and the luminance level. Therefore, by inputting the drive control signal as the selection control signal of the selector312, the position and density of the phase difference detection pixel can be linked with the pixel signal that needs to be corrected by the pixel correction unit311. For example, in a case where the pixels100arranged in the pixel array unit11are driven according to the drive control A (FIG.6) determined by the threshold value determination with respect to the calculation result of the Formula (1), a pixel signal (A signal or B signal) independently read from one of the photodiode112A and the photodiode112B of the pixels100(3% of the pixels) that operate as the phase difference detection pixel is input to and corrected by the pixel correction unit311. On the other hand, the pixel signal (A+B signal) read from both the photodiode112A and the photodiode112B of the pixels100(97% of the pixels) that operate as the image acquisition pixels does not need to be corrected and is input to the selector312as it is. Furthermore, for example, in a case where the pixels100arranged in the pixel array unit11are driven according to the drive control C (FIG.6), the pixel signal (A+B signal) read from the photodiode112A and the photodiode112B of the pixels100(100% of the pixels) that operate as both pixels: the phase difference detection pixel and the image acquisition pixel does not need to be corrected and is input to the selector312as it is. The image signal processing unit313performs predetermined image signal processing on the basis of the pixel signal supplied from the selector312, and outputs the resultant pixel signal to the circuit at a subsequent stage. As the image signal processing here, for example, signal processing such as demosaic, noise removal, gradation correction, color correction, image compression/expansion, and the like is performed. Furthermore, although illustration is omitted, the signal for phase difference detection is output to the phase difference detection unit and used in the processing for detecting the phase difference there. 
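The flow through the pixel correction unit 311 and the selector 312 can be sketched as follows for one row of output values. In this sketch the drive control signal is reduced to a per-pixel flag marking which pixels 100 were read as phase difference detection pixels, and the correction is shown as simply doubling the independently read A signal (or B signal) to approximate the full pixel signal; both simplifications are assumptions made only for illustration, since the embodiment does not restrict how the correction processing or the drive control signal is realized.

    # Illustrative sketch (assumption): choosing between the raw pixel signal and the
    # corrected pixel signal on the basis of the drive control signal.
    def pixel_correction(value):
        # The independently read A signal (or B signal) covers only one of the two
        # photodiodes; doubling it is a crude stand-in for estimating the missing half.
        return 2 * value

    def selector(pixel_signals, is_phase_difference_pixel):
        # pixel_signals: output of the solid-state imaging element 10 for one row
        # is_phase_difference_pixel: flags derived from the drive control signal
        #                            (position and density of phase difference pixels)
        out = []
        for value, is_pd in zip(pixel_signals, is_phase_difference_pixel):
            if is_pd:
                out.append(pixel_correction(value))   # corrected pixel signal
            else:
                out.append(value)                     # A+B signal, used as it is
        return out

    # Example with a sparse drive control: only the flagged pixel is corrected.
    row = [100, 98, 51, 97]                  # the third value is an independently read A signal
    flags = [False, False, True, False]
    print(selector(row, flags))              # -> [100, 98, 102, 97]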
Note that, inFIG.11, the case where the imaging apparatus1A includes the solid-state imaging element10(FIG.1), the control unit200A (FIG.5), and the image signal processing unit300has been described, but in the imaging apparatus1A, instead of the control unit200A, the control unit200B (FIG.7) or the control unit200C (FIG.9) may be included. 6. Sixth Embodiment Next, the read function of the pixels100arranged in the pixel array unit11will be described. Note that, here, the configuration of the read function of the present technology is illustrated inFIGS.14and15, and the configuration of the current read function is illustrated inFIGS.12and13, and description will be made by comparing the read function of the present technology with the current read function. (Configuration of the Read Function) FIGS.12to15illustrate a partial region of the imaging region in the pixel array unit11, comparators151and a DAC152in the column signal processing circuit13. InFIGS.12to15, it is assumed that the circles described on the photodiodes112A and112B constituting the pixels100represent contacts C, and the rhombuses described every four pixels in the column direction represent floating diffusion regions FD. In the pixel array unit11, the plurality of pixels100arranged two-dimensionally is arranged in a Bayer arrangement. In the pixel array unit11, the pixels100arranged in the column direction share the floating diffusion region FD. Furthermore, the drive signals (TRG, SEL) with respect to a transfer transistor TR-Tr or a selection transistor SEL-Tr are supplied from the vertical drive circuit12(FIG.1). Each pixel100includes the left pixel100A and the right pixel100B. The left pixel100A has a transfer transistor TR-Tr-A in addition to the photodiode112A. Furthermore, the right pixel100B has a transfer transistor TR-Tr-B in addition to the photodiode112B. In each pixel100, the transfer transistors TR-Tr-A and TR-Tr-B connected to the photodiodes112A and112B perform an on/off operation according to the drive signal TRG input to their gates such that electric charges (signal charges) photoelectrically converted by the photodiodes112A and112B are transferred to the floating diffusion region FD. The floating diffusion region FD is formed at a connection point between the transfer transistors TR-Tr-A and TR-Tr-B of the pixels100, which are the share pixels, and a reset transistor RST-Tr and an amplification transistor AMP-Tr shared by the share pixels. The reset transistor RST-Tr performs an on/off operation according to the drive signal RST input to its gate such that the electric charge accumulated in the floating diffusion region FD is discharged. The floating diffusion region FD has a function of accumulating the electric charge transferred by the transfer transistors TR-Tr-A and TR-Tr-B of the pixels100, which are the share pixels. The potential of the floating diffusion region FD is modulated according to the accumulated electric charge amount. The amplification transistor AMP-Tr operates as an amplifier that turns the potential variation of the floating diffusion region FD connected to its gate as an input signal, and the output signal voltage is output to the vertical signal line (VSL)22via the selection transistor SEL-Tr. The selection transistor SEL-Tr performs an on/off operation according to the drive signal SEL input to its gate and outputs a voltage signal from the amplification transistor AMP-Tr to the vertical signal line (VSL)22. 
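The logical behavior of this shared pixel circuit can be modeled with the short Python sketch below. The model only mimics the effect of the drive signals, namely the transfer of accumulated charge to the shared floating diffusion region FD and the output of a value to the vertical signal line while the selection transistor is on; analog aspects such as conversion gain, reset noise, and the ramp-based AD conversion are ignored, and the class and method names are assumptions used only for this illustration.

    # Illustrative behavioral model (assumption): one share-pixel group with a shared
    # floating diffusion region FD and shared reset/amplification/selection transistors.
    class SharePixelGroup:
        def __init__(self, charges):
            # charges: accumulated charge per photodiode, e.g., {"6A": ..., "6B": ...}
            self.charges = dict(charges)
            self.fd = 0.0                    # charge on the shared floating diffusion region

        def reset(self):
            self.fd = 0.0                    # drive signal RST on: discharge the FD

        def transfer(self, on_photodiodes):
            # drive signal TRG on for the listed photodiodes: move their charge to the FD
            for name in on_photodiodes:
                self.fd += self.charges[name]
                self.charges[name] = 0.0

        def read(self, sel_on):
            # drive signal SEL on: a value depending on the FD potential is output to the
            # vertical signal line (a unity "gain" is used here for simplicity)
            return self.fd if sel_on else None

    group = SharePixelGroup({"6A": 30.0, "6B": 28.0})
    group.reset()
    group.transfer(["6A"])                   # independent read of the photodiode 112A
    print(group.read(sel_on=True))           # -> 30.0 (A signal)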
In this way, the pixels100arranged in the pixel array unit11are share pixels in the column direction, and the left pixel100A of each pixel100of the share pixels has the photodiode112A and the transfer transistor TR-Tr-A, and the right pixel100B has the photodiode112B and the transfer transistor TR-Tr-B. Furthermore, in the share pixels, the floating diffusion region FD is shared, and as the pixel circuit of the share pixel, the reset transistor RST-Tr, the amplification transistor AMP-Tr, and the selection transistor SEL-Tr are shared as the shared transistors. The signal voltage output to the vertical signal line (VSL)22is input to the comparators151in the column signal processing circuit13. A comparator151-1compares a signal voltage (Vx) from a vertical signal line (VSL1)22-1with a reference voltage (Vref) of the ramp wave (ramp) from the DAC152, and outputs an output signal of a level according to the comparison result. The comparators151-2to151-4are similar to the comparator151-1except that the signal voltage compared with the reference voltage is a signal voltage from a vertical signal line (VSL3)22-3, a vertical signal line (VSL5)22-5, or a vertical signal line (VSL7)22-7, and an output signal of a level according to the comparison result is similarly output. Then, in the column signal processing circuit13, the reset level or the signal level is counted on the basis of the output signal from the comparator151, thereby achieving AD conversion of the column AD method using correlated double sampling (CDS). (Contact Arrangement: Current Configuration) Here, regarding the arrangement of the contacts C of the pixel100, the arrangement is partially different between the configuration of the current read function illustrated inFIGS.12and13and the configuration of the read function of the present technology illustrated inFIGS.14and15. Note that, in the following description, the arrangement position of the pixels100of each row and the pixels100of each column will be described with reference to the upper left pixel100. Furthermore, in the following description, "SEL" and "TRG" in the drawings are used to distinguish drive lines and drive signals applied to the corresponding drive lines. That is, in the current configuration (FIGS.12and13), in the pixels100of the first row, the contacts C for the transfer transistors TR-Tr-A and TR-Tr-B connected to the photodiodes112A and112B are connected to drive lines TRG6 and TRG7, respectively. Furthermore, in the pixels100of the second row, the contacts C for the transfer transistors TR-Tr-A and TR-Tr-B connected to the photodiodes112A and112B are connected to drive lines TRG4 and TRG5, respectively. Furthermore, in the current configuration (FIGS.12and13), in the pixels100of the third row, the contacts C for the transfer transistors TR-Tr-A and TR-Tr-B are connected to drive lines TRG2 and TRG3, respectively, and in the pixels100of the fourth row, the contacts C for the transfer transistors TR-Tr-A and TR-Tr-B are connected to drive lines TRG0 and TRG1, respectively. Similarly, in the current configuration (FIGS.12and13), in the pixels100of the fifth to eighth rows, the contacts C for the transfer transistor TR-Tr-A are connected to the drive line TRG0, TRG2, TRG4, or TRG6, and the contacts C for the transfer transistor TR-Tr-B are connected to the drive line TRG1, TRG3, TRG5, or TRG7. 
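The current contact arrangement described above can be restated compactly as a mapping from rows to drive lines, as in the Python sketch below. The assignments for the first to fourth rows follow the description above; for the fifth, seventh, and eighth rows, whose drive lines are only listed as a group, the ordering is an assumption chosen to mirror the upper rows and to be consistent with the read operation described next, in which the drive signal TRG4 addresses the photodiodes 112A of the sixth row.

    # Compact restatement of the current contact arrangement of FIGS. 12 and 13:
    # row -> (drive line for the contact of TR-Tr-A, drive line for the contact of TR-Tr-B)
    CURRENT_CONTACTS = {
        1: ("TRG6", "TRG7"),
        2: ("TRG4", "TRG5"),
        3: ("TRG2", "TRG3"),
        4: ("TRG0", "TRG1"),
        5: ("TRG6", "TRG7"),   # assumed ordering, mirroring the first to fourth rows
        6: ("TRG4", "TRG5"),   # consistent with the read operation below (TRG4 reads row 6 A)
        7: ("TRG2", "TRG3"),   # assumed ordering
        8: ("TRG0", "TRG1"),   # assumed ordering
    }

    def transferred_photodiodes(row, asserted_trgs):
        # Which photodiodes of a pixel 100 in the given row transfer their charge to the
        # shared FD when the listed TRG drive signals are at an H level.
        trg_a, trg_b = CURRENT_CONTACTS[row]
        result = []
        if trg_a in asserted_trgs:
            result.append("112A")
        if trg_b in asserted_trgs:
            result.append("112B")
        return result

    print(transferred_photodiodes(6, {"TRG4"}))            # -> ['112A']         (A signal)
    print(transferred_photodiodes(6, {"TRG4", "TRG5"}))    # -> ['112A', '112B'] (A+B signal)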
(Contact Arrangement: Configuration of the Present Technology) On the other hand, in the configuration of the present technology (FIGS.14and15), the pixels100of the first to fifth rows and the seventh to eighth rows are similar to the configuration indicated by the current configuration (FIGS.12and13) such that the contacts C for the transfer transistor TR-Tr-A are connected to the drive line TRG0, TRG2, TRG4, or TRG6, and the contacts C for the transfer transistor TR-Tr-B are connected to the drive line TRG1, TRG3, TRG5, or TRG7. Here, in the configuration of the present technology (FIGS.14and15), focusing on the pixels100of the sixth row, a drive line TRG10 is added between the drive line TRG4 and the drive line TRG5. Then, among the pixels100of the sixth row, the pixels100of the first column, the third column, and the fourth column are similar to the configuration indicated by the current configuration (FIGS.12and13) such that the contacts C for transfer transistor TR-Tr-A are connected to the drive line TRG4, and the contacts C for the transfer transistor TR-Tr-B are connected to the drive line TRG5. Furthermore, in the pixels100of the sixth row, a pixel100-62of the second column is such that a contact C-62A for the left transfer transistor TR-Tr-A is connected to the drive line TRG4, but a contact C-62B for the right transfer transistor TR-Tr-B is connected to the added drive line TRG10. That is, in a case where attention is paid to the pixel100-62, in the configuration indicated by the configuration (FIGS.14and15) of the present technology, as compared with the configuration indicated by the current configuration (FIGS.12and13), the configurations are identical in that the contact C-62A is connected to the drive line TRG4, but are different in that the contact C-62B is connected to the drive line TRG10, not the drive line TRG5. In other words, it can be said that the pixel array unit11includes a first drive line (e.g., drive line TRG4) connected to a first photoelectric conversion unit (e.g., photodiode112A) of a first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)) and the second pixel portion (e.g., the pixel100-62of the second column), a second drive line (e.g., drive line TRG5) connected to a second photoelectric conversion unit (e.g., photodiode112B) of the first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)), and a third drive line (e.g., drive line TRG10) connected to the second photoelectric conversion unit (e.g., photodiode112B) of the second pixel portion (e.g., the pixel100-62of the second column). At this time, the second drive line (e.g., drive line TRG5) is nonconnected to the second photoelectric conversion unit (e.g., photodiode112B) of the second pixel portion (e.g., the pixel100-62of the second column). Furthermore, the third drive line (e.g., drive line TRG10) is nonconnected to the second photoelectric conversion unit (e.g., the photodiode112B) of the first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)). (Read Operation: Current Configuration) Next, a read operation in the case of having the above-described configuration will be described. Here, first of all, the current read operation will be described with reference toFIGS.12and13. 
InFIG.12, the drive signal SEL1 becomes an L level, and the selection transistor SEL-Tr shared by the share pixels including the pixels100of the first to fourth rows on the upper side is in an OFF state, while the drive signal SEL0 becomes an H level, and the selection transistor SEL-Tr shared by the share pixels including the pixels100of the fifth to eighth rows on the lower side is in an ON state. Therefore, the share pixel including the pixels100of the fifth to eighth rows on the lower side is selected. At this time, as illustrated inFIG.12, among the drive signals TRG0 to TRG7, only the drive signal TRG4 becomes an H level, and in each pixel100of the sixth row, the transfer transistor TR-Tr-A connected to the photodiode112A is in an ON state. Therefore, the electric charge accumulated in the photodiode112A of each pixel100of the sixth row, which is surrounded by the thick frame inFIG.12, is transferred to the floating diffusion region FD corresponding to each share pixel. Then, in the share pixel including each pixel100of the sixth row, in the amplification transistor AMP-Tr, the potential variation of the floating diffusion region FD is used as an input signal voltage to the gate, and the output signal voltage is output to the vertical signal line22via the selection transistor SEL-Tr. In this way, the electric charge accumulated in the photodiode112A of each pixel100of the sixth row is independently read, and the pixel signal (A signal) is obtained. Thereafter, as illustrated inFIG.13, while the drive signal SEL0 remains at an H level, the drive signals TRG4 and TRG5 become an H level, and in each pixel100of the sixth row, the transfer transistor TR-Tr-A connected to the photodiode112A and the transfer transistor TR-Tr-B connected to the photodiode112B simultaneously become an ON state. Therefore, the electric charges accumulated in both the photodiodes112A and112B of each pixel100of the sixth row, which are surrounded by the thick frame inFIG.13, are transferred to the floating diffusion region FD. Then, in the share pixel including each pixel100of the sixth row, by the amplification transistor AMP-Tr, the signal voltage depending on the potential variation of the floating diffusion region FD is output to the vertical signal line22via the selection transistor SEL-Tr. In this way, the electric charges accumulated in the photodiodes112A and112B of each pixel100of the sixth row are added up and read, and the pixel signal (A+B signal) is obtained. Then, in the current read operation, as illustrated inFIGS.12and13, the A signal is obtained as a signal for phase difference detection, and the A+B signal is obtained as a signal for image acquisition. Therefore, by performing the difference processing between the A+B signal and the A signal, a signal corresponding to the B signal can be acquired. Therefore, the A signal and the B signal are obtained as signals for phase difference detection. That is, the current read operation requires two read operations in order to acquire the signal for phase difference detection. (Read Operation: Configuration of the Present Technology) Next, the read operation of the present technology will be described with reference toFIGS.14and15. InFIG.14, the drive signal SEL0 becomes an H level, and the selection transistor SEL-Tr shared by the share pixels including the pixels100of the fifth to eighth rows on the lower side is in the ON state. Therefore, the share pixel including the pixels100of the fifth to eighth rows on the lower side is selected. 
At this time, as illustrated inFIG.14, among the drive signals TRG0 to TRG7 and TRG10, the drive signals TRG4 and TRG5 become an H level, and the transfer transistor TR-Tr-A connected to the photodiode112A and the transfer transistor TR-Tr-B connected to the photodiode112B of each pixel100of the sixth row (excluding the pixels100of the second column) simultaneously become an ON state. Therefore, in each pixel100of the sixth row (excluding the pixels100of the second column), as indicated by the thick frames inFIG.14, the electric charges accumulated in the photodiodes112A and112B are added up and read, and the pixel signal (A+B signal) is obtained. Here, among the pixels100of the sixth row, focusing on the pixel100-62of the second column, as described above, a contact C-62B connected to the photodiode112B is connected to the drive line TRG10, and since the drive signal TRG10 applied thereto is at an L level, only the left transfer transistor TR-Tr-A becomes an ON state. Therefore, in the pixel100-62, as indicated by the thick frame inFIG.14, the electric charge accumulated in the left photodiode112A is independently read, and the pixel signal (A signal) is obtained. Furthermore, although illustration is omitted, the pixels100arranged in the pixel array unit11include pixels100in which the electric charge accumulated in the right photodiode112B is independently read and the pixel signal (B signal) can be acquired in contrast to the pixel100-62. For example, if the pixel100-62described above is the pixel100capable of acquiring the B signal, it is only required to connect the contact C-62A to the drive line TRG10 instead of the drive line TRG4, and connect the contact C-62B to the drive line TRG5. That is, the pixels100arranged in the pixel array unit11include pixels100capable of acquiring the A+B signal as the image acquisition pixel, and pixels100capable of acquiring the A signal and pixels100capable of acquiring the B signal as the phase difference detection pixel. Here, as indicated in the above-described first to fourth embodiments, the density of the pixels100operating as the phase difference detection pixels is determined on the basis of the gain, the luminance level, and the like (for example, 3% or the like in the case of the drive control A), and the pixel100according to the density operates as the phase difference detection pixel for obtaining the A signal or the B signal. Then, in the read operation of the present technology, as illustrated inFIG.14, the A signal and the B signal are obtained as signals for phase difference detection, and the A+B signal is obtained as a signal for image acquisition. Therefore, in order to acquire a signal for phase difference detection, it is only necessary to perform the read operation once. That is, in the above-described current read operation, it was necessary to perform reading twice in order to acquire the signal for phase difference detection, but in the read operation of the present technology, it is possible to reduce the number of times of read operation to one. Note that in a case where the pixel100-62is caused to operate as the image acquisition pixel, as illustrated inFIG.15, the drive signal SEL0 is set to an H level state, and moreover the drive signals TRG4 and TRG5 and the drive signal TRG10 are set to an H level. 
Therefore, in the pixel100-62, similarly to each of the other pixels100of the sixth row, the transfer transistors TR-Tr-A and TR-Tr-B are simultaneously set to an ON state, and as indicated by the thick frame inFIG.15, the electric charges accumulated in the photodiodes112A and112B are added up and read, and the pixel signal (A+B signal) is obtained. In other words, in the read operation of the present technology, it can be said that, in a case where the illuminance in the imaging region of the pixel array unit11(or, for example, the calculation result of Formula (1), (2), or (3)) is smaller than a predetermined threshold value (e.g., the first threshold value Th1 or the second threshold value Th2), in the first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)), a pixel signal corresponding to the first photoelectric conversion unit (e.g., the photodiode112A) and a pixel signal corresponding to the second photoelectric conversion unit (e.g., the photodiode112B) are generated using the first drive line (e.g., the drive line TRG4) and the second drive line (e.g., the drive line TRG5), in a case where the illuminance is larger than the predetermined threshold value, in the second pixel portion (e.g., the pixel100-62of the second column), a pixel signal corresponding to the first photoelectric conversion unit (e.g., the photodiode112A) and a pixel signal corresponding to the second photoelectric conversion unit (e.g., the photodiode112B) are generated using the first drive line (e.g., the drive line TRG4) and the third drive line (e.g., the drive line TRG10), and meanwhile in the first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)), a pixel signal corresponding to the first photoelectric conversion unit (e.g., the photodiode112A) and a pixel signal corresponding to the second photoelectric conversion unit (e.g., the photodiode112B) are added up and generated. Furthermore, in the read operation of the present technology, it can also be said that, in a case where the illuminance in the imaging region of the pixel array unit11(or, for example, the calculation result of Formula (1), (2), or (3)) is smaller than a predetermined threshold value (e.g., the first threshold value Th1 or the second threshold value Th2), in the first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)) and the second pixel portion (e.g., the pixel100-62of the second column), a pixel signal from the first photoelectric conversion unit (e.g., the photodiode112A) and a pixel signal from the second photoelectric conversion unit (e.g., the photodiode112B) are read, in a case where the illuminance is larger than the predetermined threshold value, in the second pixel portion (e.g., the pixel100-62of the second column), a pixel signal from the first photoelectric conversion unit (e.g., the photodiode112A) and a pixel signal from the second photoelectric conversion unit (e.g., the photodiode112B) are read, and meanwhile in the first pixel portion (e.g., the pixel100of the sixth row (excluding the pixel100-62of the second column)), a pixel signal from the first photoelectric conversion unit (e.g., the photodiode112A) and a pixel signal from the second photoelectric conversion unit (e.g., the photodiode112B) are added up and read. 7. 
Seventh Embodiment By the way, in the above-described embodiment, the dual PD-type structure in which the two photodiodes112A and112B are provided for one on-chip lens111has been described, but another structure may be adopted. Here, for example, a structure in which four photodiodes112A,112B,112C, and112D are provided for one on-chip lens111(hereinafter, also referred to as 2×2 OCL structure) can be adopted. Therefore, a case where the 2×2 OCL structure is adopted will be described below as the seventh embodiment. (Example of the 2×2 OCL Structure) FIG.16is a diagram illustrating an example of a structure of pixels having the 2×2 OCL structure. A ofFIG.16illustrates a plan view of pixels120of 8 rows and 8 columns (8×8) arranged in a predetermined imaging region when viewed from the light incident side among a plurality of pixels120arranged in the pixel array unit11. Furthermore, B ofFIG.16illustrates an X-X′ cross-section of the pixel120illustrated in A ofFIG.16. As illustrated inFIG.16, the pixel120includes a 2×2 OCL structure in which four photodiodes112A to112D are provided for one on-chip lens111. It can also be said that the pixel120having the 2×2 OCL structure is a pixel portion (first pixel portion or second pixel portion) including an upper left pixel120A having an upper left photodiode112A, an upper right pixel120B having an upper right photodiode112B, a lower left pixel120C having a lower left photodiode112C, and a lower right pixel120D having a lower right photodiode112D. In the pixel120having the 2×2 OCL structure, a signal obtained from the electric charges accumulated in the photodiodes112A to112D is used as a signal for image acquisition, and a signal obtained from the electric charges accumulated in each of the photodiodes112A to112D can be used as a signal for phase difference detection. As described above, the pixel120has a structure of the 2×2 OCL structure and can be used as both an image acquisition pixel and a phase difference detection pixel. Furthermore, as illustrated in the cross-sectional view of B ofFIG.16, the pixel120includes a color filter113below the on-chip lens111, and is configured as an R pixel120, a G pixel120, or a B pixel120depending on a wavelength component transmitted by the color filter113. In the pixel array unit11, the R pixels120, the G pixels120, and the B pixels120can be arranged in an arrangement pattern such as a Bayer arrangement. Next, the read function in the case where the 2×2 OCL structure is adopted as the structure of the pixels120arranged in the pixel array unit11will be described. Note that, here, the configuration of the read function of the present technology is illustrated inFIGS.19to22, and the configuration of the current read function is illustrated inFIGS.17and18, and a difference between the read function of the present technology and the current read function will be described. However, as the read function of the present technology, a first configuration (FIGS.19and20) in a case where the left or right photodiode112is independently read and a second configuration (FIGS.21and22) in a case where the upper or lower photodiode112is independently read will be described. (Configuration of the Read Function) Similarly toFIGS.12to15described above,FIGS.17to22illustrate a partial region of the imaging region in the pixel array unit11, comparators151and a DAC152in the column signal processing circuit13. 
FIGS.17to22are different fromFIGS.12to15in that, in the pixel array unit11, instead of the pixel100having the dual PD-type structure (FIG.2), the pixel120having the 2×2 OCL structure (FIG.16) is arranged. That is, inFIGS.17to22, the pixels120arranged in the pixel array unit11are share pixels in the column direction, and in each pixel120of the share pixels, the upper left pixel120A has a photodiode112A and a transfer transistor TR-Tr-A, and the upper right pixel120B has a photodiode112B and a transfer transistor TR-Tr-B. Furthermore, in each pixel120of the share pixels, the lower left pixel120C has a photodiode112C and a transfer transistor TR-Tr-C, and the lower right pixel120D has a photodiode112D and a transfer transistor TR-Tr-D. Moreover, in the share pixels, the floating diffusion region FD is shared, and as the pixel circuit of the share pixel, the reset transistor RST-Tr, the amplification transistor AMP-Tr, and the selection transistor SEL-Tr are shared as the shared transistors. (Contact Arrangement: Current Configuration) Here, regarding the arrangement of contacts C of the pixels120, the arrangement is partially different between the configuration of the current read function illustrated inFIGS.17and18and the read configuration of the present technology illustrated inFIGS.19to22. That is, in the current configuration (FIGS.17and18), in the pixels120of the first and second rows on the upper side, the contacts C for the transfer transistor TR-Tr-A connected to the photodiode112A of the upper left pixel120A are connected to drive lines TRG2 and TRG6, and the contacts C for the transfer transistor TR-Tr-B connected to the photodiode112B of the upper right pixel120B are connected to drive lines TRG3 and TRG7. Furthermore, in the pixels120of the first and second rows on the upper side, the contacts C for the transfer transistor TR-Tr-C connected to the photodiode112C of the lower left pixel120C are connected to drive lines TRG0 and TRG4, and the contacts C for the transfer transistor TR-Tr-D connected to the photodiode112D of the lower right pixel120D are connected to drive lines TRG1 and TRG5. Similarly, in the current configuration (FIGS.17and18), also in the third and fourth rows on the lower side, the contacts C for the transfer transistor TR-Tr-A are connected to the drive lines TRG2 and TRG6, the contacts C for the transfer transistor TR-Tr-B are connected to the drive lines TRG3 and TRG7, the contacts C for the transfer transistor TR-Tr-C are connected to the drive lines TRG0 and TRG4, and the contacts C for the transfer transistor TR-Tr-D are connected to the drive lines TRG1 and TRG5. (Contact Arrangement: First Configuration of the Present Technology) On the other hand, in the first configuration of the present technology (FIGS.19and20), the pixels120of the first, second, and fourth rows are similar to the configuration illustrated inFIGS.17and18such that the contacts C for the transfer transistor TR-Tr-A are connected to the drive lines TRG2 and TRG6, the contacts C for the transfer transistor TR-Tr-B are connected to the drive lines TRG3 and TRG7, the contacts C for the transfer transistor TR-Tr-C are connected to the drive lines TRG0 and TRG4, and the contacts C for the transfer transistor TR-Tr-D are connected to the drive lines TRG1 and TRG5. 
Here, in the first configuration of the present technology (FIGS.19and20), focusing on the pixels120of the third row, a drive line TRG20 is added between the drive line TRG4 and the drive line TRG5, and moreover a drive line TRG21 is added between the drive line TRG6 and the drive line TRG7. Then, among the pixels120of the third row, the pixels120of the first, second, and fourth columns are similar to the current configuration (FIGS.17and18) such that the contacts C for the transfer transistors TR-Tr are connected to the corresponding drive lines TRG. Furthermore, in the pixels120of the third row, a pixel120-33of the third column is such that contacts C-33A and C-33C for the left transfer transistors TR-Tr-A and TR-Tr-C are connected to the drive lines TRG6 and TRG4, respectively, but contacts C-33B and C-33D for the right transfer transistors TR-Tr-B and TR-Tr-D are connected to the added drive lines TRG21 and TRG20. That is, in a case where attention is paid to the pixel120-33, in the first configuration (FIGS.19and20) of the present technology, as compared with the current configuration (FIGS.17and18), the configurations are identical in that the contacts C-33A and C-33C are connected to the drive lines TRG6 and TRG4, but are different in that the contacts C-33B and C-33D are connected to the drive lines TRG21 and TRG20, not the drive lines TRG7 and TRG5. In other words, it can be said that the pixel array unit11includes a first drive line (e.g., drive lines TRG6 and TRG4) connected to a first photoelectric conversion unit (e.g., photodiodes112A and112C) of a first pixel portion (e.g., the pixel120of the third row (excluding the pixel120-33of the third column)) and a second pixel portion (e.g., the pixel120-33of the third column), a second drive line (e.g., drive lines TRG7 and TRG5) connected to a second photoelectric conversion unit (e.g., photodiodes112B and112D) of the first pixel portion (e.g., the pixel120of the third row (excluding the pixel120-33of the third column)), and a third drive line (e.g., drive lines TRG21 and TRG20) connected to the second photoelectric conversion unit (e.g., photodiodes112B and112D) of the second pixel portion (e.g., the pixel120-33of the third column). At this time, the second drive line (e.g., drive lines TRG7 and TRG5) is nonconnected to the second photoelectric conversion unit (e.g., photodiodes112B and112D) of the second pixel portion (e.g., the pixel120-33of the third column). Furthermore, the third drive line (e.g., drive lines TRG21 and TRG20) is nonconnected to the second photoelectric conversion unit (e.g., the photodiodes112B and112D) of the first pixel portion (e.g., the pixel120of the third row (excluding the pixel120-33of the third column)). In this way, by providing the two third drive lines (e.g., the drive lines TRG21 and TRG20), the pixel120having the 2×2 OCL structure can be operated independently as the phase difference detection pixel. (Contact Arrangement: Second Configuration of the Present Technology) Furthermore, in the second configuration of the present technology (FIGS.21and22), among the pixels120of the third row, focusing on the pixels120of the third row, a drive line TRG30 is added between the drive line TRG4 and the drive line TRG5, and moreover a drive line TRG31 is added between the drive line TRG6 and the drive line TRG7. 
Then, in the pixels120of the third row, a pixel120-33of the third column is such that contacts C-33A and C-33B for the upper transfer transistors TR-Tr-A and TR-Tr-B are connected to the drive lines TRG6 and TRG7, respectively, but contacts C-33C and C-33D for the lower transfer transistors TR-Tr-C and TR-Tr-D are connected to the added drive line TRG30. That is, in a case where attention is paid to the pixel120-33, in the second configuration (FIGS.21and22) of the present technology, as compared with the current configuration (FIGS.17and18), the configurations are identical in that the contacts C-33A and C-33B are connected to the drive lines TRG6 and TRG7, respectively, but are different in that the contacts C-33C and C-33D are connected to the drive line TRG30, not the drive lines TRG4 and TRG5. In other words, it can be said that the pixel array unit11includes a first drive line (e.g., drive lines TRG6 and TRG7) connected to a first photoelectric conversion unit (e.g., photodiodes112A and112B) of a first pixel portion (e.g., the pixel120of the third row (excluding the pixel120-33of the third column)) and a second pixel portion (e.g., the pixel120-33of the third column), a second drive line (e.g., drive lines TRG4 and TRG5) connected to a second photoelectric conversion unit (e.g., photodiodes112C and112D) of the first pixel portion (e.g., the pixel120of the third row (excluding the pixel120-33of the third column)), and a third drive line (e.g., drive line TRG30) connected to the second photoelectric conversion unit (e.g., photodiodes112C and112D) of the second pixel portion (e.g., the pixel120-33of the third column). At this time, the second drive line (e.g., drive lines TRG4 and TRG5) is nonconnected to the second photoelectric conversion unit (e.g., photodiodes112C and112D) of the second pixel portion (e.g., the pixel120-33of the third column). Furthermore, the third drive line (e.g., drive line TRG30) is nonconnected to the second photoelectric conversion unit (e.g., the photodiodes112C and112D) of the first pixel portion (e.g., the pixel120of the third row (excluding the pixel120-33of the third column)). (Read Operation: Current Configuration) Next, a read operation in the case of having the above-described configuration will be described. Here, first of all, the current read operation will be described with reference toFIGS.17and18. InFIG.17, the drive signal SEL0 becomes an H level, and the selection transistor SEL-Tr shared by the share pixels including the pixels120of the third and fourth rows on the lower side is in an ON state, and the share pixels are selected. At this time, as illustrated inFIG.17, among the drive signals TRG0 to TRG7, the drive signal TRG6 becomes an H level, and in the pixels120of the third row, the transfer transistor TR-Tr-A is in an ON state. Therefore, in the upper left pixel120A of each pixel120of the third row, as indicated by the thick frame inFIG.17, the electric charge accumulated in the photodiode112A is independently read, and the pixel signal (A signal) is obtained. Thereafter, as illustrated inFIG.18, the drive signal SEL0 remains at an H level, and the drive signals TRG4 to TRG7 become an H level, and in the pixels120of the third row, the transfer transistors TR-Tr-A to TR-Tr-D become an ON state. Therefore, in each pixel120of the third row, as indicated by the thick frames inFIG.18, the electric charges accumulated in the photodiodes112A to112D are added up and read, and the pixel signal (A+B+C+D signal) is obtained. 
Then, in the current read operation, as illustrated inFIGS.17and18, the A signal is obtained as a signal for phase difference detection, and the A+B+C+D signal is obtained as a signal for image acquisition. Therefore, in order to acquire a signal corresponding to the B signal, for example, as the signal for phase difference detection, further read operation or difference processing is required. (Read Operation: First Configuration of the Present Technology) Next, the read operation of the first configuration of the present technology will be described with reference toFIGS.19and20. InFIG.19, the drive signal SEL0 becomes an H level, and the selection transistor SEL-Tr shared by the share pixels including the pixels120of the third and fourth rows on the lower side is in an ON state, and the share pixels are selected. At this time, as illustrated inFIG.19, among the drive signals TRG0 to TRG7, TRG20, and TRG21, the drive signals TRG4 to TRG7 are at an H level, and in each pixel120of the third row (excluding the pixel120of the third column), the transfer transistors TR-Tr-A to TR-Tr-D become an ON state simultaneously. Therefore, in each pixel120of the third row (excluding the pixels120of the third column), as indicated by the thick frames inFIG.19, the electric charges accumulated in the photodiodes112A to112D are added up and read, and the pixel signal (A+B+C+D signal) is obtained. Here, among the pixels120of the third row, focusing on the pixel120-33of the third column, as described above, in the upper right pixel120B and the lower right pixel120D, the contacts C-33B and C-33D are connected to the drive lines TRG21 and TRG20, and the drive signals TRG21 and TRG20 applied to the drive lines are at an L level. Therefore, in the pixel120-33, only the left transfer transistors TR-Tr-A and TR-Tr-C become an ON state. Therefore, in the pixel120-33, as indicated by the thick frame inFIG.19, the electric charges accumulated in the left photodiodes112A and112C are independently read, and the pixel signal (A+C signal) is obtained. Furthermore, although illustration is omitted, the pixels120arranged in the pixel array unit11include pixels120in which only the electric charges accumulated in the right photodiodes112B and112D are read and the pixel signal (B+D signal) can be acquired in contrast to the pixel120-33. For example, if the pixel120-33described above is the pixel120capable of acquiring the B+D signal, it is only required to connect the contacts C-33A and C-33C to the drive lines TRG21 and TRG20 instead of the drive lines TRG6 and TRG4, and connect the contacts C-33B and C-33D to the drive lines TRG7 and TRG5. That is, the pixels120arranged in the pixel array unit11include pixels120capable of acquiring the A+B+C+D signal as the image acquisition pixel, and pixels120capable of acquiring the left A+C signal and pixels120capable of acquiring the right B+D signal as the phase difference detection pixel. Here, as indicated in the above-described first to fourth embodiments, the density of the pixels120operating as the phase difference detection pixels is determined on the basis of the gain, the luminance level, and the like (for example, 3% or the like in the case of the drive control A), and the pixel120according to the density operates as the phase difference detection pixel for obtaining the A+C signal or the B+D signal. 
Then, in the read operation of the present technology, as illustrated inFIG.19, the A+C signal and the B+D signal are obtained as signals for phase difference detection, and the A+B+C+D signal is obtained as a signal for image acquisition. Therefore, in order to acquire a signal for phase difference detection, it is only necessary to perform the read operation once. That is, in the above-described current read operation, it was necessary to perform reading a plurality of times in order to acquire the signal for phase difference detection, but in the read operation of the present technology, it is possible to reduce the number of times of read operation to one. Note that in a case where the pixel120-33is caused to operate as the image acquisition pixel, as illustrated inFIG.20, the drive signal SEL0 is set to an H level state, and moreover the drive signals TRG4 to TRG7 and the drive signals TRG20 and TRG21 are set to an H level. Therefore, in the pixel120-33, similarly to the other pixels120of the third row, the transfer transistors TR-Tr-A to TR-Tr-D are simultaneously set to an ON state, and as indicated by the thick frame inFIG.20, the electric charges accumulated in the photodiodes112A to112D are added up and read, and the pixel signal (A+B+C+D signal) is obtained. (Read Operation: Second Configuration of the Present Technology) Next, the read operation of the second configuration of the present technology will be described with reference toFIGS.21and22. InFIG.21, the drive signal SEL0 becomes an H level, and the selection transistor SEL-Tr shared by the share pixels including the pixels120of the third and fourth rows on the lower side is in an ON state, and the share pixels are selected. At this time, as illustrated inFIG.21, among the drive signals TRG0 to TRG7, TRG30, and TRG31, the drive signals TRG4 to TRG7 are at an H level, and in each pixel120of the third row (excluding the pixel120of the third column), the transfer transistors TR-Tr-A to TR-Tr-D become an ON state simultaneously. Therefore, in each pixel120of the third row (excluding the pixels120of the third column), as indicated by the thick frames inFIG.21, the electric charges accumulated in the photodiodes112A to112D are added up and read, and the pixel signal (A+B+C+D signal) is obtained. Here, among the pixels120of the third row, focusing on the pixel120-33of the third column, as described above, in the lower left pixel120C and the lower right pixel120D, the contacts C-33C and C-33D are connected to the drive line TRG30, and the drive signal TRG30 applied to the drive line is at an L level. Therefore, in the pixel120-33, only the upper transfer transistors TR-Tr-A and TR-Tr-B become an ON state. Therefore, in the pixel120-33, as indicated by the thick frame inFIG.21, the electric charges accumulated in the upper photodiodes112A and112B are independently read, and the pixel signal (A+B signal) is obtained. Furthermore, although illustration is omitted, the pixels120arranged in the pixel array unit11include pixels120in which only the electric charges accumulated in the lower photodiodes112C and112D are read and the pixel signal (C+D signal) can be acquired in contrast to the pixel120-33. If the pixel120-33described above is the pixel120capable of acquiring the C+D signal, it is only required to connect the contacts C-33A and C-33B to the drive line TRG31 together instead of the drive lines TRG6 and TRG7, and connect the contacts C-33C and C-33D to the drive lines TRG4 and TRG5. 
That is, the pixels120arranged in the pixel array unit11include pixels120capable of acquiring the A+B+C+D signal as the image acquisition pixel, and pixels120capable of acquiring the upper A+B signal and pixels120capable of acquiring the lower C+D signal as the phase difference detection pixel. Here, as indicated in the above-described first to fourth embodiments, the density of the pixels120operating as the phase difference detection pixels is determined on the basis of the gain, the luminance level, and the like (for example, 3% or the like in the case of the drive control A), and the pixel120according to the density operates as the phase difference detection pixel for obtaining the A+B signal or the C+D signal. Then, in the read operation of the present technology, as illustrated inFIG.21, the A+B signal and the C+D signal are obtained as signals for phase difference detection, and the A+B+C+D signal is obtained as a signal for image acquisition. Therefore, in order to acquire a signal for phase difference detection, it is only necessary to perform the read operation once. That is, in the above-described current read operation, it was necessary to perform reading a plurality of times in order to acquire the signal for phase difference detection, but in the read operation of the present technology, it is possible to reduce the number of times of read operation to one. Note that in a case where the pixel120-33is caused to operate as the image acquisition pixel, as illustrated inFIG.22, the drive signal SEL0 is set to an H level state, and moreover the drive signals TRG4 to TRG7 and the drive signals TRG30 and TRG31 are set to an H level. Therefore, in the pixel120-33, similarly to the other pixels120of the third row, the transfer transistors TR-Tr-A to TR-Tr-D are simultaneously set to an ON state, and as indicated by the thick frame inFIG.22, the electric charges accumulated in the photodiodes112A to112D are added up and read, and the pixel signal (A+B+C+D signal) is obtained. 8. Variation In the above description, the pixels100or pixels120arranged in the pixel array unit11are described as being configured as the first pixel portion or the second pixel portion depending on the form of connection with the drive lines TRG. However, these pixel portions can be pixel units having one or more photoelectric conversion units (for example, photodiodes). For example, the pixel unit can have an even number of photoelectric conversion units (for example, photodiodes). More specifically, the pixel100configured as the first pixel portion or the second pixel portion has two photoelectric conversion units: the photodiode112A of the left pixel100A and the photodiode112B of the right pixel100B. Furthermore, the pixel120configured as the first pixel portion or the second pixel portion has four photoelectric conversion units: the photodiode112A of the upper left pixel120A, the photodiode112B of the upper right pixel120B, the photodiode112C of the lower left pixel120C, and the photodiode112D of the lower right pixel120D. Note that, in the above description, the case where the first pixel portion or the second pixel portion is a pixel unit having two or four photoelectric conversion units is described, but more photoelectric conversion units such as a pixel unit having, for example, eight photoelectric conversion units, may be provided. 
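Returning to the density of phase-difference detection pixels mentioned above, the following sketch shows one way a density value (for example, about 3% in the case of drive control A) could be mapped to pixel units. The even spacing and the alternation between upper (A+B) and lower (C+D) units are assumptions made for illustration, not the arrangement defined in the description.

```python
# A minimal sketch, under assumptions, of turning a target density into the set of
# pixel units that operate as phase-difference detection pixels.

def select_pdaf_units(total_units, density):
    """Return (index, kind) pairs for the units operated as phase-difference pixels."""
    count = max(1, round(total_units * density))
    step = max(1, total_units // count)
    kinds = ("upper (A+B)", "lower (C+D)")
    return [(i * step, kinds[i % 2]) for i in range(count)]

units = select_pdaf_units(total_units=1000, density=0.03)
print(len(units), units[:4])
# 30 [(0, 'upper (A+B)'), (33, 'lower (C+D)'), (66, 'upper (A+B)'), (99, 'lower (C+D)')]
```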
Furthermore, in the above description, the case where the electric charge accumulated in the photodiode112A or the photodiode112B is independently read in the pixel portion has been mainly described, but, as described above, the electric charges accumulated in the photodiode112A and the photodiode112B may be independently read. Furthermore, in the above-described embodiments, the case is described where the AE unit212functions as the illuminance detection unit that detects the illuminance in the imaging region of the pixel array unit11on the basis of the exposure information set in the solid-state imaging element10, but the method for detecting the illuminance is not limited thereto. That is, in the above-described embodiments, the AE unit212detects the illuminance in the imaging region of the pixel array unit11on the basis of the exposure amount obtained from the image frame preceding a target image frame, but, for example, an image frame for detecting the illuminance may be separately generated. Furthermore, an illuminance sensor for detecting illuminance may be provided. The illuminance sensor can be provided inside or outside the solid-state imaging element10(at a position different from the solid-state imaging element10). Moreover, in the above-described embodiments, as the information related to the accuracy of the phase difference detection used for the threshold value determination together with the gain depending on the illuminance (hereinafter, also referred to as the accuracy-related information), the luminance level in the target region in the target image frame (luminance level of the above Formula (1)), the number of effective pixels among the pixels used for phase difference detection (the number of effective phase difference detection pixels of the above Formula (2)), and the size of the region of interest in the target image frame (ROI area of the above Formula (3)) are described, but the accuracy-related information is not limited thereto. That is, the luminance level, the number of effective phase difference detection pixels, and the ROI area described in the above embodiments are examples of accuracy-related information. Furthermore, it can also be said that the luminance level detection unit213of the control unit200A (FIG.5), the phase difference detection unit214and the counting unit215of the control unit200B (FIG.7), and the ROI setting unit216of the control unit200C (FIG.9) are an acquisition unit that acquires the accuracy-related information related to the accuracy of phase difference detection. Note that, in the above-described embodiments, as the imaging apparatus, the imaging apparatus1A (FIGS.5and11), the imaging apparatus1B (FIG.7), and the imaging apparatus1C (FIG.9) are described, but the solid-state imaging element10(FIG.1and the like) may be understood to be an imaging apparatus. That is, it can also be said that the solid-state imaging element10is, for example, a CMOS image sensor and is an imaging apparatus. In the above-described embodiments, as the structure of the pixels100or pixels120arranged in the pixel array unit11, the dual PD-type structure and the 2×2 OCL structure are described, but other structures may be adopted. In short, as the pixels arranged in the pixel array unit11, it is sufficient if pixels can be used as image acquisition pixels or phase difference detection pixels, and their structure is arbitrary. 
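The threshold determination that combines the gain depending on the illuminance with the accuracy-related information can be sketched as follows. The weights, thresholds, and the direction of the adjustment are assumptions; the actual Formulas (1) to (3) appear earlier in this description and are not reproduced here.

```python
# A minimal sketch, with assumed names and thresholds, of a determination that combines
# the illuminance-dependent gain with the accuracy-related information mentioned above
# (luminance level, number of effective phase-difference pixels, ROI size).

def use_dense_phase_detection(gain_db, luminance_level, effective_pdaf_pixels, roi_area,
                              gain_threshold_db=12.0):
    """Decide whether to drive more pixel portions as phase-difference pixels."""
    # If the accuracy-related information suggests phase detection will be unreliable,
    # tighten the gain threshold (direction and amount are assumptions).
    if luminance_level < 0.1 or effective_pdaf_pixels < 100 or roi_area < 32 * 32:
        gain_threshold_db -= 3.0
    return gain_db >= gain_threshold_db

print(use_dense_phase_detection(gain_db=18.0, luminance_level=0.4,
                                effective_pdaf_pixels=500, roi_area=64 * 64))  # True
print(use_dense_phase_detection(gain_db=6.0, luminance_level=0.05,
                                effective_pdaf_pixels=500, roi_area=64 * 64))  # False
```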
Note that the phase difference detection pixel is a pixel for image plane phase difference AF, and is also called a phase detection auto focus (PDAF) pixel or the like. Furthermore, in the above-described embodiments, as the solid-state imaging element10, a CMOS image sensor is described as an example, but the application is not limited to the CMOS image sensor, but it is applicable to general solid-state imaging elements in which pixels are two-dimensionally arranged, e.g., a charge coupled device (CCD) image sensor. Moreover, the present technology is applicable not only to a solid-state imaging element that detects the distribution of the incident light amount of visible light and captures it as an image, but also to general solid-state imaging elements that capture the distribution of the incident light amount of particles or the like as an image. 9. Configuration of Electronic Equipment FIG.23is a block diagram illustrating a configuration example of electronic equipment including a solid-state imaging element to which the present technology is applied. Electronic equipment1000is electronic equipment with an imaging function, such as an imaging apparatus including a digital still camera, a video camera, or the like, a mobile terminal apparatus including a smartphone, a tablet terminal, or a mobile phone, and the like, for example. The electronic equipment1000includes a lens unit1011, an imaging unit1012, a signal processing unit1013, a control unit1014, a display unit1015, a recording unit1016, an operation unit1017, a communication unit1018, a power source unit1019, and a drive unit1020. Furthermore, the signal processing unit1013, the control unit1014, the display unit1015, the recording unit1016, the operation unit1017, the communication unit1018, and the power source unit1019are connected to each other through a bus1021in the electronic equipment1000. The lens unit1011includes a zoom lens, a focus lens, and the like and condenses light from a subject. The light (subject light) condensed by the lens unit1011enters the imaging unit1012. The imaging unit1012includes a solid-state imaging element to which the present technology has been applied (for example, the solid-state imaging element10ofFIG.1). The imaging unit1012photoelectrically converts the light (subject light) received through the lens unit1011into an electrical signal and supplies the resultant signal to the signal processing unit1013. Note that, in the imaging unit1012, the pixel array unit11of the solid-state imaging element10includes pixels100(or pixels120) as pixels that are regularly arranged in a predetermined arrangement pattern. The pixel100(or the pixel120) can be used as an image acquisition pixel or a phase difference detection pixel. Here, the imaging unit1012may be considered as a solid-state imaging element to which the present technology is applied. The signal processing unit1013is a signal processing circuit that processes a signal supplied from the imaging unit1012. For example, the signal processing unit1013includes a digital signal processor (DSP) circuit and the like. The signal processing unit1013processes the signal from the imaging unit1012to generate image data of a still image or a moving image, and supplies the image data to the display unit1015or the recording unit1016. 
Furthermore, the signal processing unit1013generates data for detecting the phase difference (phase difference detection data) on the basis of the signal from the imaging unit1012(phase difference detection pixel) and supplies the data to the control unit1014. The control unit1014includes, for example, a central processing unit (CPU), a microprocessor, and the like. The control unit1014controls the operation of each unit of the electronic equipment1000. The display unit1015includes, for example, a display apparatus, such as a liquid crystal display (LCD) and an organic electro luminescence (EL) display. The display unit1015processes the image data supplied from the signal processing unit1013and displays the still images or the moving images captured by the imaging unit1012. The recording unit1016includes, for example, a recording medium, such as a semiconductor memory, a hard disk, and an optical disk. The recording unit1016records the image data supplied from the signal processing unit1013. Furthermore, the recording unit1016outputs recorded image data according to control from the control unit1014. The operation unit1017includes, for example, physical buttons as well as a touch panel in combination with the display unit1015. The operation unit1017outputs operation commands regarding various functions of the electronic equipment1000according to operation by the user. The control unit1014controls operation of each unit on the basis of the operation commands supplied from the operation unit1017. The communication unit1018includes, for example, a communication interface circuit or the like. The communication unit1018exchanges data with external equipment through wireless communication or wired communication according to a predetermined communication standard. The power source unit1019appropriately supplies various power sources as operation power sources of the imaging unit1012, the signal processing unit1013, the control unit1014, the display unit1015, the recording unit1016, the operation unit1017, the communication unit1018, and the drive unit1020to these supply targets. Furthermore, the control unit1014detects the phase difference between two images on the basis of the phase difference detection data supplied from the signal processing unit1013. Then, the control unit1014determines whether or not the object as a target of focusing (object to be focused) is focused on the basis of the detection result of the phase difference. The control unit1014calculates an amount of deviation of focus (amount of defocus) in a case where the object to be focused is not focused and supplies the amount of defocus to the drive unit1020. The drive unit1020includes, for example, a motor or the like and drives the lens unit1011including the zoom lens, the focus lens, and the like. The drive unit1020calculates an amount of drive of the focus lens of the lens unit1011on the basis of the amount of defocus supplied from the control unit1014and moves the focus lens according to the amount of drive. Note that the drive unit1020maintains the current position of the focus lens in a case where the object to be focused is focused. In this way, the image plane phase difference AF is performed. The electronic equipment1000is configured as described above. 10. Example of Use of the Solid-State Imaging Element FIG.24is a diagram illustrating a usage example of the solid-state imaging element to which the present technology is applied. 
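Before turning to the usage examples, the image plane phase difference AF flow described above (phase difference, amount of defocus, drive amount of the focus lens) can be summarized with a short sketch. The conversion coefficients and the in-focus tolerance are illustrative assumptions, not values from the description.

```python
# A minimal sketch of one AF iteration: the control unit turns the detected phase
# difference into an amount of defocus, and the drive unit turns that into a drive
# amount of the focus lens. Coefficients below are assumptions.

def autofocus_step(phase_difference_px,
                   defocus_per_px_um=4.0,       # sensor/lens dependent (assumption)
                   drive_per_um=0.5,            # drive amount per um of defocus (assumption)
                   in_focus_tolerance_um=2.0):
    """Return the focus-lens drive amount for one AF iteration (0.0 keeps the position)."""
    defocus_um = phase_difference_px * defocus_per_px_um     # role of control unit 1014
    if abs(defocus_um) <= in_focus_tolerance_um:
        return 0.0                                           # object to be focused is focused
    return defocus_um * drive_per_um                         # passed to drive unit 1020

print(autofocus_step(3.2))   # 6.4 -> move the focus lens by this drive amount
print(autofocus_step(0.3))   # 0.0 -> maintain the current position of the focus lens
```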
The solid-state imaging element10(FIG.1) can be used in, for example, various cases of sensing light, such as visible light, infrared light, ultraviolet light, and X rays, and the like. That is, as illustrated inFIG.24, the solid-state imaging element10can be used in apparatuses used not only in a field of viewing in which images to be viewed are captured, but also in a field of traffic, a field of home appliance, a field of medical and healthcare, a field of security, a field of beauty, a field of sports, a field of agriculture, or the like, for example. Specifically, in the field of viewing, the solid-state imaging element10can be used in, for example, an apparatus (for example, electronic equipment1000ofFIG.23) for capturing an image to be viewed, such as a digital camera, a smartphone, and a mobile phone with a camera function. In the field of traffic, the solid-state imaging element10can be used in, for example, an apparatus used for traffic, such as an on-board sensor that captures images of the front, back, surroundings, inside of a car, or the like, a monitoring camera that monitors traveling vehicles or roads, and a distance measurement sensor that measures the distance between vehicles and the like, for safe drive like automatic stop or for recognizing the state of the driver. In the field of home appliance, the solid-state imaging element10can be used in, for example, an apparatus used as a home appliance, such as a television receiver, a refrigerator, and an air conditioner, that captures an image of a gesture of the user to perform equipment operation according to the gesture. Furthermore, in the field of medical and healthcare, the solid-state imaging element10can be used in, for example, an apparatus used for medical or healthcare, such as an endoscope and an apparatus that captures images of blood vessels by receiving infrared light. In the field of security, the solid-state imaging element10can be used in, for example, an apparatus used for security, such as a monitoring camera for crime prevention and a camera for personal authentication. Furthermore, in the field of beauty, the solid-state imaging element10can be used in, for example, an apparatus used for beauty, such as a skin measurement device that captures images of the skin and a microscope that captures images of the scalp. In the field of sports, the solid-state imaging element10can be used in, for example, an apparatus used for sports, such as an action camera and a wearable camera for sports and the like. Furthermore, in the field of agriculture, the solid-state imaging element10can be used in, for example, an apparatus used for agriculture, such as a camera that monitors the state of a farm or produce. 11. Application Examples to Mobile Objects The technology according to the present disclosure (present technology) is applicable to a variety of products. For example, the technology according to the present disclosure may be implemented as apparatuses mounted on any type of movable bodies such as automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobilities, airplanes, drones, ships, or robots. FIG.25is a block diagram illustrating a schematic configuration example of a vehicle control system, which is an example of a movable body control system to which the technology according to the present disclosure can be applied. The vehicle control system12000includes a plurality of electronic control units connected via a communication network12001. 
In the example illustrated inFIG.25, the vehicle control system12000includes a drive line control unit12010, a body system control unit12020, a vehicle outside information detecting unit12030, a vehicle inside information detecting unit12040, and an integrated control unit12050. Furthermore, a microcomputer12051, an audio and image output unit12052, and an in-vehicle network interface (I/F)12053are illustrated as functional configurations of the integrated control unit12050. The drive line control unit12010controls the operation of apparatuses related to the drive line of the vehicle in accordance with a variety of programs. For example, the drive line control unit12010functions as a control apparatus for a driving force generating apparatus such as an internal combustion engine or a driving motor that generates the driving force of the vehicle, a driving force transferring mechanism that transfers the driving force to wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking apparatus that generates the braking force of the vehicle, and the like. The body system control unit12020controls the operations of a variety of apparatuses attached to the vehicle body in accordance with a variety of programs. For example, the body system control unit12020functions as a control apparatus for a keyless entry system, a smart key system, a power window apparatus, or a variety of lights such as a headlight, a backup light, a brake light, a blinker, or a fog lamp. In this case, the body system control unit12020can receive radio waves transmitted from a portable device that serves instead of the key or signals of a variety of switches. The body system control unit12020accepts input of these radio waves or signals, and controls the door lock apparatus, the power window apparatus, the lights, or the like of the vehicle. The vehicle outside information detecting unit12030detects information regarding the outside of the vehicle including the vehicle control system12000. For example, the imaging unit12031is connected to the vehicle outside information detecting unit12030. The vehicle outside information detecting unit12030causes the imaging unit12031to capture images of the outside of the vehicle, and receives the captured image. The vehicle outside information detecting unit12030may perform processing of detecting an object such as a person, a car, an obstacle, a traffic sign, or a letter on a road, or processing of detecting the distance on the basis of the received image. The imaging unit12031is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit12031can output the electric signal as the image or output the electric signal as ranging information. Furthermore, the light received by the imaging unit12031may be visible light or invisible light such as infrared light. The vehicle inside information detecting unit12040detects information of the inside of the vehicle. The vehicle inside information detecting unit12040is connected, for example, to a driver state detecting unit12041that detects the state of the driver. The driver state detecting unit12041includes, for example, a camera that images a driver, and the vehicle inside information detecting unit12040may compute the degree of the driver's tiredness or the degree of the driver's concentration or determine whether or not the driver has a doze, on the basis of detection information input from the driver state detecting unit12041. 
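The driver-state evaluation described above can be sketched, under assumptions, as a simple scoring of camera-derived features; the features, weights, and thresholds below are illustrative and are not taken from the description.

```python
# A minimal sketch of estimating tiredness and dozing from detection information of the
# driver state detecting unit 12041. All features, weights, and thresholds are assumptions.

def evaluate_driver_state(eye_closure_ratio, blink_rate_hz, head_nod_per_min):
    """Return (tiredness_score, is_dozing) from simple camera-derived features."""
    tiredness = (0.6 * eye_closure_ratio
                 + 0.2 * min(blink_rate_hz / 1.0, 1.0)
                 + 0.2 * min(head_nod_per_min / 10.0, 1.0))
    is_dozing = eye_closure_ratio > 0.8 and head_nod_per_min > 5
    return round(tiredness, 2), is_dozing

print(evaluate_driver_state(eye_closure_ratio=0.3, blink_rate_hz=0.4, head_nod_per_min=1))
# (0.28, False)
print(evaluate_driver_state(eye_closure_ratio=0.9, blink_rate_hz=0.2, head_nod_per_min=8))
# (0.74, True)
```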
The microcomputer12051can calculate a control target value of the driving force generating apparatus, the steering mechanism, or the braking apparatus on the basis of information regarding the inside and outside of the vehicle acquired by the vehicle outside information detecting unit12030or the vehicle inside information detecting unit12040, and output a control instruction to the drive line control unit12010. For example, the microcomputer12051can perform cooperative control for the purpose of executing the functions of the advanced driver assistance system (ADAS) including vehicle collision avoidance or impact reduction, follow-up driving based on the inter-vehicle distance, constant vehicle speed driving, vehicle collision warning, vehicle lane deviation warning, or the like. Furthermore, the microcomputer12051can perform cooperative control for the purpose of automatic driving or the like for autonomous running without depending on the driver's operation through control of the driving force generating apparatus, the steering mechanism, the braking apparatus, or the like on the basis of information around the vehicle acquired by the vehicle outside information detecting unit12030or the vehicle inside information detecting unit12040. Furthermore, the microcomputer12051can output a control instruction to the body system control unit12020on the basis of the information outside the vehicle obtained by the vehicle outside information detecting unit12030. For example, the microcomputer12051can perform the cooperative control for realizing glare protection such as controlling the head light according to a position of a preceding vehicle or an oncoming vehicle detected by the vehicle outside information detecting unit12030to switch a high beam to a low beam. The audio and image output unit12052transmits an output signal of at least one of a sound or an image to an output apparatus capable of visually or aurally notifying a passenger of the vehicle or the outside of the vehicle of information. In the example ofFIG.25, an audio speaker12061, a display unit12062, and an instrument panel12063are exemplified as the output apparatus. For example, the display unit12062may include at least one of an onboard display or a head-up display. FIG.26is a view illustrating an example of an installation position of the imaging unit12031. InFIG.26, a vehicle12100includes imaging units12101,12102,12103,12104, and12105as the imaging unit12031. Imaging units12101,12102,12103,12104and12105are positioned, for example, at the front nose, a side mirror, the rear bumper, the back door, the upper part of the windshield in the vehicle compartment, or the like of the vehicle12100. The imaging unit12101attached to the front nose and the imaging unit12105attached to the upper part of the windshield in the vehicle compartment mainly acquire images of the area ahead of the vehicle12100. The imaging units12102and12103attached to the side mirrors mainly acquire images of the areas on the sides of the vehicle12100. The imaging unit12104attached to the rear bumper or the back door mainly acquires images of the area behind the vehicle12100. The forward images acquired by the imaging units12101and12105are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, and the like. Note thatFIG.26illustrates an example of the respective imaging ranges of the imaging units12101to12104. 
An imaging range12111represents the imaging range of the imaging unit12101attached to the front nose. Imaging ranges12112and12113respectively represent the imaging ranges of the imaging units12102and12103attached to the side mirrors. An imaging range12114represents the imaging range of the imaging unit12104attached to the rear bumper or the back door. For example, overlaying image data captured by the imaging units12101to12104offers an overhead image that looks down on the vehicle12100. At least one of the imaging units12101to12104may have a function of obtaining distance information. For example, at least one of the imaging units12101to12104may be a stereo camera including a plurality of image sensors, or may be an image sensor having pixels for phase difference detection. For example, the microcomputer12051may extract especially a closest three-dimensional object on a traveling path of the vehicle12100, the three-dimensional object traveling at a predetermined speed (for example, 0 km/h or higher) in a direction substantially the same as that of the vehicle12100as the preceding vehicle by determining a distance to each three-dimensional object in the imaging ranges12111to12114and change in time of the distance (relative speed relative to the vehicle12100) on the basis of the distance information obtained from the imaging units12101to12104. Moreover, the microcomputer12051can set an inter-vehicle distance to be secured in advance from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this manner, it is possible to perform the cooperative control for realizing automatic driving or the like to autonomously travel independent from the operation of the driver. For example, the microcomputer12051can extract three-dimensional object data regarding the three-dimensional object while sorting the data into a two-wheeled vehicle, a regular vehicle, a large vehicle, a pedestrian, and other three-dimensional object such as a utility pole on the basis of the distance information obtained from the imaging units12101to12104and use the data for automatically avoiding obstacles. For example, the microcomputer12051discriminates obstacles around the vehicle12100into an obstacle visibly recognizable to a driver of the vehicle12100and an obstacle difficult to visually recognize. Then, the microcomputer12051determines a collision risk indicating a degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, the microcomputer12051can perform driving assistance for avoiding the collision by outputting an alarm to the driver via the audio speaker12061and the display unit12062or performing forced deceleration or avoidance steering via the drive line control unit12010. At least one of the imaging units12101to12104may be an infrared camera for detecting infrared rays. For example, the microcomputer12051can recognize a pedestrian by determining whether or not there is a pedestrian in the captured images of the imaging units12101to12104. Such pedestrian recognition is carried out, for example, by a procedure of extracting feature points in the captured images of the imaging units12101to12104as infrared cameras and a procedure of performing pattern matching processing on a series of feature points indicating an outline of an object to discriminate whether or not the object is a pedestrian. 
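The preceding-vehicle selection and collision-risk determination described in this passage can be sketched as follows; the object fields, the time-to-collision style risk metric, and the thresholds are assumptions for illustration, not values from the description.

```python
# A minimal sketch of picking the closest on-path object moving with the vehicle and
# checking a simple collision-risk value against a set value.

def pick_preceding_vehicle(objects, min_speed_kmh=0.0):
    """Closest on-path object moving in roughly the same direction at >= min speed."""
    candidates = [o for o in objects
                  if o["on_path"] and o["same_direction"] and o["speed_kmh"] >= min_speed_kmh]
    return min(candidates, key=lambda o: o["distance_m"], default=None)

def collision_risk(distance_m, closing_speed_mps):
    """Higher value means less time to collision (simple assumed metric)."""
    return 0.0 if closing_speed_mps <= 0 else closing_speed_mps / distance_m

objects = [
    {"on_path": True,  "same_direction": True,  "speed_kmh": 60.0, "distance_m": 35.0},
    {"on_path": False, "same_direction": False, "speed_kmh": 0.0,  "distance_m": 12.0},
]
lead = pick_preceding_vehicle(objects)
risk = collision_risk(lead["distance_m"], closing_speed_mps=5.0)
print(lead["distance_m"], round(risk, 3), risk >= 0.2)   # 35.0 0.143 False (below set value)
```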
When the microcomputer 12051 determines that there is a pedestrian in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio and image output unit 12052 causes the display unit 12062 to superimpose a rectangular contour for emphasis on the recognized pedestrian. Furthermore, the audio and image output unit 12052 may cause the display unit 12062 to display icons or the like indicating pedestrians at desired positions. An example of the vehicle control system to which the technology according to the present disclosure is applicable has been described above. The technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above. Specifically, the solid-state imaging element 10 of FIG. 1 can be applied to the imaging unit 12031. By applying the technology according to the present disclosure to the imaging unit 12031, the frame rate can be increased and a captured image that is easier to see can be obtained, so that fatigue of the driver can be reduced. 12. Application Example to Endoscopic Surgery System The technology according to the present disclosure (present technology) is applicable to a variety of products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system. FIG. 27 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (present technology) can be applied. FIG. 27 illustrates a situation where an operator (doctor) 11131 is performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000. As illustrated, the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm apparatus 11120 supporting the endoscope 11100, and a cart 11200 on which various apparatuses for endoscopic surgery are mounted. The endoscope 11100 includes a lens tube 11101, of which a region of a predetermined length from the tip end is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens tube 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope including a rigid lens tube 11101, but the endoscope 11100 may instead be configured as a so-called flexible scope including a flexible lens tube. An opening portion into which an objective lens is fitted is provided on the tip end of the lens tube 11101. A light source apparatus 11203 is connected to the endoscope 11100, and light generated by the light source apparatus 11203 is guided to the tip end of the lens tube by a light guide provided to extend inside the lens tube 11101 and is emitted toward an observation target in the body cavity of the patient 11132 through the objective lens. Note that the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope. In the camera head 11102, an optical system and an imaging element are provided, and reflection light (observation light) from the observation target is condensed onto the imaging element by the optical system. The observation light is photoelectrically converted by the imaging element, and an electrical signal corresponding to the observation light, that is, an image signal corresponding to an observation image, is generated.
The image signal is transmitted to a camera control unit (CCU)11201, as RAW data. The CCU11201includes a central processing unit (CPU), a graphics processing unit (GPU), or the like, and integrally controls the operation of the endoscope11100and the display apparatus11202. Moreover, the CCU11201receives the image signal from the camera head11102and performs various image processing for displaying the image based on the image signal, for example, as development processing (demosaic processing) or the like, on the image signal. The display apparatus11202displays an image based on the image signal subjected to the image processing by the CCU11201according to the control from the CCU11201. The light source apparatus11203, for example, includes a light source such as a light emitting diode (LED) or the like, and supplies the irradiation light at the time of capturing the surgery site to the endoscope11100. The input apparatus11204is an input interface with respect to the endoscopic surgery system11000. The user is capable of performing the input of various information items, or the input of an instruction with respect to endoscopic surgery system11000, through the input apparatus11204. For example, the user inputs an instruction or the like to change conditions of imaging (type of irradiation light, magnification, focal length, and the like) by the endoscope11100. The treatment tool control apparatus11205controls the driving of the energy treatment tool11112for the cauterization and the incision of the tissue, the sealing of the blood vessel, or the like. In order to ensure a visual field of the endoscope11100and to ensure a working space of the surgery operator, the pneumoperitoneum apparatus11206sends gas into the body cavity through the pneumoperitoneum tube11111such that the body cavity of the patient11132is inflated. The recorder11207is an apparatus capable of recording various information items associated with the surgery. The printer11208is an apparatus capable of printing various information items associated with the surgery, in various formats such as a text, an image, or a graph. Note that the light source apparatus11203that supplies irradiation light when capturing the surgical site to the endoscope11100can be configured from, for example, a white light source configured by an LED, a laser light source, or a combination thereof. In a case where the white light source includes a combination of RGB laser light sources, it is possible to control an output intensity and an output timing of each color (each wavelength) with a high accuracy, and thus, it is possible to adjust a white balance of the captured image with the light source apparatus11203. Furthermore, in this case, laser light from each of the RGB laser light sources is emitted to the observation target in a time division manner, and the driving of the image sensor of the camera head11102is controlled in synchronization with the emission timing, and thus, it is also possible to capture an image corresponding to each of RGB in a time division manner. According to such a method, it is possible to obtain a color image without providing a color filter in the image sensor. Furthermore, the driving of the light source apparatus11203may be controlled such that the intensity of the light to be output is changed for each predetermined time. 
The driving of the image sensor of the camera head11102is controlled in synchronization with a timing when the intensity of the light is changed, images are acquired in a time division manner, and the images are synthesized, and thus, it is possible to generate an image of a high dynamic range, without so-called black defects and overexposure. Furthermore, the light source apparatus11203may be configured to supply light of a predetermined wavelength band corresponding to special light imaging. In the special light imaging, for example, light of a narrow band is applied, compared to irradiation light at the time of performing usual observation by using wavelength dependency of absorbing light in the body tissue (i.e., white light), and thus, so-called narrow band imaging of capturing a predetermined tissue of a blood vessel or the like in a superficial portion of a mucous membrane with a high contrast, is performed. Alternatively, in the special light imaging, fluorescent light imaging of obtaining an image by fluorescent light generated by being irradiated with excited light, may be performed. In the fluorescent light imaging, for example, the body tissue is irradiated with the excited light, and the fluorescent light from the body tissue is observed (autofluorescent light imaging), or a reagent such as indocyanine green (ICG) is locally injected into the body tissue, and the body tissue is irradiated with excited light corresponding to a fluorescent light wavelength of the reagent, and thus, a fluorescent image is obtained. The light source apparatus11203can be configured to supply the narrow band light and/or the excited light corresponding to such special light imaging. FIG.28is a block diagram illustrating an example of a functional configuration of the camera head11102and the CCU11201illustrated inFIG.27. The camera head11102includes a lens unit11401, an imaging unit11402, a drive unit11403, a communication unit11404, and a camera head control unit11405. The CCU11201includes a communication unit11411, an image processing unit11412, and a control unit11413. The camera head11102and the CCU11201are connected to be capable of mutual communication through a transmission cable11400. The lens unit11401is an optical system provided in a connection portion with the lens tube11101. Observation light incorporated from a tip end of the lens tube11101is guided to the camera head11102and is incident on the lens unit11401. The lens unit11401includes a combination of a plurality of lenses including a zoom lens and a focus lens. The imaging unit11402includes an imaging element. The image sensor constituting the imaging unit11402may be one (so-called single plate type) or plural (so-called multi-plate type). In a case where the imaging unit11402is configured as a multi-plate type, for example, image signals corresponding to RGB may be generated by each image sensor, and a color image may be obtained by combining them. Alternatively, the imaging unit11402may include a pair of image sensors for respectively acquiring right-eye and left-eye image signals corresponding to 3D (dimensional) display. The 3D display is performed, and thus, the surgery operator11131is capable of more accurately grasping the depth of the biological tissue in the surgery portion. Note that, in a case where the imaging unit11402is configured by a multi-plate type configuration, a plurality of lens units11401may be provided corresponding to each of the image sensors. 
Furthermore, the imaging unit11402may not be necessarily provided in the camera head11102. For example, the imaging unit11402may be provided immediately after the objective lens, in the lens tube11101. The drive unit11403includes an actuator, and moves the zoom lens and the focus lens of the lens unit11401along the optical axis by a predetermined distance, according to the control from the camera head control unit11405. Therefore, it is possible to suitably adjust the magnification and the focal point of the image captured by the imaging unit11402. The communication unit11404includes a communication apparatus for transmitting and receiving various information items with respect to the CCU11201. The communication unit11404transmits the image signal obtained from the imaging unit11402to the CCU11201through the transmission cable11400, as the RAW data. Furthermore, the communication unit11404receives a control signal for controlling the driving of the camera head11102from the CCU11201and supplies the control signal to the camera head control unit11405. The control signal, for example, includes information associated with the imaging condition, such as information of designating a frame rate of the captured image, information of designating an exposure value at the time of the imaging, and/or information of designating the magnification and the focal point of the imaged image. Note that the imaging conditions such as the frame rate, exposure value, magnification, and focus described above may be appropriately designated by the user, or may be automatically set by the control unit11413of the CCU11201on the basis of the acquired image signal. In the latter case, a so-called auto exposure (AE) function, an auto focus (AF) function, and an auto white balance (AWB) function are provided in the endoscope11100. The camera head control unit11405controls the driving of the camera head11102on the basis of the control signal from the CCU11201received through the communication unit11404. The communication unit11411includes a communication apparatus for transmitting and receiving various information items with respect to the camera head11102. The communication unit11411receives the image signal to be transmitted from the camera head11102, through the transmission cable11400. Furthermore, the communication unit11411transmits the control signal for controlling the driving of the camera head11102to the camera head11102. The image signal and the control signal can be transmitted by electrical communication, optical communication, or the like. The image processing unit11412performs various image processing on the image signal which is the RAW data transmitted from the camera head11102. The control unit11413performs various types of control related to imaging of the surgical site or the like by the endoscope11100and display of a captured image obtained by imaging of the surgical site or the like. For example, the control unit11413generates the control signal for controlling the driving of the camera head11102. Furthermore, the control unit11413causes the display apparatus11202to display the captured image of the surgery site or the like on the basis of the image signal subjected to the image processing by the image processing unit11412. At this time, the control unit11413may recognize various objects in the captured image by using various image recognition technologies. 
For example, the control unit 11413 detects the shape, the color, or the like of the edge of an object included in the captured image, and thus can recognize a surgical tool such as forceps, a specific biological portion, bleeding, mist at the time of using the energy treatment tool 11112, and the like. When the captured image is displayed on the display apparatus 11202, the control unit 11413 may display various surgery support information items superimposed on the image of the surgery site by using the recognition result. Surgery support information is displayed in a superimposed manner and presented to the operator 11131, thereby reducing the burden on the operator 11131 and allowing the operator 11131 to proceed with surgery reliably. The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 together is an electrical signal cable corresponding to communication of electrical signals, an optical fiber corresponding to optical communication, or a composite cable thereof. Here, in the illustrated example, the communication is performed in a wired manner by using the transmission cable 11400, but the communication between the camera head 11102 and the CCU 11201 may be performed in a wireless manner. An example of the endoscopic surgery system to which the technology according to the present disclosure can be applied has been described. The technology according to the present disclosure can be applied to (the imaging unit 11402 of) the camera head 11102 among the configurations described above. Specifically, the solid-state imaging element 10 of FIG. 1 can be applied to (the imaging unit 11402 of) the camera head 11102. By applying the technology according to the present disclosure to the imaging unit 11402, it is possible to increase the frame rate and obtain a more observable surgical site image, so that the operator can reliably confirm the surgical site. Note that, although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to, for example, a microscope surgery system and the like. Note that the embodiment of the present technology is not limited to the aforementioned embodiments, and various changes may be made within the scope not departing from the gist of the present technology. Furthermore, the present technology can adopt the configurations described below.
(1) An imaging apparatus including: a pixel array unit including a first pixel portion and a second pixel portion different from the first pixel portion, in which each of the first pixel portion and the second pixel portion includes a first photoelectric conversion unit and a second photoelectric conversion unit adjacent to the first photoelectric conversion unit, and the pixel array unit includes a first drive line connected to the first photoelectric conversion unit of the first pixel portion and the second pixel portion, a second drive line connected to the second photoelectric conversion unit of the first pixel portion, and a third drive line connected to the second photoelectric conversion unit of the second pixel portion.
(2) The imaging apparatus according to (1), in which the second drive line is not connected to the second photoelectric conversion unit of the second pixel portion.
(3) The imaging apparatus according to (1) or (2), in which the third drive line is not connected to the second photoelectric conversion unit of the first pixel portion.
(4) The imaging apparatus according to any of (1) to (3), further including an illuminance detection unit that detects illuminance in an imaging region of the pixel array unit, in which, in a case where the illuminance detected by the illuminance detection unit is smaller than a predetermined threshold value, in the first pixel portion, a pixel signal corresponding to the first photoelectric conversion unit and a pixel signal corresponding to the second photoelectric conversion unit are generated using the first drive line and the second drive line, and in a case where the illuminance detected by the illuminance detection unit is larger than the predetermined threshold value, in the second pixel portion, a pixel signal corresponding to the first photoelectric conversion unit and a pixel signal corresponding to the second photoelectric conversion unit are generated using the first drive line and the third drive line, and meanwhile, in the first pixel portion, a pixel signal corresponding to the first photoelectric conversion unit and a pixel signal corresponding to the second photoelectric conversion unit are added up and generated.
(5) The imaging apparatus according to (4), further including an acquisition unit that acquires accuracy-related information related to accuracy of phase difference detection using the pixel signal, in which a value indicated by the accuracy-related information acquired by the acquisition unit is used for determination with the predetermined threshold value together with a value indicated by the illuminance.
(6) The imaging apparatus according to (5), in which the accuracy-related information includes a luminance level in a target region in a target image frame.
(7) The imaging apparatus according to (5) or (6), in which the accuracy-related information includes a number of effective pixels among pixels used for phase difference detection.
(8) The imaging apparatus according to any of (5) to (7), in which the accuracy-related information includes a size of a region of interest in the target image frame.
(9) The imaging apparatus according to any of (1) to (8), in which the first pixel portion includes a pixel unit having one or more photoelectric conversion units, and the second pixel portion includes a pixel unit having one or more photoelectric conversion units.
(10) The imaging apparatus according to (9), in which the first pixel portion has an even number of photoelectric conversion units, and the second pixel portion has an even number of photoelectric conversion units.
(11) The imaging apparatus according to (10), in which the first pixel portion has two photoelectric conversion units, and the second pixel portion has two photoelectric conversion units.
(12) The imaging apparatus according to (10), in which the first pixel portion has four photoelectric conversion units, and the second pixel portion has four photoelectric conversion units.
(13) The imaging apparatus according to any of (4) to (12), in which the illuminance detection unit detects the illuminance on the basis of exposure information.
(14) The imaging apparatus according to (13), in which the illuminance detection unit detects the illuminance on the basis of an exposure amount obtained from an image frame preceding a target image frame.
(15) The imaging apparatus according to any of (4) to (14), in which the illuminance detection unit is provided inside or outside the apparatus.
(16) The imaging apparatus according to any of (4) to (15), further including a drive control unit that controls driving of the first pixel portion and the second pixel portion on the basis of the illuminance detected by the illuminance detection unit.
(17) The imaging apparatus according to (16), further including a correction unit that corrects the pixel signal used for phase difference detection.
(18) Electronic equipment including: an imaging unit including a pixel array unit including a first pixel portion and a second pixel portion different from the first pixel portion, in which each of the first pixel portion and the second pixel portion includes a first photoelectric conversion unit and a second photoelectric conversion unit adjacent to the first photoelectric conversion unit, and the pixel array unit includes a first drive line connected to the first photoelectric conversion unit of the first pixel portion and the second pixel portion, a second drive line connected to the second photoelectric conversion unit of the first pixel portion, and a third drive line connected to the second photoelectric conversion unit of the second pixel portion.
(19) An imaging apparatus including: a pixel array unit including a first pixel portion and a second pixel portion different from the first pixel portion; and an illuminance detection unit that detects illuminance in an imaging region of the pixel array unit, in which each of the first pixel portion and the second pixel portion includes a first photoelectric conversion unit and a second photoelectric conversion unit adjacent to the first photoelectric conversion unit, in a case where the illuminance detected by the illuminance detection unit is smaller than a predetermined threshold value, in the first pixel portion and the second pixel portion, a pixel signal from the first photoelectric conversion unit and a pixel signal from the second photoelectric conversion unit are read, and in a case where the illuminance detected by the illuminance detection unit is larger than the predetermined threshold value, in the second pixel portion, a pixel signal from the first photoelectric conversion unit and a pixel signal from the second photoelectric conversion unit are read, and meanwhile, in the first pixel portion, a pixel signal from the first photoelectric conversion unit and a pixel signal from the second photoelectric conversion unit are added up and read.
REFERENCE SIGNS LIST
1A, 1B, 1C Imaging apparatus
10 Solid-state imaging element
11 Pixel array unit
12 Vertical drive circuit
13 Column signal processing circuit
14 Horizontal drive circuit
15 Output circuit
16 Control circuit
17 Input/output terminal
21 Pixel drive line
22 Vertical signal line
100 Pixel
100A, 100B Pixel
120 Pixel
120A, 120B, 120C, 120D Pixel
111 On-chip lens
112A, 112B, 112C, 112D Photodiode
113 Color filter
151 Comparator
152 DAC
200A, 200B, 200C Control unit
211 Sensor drive control unit
212 AE unit
213 Luminance level detection unit
214 Phase difference detection unit
215 Counting unit
216 ROI setting unit
300 Signal processing unit
311 Pixel correction unit
312 Selector
313 Image signal processing unit
1000 Electronic equipment
1012 Imaging unit
144,123
11943550
DETAILED DESCRIPTION OF THE EMBODIMENTS In order to make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are clearly and completely described below in conjunction with the drawings in the embodiments of the present disclosure. Apparently, the described embodiments are some rather than all of the embodiments of the present disclosure. On the basis of the embodiments of the present disclosure, all other embodiments acquired by those of ordinary skill in the art without making inventive efforts fall within the scope of protection of the present disclosure. It should be noted that in the description of the embodiments of the present disclosure, the terms such as “center”, “top”, “bottom”, “left”, “right”, “vertical”, “horizontal”, “inner” and “outer” indicate the orientation or position relationships based on the drawings. These terms are merely intended to facilitate description of the embodiments of the present disclosure and simplify the description, rather than to indicate or imply that the mentioned apparatus or element must have a specific orientation and must be constructed and operated in a specific orientation. Therefore, these terms should not be construed as a limitation to the embodiments of the present disclosure. In addition, the terms such as “first”, “second”, and “third” are used only for descriptive purposes, and should not be construed as indicating or implying relative importance. In the description of the embodiments of the present disclosure, unless otherwise clearly specified, the terms such as “mounting”, “interconnection”, and “connection” are intended to be understood in a broad sense. For example, the “connection” may be a fixed connection, a detachable connection or an integrated connection; a mechanical connection or an electrical connection; or a direct connection, an indirect connection via a medium, or inner communication between two elements. A person of ordinary skill in the art may understand specific meanings of the foregoing terms in the embodiments of the present disclosure based on a specific situation.
The embodiments of the present disclosure provide a dual-modality neuromorphic vision sensor, including: a first-type current-mode APS circuit and a voltage-mode APS circuit; the first-type current-mode APS circuit includes a target first-type photosensitive device; and the target first-type photosensitive device is configured to obtain a target light signal, and convert the target light signal into a first-type current signal, and the first-type current-mode APS circuit is configured to output, based on a difference between the first-type current signal and a sum of second-type current signals converted by a first preset quantity of non-target first-type photosensitive devices around the target first-type photosensitive device, a specified digital signal representing light intensity gradient information in the target light signal; the voltage-mode APS circuit includes a second-type photosensitive device, the second-type photosensitive device is configured to obtain the target light signal, extract a light signal of a specified frequency band from the target light signal, and convert the light signal of the specified frequency band into a third-type current signal, and the voltage-mode APS circuit is configured to output, based on the third-type current signal, a target voltage signal representing light intensity information in the target light signal; and wherein, each of the non-target first-type photosensitive devices is connected to a first-type control switch in series. Specifically, the embodiments of the present disclosure provide a dual-modality neuromorphic vision sensor. Its pixel array is formed by a photosensitive device, which is controlled by a control circuit. As shown inFIG.1, the control circuit includes a current-mode APS circuit1and a voltage-mode APS circuit2. The photosensitive devices in the neuromorphic vision sensor can sense different types of light, and therefore are classified into first-type photosensitive devices and second-type photosensitive devices. The first-type photosensitive device is configured to directly sense the target light signal. The second-type photosensitive device is configured to sense the color component in the target light signal. In the embodiments of the present disclosure, the color component in the target light signal is marked as a light signal of a specified frequency band, that is, the first-type photosensitive device is configured to obtain the target light signal, and convert the target light signal into a current signal. The second-type photosensitive device is configured to obtain a target light signal, extract a light signal of a specified frequency band from the target light signal, and convert the light signal of the specified frequency band into a current signal. The target light signal refers to the light signal reflected by the surface of the target object. The target light signal may be irradiated on the first-type photosensitive device or the second-type photosensitive device directly, through a collimating lens, or through a cover item. The wave band of the target light signal may be visible, that is, the target light signal may be a visible light signal. The target object refers to an object that is observed with human eyes, and may be a real object, an image, or another form. The specific shape of the target object is not limited in the present disclosure. The quantities of the first-type photosensitive devices and the second-type photosensitive devices may be set according to a need. 
The first-type photosensitive device and its control circuit can mimic the rod cells, and the second-type photosensitive device and its control circuit can mimic the cone cells. The first-type photosensitive device may specifically include a target first-type photosensitive device and a non-target first-type photosensitive device other than the target first-type photosensitive device. The target first-type photosensitive device and its control circuit can mimic excitatory rod cells, and the non-target first-type photosensitive device and its control circuit can mimic inhibitory rod cells. In the embodiments of the present disclosure, to distinguish the current signals obtained through conversion by the first-type photosensitive device and the second-type photosensitive device, the current signal obtained through conversion by the target first-type photosensitive device is marked as a first-type current signal. The current signal obtained through conversion by the non-target first-type photosensitive device is marked as a second-type current signal. The current signal obtained through conversion by the second-type photosensitive device is marked as a third-type current signal. The first-type photosensitive device in the dual-modality neuromorphic vision sensor is controlled by a current-mode APS circuit. The quantity of the current-mode APS circuits may be determined according to the quantity of the target first-type photosensitive devices. In the embodiments of the present disclosure, to distinguish the control circuits of the target first-type photosensitive device and the non-target first-type photosensitive device, a control circuit of the target first-type photosensitive device is marked as a first-type current-mode APS circuit. The control circuit of the non-target first-type photosensitive device is marked as a second-type current-mode APS circuit. Each target first-type photosensitive device corresponds to a first-type current-mode APS circuit. Each non-target first-type photosensitive device corresponds to a second-type current-mode APS circuit. The second-type photosensitive device is controlled by the voltage-mode APS circuit. A quantity of the voltage-mode APS circuits may be smaller than or equal to that of the second-type photosensitive devices. The quantity relationship between them is specifically determined according to the quantity of the second-type photosensitive devices and a reuse situation, which is not specifically limited in the embodiments of the present disclosure. The current-mode APS circuit refers to an APS circuit operating under a current mode, that is, after the target first-type photosensitive device obtains the first-type current signal through conversion, the first-type current-mode APS circuit does not need to integrate the first-type current signal, but directly outputs, based on a difference between the first-type current signal and a sum of second-type current signals converted by a preset quantity of non-target first-type photosensitive devices around the target first-type photosensitive device, a specified digital signal representing light intensity gradient information in the target light signal. The non-target first-type photosensitive devices are each connected to a first-type control switch in series. The first-type control switch may be specifically a metal-oxide-semiconductor (MOS) transistor. 
All the first-type control switches may be turned on or off at the same time; alternatively, some of them may be turned on or off, which is specifically set according to a need, and is not limited in the embodiments of the present disclosure. Therefore, the first-type control switch is configurable. Because the first-type control switch determines whether the non-target first-type photosensitive device around the target first-type photosensitive device is effective, it may be understood that the first-type control switch may be used as a 1-bit convolution kernel with a configurable parameter to perform 1-bit convolution operation within a pixel on the current signal obtained through conversion by the first-type photosensitive device, at a high operation speed, to extract the features. The voltage-mode APS circuit refers to an APS circuit operating under a voltage mode, that is, after obtaining the third-type current signal through conversion, the second-type photosensitive device needs to integrate it, to obtain a target voltage signal. The target voltage signal represents the light intensity information in the target light signal. The light intensity information is absolute and includes color information. Embodiments of the present disclosure provide a dual-modality neuromorphic vision sensor. On the one hand, a first-type current-mode APS circuit can mimic excitatory rod cells, to perceive light intensity gradient information in a target light signal, thereby improving a dynamic range of an image sensed by a neuromorphic vision sensor and its shooting speed. In addition, a first-type control switch is introduced for each of non-target first-type photosensitive devices, to control the obtained light intensity gradient information, and adjust the dynamic range of the image sensed by the neuromorphic vision sensor, thereby adjusting the shooting speed, and realizing a reconfigurable effect. On the other hand, a voltage-mode APS circuit can mimic cone cells, to output a target voltage signal representing light intensity information in the target light signal, and perceive the light intensity information in the target light signal. In this way, the obtained light intensity information represented by the target voltage signal has a higher precision, and an image with higher quality can be obtained, that is, the image has a higher signal-to-noise ratio. Based on the foregoing embodiments, in the dual-modality neuromorphic vision sensor provided by this embodiment of the present disclosure, when the light intensity of the target light signal is greater than a first preset value, all the first-type control switches are turned on at the same time. When the light intensity of the target light signal is smaller than a second preset value, all the first-type control switches are turned off at the same time. Specifically, all the first-type control switches are independent of each other. When one is turned on or off, another one is not affected. The quantity of the switches to be turned on or off may be selected according to a need. For example, all the switches may be turned on or off. In the embodiments of the present disclosure, to obtain a better effect, when the light intensity of the target light signal is greater than the first preset value, all the first-type control switches are turned on at the same time. When the light intensity of the target light signal is smaller than the second preset value, all the first-type control switches are turned off at the same time. 
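A minimal sketch of the 1-bit in-pixel convolution implied by the configurable first-type control switches follows; the amplification of the centre current by the neighbour count and all variable names are assumptions used only for illustration, not the exact circuit.

```python
def in_pixel_convolution(i_center, neighbour_currents, switches):
    """Signed difference presented to the CP: amplified centre current minus the
    gated sum of the neighbouring second-type current signals."""
    assert len(neighbour_currents) == len(switches)
    amplified_center = len(neighbour_currents) * i_center                 # first current amplifier
    gated_sum = sum(s * i for s, i in zip(switches, neighbour_currents))  # adder with 1-bit switches
    return amplified_center - gated_sum

# All switches on: differential-mode behaviour (light intensity gradient / edges).
print(in_pixel_convolution(1.0, [0.9, 1.1, 1.0, 1.2], [1, 1, 1, 1]))
# All switches off: common-mode behaviour (only the centre current remains).
print(in_pixel_convolution(1.0, [0.9, 1.1, 1.0, 1.2], [0, 0, 0, 0]))
```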
The first preset value and the second preset value may be determined according to the type, a parameter, and an ambient light intensity of the photosensitive device. For example, the first preset value may be 10 klux, and the second preset value may be 50 lux. In other words, when the light intensity of the target light signal is greater than the first preset value, it is indicated that the light is strong. In this case, to prevent the DACs and the CPs in the first-type current-mode APS circuit from being saturated, all the first-type control switches are turned on at the same time. In this case, all non-target first-type photosensitive devices are effective, and the specified digital signal outputted by the first-type current-mode APS circuit is a differential-mode signal, which can make the neuromorphic vision sensor obtain the edge information of an image. When the light intensity of the target light signal is smaller than the second preset value, it is indicated that the light is weak. In this case, a first-type current signal I0obtained through conversion by the target first-type photosensitive device is small. Therefore, all the first-type control switches are turned off at the same time. In this case, all non-target first-type photosensitive devices are ineffective, and the specified digital signal outputted by the first-type current-mode APS circuit is a common-mode signal, which can make the neuromorphic vision sensor obtain the original information of an image. The first-type current-mode APS circuit provided in the embodiments of the present disclosure includes a gap junction that can mimic human eyes better, thereby improving the dynamic range of an image sensed by the neuromorphic vision sensor. It should be noted that when the light intensity of the target light signal is smaller than the first preset value and greater than the second preset value, it is indicated that the light is moderate. In this case, some of the first-type control switches may be turned on, and some of them may be turned off. When at least one of the first-type control switches is turned on, the specified digital signal outputted by the first-type current-mode APS circuit is a differential-mode signal. When all the first-type control switches are turned off, the specified digital signal outputted by the first-type current-mode APS circuit is a common-mode signal. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the first-type current-mode APS circuit further includes: a first current amplifier, a CP, an adder, and a DAC; the target first-type photosensitive device is connected to the first current amplifier, and the first current amplifier is connected to an input end of the CP; an input end of the adder is connected to the first-type control switch, and an output end of the adder is connected to the other input end of the CP; and an output end of the CP is connected to the DAC; the DAC converts an inputted specified digital signal into a specified analog signal and outputs the specified analog signal to the first current amplifier or the adder until the output end of the CP outputs an event pulse signal; when the event pulse signal is outputted, the first-type current-mode APS circuit outputs the specified digital signal, and the specified digital signal is used for representing light intensity gradient information in the target light signal. 
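Before the circuit structure is elaborated with reference toFIG.2, the switch-configuration rule described above can be summarized in a hedged sketch; the 10 klux and 50 lux values come from the example in the preceding paragraph, while the helper name and the policy for moderate light are assumptions.

```python
def configure_switches(light_intensity_lux, n_switches=4,
                       first_preset=10_000, second_preset=50,
                       moderate_pattern=None):
    """Return the on/off state (1/0) of each first-type control switch."""
    if light_intensity_lux > first_preset:        # strong light: differential mode
        return [1] * n_switches
    if light_intensity_lux < second_preset:       # weak light: common mode
        return [0] * n_switches
    # moderate light: any subset may be enabled, chosen according to a need
    return moderate_pattern if moderate_pattern is not None else [1, 0, 1, 0]

print(configure_switches(20_000))   # [1, 1, 1, 1] -> edge information
print(configure_switches(10))       # [0, 0, 0, 0] -> original image information
print(configure_switches(500))      # moderate: a partially enabled kernel
```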
Specifically,FIG.2shows a first-type current-mode APS circuit provided by the embodiments of the present disclosure, and configured to control a target first-type photosensitive device. InFIG.2, the first-type current-mode APS circuit includes a target first-type photosensitive device11, a first current amplifier12, a CP13, an adder14, and a DAC15. The target first-type photosensitive device11is connected to the first current amplifier12. The first current amplifier12is configured to amplify a first-type current signal I0obtained through conversion by the target first-type photosensitive device11by a first preset quantity of times. In other words, the quantity of amplifying times is equal to that of non-target first-type photosensitive devices around the target first-type photosensitive device11, to ensure that the amplified first-type current signal and a sum of second-type current signals obtained through conversion by the first preset quantity of non-target first-type photosensitive devices around the target first-type photosensitive device11are on a same order of magnitude. It should be noted that the first-type photosensitive device provided by the embodiments of the present disclosure does not include a CF, and therefore the response band of the first-type photosensitive device is determined by the device itself. The first current amplifier12is connected to an input end of the CP13, to input the amplified first-type current signal into the CP13. Four non-target first-type photosensitive devices around the target first-type photosensitive device11are each connected to an input end of the adder14. Because the non-target first-type photosensitive devices are each connected to a first-type control switch in series, the embodiments of the present disclosure show only the first-type control switches M1, M2, M3, and M4connected to the non-target first-type photosensitive devices respectively. An output end of the adder14is connected to the other input end of the CP13. Current signals I1, I2, I3, and I4obtained through conversion by the four non-target first-type photosensitive devices are each inputted into the adder14for summation. Then, the adder14inputs a sum result to the CP13. The CP13compares the amplified first-type current signal with the sum result of the adder14. When the compared results at a previous moment and at a current moment are the same, no outputting is performed, and the DAC15converts an inputted specified digital signal into a specified analog signal, and outputs the specified analog signal to the first current amplifier12or the adder14. The specified analog signal outputted to the first current amplifier12is marked as IDA2, and the specified analog signal outputted to the adder14is marked as IDA1. After the specified analog signal is outputted, the CP13performs the comparison again. When the compared results at a previous moment and at a later moment are opposite, the output end of the CP13outputs an event pulse signal, that is, the CP13is in an edge-triggered state. In this case, the first-type current-mode APS circuit outputs the specified digital signal. The specified digital signal is used for representing the light intensity gradient information in the target light signal. The specified digital signal outputted by the first-type current-mode APS circuit is represented by using 0 and 1. A specified digital signal that increases periodically may be manually inputted into the DAC15. The specific change form of the specified digital signal is shown inFIG.3. 
The specified digital signal specifically increases in a step-like manner with time. At a moment N*step, the value of the specified digital signal is ΔI, where N is the quantity of steps that have passed and step is the duration of each step. If the CP13outputs an event pulse signal, that is, the CP13is in the edge-triggered state, ΔI at this moment is used as an output of the first-type current-mode APS circuit. It should be noted that the adder in the embodiments of the present disclosure may be an actual device or a functional module realizing an adding function by, for example, combining lines where the current signals I1, I2, I3, and I4are located into one line. In addition, the first current amplifier may be an actual device or a functional module amplifying a current, which is not specifically limited in the embodiments of the present disclosure. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the first-type current-mode APS circuit further includes: a three-state gate circuit, where the three-state gate circuit is connected to the output end of the CP and an input end of the DAC; and the three-state gate circuit is configured to output the specified digital signal when the output end of the CP outputs the event pulse signal. Specifically, as shown inFIG.4, in the embodiments of the present disclosure, the first-type current-mode APS circuit further includes: a three-state gate circuit41. The three-state gate circuit41is connected to an output end of the CP13and an input end of the DAC15. The three-state gate circuit41is configured to output the specified digital signal when the output end of the CP13outputs an event pulse signal, that is, the CP13is in the edge-triggered state. FIG.5is a specific schematic structural diagram of a first-type current-mode APS circuit according to an embodiment of the present disclosure. InFIG.5, a circuit structure51mimics a rod cell circuit, and a circuit structure52mimics a ganglion cell and a bipolar cell. Vcc is a power supply of the control circuit, and the target first-type photosensitive device53is connected to Vcc. A current mirror54amplifies, by four times, the first-type current signal I0obtained through conversion by the target first-type photosensitive device53, and is connected to an input end of the CP56. The current signals obtained through conversion by the four non-target first-type photosensitive devices around the target first-type photosensitive device53are I1, I2, I3, and I4respectively. It should be noted that the current mirror54inFIG.5is a first current amplifier. The four non-target first-type photosensitive devices around the target first-type photosensitive device53are not shown inFIG.5, but the first-type control switches M1, M2, M3, and M4connected to the non-target first-type photosensitive devices respectively are shown there. The adder combines the lines where the current signals I1, I2, I3, and I4are located into one line. The combined line is connected to an input end of the CP56. The CP56compares the amplified first-type current signal with the sum of I1, I2, I3, and I4. When the compared results at a previous moment and at a current moment are the same, no outputting is performed, and the DAC55converts an inputted specified digital signal into a specified analog signal, and outputs the specified analog signal to the target first-type photosensitive device53or a non-target first-type photosensitive device. 
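The staircase readout described forFIG.3can be summarized with a small behavioural sketch before the remainder of theFIG.5walkthrough; the step size, the side to which the DAC current is added, and all names are assumptions for illustration, not the exact circuit behaviour.

```python
def staircase_readout(i_amplified, i_sum, i_step=0.05, max_steps=16):
    """Return (digital_code, event_fired), approximating |i_amplified - i_sum|."""
    prev_cmp = i_amplified > i_sum
    for n in range(1, max_steps + 1):
        i_dac = n * i_step                      # DAC output after n steps
        if prev_cmp:                            # DAC current added to the adder side
            cmp_now = i_amplified > i_sum + i_dac
        else:                                   # DAC current added to the amplifier side
            cmp_now = i_amplified + i_dac > i_sum
        if cmp_now != prev_cmp:                 # comparator edge -> event pulse
            return n * i_step, True             # three-state gate outputs the code
    return max_steps * i_step, False            # saturated, no event

print(staircase_readout(i_amplified=4.3, i_sum=4.0))   # ~0.3 difference recovered
```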
Returning toFIG.5, after the specified analog signal is outputted, the CP56performs the comparison again. When the compared results at a previous moment and at a later moment are opposite, the output end of the CP56outputs an event pulse signal, that is, the CP56is in an edge-triggered state. In this case, the three-state gate circuit57outputs the specified digital signal. InFIG.5, a capacitor58is also connected between the CP56and the ground. The capacitor58may be an actual capacitor or a virtual parasitic capacitor in the first-type current-mode APS circuit, which is not specifically limited in the embodiments of the present disclosure. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the first-type current-mode APS circuit further includes: a storage unit. The storage unit is connected to an output end of the three-state gate circuit, and is configured to store the specified digital signal outputted by the first-type current-mode APS circuit. The storage unit may specifically be a register, a latch, a static random-access memory (SRAM), a dynamic random-access memory (DRAM), a memristor, or the like. Taking the register as an example, the quantity of bits of the register may be selected according to the precision of the DAC. A 4-bit register may be selected in the embodiment of the present disclosure. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the voltage-mode APS circuit specifically includes: a second preset quantity of second-type photosensitive devices around the target first-type photosensitive device, the second-type photosensitive devices are each connected to a second-type control switch, and at a same moment, only one second-type control switch is in a conductive state. The voltage-mode APS circuit further includes a current integrator (CI), a shutter, and an analog-to-digital converter (ADC), each of the second-type photosensitive devices and the second-type control switch connected thereto in series form a device branch, and all device branches are connected in parallel and share the CI, the shutter, and the ADC. The CI is configured to obtain a voltage analog signal of a target capacitor in the voltage-mode APS circuit; the shutter is configured to control integration time of the CI; and the ADC is configured to convert the voltage analog signal of the target capacitor into the target voltage signal. Specifically, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, a plurality of second-type photosensitive devices may share a voltage-mode APS circuit, that is, each voltage-mode APS circuit includes the second preset quantity of second-type photosensitive devices around the target first-type photosensitive device. It may be understood that the target first-type photosensitive device and the second preset quantity of second-type photosensitive devices around it form a set. The target first-type photosensitive device in the set is controlled by a current-mode APS circuit. All the second-type photosensitive devices in the set are controlled by a voltage-mode APS circuit. FIG.6is a schematic structural diagram of a voltage-mode APS circuit according to an embodiment of the present disclosure.FIG.6shows four second-type photosensitive devices61, a CI62, a shutter64, and an ADC63. 
The second-type photosensitive devices61are each connected to a second-type control switch65, and at a same moment, only one second-type control switch65is in a conductive state. Each of the second-type photosensitive devices61and the second-type control switch65connected thereto in series form a device branch, and four device branches are connected in parallel and share the CI62, the shutter64, and the ADC63. The second-type control switch65may be specifically a MOS transistor. The CI62is configured to obtain a voltage analog signal of a target capacitor in the voltage-mode APS circuit; and the ADC63is configured to convert the voltage analog signal of the target capacitor into the target voltage signal. The shutter64is configured to control integration time of the CI62. For example, the shutter64controls the integration time of the CI62to be 33 ms. After 33 ms, the shutter64is closed, and the CI62obtains the voltage analog signal of the target capacitor, which is read by the ADC63. In the embodiment of the present disclosure, a storage unit may also be connected to the ADC63to store the voltage analog signal of the target capacitor read by the ADC63. The storage unit may specifically be a register, a latch, an SRAM, a DRAM, a memristor, or the like. Taking the register as an example, the quantity of bits of the register may be selected according to the precision of the ADC63. An 8-bit register may be selected in the embodiment of the present disclosure to store the voltage analog signal of the target capacitor. After the ADC63performs the reading, the shutter64may also be disconnected, and the CI62continues to integrate the current of the target capacitor. A video signal can be obtained by repeating the foregoing steps. FIG.7is a specific schematic structural diagram of a voltage-mode APS circuit according to an embodiment of the present disclosure.FIG.7shows four second-type photosensitive devices:71,72,73, and74. The second-type photosensitive device71is connected to the second-type control switch75in series, to form a first device branch. The second-type photosensitive device72is connected to the second-type control switch76in series, to form a second device branch. The second-type photosensitive device73is connected to the second-type control switch78in series, to form a third device branch. The second-type photosensitive device74is connected to the second-type control switch77in series, to form a fourth device branch. The first device branch, second device branch, third device branch, and fourth device branch are connected in parallel, and then are connected to the MOS transistors79and710. The MOS transistor710is connected to a MOS transistor711. The MOS transistor79is configured to perform biasing. The MOS transistor710is configured to switch on/off. The MOS transistor711is configured to perform current integration on the third-type current signal obtained through conversion by a second-type photosensitive device on a device branch to obtain a target voltage signal, which represents the light intensity information in the target light signal. The voltage-mode APS circuit provided in the embodiments of the present disclosure controls the device branch by using the second-type control switch connected to the second-type photosensitive device in series, thereby controlling a plurality of second-type photosensitive devices and improving the integration of the dual-modality neuromorphic vision sensor. 
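A minimal frame-capture sketch of the voltage-mode path follows, assuming the 33 ms integration window and 8-bit quantization mentioned above; the capacitance, full-scale voltage, photocurrent values and names are illustrative assumptions only.

```python
def integrate_and_read(i_photo_amps, t_int_s=0.033, c_farads=10e-15,
                       v_full_scale=1.0, adc_bits=8):
    """Integrate the third-type current for one shutter window and quantise it."""
    v = min(i_photo_amps * t_int_s / c_farads, v_full_scale)    # CI on the target capacitor
    code = round(v / v_full_scale * (2 ** adc_bits - 1))        # ADC conversion
    return code                                                 # stored in the register

def capture_video(branch_currents, n_frames=3):
    frames = []
    for _ in range(n_frames):
        # at any moment only one second-type control switch conducts, so the
        # shared CI / shutter / ADC serve the device branches one after another
        frames.append([integrate_and_read(i) for i in branch_currents])
    return frames

print(capture_video([1e-13, 2e-13, 1.5e-13, 0.5e-13]))   # repeated colour samples
```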
Based on the foregoing embodiment, the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure further includes: a second-type current-mode APS circuit. The second-type current-mode APS circuit includes a non-target first-type photosensitive device and a second preset quantity of current mirrors. The current mirrors are each connected to a target first-type photosensitive device around the non-target first-type photosensitive device in series. Specifically, in the embodiments of the present disclosure, the second-type current-mode APS circuit configured to control the non-target first-type photosensitive device specifically includes: a non-target first-type photosensitive device and a second preset quantity of current mirrors. The current mirrors are each connected to a target first-type photosensitive device around the non-target first-type photosensitive device in series. In other words, the second-type current-mode APS circuits in the embodiments of the present disclosure each control a non-target first-type photosensitive device. As shown inFIG.8, the second-type current-mode APS circuit includes a non-target first-type photosensitive device81and four first-type current mirrors82,83,84, and85. The first-type current mirrors are each connected to a target first-type photosensitive device around the non-target first-type photosensitive device81, that is, a current signal obtained through conversion by the non-target first-type photosensitive device81is copied into four identical current signals, which are each used by the first-type current-mode APS circuit of a target first-type photosensitive device around the non-target first-type photosensitive device81in obtaining the light intensity gradient information in the target light signal, to reuse the non-target first-type photosensitive device, and improve the pixel fill factor of the dual-modality neuromorphic vision sensor. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the first-type photosensitive device is specifically a PD or may be another device to convert a light signal into a current signal, which is not specifically limited in the embodiments of the present disclosure. It should be noted that the first-type photosensitive device does not include a CF. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the second-type photosensitive device specifically includes a CF and a PD. The CF is configured to obtain the target light signal and extract the light signal of the specified frequency band from the target light signal; and the PD is configured to convert the light signal of the specified frequency band into the third-type current signal. Specifically, in the embodiments of the present disclosure, the second-type photosensitive device is configured to sense the color component in the target light signal. The second-type photosensitive device may include a PD and a CF disposed on the PD. An image obtained finally by the dual-modality neuromorphic vision sensor is colorful. The CF is configured to obtain the target light signal and extract a light signal of a specified frequency band from the target light signal. The PD converts the light signal of the specified frequency band into the third-type current signal. The CF may be specifically a filter or lens configured to transmit the light signal of a specified wave band. 
When the CF is a lens, a Byron lens may be specifically selected, and other types of lenses may also be selected. The CFs can be classified into red CFs, blue CFs, and green CFs according to the wavelengths of the transmitted light signals, and the light signals transmitted by them are red light signals, blue light signals, and green light signals. It should be noted that the second-type photosensitive device may be further formed by the PD. PDs with different response curves are selected to obtain the target light signal, extract a light signal of a specified wave band from the target light signal, and convert the light signal of the specified wave band into a third-type current signal. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the target first-type photosensitive device, the non-target first-type photosensitive device, and the second-type photosensitive device are arranged to form a pixel array of the dual-modality neuromorphic vision sensor. In each row of the pixel array, the second-type photosensitive device and the target first-type photosensitive device are arranged alternately, or the second-type photosensitive device and the non-target first-type photosensitive device are arranged alternately. Specifically,FIG.9is a schematic structural diagram of an arrangement manner of the pixel array, and shows a first-type photosensitive device91and a second-type photosensitive device92. Each first-type photosensitive device and each second-type photosensitive device form a pixel. A target first-type photosensitive device in the first-type photosensitive device91is marked as “+”, and a non-target first-type photosensitive device is marked as “−”. A second-type photosensitive device92including a red CF is marked as “R”. A second-type photosensitive device92including a blue CF is marked as “B”. A second-type photosensitive device92including a green CF is marked as “G”. Four non-target first-type photosensitive devices and four second-type photosensitive devices are around each target first-type photosensitive device. Four target first-type photosensitive devices and four second-type photosensitive devices are around each non-target first-type photosensitive device. Specifically,FIG.10is a schematic structural diagram of an arrangement manner of the pixel array, and shows a first-type photosensitive device101and a second-type photosensitive device102. In the first-type photosensitive device101, the target first-type photosensitive device is marked as “+”, and the non-target first-type photosensitive device is marked as “−”. A second-type photosensitive device102including a red CF is marked as “R”. A second-type photosensitive device102including a blue CF is marked as “B”. A second-type photosensitive device102including a green CF is marked as “G”. Six non-target first-type photosensitive devices and two second-type photosensitive devices are around each target first-type photosensitive device. Two target first-type photosensitive devices and four second-type photosensitive devices are around each non-target first-type photosensitive device. Alternatively, four target first-type photosensitive devices and two second-type photosensitive devices are around each non-target first-type photosensitive device. The pixel array may be arranged in another form, which is not specifically limited in the embodiments of the present disclosure. 
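For illustration, aFIG.9-style arrangement can be generated programmatically: first-type devices occupy a checkerboard, with target “+” pixels and non-target “−” pixels on alternating first-type sites, so that each target is surrounded by four non-target devices and four second-type devices, and vice versa. The concrete R/G/B assignment used below is an assumption for display purposes only.

```python
def build_pixel_array(rows=6, cols=6):
    """Return a small grid using '+', '-', and 'R'/'G'/'B' markers as in FIG. 9."""
    colors = "RGB"
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if (r + c) % 2 == 0:                       # first-type device site
                row.append("+" if r % 2 == 0 else "-")
            else:                                      # second-type device with a CF
                row.append(colors[(r + c // 2) % 3])
        grid.append(row)
    return grid

for row in build_pixel_array():
    print(" ".join(row))
```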
Correspondingly, a first preset quantity and a second preset quantity corresponding to the pixel array shown inFIG.9are both 4. For the pixel array shown inFIG.10, a corresponding first preset quantity is 6, and a corresponding second preset quantity is 2 or 4. In the embodiment of the present disclosure, description is made by using the pixel array shown inFIG.9as an example. For example, the second-type photosensitive device71inFIG.7may be marked as “G”. The second-type photosensitive devices72and73may be marked as “R”. The second-type photosensitive device74may be marked as “B”. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor provided by the embodiments of the present disclosure, the first-type current-mode APS circuit further includes: a second current amplifier. The second current amplifier is connected between the target first-type photosensitive device and the first current amplifier. Specifically, in the embodiments of the present disclosure, because the current signal obtained through conversion by the first-type photosensitive device is small, the second current amplifier may be connected between the first current amplifier and the target first-type photosensitive device, and is configured to preliminarily amplify the first-type current signal obtained through conversion by the target first-type photosensitive device. The second current amplifier may be an actual device or a functional module amplifying a current, which is not specifically limited in the embodiments of the present disclosure. Correspondingly, a second current amplifier is further disposed between the non-target first-type photosensitive device around the target first-type photosensitive device and the adder, so that, before the adder, a current signal of the branch where the non-target first-type photosensitive device is located and a current signal of the branch where the target first-type photosensitive device is located are on a same order of magnitude. Based on the foregoing embodiment, in the dual-modality neuromorphic vision sensor in the embodiments of the present disclosure, the target voltage signal and the specified digital signal form an image. Specifically, in the embodiments of the present disclosure, the target voltage signal and the specified digital signal form an image. It should be noted that the target voltage signal and the specified digital signal are outputted in different forms and at different speeds. The target voltage signal is outputted once every 30 ms. Because a scanning period of the DAC in the first-type current-mode APS circuit is 1 ms, the output is expressed as an asynchronous event address, specifically, (X, Y, P, T). “X, Y” represents an event address, “P” represents a 4-value event output (including a sign bit), and “T” is the time when the event is generated. The finally outputted image is shown inFIG.11. The two frames of images are colorful and are formed by the successively outputted target voltage signals, and the edge points between the two frames of images are formed by the outputted specified digital signals. Finally, it should be noted that the foregoing embodiments are used only to explain the technical solutions of the present disclosure, but are not intended to limit the present disclosure. 
Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions on some technical features therein. The modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the spirit and scope of the technical solutions of the embodiments of the present disclosure.
DETAILED DESCRIPTION The following disclosure provides many different embodiments or examples for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various embodiments. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to discuss one element or feature's relationship to another element(s) or feature(s) as illustrated in the drawings. These spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the drawings. The apparatus may be otherwise oriented (e.g., rotated by 90 degrees or at other orientations), and the spatially relative descriptors used herein may likewise be interpreted accordingly. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in the respective testing measurements. Also, as used herein, the term “the same” generally means within 10%, 5%, 1%, or 0.5% of a given value or range. Alternatively, the term “the same” means within an acceptable standard error of the mean when considered by one of ordinary skill in the art. As could be appreciated, other than in the operating/working examples, or unless otherwise expressly specified, all of the numerical ranges, amounts, values, and percentages (such as those for quantities of materials, durations of times, temperatures, operating conditions, ratios of amounts, and the likes) disclosed herein should be understood as modified in all instances by the term “the same.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the present disclosure and attached claims are approximations that can vary as desired. At the very least, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Here, ranges can be expressed herein as from one endpoint to another endpoint or between two endpoints. All ranges disclosed herein are inclusive of the endpoints, unless specified otherwise. FIG.1is a functional block diagram illustrating an imaging system100according to embodiments of the present disclosure. The imaging system100can be implemented using a three-dimensional (3D) imaging system, which is configured to obtain depth information (or depth image) of surrounding target(s). 
For example (however, the present disclosure is not limited thereto), the imaging system100can be a TOF imaging system, which can obtain depth information of a target102by measuring the distance between the target102and the imaging system100. It should be noted that, in certain embodiments, the imaging system100could be a 3D imaging system, which can determine the depth information of the target102according to the change in the patterns of reflected light signals received by the receiving terminal. For the sake of brevity, the following embodiment is directed to the imaging system100implemented using a TOF imaging system to discuss the imaging solution of the present disclosure. However, persons having ordinary skill in the art should appreciate that the present imaging solution may be applied to other 3D imaging systems capable of obtaining the depth image according to light signals of the transmitting terminal and receiving terminal. The imaging system100can include (but is not limited to) a light-emitting unit110and an image sensor120. The light-emitting unit110is used to generate a light signal LS, wherein the light signal LS can have a predetermined pattern so that the energy can be concentrated on the predetermined pattern; for example, the predetermined pattern can be a speckle array, wherein the light energy is concentrated on each speckle of the speckle array. The light-emitting unit110can include a light source112and an optical microstructure114. The optical microstructure114is configured to change the path and irradiation range (and the like) of the light signal LI outputted from the light source112so as to generate the light signal LS having the predetermined pattern. In the present embodiment, the projection of the light signal LS on the target102forms a plurality of light speckles that are separated from each other to reduce the influence of background noise on the measurement result. For example (however, the present disclosure is not limited thereto), the optical microstructure114can include a diffractive optical element (DOE) or refractive optical element (ROE), configured to conically diffract (or conically refract) the light signal LI to generate the light signal LS such that the projection of the light signal LS onto the target102may form a plurality of light speckles separated from each other. In some embodiments, a collimating lens is further included between the light source112and the optical microstructure114for collimating the light signal LI to form parallel light. The image sensor120is configured to sense the reflected light signal LR returned from the target102to obtain image information of the target102, wherein the reflected light signal LR is generated by the light signal LS being reflected from the target102. In the present embodiment, the image sensor120includes (but is not limited to) a pixel array122, a light speckle position determination unit126, a depth information calculation unit124, and a storage128. 
Reference is also made toFIG.2, in which the pixel array122has a plurality of pixel rows extending along a first predetermined direction X and a plurality of pixel columns extending along a second predetermined direction Y; for example, the first pixel row consists of the pixel PD00to the pixel PD09, the second pixel row consists of the pixel PD10to the pixel PD19, the third pixel row consists of the pixel PD20to the pixel PD29; and the first pixel column consists of the pixel PD00to the pixel PD90, the second pixel column consists of the pixel PD01to the pixel PD91, and the third pixel column consists of the pixel PD02to the pixel PD92. In the present embodiment, the first predetermined direction X is perpendicular to the second predetermined direction Y, and the pixel array122is configured to sense the reflected light signal LR. It should be noted that the light signal LS can form a plurality of light speckles separated from each other on the surface of the target102, and the plurality of light speckles are reflected onto the pixel array122and form a plurality of light speckles separated from each other on the pixel array122, such as the black dots shown inFIG.2(in the actual image, they manifest as brighter light speckles, and the black dots in the appended drawings are for illustration purpose only), wherein each light speckle irradiates at least one pixel. The light speckle position determination unit126is coupled to the pixel array122and configured to scan the pixel array122during a pre-operation stage to establish a database of necessary pixels. Specifically, the pre-operation stage can be a pre-operation phase before the actual use of the imaging system100. For example, during the pre-operation stage, a completely flat target102without concave or convex depth may be used as a reference target to return the reflected light signal LR to the pixel array122so that the light speckle position determination unit126reads the sensing result of each pixel in each row of the pixel array122to detect all the plurality of positions of the plurality of light speckles irradiated on the pixel array122, i.e., to detect all the pixels irradiated in whole or in part by the plurality of light speckles. Then, the plurality of pixels in the pixel array are classified into a plurality of first-type pixels and a plurality of second-type pixels based on the plurality of locations. During the normal operation stage after the pre-operation stage, the plurality of second-type pixels in the pixel array122can be disabled, and only the plurality of first-type pixels are enabled. The depth information calculation unit124only uses the plurality of first-type pixels to calculate the depth information and ignores the plurality of second-type pixels so as to save power consumption and time. In certain embodiments, it is feasible to leave the plurality of second-type pixels in the pixel array122enabled, and the depth information calculation unit124can simply ignore (not read) the data outputted from the plurality of second-type pixels. The light speckle position determination unit126can store position information of the plurality of first-type pixels of the pixel array122in the storage128, and during the normal operation stage, the depth information calculation unit124reads the storage128to obtain position information of the plurality of first-type pixels so as to read a plurality of sensing results of the plurality of first-type pixels in the pixel array122. 
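A minimal pre-operation sketch follows, assuming the reference frame captured with the flat target is available as a 2-D list of pixel readings and that a simple threshold marks a pixel as irradiated by a speckle; the threshold, the data layout, and the row-wise variant (which anticipates the rule described in the following paragraphs) are assumptions. The resulting set of positions corresponds to what would be stored in the storage128.

```python
def irradiated_pixels(reference_frame, threshold):
    """Positions of pixels hit (wholly or partly) by a light speckle."""
    return {(r, c)
            for r, row in enumerate(reference_frame)
            for c, value in enumerate(row)
            if value > threshold}

def first_type_pixels_by_row(reference_frame, threshold):
    """Row-wise screening: keep every pixel of a row that contains a speckle."""
    n_cols = len(reference_frame[0])
    hit_rows = {r for r, _ in irradiated_pixels(reference_frame, threshold)}
    return {(r, c) for r in hit_rows for c in range(n_cols)}

reference = [
    [0, 9, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 8],
]
print(sorted(irradiated_pixels(reference, threshold=5)))        # [(0, 1), (2, 3)]
print(len(first_type_pixels_by_row(reference, threshold=5)))    # 8 pixels in rows 0 and 2
```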
During the normal operation stage (i.e., when the imaging system100is actually in use), the target102is often not planar but is an actual object to be detected that has an uneven appearance. The depth information calculation unit124detects the TOF of the reflected light signal LR based on the plurality of sensing results of the plurality of first-type pixels that are read, and then obtains the depth information of the target102according to the TOF. For example, the depth information calculation unit124can obtain the depth information (depth image) of the regions irradiated by the plurality of light speckles from the light signal LS on the surface of the target102according to the pixel output generated by the pixels irradiated by the plurality of light speckles. The light speckle position determination unit126takes at least a plurality of pixels on the pixel array122that are irradiated by the plurality of light speckles as the first-type pixels. In the present embodiment, the light-emitting unit110is arranged at one side of the pixel array122along the first predetermined direction X, and the two are arranged side by side, and a virtual line connecting the center of the light-emitting unit110and the center of the pixel array122is parallel to the first predetermined direction X, and hence during the normal operation stage, the concavity and convexity of the surface of the target102only result in changes in the position of the light speckle on the pixel array122along the first predetermined direction X. That is, in an ideal condition (i.e., without considering the mechanical errors when manufacturing the imaging system100and the distortion of the light speckle pattern resulting from the optical errors), as long as a pixel of any row in the pixel array122is irradiated by the light speckle during the pre-operation stage, the light speckle may also irradiate any other pixel in the same row during the normal operation stage; whereas if none of the pixels of a particular row in the pixel array122is irradiated by the light speckle during the pre-operation stage, no pixel of that row will be irradiated by the light speckle during the normal operation stage either. In this way, in the present embodiment, the light speckle position determination unit126takes all pixels in the plurality of pixel rows having the plurality of pixels that are irradiated by the plurality of light speckles on the pixel array122as the first-type pixels and takes the remaining pixels as the second-type pixels. The generation of the plurality of light speckles as shown inFIG.2can be implemented using the light source112in connection with the optical microstructure114. In one example, the light speckles that are generated have a plurality of light speckle rows LRL0, LRL1, and LRL2extending along the first predetermined direction X, wherein the plurality of light speckle rows LRL0, LRL1and LRL2are equally spaced, and the plurality of light speckles in each light speckle row LRL0, LRL1and LRL2are arranged following specific rules. 
Specifically, the plurality of light speckles in each light speckle row LRL0, LRL1are equally spaced, and the plurality of light speckles in two adjacent light speckle rows are arranged in a staggered manner; for example, the plurality of light speckles in the light speckle row LRL0and the plurality of light speckles in the light speckle row LRL1are staggered with each other; the plurality of light speckles in the light speckle row LRL1and the plurality of light speckles in the light speckle row LRL2are staggered with each other. Since the first pixel row (the pixel PD00to the pixel PD09), the second pixel row (the pixel PD10to the pixel PD19), the fifth pixel row (the pixel PD40to the pixel PD49), the sixth pixel row (the pixel PD50to the pixel PD59), the ninth pixel row (the pixel PD80to the pixel PD89) and the tenth pixel row (the pixel PD90to the pixel PD99) inFIG.2all have pixels that are irradiated by the plurality of light speckles, all pixels in the first pixel row (the pixel PD00to the pixel PD09), the second pixel row (the pixel PD10to the pixel PD19), the fifth pixel row (the pixel PD40to the pixel PD49), the sixth pixel row (the pixel PD50to the pixel PD59), the ninth pixel row (the pixel PD80to the pixel PD89) and the tenth pixel row (the pixel PD90to the pixel PD99) are determined as the first-type pixels, and the other pixels on the pixel array122are determined as the second-type pixels. The generation of the plurality of light speckles as shown inFIG.3can be implemented using the light source112in connection with the optical microstructure114. In one example, the light speckles that are generated have a plurality of light speckle rows LRL0, LRL1, and LRL2extending along the first predetermined direction X, wherein the plurality of light speckle rows LRL0, LRL1, and LRL2are equally spaced, and the plurality of light speckles in each light speckle row LRL0, LRL1, and LRL2are arranged following specific rules. Specifically, the plurality of light speckles in each light speckle row LRL0, LRL1, and LRL2are equally spaced, and the plurality of light speckles in two adjacent light speckle rows are arranged in an in-line manner; for example, the plurality of light speckles in the light speckle row LRL0and the plurality of light speckles in the light speckle row LRL1are aligned with each other; the plurality of light speckles in the light speckle row LRL1and the plurality of light speckles in the light speckle row LRL2are aligned with each other. Since the first pixel row (the pixel PD00to the pixel PD09), the second pixel row (the pixel PD10to the pixel PD19), the fifth pixel row (the pixel PD40to the pixel PD49), the sixth pixel row (the pixel PD50to the pixel PD59), the ninth pixel row (the pixel PD80to the pixel PD89) and the tenth pixel row (the pixel PD90to the pixel PD99) inFIG.3all have pixels that are irradiated by the plurality of light speckles, all pixels in the first pixel row (the pixel PD00to the pixel PD09), the second pixel row (the pixel PD10to the pixel PD19), the fifth pixel row (the pixel PD40to the pixel PD49), the sixth pixel row (the pixel PD50to the pixel PD59), the ninth pixel row (the pixel PD80to the pixel PD89) and the tenth pixel row (the pixel PD90to the pixel PD99) are determined as the first-type pixels, and the other pixels on the pixel array122are determined as the second-type pixels. The generation of the plurality of light speckles as shown inFIG.4can be implemented using the light source112in connection with the optical microstructure114. 
In one example, the light speckles that are generated have a plurality of light speckle rows LRL0, LRL1and LRL2extending along the first predetermined direction X, wherein the plurality of light speckle rows LRL0, LRL1and LRL2are equally spaced, and the plurality of light speckles in each light speckle row LRL0, LRL1and LRL2are pseudo-randomly spaced. Since the first pixel row (the pixel PD00to the pixel PD09), the second pixel row (the pixel PD10to the pixel PD19), the fifth pixel row (the pixel PD40to the pixel PD49), the sixth pixel row (the pixel PD50to the pixel PD59), the ninth pixel row (the pixel PD80to the pixel PD89) and the tenth pixel row (the pixel PD90to the pixel PD99) inFIG.4all have pixels that are irradiated by the plurality of light speckles, all pixels in the first pixel row (the pixel PD00to the pixel PD09), the second pixel row (the pixel PD10to the pixel PD19), the fifth pixel row (the pixel PD40to the pixel PD49), the sixth pixel row (the pixel PD50to the pixel PD59), the ninth pixel row (the pixel PD80to the pixel PD89) and the tenth pixel row (the pixel PD90to the pixel PD99) are determined as the first-type pixels, and the other pixels on the pixel array122are determined as the second-type pixels. In certain embodiments, by adjusting the design, the light speckle position determination unit126can only take a plurality of pixels on the pixel array122that are irradiated by the plurality of light speckles as the first-type pixels and take the other pixels as the second-type pixels. In certain embodiments, the light speckle position determination unit126can also take, as the first-type pixels, all pixels in the plurality of pixel rows where the pixels of the pixel array122that are irradiated by the plurality of light speckles reside, and take all other pixels as the second-type pixels. By using the light speckle position determination unit126to screen (in advance in the pre-operation stage) pixels that are necessary to be read by the depth information calculation unit124during the normal operation stage, it is feasible to speed up the process in which the depth information calculation unit124reads the sensing results and calculates depth information, and also to save power consumption. Take the embodiments shown inFIG.2toFIG.4as an example; one-half of the pixel rows are not read. As discussed above, in certain embodiments, the pixels that are not required to be read can be disabled; that is, in embodiments shown inFIG.2toFIG.4, one-half of the pixel rows can be kept disabled without performing the sensing task. It should be noted that the light speckle positions shown in embodiments ofFIG.2toFIG.4are ideal results, and in reality, distortions may occur due to the imperfection of the light-emitting unit110, so that the plurality of light speckles that should have irradiated the same pixel row in an ideal condition would irradiate pixels in more than one row as a result of distortion, thereby causing more pixel rows to be taken as the first-type pixels and reducing the number of pixel rows that can be ignored, which affects the extent to which the power consumption can be reduced and slows down the read speed. Hence, in certain embodiments, the light-emitting unit110can further include a calibration means to calibrate the shift of the light speckle. 
Specifically, during the pre-operation stage, the light-emitting unit110of the imaging system100is first subjected to the mechanical calibration and optical calibration before the plurality of pixels in the pixel array122are classified as the plurality of the first-type pixels and the plurality of the second-type pixels. In certain embodiments, the mechanical calibration can include testing the light-emitting unit110and establishing an optical offset model of the light-emitting unit110so as to calibrate the light-emitting unit110accordingly, which includes adjusting the relative positions or angles of the light-emitting unit110and the pixel array122. In certain embodiments, the optical calibration can include controlling the light-emitting unit110to transmit the light signal LS and performing the keystone adjustment to the light-emitting unit110accordingly (e.g., adjusting the position or angle of the light source112). FIG.5is a schematic diagram illustrating an electronic device500employing the imaging system100according to embodiments of the present disclosure. In certain embodiments, the electronic device500can be, for example, a smartphone, personal digital assistant, hand-held computing system, tablet PC, or the like. All or some of the steps of the pre-operation stage may be performed as a pre-operation method of the imaging system100and may be performed before the imaging system100is shipped from the factory or by the user after it is shipped from the factory. For example, when the electronic device500is impacted and the positions of the pixel array122and the light-emitting unit110in the imaging system100are shifted, the user may find that the imaging system100is less accurate during the normal operation stage; the user can then perform the pre-operation stage through the system options or programs preset on the electronic device500, following the operating instructions provided, to re-evaluate which pixels in the pixel array122belong to the plurality of first-type pixels, in order to improve the accuracy of the normal operation stage. The foregoing outlines features of several embodiments of the present disclosure so that persons having ordinary skill in the art may better understand the various aspects of the present disclosure. Persons having ordinary skill in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Persons having ordinary skill in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
22,812
11943552
DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure will be described in detail and clearly to such an extent that one skilled in the art may understand and practice the present disclosure. FIG.1is a block diagram of an image detecting system according to an embodiment of the present disclosure. An image detecting system100may be also referred to as an “electronic device”, an “electronic system”, or a “distance detecting system”. For example, the electronic device may be a smartphone, a tablet, a digital camera, a wearable device, or a mobile device. The image detecting system100may include a camera110and a processor130. The camera110may emit a light signal EL to an object based on a time of flight (ToF) technology, may sense a light signal RL reflected from the object, and may sense a distance between the camera110and the object. The camera110may include a light controller111, a light source112, and a depth sensor120. The light controller111may control the light source112under control of the depth sensor120or the processor130. The light controller111may modulate the light signal EL to be emitted or output from the light source112. The light source112may emit the light signal EL modulated by the light controller111. For example, the modulated light signal EL may have the shape of a square wave (or pulse) or a sine wave, and the light signal EL may be an infrared, a microwave, a light wave, or an ultrasonic wave. For example, the light source112may include a light emitting diode (LED), a laser diode (LD), or an organic light emitting diode (OLED). The depth sensor120may be also referred to as an “image sensor” or a “TOF sensor”. The depth sensor120may include a pixel array121, a row driver122, an analog processing circuit123, an analog-to-digital converter (ADC)124, an output buffer125, and a timing controller126. The pixel array121may include pixels PX arranged along a row direction and a column direction. The pixel array121may be implemented on a silicon substrate or a semiconductor substrate. The pixels PX may convert the light signal RL reflected from an object into an electrical signal. Due to a distance between the depth sensor120and the object, the light signal RL incident on the pixel array121may be delayed with respect to the light signal EL output from the light source112. There may be a time difference between the light signals RL and EL. The pixel PX may integrate, store, transfer, or remove charges based on control signals provided from the row driver122. The pixel PX may be also referred to as a “ToF pixel”. The row driver122may control the pixel array121under control of the timing controller126. The row driver122may provide the control signals to the pixels PX. For example, the control signals may include OG, PG, TG, SG, RG, SEL, CTRL, and CTRLB illustrated inFIGS.2A to11. The row driver122may control all the pixels PX of the pixel array121at the same time in a global mode or may control the pixels PX of the pixel array121in the unit of row in a rolling mode. The row driver122may control a toggling operation of all the pixels PX in the global mode. The analog processing circuit123may receive, sample, and hold an output signal (also referred to as an “image signal” or a “depth signal”) output from the pixel array121. The analog processing circuit123may be connected with the pixels PX of the pixel array121and may control output lines extending in the column direction. 
The analog processing circuit 123 may perform a correlated double sampling (CDS) operation on the output signal and may remove noise included in the output signal. For example, the analog processing circuit 123 may compare a reset signal generated based on a reset operation of each pixel PX with an image signal. The analog processing circuit 123 may remove noise included in the image signal based on a difference between the reset signal and the image signal. The analog processing circuit 123 may output the noise-free image signal to the analog-to-digital converter (ADC) 124 in units of columns, under control of the timing controller 126. The analog-to-digital converter 124 may convert the output signal processed by the analog processing circuit 123 into a digital signal. The analog-to-digital converter 124 may organize image data (or depth data) by using the digital signal. The analog-to-digital converter 124 may provide the image data to the output buffer 125. For example, the analog-to-digital converter 124 may be included or integrated in the analog processing circuit 123. The output buffer 125 may store the image data output from the analog-to-digital converter 124. The output buffer 125 may output the image data to the processor 130. The timing controller 126 may control the pixel array 121, the row driver 122, the analog processing circuit 123, the ADC 124, and the output buffer 125 of the depth sensor 120. The timing controller 126 may control the light controller 111 under control of the processor 130. For example, the timing controller 126 may control the row driver 122 based on modulation information or phase information of the light signal EL to be output from the light source 112. Under control of the timing controller 126, the row driver 122 may transmit, to the pixel PX, a first modulation signal (or a first photo gate signal), the phase of which is the same as or different from a phase of the light signal EL, and a second modulation signal (or a second photo gate signal), the phase of which is different from the phase of the first modulation signal. The depth sensor 120 may generate first image data by using the first photo gate signal and may generate second image data by using the second photo gate signal. The depth sensor 120 may send the first and second image data to the processor 130. However, the embodiments of the disclosure are not limited thereto, and the number of photo gate signals may be 2 or more. The processor 130 may control the camera 110. The processor 130 may control the light controller 111 and the light source 112 such that the light signal EL is output. The processor 130 may control the depth sensor 120 such that the depth sensor 120 senses the light signal RL and generates the first and second image data. The processor 130 may calculate a distance (e.g., a TOF value) between the depth sensor 120 and an object, a shape of the object, a movement speed of the object, etc. based on the first and second image data. For example, the processor 130 may calculate a delay time of the light signal RL relative to the light signal EL based on image data that the depth sensor 120 generates by using two or more modulation signals whose phases are identical to or different from the phase of the light signal EL. The processor 130 may include an image signal processor (ISP) for processing image data provided from the depth sensor 120. The processor 130 may also be referred to as a "host" or a "camera controller". For example, the processor 130 may be independent of the camera 110 as illustrated in FIG. 1.
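As a toy illustration of the processing just described, the sketch below strings together a CDS-style subtraction for two taps, a two-gate delay estimate that is common for pulsed modulation (the disclosure does not state that this exact formula is used), and the round-trip relation distance = c * delay / 2. All numbers are made up.

```python
# Toy illustration only: CDS-corrected values for the two taps (photo gates in
# phase and 180 degrees out of phase with the emitted light), a two-gate delay
# estimate that is common for pulsed ToF, and the basic round-trip relation.
C = 299_792_458.0  # speed of light, m/s

def cds(reset_level_v: float, image_level_v: float) -> float:
    """Correlated double sampling: reset sample minus image sample."""
    return reset_level_v - image_level_v

def two_gate_depth_m(q0: float, q180: float, pulse_width_s: float) -> float:
    delay_s = pulse_width_s * q180 / (q0 + q180)  # fraction of the pulse seen by the late gate
    return C * delay_s / 2.0

q0 = cds(reset_level_v=1.20, image_level_v=0.90)    # tap driven in phase with the emission
q180 = cds(reset_level_v=1.20, image_level_v=1.10)  # tap driven 180 degrees out of phase
print(two_gate_depth_m(q0, q180, pulse_width_s=50e-9))  # roughly 1.87 m for these values
```

Whether such computations run on a separate host or on circuitry integrated with the camera is an implementation choice, as noted next.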
As another example, the processor130may be integrated in the camera110or the depth sensor120. FIGS.2A to2Care views illustrating an example of the pixel PX ofFIG.1.FIG.2Ais a circuit diagram of a first pixel PX1among the pixels PX ofFIG.1,FIG.2Bis a layout of the first pixel PX1, andFIG.2Cis a cross-sectional view of the first pixel PX1taken along line A-A′ ofFIG.2B. Referring toFIGS.1and2A, the first pixel PX1may include a charge integration circuit CC, first and second transfer transistors T1and T2, first and second read circuits RC1and RC2, an overflow transistor OF, and a switch SW. For convenience of description, in the following embodiments, the description will be given under the condition that each of various transistors included in the pixels PX is turned on when a voltage of a high level is applied to a gate terminal thereof, like an NMOS transistor. However, the present disclosure is not limited thereto. For example, each of the transistors included in the pixels PXs may be turned on when a voltage of a low level is applied to a gate terminal thereof, like a PMOS transistor. That is, the kind of the transistors included in the pixels PX is not limited to the following description. The charge integration circuit CC may be configured to integrate charges generated from light provided to the first pixel PX1. The light provided to the first pixel PX1may be light that is reflected after an emission light is output from the light source112ofFIG.1. The charge integration circuit CC may include a first photo transistor P1and a second photo transistor P2. During an integration period (or a sensing period), the first and second photo transistors P1and P2may integrate charges. A delay time of the emission light may be calculated based on the amount of charges (corresponding to the reflected light) that the first photo transistor P1and the second photo transistor P2integrate. The first photo transistor P1may integrate charges based on a first photo gate signal PG1. The first photo gate signal PG1may toggle during the integration period. For example, the first photo gate signal PG1may have the same phase as a clock signal for outputting the emission light. When the emission light is output, the first photo transistor P1may sense light and may integrate first charges generated from the sensed light by the first photo transistor P1. The second photo transistor P2may integrate charges based on a second photo gate signal PG2. The second photo gate signal PG2may toggle during the integration period. A phase of the second photo gate signal PG2may be different from the phase of the first photo gate signal PG1. For example, the second photo gate signal PG2and the clock signal for outputting the emission light may have a phase difference of 180 degrees. When the light source112ofFIG.1does not output the emission light, the second photo transistor P2may sense light and may integrate second charges generated from the sensed light by the second photo transistor P2. According to an embodiment, the charge integration circuit CC may further include a third photo transistor integrating charges based on a third photo gate signal and a fourth photo transistor integrating charges based on a fourth photo gate signal. For example, the third photo gate signal and the clock signal may have a phase difference of 90 degrees, and the fourth photo gate signal and the clock signal may have a phase difference of 270 degrees. The first to fourth photo transistors may be connected in parallel. 
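For such a four-photo-gate arrangement, with gate signals phased at 0, 90, 180, and 270 degrees relative to the emission clock, one textbook continuous-wave reconstruction is sketched below purely as an illustration under assumed values; the disclosure does not specify this exact formula, and sign conventions for the arctangent vary between references.

```python
# Generic four-phase continuous-wave ToF reconstruction (illustrative only).
# q0..q270 are charges integrated under gate signals phased at 0/90/180/270
# degrees relative to the emission clock; f_mod_hz is an assumed modulation
# frequency.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_four_phases(q0: float, q90: float, q180: float, q270: float,
                           f_mod_hz: float) -> float:
    phase = math.atan2(q90 - q270, q0 - q180) % (2.0 * math.pi)  # phase delay in [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)                # distance in meters

# Example with an assumed 20 MHz modulation: a 90-degree phase delay lands at a
# quarter of the ~7.5 m unambiguous range, i.e. roughly 1.87 m.
print(depth_from_four_phases(q0=1.0, q90=2.0, q180=1.0, q270=0.0, f_mod_hz=20e6))
```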
In this case, because reflected emission light is sensed depending on 4 phases, a distance between an object and an image sensor may be calculated more accurately. The first and second transfer transistors T1and T2may control a transfer of the charges integrated from the charge integration circuit CC. The first transfer transistor T1may control the transfer of the integrated first charges from the first photo transistor P1to a first floating diffusion node FD1. The second transfer transistor T2may control a transfer of the integrated second charges from the second photo transistor P2to a second floating diffusion node FD2. During the integration period, the first transfer transistor T1may block the transfer of the first charges to the first floating diffusion node FD1based on a first transfer gate signal TG1of a low level. During the transfer period, the first transfer transistor T1may transfer the first charges to the first floating diffusion node FD1based on the first transfer gate signal TG1of a high level. The first transfer transistor T1may be connected in series between the first photo transistor P1and the first floating diffusion node FD1. Similar to the operations of the first transfer transistor T1, the second transfer transistor T2may control the transfer of the second charges to the second floating diffusion node FD2based on the first transfer gate signal TG1. For example, during the integration period, the second transfer transistor T2may block the transfer of the second charges to the second floating diffusion node FD2based on the first transfer gate signal TG1of a low level. During the transfer period, the second transfer transistor T2may transfer the second charges to the second floating diffusion node FD2based on the first transfer gate signal TG1of a high level. The second transfer transistor T2may be connected in series between the second photo transistor P2and the second floating diffusion node FD2. The first read circuit RC1may generate a first image signal OUT1based on the charges stored at the first floating diffusion node FD1. The second read circuit RC2may generate a second image signal OUT2based on the charges stored at the second floating diffusion node FD2. The first read circuit RC1may include a first reset transistor R1, a first source follower transistor SF1, and a first select transistor SE1. The first reset transistor R1may remove charges stored at the first floating diffusion node FD1based on a reset gate signal RG. For example, before the integration period (or before charges are integrated by the first photo transistor P1are transferred to the first floating diffusion node FD1and/or after a read operation for the first image signal OUT1is performed), a reset operation may be performed based on the reset gate signal RG of a high level. The first reset transistor R1may be connected between a supply terminal of a power supply voltage VDD and the first floating diffusion node FD1. The first source follower transistor SF1may generate the first image signal OUT1based on the charges stored at the first floating diffusion node FD1. A magnitude (e.g., a voltage level) of the first image signal OUT1may be determined depending on the amount of charges stored at the first floating diffusion node FD1. The first source follower transistor SF1may be connected between the supply terminal of the power supply voltage VDD and the first select transistor SE1. The first select transistor SE1may output the first image signal OUT1based on a selection signal SEL. 
The first select transistor SE1may output the first image signal OUT1to a bit line connected with the first pixel PX1based on the selection signal SEL of a high level. For the correlated double sampling operation of the analog processing circuit123ofFIG.1, the first select transistor SE1may output a signal generated by the first floating diffusion node FD1reset and the first image signal OUT1. The second read circuit RC2may include a second reset transistor R2, a second source follower transistor SF2, and a second select transistor SE2. The second reset transistor R2may remove charges stored at the second floating diffusion node FD2, the second source follower transistor SF2may generate a second image signal OUT2, and the second select transistor SE2may output the second image signal OUT2. The second read circuit RC2has substantially the same configuration as the first read circuit RC1, and thus, additional description will be omitted to avoid redundancy. According to an embodiment, the first pixel PX1may include one read circuit. For example, the second read circuit RC2may be omitted, and a terminal of the second transfer transistor T2, which faces away from the second photo transistor P2, may be connected with the first floating diffusion node FD1. That is, charges integrated by the first and second photo transistors P1and P2may be transferred to the first floating diffusion node FD1shared by the first and second photo transistors P1and P2. In this case, the size of the first pixel PX1may decrease. In response to an overflow gate signal OG, the overflow transistor OF may remove charges integrated by the first and second photo transistors P1and P2during a time other than the integration period or may discharge the integrated charges to the power supply voltage VDD. The overflow transistor OF may be connected between the supply terminal of the power supply voltage VDD and a connection node of the first photo transistor P1and the second photo transistor P2. The switch SW may operate based on a switch control signal CTRL and an inverted switch control signal CTRLB. The switch SW may include a first transistor TR1and a second transistor TR2. The first transistor TR1may be turned on in response to the inverted switch control signal CTRLB, and the second transistor TR2may be turned on in response to the switch control signal CTRL. A first end of the first transistor TR1may be connected to a node between the first and second photo transistors P1and P2and/or to a node between the first and second transfer transistors T1and T2. A second end of the first transistor TR1may be connected with a ground terminal. A first end of the second transistor TR2may be connected to a node between the first and second photo transistors P1and P2and/or to a node between the first and second transfer transistors T1and T2. A second end of the second transistor TR2may be connected with a supply terminal of a negative voltage VSSN. For example, when the switch control signal CTRL is at the low level, the inverted switch control signal CTRLB may be at the high level. In this case, the first transistor TR1may be turned on, and the second transistor TR2may be turned off. As such, a ground voltage may be applied to the node between the first and second photo transistors P1and P2and/or to the node between the first and second transfer transistors T1and T2. When the switch control signal CTRL is at the high level, the inverted switch control signal CTRLB may be at the low level. 
In this case, the first transistor TR1may be turned off, and the second transistor TR2may be turned on. As such, the negative voltage VSSN may be applied to the node between the first and second photo transistors P1and P2and/or the node between the first and second transfer transistors T1and T2. The configuration of the switch SW is not limited to the embodiment shown inFIG.2A. For example, the switch SW may be implemented with an SPDT switch. The SPDT switch may operate in response to the switch control signal CTRL. For example, the SPDT switch may be connected with the supply terminal of the negative voltage VSSN when the switch control signal CTRL is at the high level and may be connected with the ground terminal when the switch control signal CTRL is at the low level. Referring toFIGS.1,2A, and2B, the first pixel PX1may include first and second photo gate electrodes GP1and GP2of the respective first and second photo transistors P1and P2, first and second transfer gate electrodes GT1and GT2of the respective first and second transfer transistors T1and T2, an overflow gate electrode GO of the overflow transistor OF, and the first and second floating diffusion nodes FD1and FD2. The layout of the first pixel PXT may be understood as one example and is not limited thereto. In the accompanying drawings, a first direction DR1and a second direction DR2are defined as being perpendicular to a direction in which light is received. The first direction DR1and the second direction DR2are defined as being perpendicular to each other. Hatched areas may include sources or drains of transistors illustrated inFIG.2B. For example, the hatched areas that are adjacent to the first transfer gate electrode GT1of the first transfer transistor T1in the first direction DR1may be a source and a drain of the first transfer transistor T1, respectively. However, the structure of the first and second photo transistors P1and P2may be distinguished from those of any other transistors, which will be described with reference toFIG.2C. The first and second photo gate electrodes GP1and GP2may be disposed at a central portion of the first pixel PX1. Each of the first and second photo gate electrodes GP1and GP2may have an area that is great enough to integrate charges generated by the sensed light. For example, the first and second photo gate electrodes GP1and GP2may be of the same size and each may be formed to have an area having the greatest size from among areas of other components included in the first pixel PX1. The first photo gate electrode GP1and the second photo gate electrode GP2may be disposed adjacent to each other in the first direction DR1. The first transfer gate electrode GT1and the first floating diffusion node FD1may be arranged in a direction facing away from the first direction DR1so as to be spaced apart from the first photo gate electrode GP1. The first floating diffusion node FD1may be formed at the source or drain of the first transfer transistor T1. The second transfer gate electrode GT2and the second floating diffusion node FD2may be arranged in the first direction DR1so as to be spaced apart from the second photo gate electrode GP2. The second floating diffusion node FD2may be formed at the source or drain of the second transfer transistor T2. The overflow gate electrode GO may be disposed adjacent to the first and second photo gate electrodes GP1and GP2in the second direction DR2. The overflow transistor OF may be connected with a line for supplying the power supply voltage VDD. 
After the integration period, charges integrated by the first and second photo transistors P1and P2may be removed through the overflow transistor OF. Although not illustrated, gates of the transistors R1, SF1, and SE1included in the first read circuit RC1may extend from the first floating diffusion node FD1in at least one of the first direction DR1and the second direction DR2, and gates of the transistors R2, SF2, and SE2included in the second read circuit RC2may extend from the second floating diffusion node FD2in at least one of the first direction DR1and the second direction DR2. That is, the layout of the first and second read circuits RC1and RC2may vary depending on embodiments, and additional description associated with the embodiments will be omitted to avoid redundancy. Referring toFIGS.1and2A to2C, the first pixel PX1may include a P-type substrate P-epi, a light detecting area LDA, first and second bridging diffusion areas BD1and BD2, the first and second floating diffusion nodes FD1and FD2, the first and second photo gate electrodes GP1and GP2, and the first and second transfer gate electrodes GT1and GT2. A third direction DR3that is a direction in which light is received by the first pixel PX1and is perpendicular to the first and second directions DR1and DR2. The light detecting area LDA, the first and second bridging diffusion areas BD1and BD2, and the first and second floating diffusion nodes FD1and FD2may be formed in the P-type substrate P-epi. The P-type substrate P-epi may be an epitaxial substrate doped with P—, but the present disclosure is not limited thereto. Although not illustrated, a silicon oxide layer may be formed on the P-type substrate P-epi, and the first and second photo gate electrodes GP1and GP2and the first and second transfer gate electrodes GT1and GT2may be placed on the silicon oxide layer. According to an embodiment, the P-type substrate P-epi may extend in the third direction DR3. To overcome low quantum efficiency (QM) due to an IR light source of a depth sensor used to measure a distance in the ToF manner, a length of the P-type substrate P-epi in the third direction DR3may be greater than or equal to a reference value. The light detecting area LDA may integrate charges depending on voltage levels of the first and second photo gate electrodes GP1and GP2. The charges integrated in the light detecting area LDA may be transferred to the first and second bridging diffusion areas BD1and BD2and the first and second floating diffusion nodes FD1and FD2. The light detecting area LDA may be an area doped with P-type impurities, but the present disclosure is not limited thereto. For example, when the first photo gate signal PG1of the high level is applied to the first photo gate electrode GP1, charges may be collected in the light detecting area LDA adjacent to the first photo gate electrode GP1. However, when the negative voltage VSSN is applied to the P-type substrate P-epi, even when the first photo gate signal PG1of the low level is applied to the first photo gate electrode GP1, charges may be collected in the light detecting area LDA adjacent to the first photo gate electrode GP1. For example, when the second photo gate signal PG2of the high level is applied to the second photo gate electrode GP2, charges may be collected (or accumulated) in the light detecting area LDA adjacent to the second photo gate electrode GP2. 
However, when the negative voltage VSSN is applied to the P-type substrate P-epi, even when the second photo gate signal PG2of the low level is applied to the second photo gate electrode GP2, charges may be collected (or accumulated) in the light detecting area LDA adjacent to the second photo gate electrode GP2. This will be more fully described with reference toFIGS.5A to5C. For example, when the first transfer gate signal TG1of the high level is applied to the first transfer gate electrode GT1, the charges collected in the light detecting area LDA adjacent to the first photo gate electrode GP1may be transferred and stored at the first floating diffusion node FD1through the first bridging diffusion area BD1. When the first transfer gate signal TG1of the high level is applied to the second transfer gate electrode GT2, the charges collected in the light detecting area LDA adjacent to the second photo gate electrode GP2may be transferred and stored at the second floating diffusion node FD2through the second bridging diffusion area BD2. According to an embodiment, the first and second bridging diffusion areas BD1and BD2may not be formed. In this case, the charges integrated in the light detecting area LDA may be directly transferred to the first and second floating diffusion nodes FD1and FD2based on the first transfer gate signal TG1. The first and second bridging diffusion areas BD1and BD2and the first and second floating diffusion nodes FD1and FD2may be doped with N-type impurities, but the present disclosure is not limited thereto. According to an embodiment, areas corresponding to the first and second photo gate electrodes GP1and GP2in the light detecting area LDA may be separated in the first direction DR1by a channel stop area (not illustrated) so as to be placed under the first and second photo gate electrodes GP1and GP2. FIG.3is a timing diagram of the pixel ofFIG.2A. Referring toFIGS.1,2A, and3, a horizontal axis represents time, and a vertical axis represents magnitudes (or voltage levels) of the overflow gate signal OG, the first and second photo gate signals PG1and PG2, the first transfer gate signal TG1, the reset gate signal RG, the selection signal SEL, and the switch control signal CTRL ofFIG.2A. The overflow gate signal OG, the first and second photo gate signals PG1and PG2, the first transfer gate signal TG1, the reset gate signal RG, the selection signal SEL, and the switch control signal CTRL may be generated by the row driver122ofFIG.1. A first time t1may be a global reset time. During the global reset time, accumulated charges of the first and second floating diffusion nodes FD1and FD2and the charge integration circuit CC of the first pixel PX1may be removed. During the global reset time, accumulated charges of pixels included in the pixel array121ofFIG.1may be removed. To this end, the overflow gate signal OG, the reset gate signal RG, the first transfer gate signal TG1, and the first and second photo gate signals PG1and PG2may all have the high level. According to an embodiment, the first and second photo gate signals PG1and PG2may have the low level. The switch control signal CTRL may have the low level during the first time t1, but the present disclosure is not limited thereto. A second time t2may be the integration period or a sensing time. During the integration period, the light source112ofFIG.1may output an emission light to the outside, and the first pixel PX1may sense the emission light reflected by an object to integrate charges. 
The emission light may be output to the outside based on a first clock signal. The first clock signal may be identical to the first photo gate signal PG1. However, the present disclosure is not limited thereto. For example, the first clock signal may be identical to the second photo gate signal PG2. The first photo gate signal PG1and the second photo gate signal PG2may be complementary and may toggle. The switch control signal CTRL may be maintained at the high level during the second time t2. Accordingly, the negative voltage VSSN may be applied to the node between the first and second photo transistors P1and P2and/or the node between the first and second transfer transistors T1and T2. When the negative voltage VSSN is applied to the node, a toggle voltage difference may decrease. This will be more fully described with reference toFIGS.5A to5D. During the high level of the first photo gate signal PGT, first charges may be integrated by the first photo transistor P1. For example, the first charges may be collected (or accumulated) in the light detecting area LDA adjacent to the first photo gate electrode GP1. According to an embodiment, when the negative voltage VSSN is applied to the node, even during the low level of the first photo gate signal PGT, the first charges may be integrated by the first photo transistor P1. During the high level of the second photo gate signal PG2, second charges may be integrated by the second photo transistor P2. For example, the second charges may be collected (or accumulated) in the light detecting area LDA adjacent to the second photo gate electrode GP2. According to an embodiment, when the negative voltage VSSN is applied to the node, even during the low level of the second photo gate signal PG2, the second charges may be integrated by the second photo transistor P2. The first transfer gate signal TG1may be at the low level during the second time t2, and thus, the first and second transfer transistors T1and T2may be turned off. Accordingly, the charges collected in the light detecting area LDA may not be transferred to the first and second floating diffusion nodes FD1and FD2. A third time t3may be a row reset time. During the row reset time, accumulated charges at the first and second floating diffusion nodes FD1and FD2and the charge integration circuit CC of the first pixel PX1may be removed. During the row reset time, accumulated charges of pixels included in a row selected by the row driver122ofFIG.1may be removed. To this end, the overflow gate signal OG and the reset gate signal RG may have the high level. A fourth time t4may be a reset signal read time. During the reset signal read time, a reset signal generated for resetting the first and second floating diffusion nodes FD1and FD2may be read. To this end, the selection signal SEL may have the high level, and the reset signal generated by the first and second source follower transistors SF1and SF2may be output to a bit line. The analog processing circuit123ofFIG.1may compare the reset signal with an image signal to be read later. A fifth time t5may be a transfer time. During the transfer time, the first transfer gate signal TG1may be at the high level. The first charges integrated by the first photo transistor P1may be transferred to the first floating diffusion node FD1. The second charges integrated by the second photo transistor P2may be transferred to the second floating diffusion node FD2. A sixth time t6may be an image signal read time. 
During the image signal read time, an image signal generated by the charges transferred to the first and second floating diffusion nodes FD1and FD2may be read. To this end, the selection signal SEL may have the high level, and the image signal generated by the first and second source follower transistors SF1and SF2may be output to the bit line. After the sixth time t6, the operation described with reference to the third time t3, that is, the row reset time may be performed. In this case, accumulated charges of the first and second floating diffusion nodes FD1and FD2may be removed or discharged. Afterwards, the operation described with reference to the fourth time t4, that is, the reset signal read time may be performed, and the operation described with reference to the fifth time t5, that is, the transfer time may be performed. The operations described with reference to the third to sixth times t3to t6may be repeated depending on the number of times of a transfer of first and second charges. FIGS.4A and4Bare views illustrating an example of an integration operation of a pixel according to an embodiment. Referring toFIGS.1,2A,2C,4A, and4B, a second pixel PX2may include the P-type substrate P-epi, the light detecting area LDA, the first and second bridging diffusion areas BD1and BD2, the first and second floating diffusion nodes FD1and FD2, the first and second photo gate electrodes GP1and GP2, and the first and second transfer gate electrodes GT1and GT2. The third direction DR3that is a direction in which light is received and is perpendicular to the first and second directions DR1and DR2. The components ofFIG.4Aare similar to the components ofFIG.2C, and thus, additional description will be omitted to avoid redundancy. A voltage of 0 V may be applied to the P-type substrate P-epi of the second pixel PX2. For example, when the switch control signal CTRL is at the low level, the node between the first and second photo transistors P1and P2and the node between the first and second transfer transistors T1and T2may be connected with the ground terminal, and the voltage of 0 V may be applied thereto. The second pixel PX2may receive the first and second photo gate signals PG1and PG2from the row driver122during the integration period. The first and second photo gate signals PG1and PG2may be signals that toggle between a first level and a second level such that toggle voltages of different phases are provided. For example, the first level may be 0 V, and the second level may be 1 V. That is, a toggle voltage difference may be 1 V. The row driver122may integrate charges based on the first and second photo gate signals PG1and PG2. Referring toFIG.4A, when the first photo gate signal PG1has a voltage of 0 V, the second photo gate signal PG2may have a voltage of 1 V. In this case, a voltage difference may occur between the second photo gate electrode GP2and the P-type substrate P-epi, and thus, second charges may be integrated in the light detecting area LDA adjacent to the second photo gate electrode GP2. According to an embodiment, the second charges may be stored in the second bridging diffusion area BD2through the light detecting area LDA. Referring toFIG.4B, when the first photo gate signal PG1has a voltage of 1 V, the second photo gate signal PG2may have a voltage of 0 V. 
In this case, a voltage difference may occur between the first photo gate electrode GP1and the P-type substrate P-epi, and thus, first charges may be integrated in the light detecting area LDA adjacent to the first photo gate electrode GP1. According to an embodiment, the first charges may be stored in the first bridging diffusion area BD1through the light detecting area LDA. That is, when a potential of the P-type substrate P-epi of the second pixel PX2is 0 V, a predetermined toggle voltage difference may be required to clearly separate and integrate the first charges or the second charges. However, when the first and second photo gate signals PG1and PG2toggle during the integration period, the power consumption of the row driver122may become great. The power consumption of the row driver122may be reduced by decreasing the area of the first and second photo gate electrodes GP1and GP2such that a capacitance decreases or a toggle voltage difference decreases, but a demodulation contrast (DC) characteristic may be degraded. Also, as described with reference toFIG.2C, in the case where the P-type substrate P-epi extends in the third direction DR3, a toggle voltage of a given level or higher may be required in collecting charges. FIGS.5A to5Dare views illustrating an example of operations of a pixel according to an embodiment of the present disclosure.FIGS.5A to5Cshow an integration operation and a store operation of a third pixels PX3, andFIG.5Dshows a transfer operation of the third pixels PX3. Referring toFIGS.1,2A,2C, and5A to5D, the third pixel PX3may include the P-type substrate P-epi, the light detecting area LDA, the first and second bridging diffusion areas BD1and BD2, the first and second floating diffusion nodes FD1and FD2, the first and second photo gate electrodes GP1and GP2, and the first and second transfer gate electrodes GT1and GT2. The third direction DR3that is a direction in which light is received may be perpendicular to the first and second directions DR1and DR2. The components ofFIGS.5Ato5D are similar to the components ofFIG.2C, and thus, additional description will be omitted to avoid redundancy. The negative voltage VSSN may be applied to the P-type substrate P-epi of the third pixel PX3. For example, when the switch control signal CTRL is at the high level, the node between the first and second photo transistors P1and P2and the node between the first and second transfer transistors T1and T2may be connected with the supply terminal of the negative voltage VSSN, and a voltage of −1 V may be applied thereto. The third pixel PX3may receive the first and second photo gate signals PG1and PG2from the row driver122during the integration period. The first and second photo gate signals PG1and PG2may be signals that toggle between a first level and a second level such that toggle voltages of different phases are provided. For example, the first level may be 0 V, and the second level may be 0.5 V. That is, a toggle voltage difference may be 0.5 V and may be smaller than a toggle voltage difference ofFIG.4A. The row driver122may integrate charges based on the first and second photo gate signals PG1and PG2. Referring toFIG.5A, when the first photo gate signal PG1has a voltage of 0 V, the second photo gate signal PG2may have a voltage of 0.5 V. Accordingly, a voltage difference may be present between the first photo gate electrode GP1and the P-type substrate P-epi and between the second photo gate electrode GP2and the P-type substrate P-epi. 
In this case, first charges may be integrated in the light detecting area LDA adjacent to the first photo gate electrode GP1, and second charges may be integrated in the light detecting area LDA adjacent to the second photo gate electrode GP2. According to an embodiment, the first charges may be stored in the first bridging diffusion area BD1through the light detecting area LDA. According to an embodiment, the second charges may be stored in the second bridging diffusion area BD2through the light detecting area LDA. Referring toFIG.5B, when the first photo gate signal PG1has a voltage of 0.5 V, the second photo gate signal PG2may have a voltage of 0 V. Accordingly, a voltage difference may be present between the first photo gate electrode GP1and the P-type substrate P-epi and between the second photo gate electrode GP2and the P-type substrate P-epi. The first charges integrated in the light detecting area LDA ofFIG.5Amay be stored in the first bridging diffusion area BD1in response to the first photo gate signal PG1. Referring toFIG.5C, when the first photo gate signal PG1has a voltage of 0 V, the second photo gate signal PG2may have a voltage of 0.5 V. Accordingly, a voltage difference may be present between the first photo gate electrode GP1and the P-type substrate P-epi and between the second photo gate electrode GP2and the P-type substrate P-epi. The second charges integrated in the light detecting area LDA ofFIG.5Amay be stored in the second bridging diffusion area BD2in response to the second photo gate signal PG2. Referring toFIG.5D, when the first transfer gate signal TG1is at the high level, the first and second transfer transistors T1and T2may be turned on. The high level of the first transfer gate signal TG1may be greater than the high level of the first and second photo gate signals PG1and PG2. For example, the high level of the first transfer gate signal TG1may be 1 V. Accordingly, the first charges stored in the first bridging diffusion area BD1may be transferred to the first floating diffusion node FD1, and the second charges stored in the second bridging diffusion area BD2may be transferred to the second floating diffusion node FD2. During the transfer operation ofFIG.5D, the P-type substrate P-epi may be set to a ground state or may maintain a negative voltage state. As described above, when the negative voltage VSSN is applied to the P-type substrate P-epi, even though the first photo gate signal PG1or the second photo gate signal PG2is at the low level (e.g., has a voltage of 0 V), a voltage difference may be present in the P-type substrate P-epi. Accordingly, charges generated by the sensed light may move in the third direction DR3so as to be integrated in the light detecting area LDA. Also, when the first photo gate signal PGT or the second photo gate signal PG2is at the high level (e.g., has a voltage of 0.5 V), charges integrated in the light detecting area LDA may move in the first direction DR1so as to be stored in the first bridging diffusion area BD1or the second bridging diffusion area BD2. In other words, when the negative voltage VSSN is applied to the body of the third pixels PX3, a toggle voltage difference for the integration operation or the store operation may be smaller than that ofFIG.4A. Accordingly, the third pixels PX3may reduce the power consumption of the row driver122while preventing the DC characteristic from being degraded. 
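A rough way to see the size of that saving, not a figure from the disclosure: dynamic switching power scales approximately with C * V^2 * f, so halving the photo-gate swing from 1 V (as in FIG. 4A) to 0.5 V (as in FIG. 5A) cuts that component by about a factor of four at the same gate capacitance and modulation frequency. The capacitance and frequency below are assumed values for illustration.

```python
# Back-of-the-envelope estimate only: toggling power ~ C * V^2 * f.
# The 50 fF and 100 MHz figures are assumptions, not values from the disclosure.

def toggle_power_watts(capacitance_f: float, swing_v: float, freq_hz: float) -> float:
    return capacitance_f * swing_v ** 2 * freq_hz

C_GATE_F = 50e-15   # assumed photo-gate capacitance (50 fF)
F_MOD_HZ = 100e6    # assumed toggling/modulation frequency (100 MHz)

p_full_swing = toggle_power_watts(C_GATE_F, 1.0, F_MOD_HZ)  # 1.0 V swing
p_half_swing = toggle_power_watts(C_GATE_F, 0.5, F_MOD_HZ)  # 0.5 V swing
print(p_full_swing, p_half_swing, p_full_swing / p_half_swing)  # ratio is 4.0
```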
FIGS.6to9are views illustrating an example of the pixel ofFIG.1.FIG.6is a cross-sectional view of a fourth pixel PX4,FIG.7is a cross-sectional view of a fifth pixel PX5,FIG.8is a cross-sectional view of a sixth pixel PX6, andFIG.9is a cross-sectional view of a seventh pixel PX7. Components of each of the fourth to seventh pixels PX4, PX5, PX6, and PX7are similar to the components of the first pixel PX1ofFIG.2C, and thus, additional description will be omitted to avoid redundancy. Below, a difference between the fourth to seventh pixels PX4, PX5, PX6, and PX7and the first pixel PXT ofFIG.2Cwill be mainly described. Referring toFIG.6, the fourth pixel PX4may include the P-type substrate P-epi, the first and second photo gate electrodes GP1and GP2, and the first and second transfer gate electrodes GT1and GT2. The light detecting area LDA, the first floating diffusion node FD1, and the second floating diffusion node FD2may be formed in the P-type substrate P-epi. Although not shown, the first and second bridging diffusion areas BD1and BD2may be included in the light detecting area LDA. The P-type substrate P-epi may include an upper surface and a lower surface, and the first and second photo gate electrodes GP1and GP2and the first and second transfer gate electrodes GT1and GT2may be formed over the upper surface of the P-type substrate P-epi. The switch SW may be connected with the lower surface of the P-type substrate P-epi. The switch SW may control a voltage to be applied to the P-type substrate P-epi based on the switch control signal CTRL. For example, during the integration period, the switch SW may apply the negative voltage VSSN to the P-type substrate P-epi in response to the switch control signal CTRL of the high level. For example, during the remaining period other than the integration period, the switch SW may apply the negative voltage VSSN or the ground voltage to the P-type substrate P-epi in response to the switch control signal CTRL of the low level. The first photo gate electrode GP1may receive the first photo gate signal PGT, and the second photo gate electrode GP2may receive the second photo gate signal PG2. A phase of the first photo gate signal PGT may be opposite to a phase of the second photo gate signal PG2. Each of the first and second photo gate signals PGT and PG2may include a signal toggling between the high level and the low level during the integration period. According to an embodiment, the low level may be less than 0 V. For example, the low level may have the same potential as the negative voltage VSSN (e.g., −1 V). Referring toFIG.7, the fifth pixel PX5may include the P-type substrate P-epi, the first and second photo gate electrodes GP1and GP2, and the first and second transfer gate electrodes GT1and GT2. The switch SW may be connected with the lower surface of the P-type substrate P-epi. The P-type substrate P-epi and the switch SW ofFIG.7are similar to the P-type substrate P-epi and the switch SW ofFIG.6, and thus, additional description will be omitted to avoid redundancy. The first photo gate electrode GP1and the second photo gate electrode GP2may extend in a direction perpendicular to the upper surface of the P-type substrate P-epi (or a direction facing away from the third direction DR3). That is, the first and second photo gate electrodes GP1and GP2may be vertical photo gate electrodes. Accordingly, portions of the first and second photo gate electrodes GP1and GP2may be formed within the P-type substrate P-epi. 
As such, the light detecting area LDA may be expanded in the direction facing away from the third direction DR3. That is, the light detecting area LDA may be formed to have a larger area in the P-type substrate P-epi compared to the fourth pixel PX4shown inFIG.6. Referring toFIG.8, the sixth pixel PX6may include the P-type substrate P-epi and first to third photo gate electrodes GP1, GP2, and GP3. The switch SW may be connected with the lower surface of the P-type substrate P-epi. The P-type substrate P-epi and the switch SW ofFIG.8are similar to the P-type substrate P-epi and the switch SW ofFIG.6, and thus, additional description will be omitted to avoid redundancy. The third photo gate electrode GP3may be interposed between the first photo gate electrode GP1and the second photo gate electrode GP2. That is, a third photo transistor corresponding to the third photo gate electrode GP3may be connected in series between the first photo transistor P1and the second photo transistor P2. A length of the third photo gate electrode GP3in the first direction DR1may be longer than lengths of the first and second photo gate electrodes GP1and GP2in the first direction DR1. That is, a capacitance of the third photo gate electrode GP3may be greater than a capacitance of each of the first photo gate electrode GP1and the second photo gate electrode GP2. The third photo gate electrode GP3may receive a third photo gate signal PG3. During the integration period, the third photo gate signal PG3may not toggle and may maintain a voltage level of 0 V or higher. For example, when the voltage level is 1 V and the negative voltage VSSN is applied to the P-type substrate P-epi through the switch SW, charges may be integrated in the light detecting area LDA. The first and second photo gate signals PG1and PG2may toggle between the high level and the low level, and charges may be transferred to the first floating diffusion node FD1and the second floating diffusion node FD2in response to the first and second photo gate signals PG1and PG2of the high level. That is, the first and second photo gate electrodes GP1and GP2may function as the first and second transfer gate electrodes GT1and GT2ofFIG.2Cexcept that the first and second photo gate electrodes GP1and GP2receive a toggle signal. Accordingly, the sixth pixel PX6may be a toggling transfer gate structure. Referring toFIG.9, the seventh pixel PX7may include first to third sub-pixels SPX1, SPX2, and SPX3. The first to third sub-pixels SPX1, SPX2, and SPX3may be arranged in the first direction DR1. The first sub-pixel SPX1may include a transistor including a gate “G”, a source “S”, and a drain “D”, and the P-type substrate P-epi. A ground voltage GND may be applied to the P-type substrate P-epi of the first sub-pixel SPX1. Each of the second and third sub-pixels SPX2and SPX3has a structure similar to that of the first pixel PX1ofFIG.2C, and thus, additional description will be omitted to avoid redundancy. The second sub-pixel SPX2may include the P-type substrate P-epi, the first and second photo gate electrodes GP1and GP2, and the first and second transfer gate electrodes GT1and GT2. The third sub-pixel SPX3may include the P-type substrate P-epi, third and fourth photo gate electrodes GP3and GP4, and third and fourth transfer gate electrodes GT3and GT4. 
According to an embodiment, the negative voltage VSSN may be applied to the P-type substrate P-epi of the second sub-pixel SPX2, and the ground voltage GND may be applied to the P-type substrate P-epi of the third sub-pixel SPX3. An area of the second sub-pixel SPX2may be formed by a first deep trench isolation DTI1and a second deep trench isolation DTI2. That is, the second sub-pixel SPX2may be formed in an area between the first deep trench isolation DTI1and the second deep trench isolation DTI2. The first and second deep trench isolations DTI1and DTI2may prevent charges generated by the second sub-pixel SPX2from being transferred to the first and third sub-pixels SPX1and SPX3. The first and second deep trench isolations DTI1and DTI2may include oxide or polysilicon, but the present disclosure is not limited thereto. The first and second deep trench isolations DTI1and DTI2may be formed at opposite ends of the P-type substrate P-epi of the second sub-pixel SPX2(or to make contact with the opposite ends thereof). The first and second deep trench isolations DTI1and DTI2may extend in the third direction DR3. According to an embodiment, each of the first and second deep trench isolations DTI1and DTI2may mean a front deep trench isolation (0). FIG.10is a circuit diagram illustrating an example of the pixel ofFIG.1. Referring toFIG.10, an eighth pixel PX8may include the first and second photo transistors P1and P2, first to fourth transfer transistors T11, T21, T12, and T22, first and second storage transistors S1and S2, the first and second read circuits RC1and RC2, the overflow transistor OF, and the switch SW. The first and second photo transistors P1and P2, the third and fourth transfer transistors T12and T22, the first and second read circuits RC1and RC2, the overflow transistor OF, and the switch SW are similar to the first and second photo transistors P1and P2, the first and second transfer transistors T1and T2, the first and second read circuits RC1and RC2, the overflow transistor OF, and the switch SW, and thus, additional description will be omitted to avoid redundancy. The first photo transistor P1may integrate first charges based on the first photo gate signal PG1toggling during the integration period. The second photo transistor P2may integrate second charges based on the second photo gate signal PG2toggling during the integration period. A phase of the second photo gate signal PG2may be opposite to a phase of the first photo gate signal PG1. The first and second storage transistors S1and S2may store charges integrated by the first and second photo transistors P1and P2. In response to a storage gate signal SG, the first storage transistor S1may store the first charges integrated by the first photo transistor P1and may transfer the stored first charges to the first floating diffusion area FD1. In response to the storage gate signal SG, the second storage transistor S2may store the second charges integrated by the second photo transistor P2and may transfer the stored second charges to the second floating diffusion area FD2. During the integration period, the first storage transistor S1may store the first charges based on the storage gate signal SG of the high level. When the storage gate signal SG is at the high level, the first storage transistor S1may have a maximum storage capacity for storing charges. The first storage transistor S1may be connected in series between the first transfer transistor T11and the third transfer transistor T12. 
During the remaining period other than the integration period, the first storage transistor S1may transfer the first charges to the first floating diffusion node FD1based on the storage gate signal SG of the low level. When the storage gate signal SG is at the low level, the storage capacity of the first storage transistor S1may decrease. The first charges may be transferred to the first floating diffusion node FD1through the third transfer transistor T12depending on the decreased storage capacity. For example, if the first charges accumulated at the first floating diffusion node FD1is less than a predetermined storage capacity, the third transfer transistor T12may not transfer the first charges at the first floating diffusion node FD1. Similarly, if the second charges accumulated at the second floating diffusion node FD2is less than the predetermined storage capacity, the fourth transfer transistor T22may not transfer the second charges at the second floating diffusion node FD2. Like the first storage transistor S1, during the integration period, the second storage transistor S2may store the second charges based on the storage gate signal SG of the high level. During the integration period, the second storage transistor S2may transfer the second charges to the second floating diffusion node FD2based on the storage gate signal SG of the low level. The second storage transistor S2may be connected in series between the second transfer transistor T21and the fourth transfer transistor T22. The first to fourth transfer transistors T11, T21, T12, and T22may control the transfer of charges integrated by the first and second photo transistors P1and P2. The first and third transfer transistor T11and T12may control the transfer of the integrated first charges from the first photo transistor P1to the first floating diffusion node FD1. The second and fourth transfer transistor T21and T22may control the transfer of the integrated second charges from the second photo transistor P2to the second floating diffusion node FD2. During the integration period, the first transfer transistor T11may transfer the first charges from the first photo transistor P1to the first storage transistor S1based on the first transfer gate signal TG1of the high level. During the transfer period, the first transfer transistor T11may block the transfer of the first charges from the first storage transistor S1to the first photo transistor P1based on the first transfer gate signal TG1of the low level. The first transfer transistor T11may be connected in series between the first photo transistor P1and the first storage transistor S1. As in the first transfer transistor T11, the second transfer transistor T21may control the transfer of the second charges from the second photo transistor P2to the second storage transistor S2based on the first transfer gate signal TG1. The second transfer transistor T21may be connected in series between the second photo transistor P2and the second storage transistor S2. The third transfer transistor T12may be connected in series between the first storage transistor S1and the first floating diffusion node FD1. The third transfer transistor T12may transfer the first charges stored in the first storage transistor S1to the first floating diffusion node FD1based on a second transfer gate signal TG2. The fourth transfer transistor T22may be connected in series between the second storage transistor S2and the second floating diffusion node FD2. 
The fourth transfer transistor T22may transfer the second charges stored in the second storage transistor S2to the second floating diffusion node FD2based on the second transfer gate signal TG2. The third and fourth transfer transistors T12and T22are similar to the first and second transfer transistors T1and T2ofFIG.2A, and thus, additional description will be omitted to avoid redundancy. The first read circuit RC1generates the first image signal OUT1based on the charges stored at the first floating diffusion node FD1. The first read circuit RC1may include the first reset transistor R1, the first source follower transistor SF1, and the first select transistor SET. The second read circuit RC2generates the second image signal OUT2based on the charges stored at the second floating diffusion node FD2. The second read circuit RC2may include the second reset transistor R2, the second source follower transistor SF2, and the second select transistor SE2. A first tap TAP1may include the first photo transistor P1, the first and third transfer transistors T11and T12, the first storage transistor S1, and the first read circuit RC1. A second tap TAP2may include the second photo transistor P2, the second and fourth transfer transistors T21and T22, the second storage transistor S2, and the second read circuit RC2. A configuration and an operation of the first tap TAP1may be substantially the same as those of the second tap TAP2except that the first tap TAP1and the second tap TAP2respectively receive the first photo gate signal PG1and the second photo gate signal PG2having different phases. The first photo gate signal PG1and the second photo gate signal PG2may have a phase difference of 180 degrees. The overflow transistor OF may be connected in parallel with the connection node of the first photo transistor P1and the second photo transistor P2. The overflow transistor OF may be turned on during the remaining period other than the integration period such that the first charges integrated by the first photo transistor P1and the second charges integrated by the second photo transistor P2are removed. The switch SW may be connected with a node between the first and second photo transistors P1and P2, a node between the first to fourth transfer transistors T11, T21, T12, and T22, and a node between the first and second storage transistors S1and S2. Here, the node between the first and second photo transistors P1and P2, a node between the first to fourth transfer transistors T11, T21, T12and T22, and the node between the first and second storage transistors S1and S2may be the same node. The switch SW may apply the ground voltage or the negative voltage VSSN to the node between the first and second photo transistors P1and P2, the node between the first to fourth transfer transistors T11, T21, T12, and T22, and the node between the first and second storage transistors S1and S2. The switch SW may operate based on the switch control signal CTRL and the inverted switch control signal CTRLB. The switch SW may apply the negative voltage VSSN to the node between the first and second photo transistors P1and P2, the node between the first to fourth transfer transistors T11, T21, T12, and T22, and the node between the first and second storage transistors S1and S2in response to the switch control signal CTRL of the high level. 
The switch SW may apply the ground voltage to the node between the first and second photo transistors P1and P2, the node between the first to fourth transfer transistors T11, T12, T21, and T22, and the node between the first and second storage transistors S1and S2in response to the switch control signal CTRL of the low level. The switch control signal CTRL may maintain the high level during the integration period. FIG.11is a circuit diagram illustrating an example of the pixel ofFIG.1. Referring toFIG.11, a ninth pixel PX9may include first to fourth taps TAP1, TAP2, TAP3, and TAP4, the overflow transistor OF, and the switch SW. The first tap TAP1may include the first photo transistor P1, the first and third transfer transistors T11and T12, the first storage transistor S1, and the first read circuit RC1. The second tap TAP2may include the second photo transistor P2, the second and fourth transfer transistors T21and T22, the second storage transistor S2, and the second read circuit RC2. The first tap TAP1and the second tap TAP2are similar to the first tap TAP1and the second tap TAP2ofFIG.10, and thus, additional description will be omitted to avoid redundancy. The third tap TAP3may include a third photo transistor P3, fifth and seventh transfer transistors T31and T32, a third storage transistor S3, and a third read circuit RC3. The fourth tap TAP4may include a fourth photo transistor P4, sixth and eighth transfer transistors T41and T42, a fourth storage transistor S4, and a fourth read circuit RC4. The third tap TAP3and the fourth tap TAP4are similar to the first tap TAP1and the second tap TAP2except that the third tap TAP3and the fourth tap TAP4respectively receive a third photo gate signal PG3and a fourth photo gate signal PG4having phases different from those of the first photo gate signal PG1and the second photo gate signal PG2. For example, the first photo gate signal PG1and the third photo gate signal PG3may have a phase difference of 90 degrees, and the first photo gate signal PG1and the fourth photo gate signal PG4may have a phase difference of 270 degrees. However, the one or more embodiments are not limited thereto and the first to fourth photo gate signals PG1to PG4may be shuffled. That is, the phase differences between each of the first photo gate signal PG1, the second photo gate signal PG2, the third photo gate signal PG3, and the fourth photo gate signal PG4may be variously configured. The first to fourth taps TAP1to TAP4may output image signals OUT1, OUT2, OUT3, and OUT4having all phase information of 0 degree, 90 degrees, 180 degrees, and 270 degrees. According to the present disclosure, a depth sensor and image detecting system including the same may reduce a toggle voltage difference of a photo gate signal by applying a negative voltage to a body of a pixel during an integration period. Accordingly, a driving current of a row driver may decreases. Thus, it is possible to reduce the power consumption of the depth sensor while securing the reliability of ToF calculation. While the present disclosure has been described with reference to the embodiments and the accompanying drawings thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as set forth in the following claims.
62,470
11943553
DETAILED DESCRIPTION Electronic systems such as digital cameras, computers, cellular telephones, automotive systems, and other electronic systems may include imaging systems or modules that gather light to capture one or more image frames that include information about their surrounding environments. The imaging system may have sensor circuitry including one or more arrays of image sensor pixels, which are sometimes referred to herein simply as sensor pixels or pixels. The active pixels in the array may include photosensitive elements such as pinned photodiodes that convert the incoming light into electric charge. The array may have any number of pixels (e.g., hundreds or thousands or more). Sensor circuitry may include control circuitry such as circuitry for controlling the pixels and readout circuitry for reading out image signals corresponding to the electric charge generated by the photosensitive elements. FIG.1is a diagram of an illustrative imaging system such as an electronic device that uses sensor circuitry to capture images. Imaging system10ofFIG.1may be a stand-alone camera, a cellular telephone, a tablet computer, a webcam, a video camera, a video surveillance system, an automotive imaging system, a video gaming system with imaging capabilities, an augmented reality and/or virtual reality system, an unmanned aerial vehicle system such as a drone, an industrial system, or any other desired imaging system or device that captures image data. Camera module12, which is sometimes referred to as an imaging module, may be used to convert incoming light into digital image data. Camera module12may include one or more corresponding sensor modules16, which are sometimes referred to as image sensor modules or image sensors. During image capture operations, light from a scene may be focused onto sensor module16by one or more corresponding lenses. Sensor module16may include circuitry for generating analog pixel image signals and circuitry for converting analog pixel image signals into corresponding digital image data, as examples. The digital image data may be provided to storage and processing circuitry18. Storage and processing circuitry18may include one or more integrated circuits such as digital signal processing circuits, image processing circuits, microprocessors, storage devices such as random-access memory and non-volatile memory, and/or other types of processing or memory circuitry. Storage and processing circuitry18may be implemented using components that are separate from camera module12and/or components that form part of camera module12. When storage and processing circuitry18is implemented on different integrated circuits than those implementing camera module12, the integrated circuits with circuitry18may be vertically stacked or packaged with the integrated circuits for camera module12. Image data that has been captured by camera module12may be processed and stored using processing circuitry18. As examples, the captured image data can be processed using an image processing engine on processing circuitry18, using a digital signal processing engine on processing circuitry18, using an imaging mode selection engine on processing circuitry18, and/or using other portions of processing circuitry18. The processed image data may, if desired, be provided to equipment external to camera module12and/or imaging system10such as a computer, an external display, and/or other devices using wired and/or wireless communications paths coupled to processing circuitry18. 
In some configurations described herein as an illustrative example, camera module12may implement a time-of-flight (TOF) sensor or camera. In these configurations, camera module12may include illumination module14configured to emit light for illuminating an image scene or more specifically one or more objects in the image scene. Sensor module16may be configured to gather reflected versions of the emitted light and to generate TOF information for the image scene such as depth or distance information for one or more of the objects, a depth or distance map of the image scene, a visible and/or infrared image of the image scene, and/or other information indicative of TOF information. FIG.2is an illustrative diagram showing how illumination module14may emit light and how sensor module16may receive the corresponding reflected light after the emitted light reflects off of one or more objects. As shown inFIG.2, illumination module14may include one or more light emitters, which are sometimes referred to herein as light sources or illumination devices. The light emitters may be coupled to driver circuitry and/or controller circuitry for controlling and driving the one or more light emitters. The light emitters may be implemented using and may include one or more light emitting diodes (LEDs), one or more laser diodes, one or more lasers, and/or one or more of other suitable light or illumination sources. The light emitters may emit light of any suitable wavelength such as visible light, infrared light, and/or light of other wavelengths. A light emitter in illumination module14controlled by the corresponding driver circuitry may emit light15having any suitable characteristics such as a suitable waveform, a suitable peak amplitude or power, a suitable periodicity or frequency, a suitable number of light pulses, and/or other characteristics. Light15may reach one or more objects13in an image scene and reflect off one or more objects13as reflected light17. Objects13may include any suitable objects, inanimate or animate, at different depths in the scene. Reflected light17may be received at sensor module16(e.g., at one or more photosensitive elements in the active image pixels). Driver circuitry and/or control circuitry may control the pixels in sensor module16to generate one or more image frames based on reflected light17(e.g., by providing control signals coupled to transistors or other actuated elements in the pixels). In particular, based on the received control signals from the driver circuitry and/or control circuitry, the pixels may generate different portions of charge in response to reflected light17during an integration or exposure time period, may perform readout operations on the generated portions of charge during a readout time period, and may perform other suitable operations during other time periods. The TOF sensor inFIG.2is merely illustrative. Illumination module14and sensor module16may each include other suitable circuitry such as power management and supply circuitry, processing circuitry, control circuitry, readout circuitry, timing circuitry, and/or clock generation circuitry. While illumination module14and sensor module16are shown as completely separate modules inFIGS.1and2, this is merely illustrative. 
If desired, illumination module14and sensor module16may be coupled to and include shared circuitry in the camera module system such as shared power management and/or supply circuitry, shared modulation/demodulation circuitry, shared clock generation circuitry, a shared timing controller, shared signal generator circuitry, shared control circuitry, and/or shared storage circuitry. FIG.3is a diagram of an illustrative configuration for a sensor module such as sensor module16inFIGS.1and2. As shown inFIG.3, sensor module16may include a pixel array20containing sensor pixels22arranged in rows and columns and control and processing circuitry24. Array20may contain, for example, tens, hundreds, or thousands of rows and columns of sensor pixels22. Control circuitry24may be coupled to pixel control circuitry26and pixel readout and control circuitry28(sometimes referred to simply as pixel readout circuitry28). While pixel control circuitry26is shown in the example ofFIG.3to be coupled to rows of pixels22in array20and pixel readout circuitry28is shown in the example ofFIG.3to be coupled to columns of pixels22in array20, this configuration is merely illustrative. If desired, pixel control circuitry26may be coupled to columns of pixels22in array20and/or pixel readout circuitry28may be coupled to rows of pixels22in array. In general, pixel control circuitry26and pixel readout circuitry28may each be coupled to lines (e.g., in a row-wise direction or in a column-wise direction) of pixels22in array20. Pixel control circuitry26may receive (row or column) addresses from control circuitry24and supply corresponding (row or column) control signals such as reset, anti-blooming or global shutter, pixel (row or column) select, modulation, storage, charge transfer, readout, sample-and-hold control signals to pixels22over (row or column) control paths30. In some illustrative configurations described herein as an example, a first portion of pixel control circuitry26may be coupled to pixels22via column control paths to provide global shutter and modulation control signals while a second portion of pixel control circuitry26may be coupled to pixels22via row control paths to provide the remaining pixel control signals such as the reset, pixel (row or column) select, storage, charge transfer, readout, and/or sample-and-hold control signals. One or more (column or row) readout paths32may be coupled to each line (e.g., column line) of pixels22in array20. Paths32may be used for reading out image signals from pixels22and for supplying bias signals (e.g., bias currents or bias voltages) to pixels22. Pixel readout circuitry28may receive image signals such as analog pixel values generated by pixels22over paths32. Pixel readout circuitry28may include memory circuitry for storing calibration signals (e.g., reset level signals, reference level signals) and/or image signals (e.g., image level signals) read out from array20, amplifier circuitry or a multiplier circuit, analog to digital conversion (ADC) circuitry, bias circuitry, latch circuitry for selectively enabling or disabling different portions of readout circuitry28, or other circuitry that is coupled to one or more pixels22in array20for operating pixels22and/or for reading out image signals from pixels22. ADC circuitry in readout circuitry28may convert analog pixel values received from array20into corresponding digital pixel values (sometimes referred to as digital image data or digital pixel data). 
Readout circuitry28may supply digital pixel data to control and processing circuitry24and/or processor18(FIG.1) from pixels22for further processing such as digital signal processing. If desired, pixel array20may also be provided with a filter array having multiple color and/or infrared filter elements each overlapping one or more pixels22, thereby allowing a single image sensor to sample light of different colors or sets of wavelengths. In general, filter elements of any desired color and/or wavelength and in any desired pattern may be formed over any desired number of image pixels22. In the illustrative example of time-of-flight sensing using an illumination source (e.g., in illumination module14inFIGS.1and2), pixel array20may be provided with a correspond filter array that passes light having colors and/or frequencies emitted from the illumination source. Sensor module16may include one or more arrays20of image pixels22. Image pixels22may be formed in a semiconductor substrate using complementary metal-oxide-semiconductor (CMOS) technology or charge-coupled device (CCD) technology or any other suitable photosensitive devices technology. Image pixels22may be frontside illumination (FSI) image pixels or backside illumination (BSI) image pixels. If desired, array20may include pixels22of different types such as active pixels, optically shielded pixels, reference pixels, etc. If desired, sensor module16may include an integrated circuit package or other structure in which multiple integrated circuit substrate layers (e.g., from multiple wafers) or chips are vertically stacked with respect to each other. Configurations in which imaging module12inFIG.1is configured to perform indirect TOF measurements based on phase differences between a modulated light signal emitted by illumination module14inFIG.2and the reflected modulated light signal from an object in an image scene received by sensor module16inFIG.2are described herein for illustrative purposes. In these configurations, sensor module16may include an array of active pixels22, each configured to demodulate the received light signal based on a sensor modulation frequency to generate corresponding charge portions useable to generate TOF information. FIG.4is a circuit diagram of an illustrative image sensor pixel22configured to implement each pixel22in array20ofFIG.3. Pixel22may include a photosensitive element such as a (pinned) photodiode40. Photodiode40may receive incident light over an integration time period and may generate electric charge based on the incident light. A first terminal of photodiode40may be coupled to a voltage terminal38such as a ground voltage terminal. An anti-blooming transistor42may couple a second terminal of photodiode40to a voltage terminal44such as a supply voltage terminal. Transistor42may be configured to prevent blooming at photodiode40and/or may serve to keep photodiode40at a reset voltage level (e.g., the supply voltage level). As an example, when control signal AB is asserted (e.g., at a voltage level corresponding to a logic high that turns on transistor42), photodiode40may be reset to the supply voltage level. When control signal AB is de-asserted (e.g., at a voltage level corresponding to a logic low that turns off transistor42), photodiode40may begin to accumulate charge in response to incident light. Pixel22may include local charge storage regions such as storage gates46and56. 
As an example, each storage gate may include a corresponding adjustable charge transfer barrier portion and a corresponding charge storage portion over which the gate terminal is formed. In other words, control signals SG1 and SG2 may be adjusted to control the flow of charge from photodiode 40 into the charge storage regions associated with storage gates 46 and 56, respectively. The use of storage gates in pixel 22 is merely illustrative. If desired, any suitable types of charge storage regions may be used in pixel 22. Transistors 45 and 55 may couple photodiode 40 to storage gates 46 and 56, respectively. Control signals MOD1 and MOD2 may be used to activate transistors 45 and 55, respectively, and may be used to selectively transfer charge generated by photodiode 40 to one of storage gates 46 or 56 during the integration time period. As an example, control signals MOD1 and MOD2 may be inverted versions of each other during the integration time period. As such, at most only one of transistors 45 or 55 may be activated at a given time, thereby separating image charge generated at photodiode 40 into first and second charge portions stored at storage gates 46 and 56, respectively, depending on the time periods during which respective signals MOD1 and MOD2 are asserted (e.g., depending on a sensor modulation frequency based on which pixel 22 is modulated). Pixel 22 may include floating diffusion region 60 having an associated charge storage capacity (illustratively shown in FIG. 4 as capacitance CFD relative to voltage terminal 50 such as a ground voltage terminal). As an example, floating diffusion region 60 may be implemented as a doped semiconductor region (e.g., a region in a silicon substrate that is doped by ion implantation, impurity diffusion, or other doping processes). Storage gates 46 and 56 may temporarily store (portions of) image charge generated at photodiode 40 prior to transferring the stored portions of image charge to floating diffusion region 60 for readout. Transfer transistors 48 and 58 may respectively couple storage gates 46 and 56 to floating diffusion region 60. During readout operations, each transfer transistor, when activated by control signals TX1 or TX2, may transfer a charge portion stored at the corresponding storage gate to floating diffusion region 60 for readout. A reset transistor 62 may couple floating diffusion region 60 to a voltage terminal 52 such as a supply voltage terminal. As an example, when control signal RST is asserted, floating diffusion region 60 may be reset to a reset voltage level (e.g., the supply voltage level). If desired, transistor 62 may be used to reset other portions of pixel 22 to the reset voltage level. As an example, transistor 62 in combination with transistors 48 and 58 may be used to reset storage gates 46 and 56 to the reset voltage level. Pixel 22 may include source follower transistor 64 and row select transistor 66. Source follower transistor 64 has a gate terminal coupled to floating diffusion region 60, a first source-drain terminal (e.g., one of a source or drain terminal) coupled to voltage terminal 54 such as a supply voltage terminal, and a second source-drain terminal (e.g., the other one of the source or drain terminal) coupled to row select transistor 66. Transistor 66 may have a gate terminal that is controlled by pixel (row or column) select control signal SEL. When control signal SEL is asserted during a pixel readout operation when reset and/or image level signals from one or more pixels 22 are being read out, a pixel output signal may be passed onto path 70 (e.g., coupled to readout path 32 in FIG. 3).
The pixel output signal may be an output signal having a magnitude that is proportional to the amount of charge at floating diffusion region60. The configuration of pixel22shown inFIG.4is merely illustrative. If desired, pixel22inFIG.4may include one or more suitable additional elements such as additional transistors, additional charge storage structures, and/or other additional elements, may exclude one or more shown elements, and/or may replace one or more shown elements (e.g., replace storage gates46and56with other types of charge storage structures). If desired, one or more of the voltage terminals in pixel22may be coupled to a variable voltage source or a fixed voltage source. Configurations in which an image sensor pixel array such as array20inFIG.3includes pixels22each having the implementation of pixel22shown inFIG.4are described herein as illustrative examples. If desired, the embodiments described herein may similarly apply to an array20having pixels22of other implementations. As described in connection withFIGS.3and4, pixel control circuitry26may provide a number of different control signals to each pixel22in array20. As one illustrative example, pixel control circuitry26may need to provide modulation signals MOD1and MOD2in a specific manner to each pixel22in array20to perform indirect TOF sensing operations in a satisfactory manner. In particular, the manner in which modulation signals MOD1and MOD2are asserted and de-asserted contributes to the depth and spatial resolution of the indirect TOF sensing. From the perspective of indirect TOF sensing, it may be desirable for the pixel control circuitry to concurrently assert modulation signals (MOD1at a first time and MOD2at second time) across the entire pixel array and to do so at a high modulation frequency. However, doing so while providing a satisfactory spatial resolution (e.g., across a satisfactory number of pixels) can require a large concurrent current draw. As an example, when providing a depth resolution of about 1 mm (with a corresponding modulation frequency of 200 MHz) and a spatial resolution of 1.2 MP for TOF sensing, the peak current drawn by the pixel array may reach 6.9 A. Accordingly, the power delivery network for the sensor module would need to quickly (e.g., within 1 ns) ramp from 0 A to 6.9 A and provide 6.9 A of current without significant supply voltage (IR) drop. This requirement of the power delivery network may be unrealistic without including bulky and/or specialized power delivery networks, which are similarly undesirably in a compact sensor module. To mitigate these issues, the modulation of sets of pixels22in array20may be staggered or interleaved such that the current draw is spread out over the integration time period. However, especially for indirect TOF sensing, the time delay between modulation of different sets of pixels may contribute to depth error, which can be exacerbated by PVT (process-voltage-temperature) effects or other unknown effects that can make the depth error unpredictable and difficult to compensate for. It may therefore be desirable to provide a predictable depth error, which can then be relatively easily removed. FIG.5is a diagram of illustrative pixel control circuitry having an adjustable delay line based on which control signals such as modulation signals MOD1and MOD2and a global shutter control signal AB are provided to pixels22in array20. As shown inFIG.5, pixel control circuitry26may include driver circuitry27. 
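Before describing driver circuitry 27 in more detail, the peak current figure quoted above can be reproduced with a simple dynamic switching estimate. In the short Python sketch below, the pixel count and modulation frequency come from the example above, while the per-pixel modulation-gate load C_PIX and the gate voltage swing V_SWING are assumed values chosen only for illustration; they are not given in this description.

    # Rough dynamic-current model for modulating an entire pixel array at once.
    # C_PIX and V_SWING are assumed illustrative values, not taken from the text.
    N_PIXELS = 1.2e6      # 1.2 MP spatial resolution (from the example above)
    F_MOD = 200e6         # 200 MHz modulation frequency (from the example above)
    C_PIX = 10e-15        # assumed modulation-gate load per pixel, in farads
    V_SWING = 2.9         # assumed modulation voltage swing, in volts

    # Each pixel's modulation gate is charged roughly once per modulation period,
    # so the sustained current when all pixels toggle together is approximately:
    i_array = N_PIXELS * C_PIX * V_SWING * F_MOD
    print(f"{i_array:.1f} A")  # prints about 7 A, the same order as the 6.9 A above

Under these assumed per-pixel values, simultaneous modulation of the whole array demands several amperes from the power delivery network, which is the motivation for spreading the control signal assertions out in time.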
Driver circuitry27may include any number of driver units or cells27-1,27-2, . . . ,27-N. Each driver unit produces control signals such as modulation signals MOD1and MOD2that are conveyed to lines (columns in the example ofFIG.5) of pixels22in pixel array20via corresponding paths30. As an explicit example, driver unit27-1(or any of the other driver units) may provide a common modulation signal MOD1to one or more columns of pixels via corresponding paths30and may provide a common modulation signal MOD2to one or more columns of pixels via corresponding paths30. Collectively, driver units27-1,27-2, . . . ,27-N may provide control signals to each pixel22in array20. A delay line such as delay line80may extend across each of driver units27-1,27-2, . . .27-N and may provide outputs based on which the control signals are generated. Delay line80may receive, at its input terminal, an input (control) signal having an adjustable duty cycle from global duty cycle adjuster78(sometimes referred to herein as global duty cycle adjustment circuit78). In particular, global duty cycle adjuster78may receive an input control signal along path79and, based on the input control signal, provide (modulate) the input signal to delay line80with a desired duty cycle. The reference signal input to delay line80may have a 50% duty cycle or a non-50% duty cycle such as a 30% duty cycle, 40% duty cycle, 45% duty cycle, 55% duty cycle, 60% duty cycle, or any other desired non-50% duty cycle. Delay line80, based on the input signal, may produce corresponding delayed versions of the input signal to each of driver units27-1,27-2, . . .27-N as outputs. As shown inFIG.5, delay line80may include adjustable inverters82such as current-starved adjustable inverters couple in series with one another, thereby providing various incrementally delayed versions of the input signal (e.g., from adjustment circuit78). As an example, the output terminal of inverter82-1B may be coupled to clock tree86-1via path84-1. Inverter82-1B may provide an input signal (e.g., a version of the delay line input signal from circuit78delayed by inverters82-1A and82-B) to clock tree86-1via path84-1. Clock tree86-1may propagate the received input signal to driver circuits88-1in driver unit27-1. Based on the propagated signals received from clock tree86-1, driver circuits88-1may generate (e.g., assert and de-assert) the control signals produced by driver unit27-1on paths30. In general, the output terminal of inverter82-NB may be coupled to clock tree86-N via path84-N. Inverter82-NB may provide an input signal (e.g., a version of the delay line input signal from circuit78delayed by inverter82-NB and all preceding upstream inverters along the delay line) to clock tree86-N via path84-N. Clock tree86-N may propagate the received input signal to driver circuits88-N in driver unit27-N. Based on the propagated signals received from clock tree86-N, driver circuits88-N may generate (e.g., assert and de-assert) the control signals produced by driver unit27-N on paths30. Configured in the manner described above, driver unit27-1may produce a first set of control signals such as modulation control signals MOD1and MOD2and global shutter control signal AB for a first set of pixels22in array20(e.g., a first set of pixel columns). Because the timing of these first set of control signals are based on the same signal received along path84-1, pixels22coupled to driver unit27-1may be controlled based on the same control signal timing. 
As examples, control signals MOD1received by pixels22coupled to driver unit27-1via paths30may be asserted and de-asserted at the same time, control signals MOD2received by pixels22coupled to driver unit27-1via paths30may be asserted and de-asserted at the same time, and/or other control signals such as control signal AB received by pixels22coupled to driver unit27-1via paths30may be asserted and de-asserted at the same time. In a similar manner, driver unit27-2may produce a second set of control signals such as modulation control signals MOD1and MOD2and global shutter control signal AB for a second set of pixels22in array20(e.g., a second set of pixel columns). Because the timing of these second set of control signals are based on the same signal received along path84-2, pixels22coupled to driver unit27-2may be controlled based on the same control signal timing. As examples, control signals MOD1received by pixels22coupled to driver unit27-2via paths30may be asserted and de-asserted at the same time, control signals MOD2received by pixels22coupled to driver unit27-2via paths30may be asserted and de-asserted at the same time, and/or other control signals such as control signal AB received by pixels22coupled to driver unit27-2via paths30may be asserted and de-asserted at the same time. Because, the signal received along path84-2and used in driver unit27-2is more delayed than the signal received along path84-1and used in driver unit27-1, the control signal assertions and/or de-assertions produced on paths30coupled to driver unit27-2may also be delayed relative to the same control signal assertions and/or de-assertions produced on paths30coupled to driver unit27-1. More generally, the signal received along path84-N and used in driver unit27-N is more delayed than the signal received along path84-(N-1) and used in driver unit27-(N-1), the control signal assertions and/or de-assertions produced on paths30coupled to driver unit27-N may also be delayed relative to the same control signal assertions and/or de-assertions produced on paths30coupled to driver unit27-(N-1). As such, for the same control signal assertion (or de-assertion), pixels22coupled to driver unit27-1may exhibit the control signal assertion (or de-assertion) first, pixels22coupled to driver unit27-2may exhibit the control signal assertion (or de-assertion) after a delay, pixels22coupled to driver unit27-3may exhibit the control signal assertion (or de-assertion) after a further delay, and so on, until pixels22coupled to driver unit27-N may exhibit the control signal assertion (or de-assertion) last. By temporally offsetting the assertion and/or de-assertion of control signals such as control signals MOD1and MOD2, driver circuitry27may draw current from a power distribution network in a distributive manner, thereby reducing peak current draw when compared to scenarios, in which control signal assertions (or de-assertions) occur simultaneously across the entire pixel array. While this use of a delay line to offset current draw may provide satisfactory operations for some applications, in indirect TOF sensing application (as an example), the differently delayed modulation signals may cause a depth error. This depth error may be due to the modulation not occurring in a synchronous manner across the entire array, and in TOF sensing, relative timing delays translate to image data corresponding to varying depths or distances, thereby producing a depth error. 
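To make the relationship between timing delay and depth error concrete, the following Python sketch uses a typical four-phase estimator for indirect TOF; the estimator itself and the sample charge values are illustrative assumptions and are not taken from this description. The key point is that a relative modulation delay dt between pixel groups shifts the recovered phase by 2*pi*f_mod*dt and therefore the reported distance by c*dt/2, regardless of the actual object distance.

    import math

    C_LIGHT = 299_792_458.0   # speed of light, m/s
    F_MOD = 200e6             # modulation frequency from the example above, Hz

    def depth_from_phases(q0, q90, q180, q270, f_mod=F_MOD):
        # Typical four-phase estimator for indirect TOF (illustrative assumption).
        phase = math.atan2(q270 - q90, q0 - q180) % (2 * math.pi)
        return C_LIGHT * phase / (4 * math.pi * f_mod)

    def depth_error_from_delay(dt):
        # A relative modulation delay dt looks like extra round-trip time dt,
        # which appears as a depth offset of c * dt / 2.
        return C_LIGHT * dt / 2

    print(depth_from_phases(0.8, 0.3, 0.2, 0.7))   # about 0.07 m for these made-up samples
    print(depth_error_from_delay(100e-12))         # 100 ps of skew -> about 15 mm of error

With these numbers, even a delay of roughly one hundred picoseconds between pixel groups corresponds to a depth offset on the order of a centimeter, which is why unpredictable relative delays are difficult to tolerate.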
While processing circuitry in and/or coupled to the sensor module can compensate for the depth error, this type of compensation may be difficult if the relative timing delays are not fixed or at least predictable. To produce fixed and/or predictable delays, delay line 80 may be configured to exhibit a delay lock (sometimes referred to herein as a phase lock). In particular, delay lock circuitry (sometimes referred to as phase lock circuitry) may be coupled to delay line 80 and may detect undesired delay offsets or differences and adjust delay line 80 to remove these undesired delay offsets or differences, thereby producing a delay-locked delay line that outputs signals with fixed and predictable delays across the different driver units 27-1, 27-2, . . . , 27-N. As shown in FIG. 5, delay lock circuitry 90 may include phase detector 92, charge pump 96, loop filter 98, and current-to-voltage current mirror 100, which collectively may be referred to herein as a first delay lock loop of delay lock circuitry 90. Phase detector 92 may be coupled to a first tap point at the output of inverter 82-1A in the first driver unit 27-1 via path 94-1 and may be coupled to a second tap point at the output of inverter 82-NA in the last driver unit 27-N via path 94-2. Phase detector 92 may determine a phase difference between the rising edge of the signal received on path 94-1 from the output of inverter 82-1A and the rising edge of the signal received on path 94-2 from the output of inverter 82-NA. For example, the rising edge of the signal received on path 94-1 and the rising edge of the signal received on path 94-2 may be desirably offset by integer multiples of half of the period of the signal (e.g., half of the period or the period). Phase detector 92 may detect any phase differences from these desired offsets between the rising edges of the two phase detector inputs. Based on the determined phase difference, phase detector 92 may output signals (e.g., up and down gating signals) to charge pump 96, thereby controlling charge pump 96 to provide current to loop filter 98 or to draw current from loop filter 98. Current-to-voltage current mirror 100 may convert the corresponding current flow or draw from loop filter 98 to corresponding bias voltages PBIAS and NBIAS. Each inverter 82 along delay line 80 may receive bias voltages PBIAS and NBIAS, which are used to control the adjustable delay provided by inverters 82. In particular, bias voltages PBIAS and NBIAS may be used to control inverters 82 to output signals such that the rising edges of the signals on paths 94-1 and 94-2 are aligned, thereby achieving a delay lock (e.g., a half period phase lock or a full period phase lock) at least with respect to the rising edges. As shown in FIG. 5, delay lock circuitry 90 may also include phase detector 102, charge pump 106, and loop filter 108, which collectively may be referred to herein as a second delay lock loop of delay lock circuitry 90. Phase detector 102 may also be coupled to the first tap point at the output of inverter 82-1A in the first driver unit 27-1 via path 94-1 and may also be coupled to the second tap point at the output of inverter 82-NA in the last driver unit 27-N via path 94-2. Phase detector 102 may determine a phase difference between the falling edge of the signal received on path 94-1 from the output of inverter 82-1A and the falling edge of the signal received on path 94-2 from the output of inverter 82-NA.
For example, the falling edge of the signal received on path 94-1 and the falling edge of the signal received on path 94-2 may be desirably offset by integer multiples of half of the period of the signal (e.g., half of the period or the period). Phase detector 102 may detect any phase differences from these desired offsets between the falling edges of the two phase detector inputs. Based on the determined phase difference, phase detector 102 may output signals (e.g., up and down gating signals) to charge pump 106, thereby controlling charge pump 106 to provide current to loop filter 108 or to draw current from loop filter 108. This current flow or draw from loop filter 108 may produce a corresponding bias voltage ADCC. Each inverter 82 along delay line 80 may receive bias voltage ADCC, which is used to control, in combination with bias voltages PBIAS and NBIAS, the adjustable duty cycle provided by inverters 82. In particular, bias voltage ADCC may be used to control inverters 82 to output signals such that the falling edges of the signals on paths 94-1 and 94-2 are aligned, thereby achieving a delay lock (e.g., a full period phase lock) at least with respect to the falling edges. By separately phase locking the rising and falling edges of the reference signal passed along the delay line, even when the reference signal does not have a 50% duty cycle, the non-50% duty cycle reference signal may still be delay locked. The configuration of adjustable delay line 80 with adjustable inverters 82 as shown in FIG. 5 is merely illustrative. In configurations sometimes described herein as an illustrative example, each control signal such as modulation control signal MOD1, modulation control signal MOD2, and/or a global shutter control signal AB may have its own dedicated adjustable delay line 80 with adjustable inverters 82. In particular, in the example of control signals MOD1, MOD2, and AB, three parallel adjustable delay lines 80 may be provided in the pixel control circuitry. These three parallel adjustable lines 80 may share the same bias voltages PBIAS, NBIAS, and ADCC. In other words, inverters 82 in the three parallel adjustable lines 80 may receive the same global bias voltages. Delay lock circuitry 90 may generate global bias voltages PBIAS, NBIAS, and ADCC based on tapping any one of the three delay lines. However, inverters 82 in each of the three parallel adjustable lines 80 may receive different (independently controllable) inverter-specific voltages and/or control signals with respect to inverters 82 in the other two parallel adjustable lines 80. These inverter-specific voltages and/or control signals are further illustrated in FIG. 6. FIG. 6 is a diagram of an illustrative voltage-controlled current-starved adjustable inverter of the type used to implement inverters 82 on delay line 80 in FIG. 5. As shown in FIG. 6, inverter 82 may include PMOS (p-channel metal-oxide-semiconductor) transistor 120 and NMOS (n-channel metal-oxide-semiconductor) transistor 122 having a common gate terminal coupled to the input DIN of inverter 82 and having a common drain terminal coupled to the output DOUT of inverter 82. Inverter 82 may include a main branch 124 with PMOS transistor 126 coupled between PMOS transistor 120 and voltage terminal or rail 128 supplying power supply voltage Vdd and with NMOS transistor 130 coupled between NMOS transistor 122 and voltage terminal or rail 132 supplying ground voltage Vss. Transistor 126 may receive, at its gate terminal, bias voltage PBIAS. Transistor 130 may receive, at its gate terminal, bias voltage NBIAS.
Bias voltages PBIAS and NBIAS may be used to adjust the delay introduced between the input and output of inverter 82. In particular, by increasing voltage PBIAS and/or decreasing NBIAS, the effective drive resistance of transistors 120 and 122 may be increased, thereby increasing the delay introduced by inverter 82. Inverter 82 may also include a second branch 134 with PMOS transistors 136 and 138 coupled in series between PMOS transistor 120 and voltage rail 128 and with NMOS transistors 140 and 142 coupled in series between NMOS transistor 122 and voltage rail 132. Transistor 136 may receive, at its gate terminal, bias voltage PBIAS. Transistor 138 may receive, at its gate terminal, control voltage PTRIM. Transistor 140 may receive, at its gate terminal, bias voltage NBIAS. Transistor 142 may receive, at its gate terminal, control voltage NTRIM. Transistors 136 and 138 may be coupled in parallel with transistor 126 between transistor 120 and voltage terminal 128. Transistors 140 and 142 may be coupled in parallel with transistor 130 between transistor 122 and voltage terminal 132. While bias voltages PBIAS and NBIAS are shared (common) bias voltages provided to all inverters 82 along delay line 80, control voltages PTRIM and NTRIM may be local bias voltages that can vary across different inverters. This set of localized bias voltages enables inverters 82 (e.g., a pair of inverters) associated with each driver unit to be controlled and adjusted independently from inverters 82 associated with other driver units. Inverter 82 may also include a third branch 144 with PMOS transistors 146 and 148 coupled in series between PMOS transistor 120 and voltage rail 128 and with NMOS transistors 150 and 152 coupled in series between NMOS transistor 122 and voltage rail 132. Transistor 146 and transistor 150 may receive, at their common gate terminals, bias voltage ADCC. Transistor 148 may receive, at its gate terminal, bias voltage PADCC. Transistor 152 may receive, at its gate terminal, bias voltage NADCC. Transistors 146 and 148 may be coupled in parallel with transistor 126 between transistor 120 and voltage terminal 128. Transistors 150 and 152 may be coupled in parallel with transistor 130 between transistor 122 and voltage terminal 132. While transistors 126 and 130 in the first branch and transistors 136, 138, 140, and 142 in the second branch may be used to control the delay of the rising edges of the input signal received by inverter 82, transistors 146, 148, 150, and 152 may be used to control the delay of the falling edges of the input signal received by inverter 82. While not explicitly shown in FIG. 5 or 6, a portion of phase lock circuitry 90 in FIG. 5 may provide independently controlled local control voltages PTRIM, NTRIM, PADCC, and NADCC along separate paths to each inverter 82 in delay line 80, thereby providing local bias voltage (and therefore delay) adjustments to each inverter 82. In this context, bias voltages PBIAS, NBIAS, and ADCC may be global bias voltages shared across each inverter 82, while control voltages PTRIM, NTRIM, PADCC, and NADCC may be local control voltages that are inverter-specific. In some configurations described herein as an illustrative example, control voltages PTRIM, NTRIM, PADCC, and NADCC may each provide a logic high voltage or a logic low voltage to selectively turn the controlled transistor (and corresponding inverter branches) on or off. FIG. 7 is a diagram of an illustrative bypass circuit configured to bypass outputs from delay line 80. As shown in FIG. 7, a bypass circuit 160-1 may be coupled along path 84-1 between the output of inverter 82-1B and clock tree 86-1.
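A first-order behavioral model can relate the bias and trim voltages of the current-starved inverter of FIG. 6 to its delay. In the Python sketch below, each stage delay is treated as the load charge divided by the available starving current; the load capacitance, voltage swing, and branch currents are assumed values for illustration only, and rising and falling edges are lumped together for simplicity.

    # First-order delay model of one current-starved inverter stage (illustrative).
    # All numeric values are assumed; only the trend follows the description above:
    # a weaker main-branch current gives a larger delay, and enabling a parallel
    # branch (e.g., via PTRIM/NTRIM or PADCC/NADCC) reduces the delay.
    def stage_delay(c_load, v_swing, i_main, i_trim=0.0, i_adcc=0.0):
        i_total = i_main + i_trim + i_adcc   # sum of enabled branch currents
        return c_load * v_swing / i_total    # t = C * dV / I

    C_LOAD = 20e-15   # assumed load per stage, farads
    V_SW = 1.0        # assumed logic swing, volts

    print(stage_delay(C_LOAD, V_SW, i_main=40e-6))                # 0.5 ns, main branch only
    print(stage_delay(C_LOAD, V_SW, i_main=40e-6, i_trim=20e-6))  # ~0.33 ns with a trim branch

This is only a sketch of the trend the bias voltages exploit; the actual delay of inverter 82 depends on the transistor characteristics and is set by the delay lock loops described above.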
In particular, in additional to a voltage-controlled delay line80, sensor module16may also include a top-level clock tree162that distributes one or more reference control signals on corresponding input path(s)163to each of the driver units in driver circuitry27. As examples, top-level clock tree162may distribute modulation control signal MOD1received along a first path163, modulation control signal MOD2received along a second path163, a global shutter control signals AB received along a third path63, and/or any other control signals, if desired. In other words, top-level clock tree162may output (balanced) control signals of each input control signal along paths165to each driver unit in driver circuitry27based on the reference input signal on each path163. An illustrative path165-1(e.g., for one of the distributed modulation signals such as one of signal MOD1, MOD2, or AB) of all of the paths165is shown inFIG.7. Bypass circuit160-1, when activated, may supply lower-level clock tree86-1with the distributed control signal on path165-1instead of the control signal produced on delay line80. A corresponding bypass circuit160may be provided for each driver unit in driver circuitry27. By providing bypass circuits, the distributed control scheme imparted by the delay line as described in connection withFIG.5may selectively deactivated. While the use of the balanced control signals from top-level clock tree162via paths165at each driver unit in driver circuitry27may contribute to large current draw, this configuration (e.g., activation of bypass circuit160-1and use of signals from top-level clock tree162) may enable testing and/or calibration of components within image sensor160and/or may be used for other purposes. The configuration ofFIG.7is merely illustrative. While not explicitly shown inFIG.7, additional paths163,165-1,84-1may also be included, e.g., for the other control signals distributed by top-level clock tree162. In other words, when distributing three control signals such as control signals MOD1, MOD2, and AB, paths163,165-1, and84-1may each contain three parallel paths, one for each of the control signals. These additional paths may be coupled to other bypass circuits on other delay lines (e.g., because each control signal may also have its own delay line) before being coupled to clock tree86-1and distributed to drivers88-1. FIG.8is an illustrative graph of the relative delay or depth across the output of a delay line such as delay line80ofFIG.5. In the example ofFIG.8, the relative delay or depth of the in-phase outputs of inverters82on delay line80(e.g., the outputs of every other inverter) starting with inverter82-2B and ending with inverter82-(N-1)B exhibit staircase function170. Each step172(e.g., each step delay or depth) of staircase function should be of the same magnitude. This characteristic may be enabled by taping and locking the phases of the signals at the ends of delay line80(e.g., outputs from inverters82-1A and82-NA) by delay lock circuitry90(FIG.5). FIG.9shows three sets of graphs illustrating how current draw across time differs between three different control schemes. In the example of the topmost graph, the assertions of a control signal such as control signal MOD1may occur across all pixels22in array20concurrently at time t1, thereby exhibiting a single large current draw shown by curve180. 
Similarly, the assertions of a control signal such as control signal MOD2 may occur across all pixels 22 in array 20 concurrently at time t2, thereby exhibiting a single large current draw shown by curve 182. As an example, this type of operation may occur when sub-level clock trees 86 receive reference signals from top-level clock tree 162 (e.g., as shown in FIG. 7) but may be undesirable due to the strain placed on the power delivery network of the sensor module. In the example of the middle graph, the assertions of a control signal such as control signal MOD1 may occur in a slightly offset manner across six different groups (e.g., sets of columns) of pixels 22 in array 20 beginning at time t1, thereby exhibiting six sets of current draws shown by curves 180-1, 180-2, 180-3, 180-4, 180-5, and 180-6. Similarly, the assertions of a control signal such as control signal MOD2 may occur in a slightly offset manner across six different groups (e.g., sets of columns) of pixels 22 in array 20 beginning at time t2, thereby exhibiting six sets of current draws shown by curves 182-1, 182-2, 182-3, 182-4, 182-5, and 182-6. While this type of operation may result in a more spread out current draw (and therefore a lower peak current) than the current draw shown by curves 180 and 182, the peak current draw may still be undesirably elevated. In the example of the bottommost graph, the assertions of a control signal such as control signal MOD1 may occur in a regularly (evenly) spaced offset manner across six different groups (e.g., sets of columns) of pixels 22 in array 20 beginning at time t1, thereby exhibiting six sets of evenly spread out current draws shown by curves 180-1′, 180-2′, 180-3′, 180-4′, 180-5′, and 180-6′. Similarly, the assertions of a control signal such as control signal MOD2 may occur in a regularly (evenly) spaced offset manner across six different groups (e.g., sets of columns) of pixels 22 in array 20 beginning at time t2, thereby exhibiting six sets of evenly spread out current draws shown by curves 182-1′, 182-2′, . . . , 182-6′. As an example, this type of operation may occur when sub-level clock trees 86 receive reference signals from phase-locked delay line 80 (e.g., as shown in FIGS. 5 and 7). By using phase lock circuitry 90, each driver unit in driver circuitry 27 may produce evenly delayed sets of control signals relative to the preceding driver units. Accordingly, current draw may be evenly distributed as the control signal assertions and/or de-assertions for different sets of pixels are offset based on the driver unit to which each set of pixels is coupled. While the peak current draw shown by each of the graphs appears to be similar in FIG. 9, this is merely illustrative and may not be to scale. As an example, the peak current draw exhibited by curve 180 may be at least 5 times, at least 6 times, or at least 50 times larger than any of the peak current draws exhibited by curves 180-1′, 180-2′, . . . , 180-6′. While the current draw is distributed across 6 different time periods (e.g., associated with 6 sets of assertions for 6 sets of pixels 22), this is merely illustrative. If desired, assertions for a particular control signal may be distributed across any number of time periods (e.g., based on the number of different driver units in driver circuitry 27). In the example of curves 180 (e.g., 180-1′, 180-2′, etc.) being associated with control signal MOD1 and curves 182 (e.g., 182-1′, 182-2′, etc.)
being associated with control signal MOD2, time period184may be half of the period of the modulation signal (e.g., control signal MOD1is asserted in an evenly distributed manner during half of each period and control signal MOD2is asserted in an evenly distributed manner during the other half of each period). If desired, time period184may be the period of a reference signal or may be any suitable length of time relative to the reference signal (e.g., in scenarios in which curves180and182are associated with the same control signal). Various embodiments have been described illustrating systems and methods for phase-locked and distributed control of pixels in an array. As an example, an imaging module may include an array of image sensor pixels, a voltage-controlled delay line having a plurality of outputs, delay lock circuitry coupled to the voltage-controlled delay line and configured to control the voltage-controlled delay line to provide the plurality of outputs with distributed delays based on a plurality of bias voltages, and a plurality of driver units each coupled to a corresponding output in the plurality of outputs and each configured to generate a control signal for a corresponding set of pixels in the array based on the corresponding output. As another example, a sensor module configured to perform time-of-flight sensing may include an array of image sensor pixels, each image sensor pixel configured to receive a modulation control signal based on which image charge for the time-of-flight sensing is modulated, driver circuitry comprising a plurality of driver units, each driver unit providing the modulation control signal for a different set of image sensor pixels in the image sensor pixels, a delay line configured to provide each driver unit in the plurality of driver units with a corresponding output signal, and delay lock circuitry coupled to the delay line and configured to control the delay line to exhibit a fixed delay across the delay line. As yet another example, a voltage-controlled delay line may include a plurality of inverters coupled in series. Each inverter in the plurality of inverters may include first and second transistors having a common gate terminal configured to receive an input signal and having a common drain terminal coupled to provide an output signal, a third transistor coupled between a first voltage terminal and the first transistor, a fourth transistor coupled between a second voltage terminal and the second transistor, fifth and sixth series-coupled transistors coupled in parallel with the third transistor between the first voltage terminal and the first transistor, seventh and eighth series-coupled transistors coupled in parallel with the fourth transistor between the second voltage terminal and the second transistor. The third transistor of each inverter in the plurality of inverters is configure to receive a same first bias voltage, the fourth transistor of each inverter in the plurality of inverters is configured to receive a same second bias voltage, the sixth transistor of each inverter in the plurality of inverters is configured to receive a third bias voltage that is independently controlled across the plurality of inverters, and the eighth transistor of each inverter in the plurality of inverters is configured to receive a fourth bias voltage that is independently controlled across the plurality of inverters. 
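As a purely behavioral sketch of the delay locking summarized above (not a circuit-level model), the Python loop below nudges a shared bias variable until the end-to-end delay of an N-stage line equals a target fraction of the modulation period, after which the per-driver-unit delays form evenly spaced steps like the staircase of FIG. 8. The per-stage delay model, loop gain, and lock target are assumptions chosen only for illustration.

    # Behavioral model of delay locking a chain of identical stages (illustrative).
    N_UNITS = 6               # assumed number of driver units tapped along the line
    T_MOD = 1 / 200e6         # modulation period, seconds
    TARGET = T_MOD / 2        # lock the end-to-end delay to half the period

    def line_delay(bias):
        return N_UNITS * (1e-9 / bias)   # assumed per-stage delay, inversely set by bias

    bias = 1.0
    for _ in range(200):                  # simple first-order control loop
        error = line_delay(bias) - TARGET
        bias += 4e8 * error               # assumed loop gain; drives the error to zero

    step = line_delay(bias) / N_UNITS     # per-driver-unit delay after locking
    print(step)                           # ~0.417 ns, i.e. T_MOD / (2 * N_UNITS)

Once locked, each driver unit asserts its control signals one fixed step later than the previous unit, which spreads the current draw evenly while keeping the relative delays predictable enough to remove from the depth data.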
The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention. The foregoing embodiments may be implemented individually or in any combination.
48,631
11943554
MODE FOR CARRYING OUT THE INVENTION Embodiments are described in detail with reference to the drawings. However, the present invention is not limited to the following description, and it is readily appreciated by those skilled in the art that modes and details can be modified in various ways without departing from the spirit and the scope of the present invention. Therefore, the present invention should not be interpreted as being limited to the descriptions of embodiments below. Note that in structures of the invention described below, the same portions or portions having similar functions are denoted by the same reference numerals in different drawings, and the description thereof is not repeated in some cases. Note that the hatching of the same component that constitutes a drawing is omitted or changed as appropriate in different drawings in some cases. Even in the case where a single component is illustrated in a circuit diagram, the component may be composed of a plurality of parts as long as there is no functional inconvenience. For example, in some cases, a plurality of transistors that operate as a switch are connected in series or in parallel. In some cases, capacitors are divided and arranged in a plurality of positions. One conductor has a plurality of functions such as a wiring, an electrode, and a terminal in some cases. In this specification, a plurality of names are used for the same component in some cases. Even in the case where components are illustrated in a circuit diagram as if they were directly connected to each other, the components may actually be connected to each other through one conductor or a plurality of conductors. In this specification, even such a structure is included in direct connection. Embodiment 1 In this embodiment, an imaging device of one embodiment of the present invention is described with reference to drawings. One embodiment of the present invention is an imaging device having an additional function such as image recognition. The imaging device can retain analog data (image data) obtained by an imaging operation in a pixel and extract data obtained by multiplying the analog data by a predetermined weight coefficient. In addition, when the data taken out from the pixel is taken in a neural network or the like, processing such as image recognition can be performed. Since, in one embodiment of the present invention, an enormous amount of image data can be retained in pixels in an analog data state and an arithmetic operation can be performed in the pixels, processing can be performed efficiently. FIG.1is a block diagram illustrating an imaging device of one embodiment of the present invention. The imaging device includes a pixel array300, a circuit201, a circuit301, a circuit302, a circuit303, a circuit304, and a circuit305. Note that the structures of the circuit201and the circuit301to the circuit305are not limited to single circuits and may each consist of a plurality of circuits. Alternatively, any of two or more of the above circuits may be combined. The pixel array300has an imaging function and an arithmetic function. The circuits201and301each have an arithmetic function. The circuit302has an arithmetic function or a data conversion function. The circuits303and304each have a selection function. The circuit305has a function of supplying a potential to a pixel. The pixel array300includes a plurality of pixel blocks200. 
As illustrated inFIG.2, the pixel block200includes a plurality of pixels100arranged in a matrix, and each of the pixels100is electrically connected to the circuit201. Note that the circuit201can also be provided in the pixel block200. The pixel100can obtain image data. Note that the number of pixels is 2×2 in an example illustrated inFIG.2but is not limited to this. For example, the number of pixels can be 3×3, 4×4, or the like. Alternatively, the number of pixels in the horizontal direction and the number of pixels in the vertical direction may differ from each other. Furthermore, some pixels may be shared by adjacent pixel blocks. The pixel block200and the circuit201operate as a product-sum operation circuit. The circuit201also has a function of a correlated double sampling circuit (CDS circuit). As illustrated inFIG.3A, the pixel100can include a photoelectric conversion device101, a transistor102, a transistor103, a capacitor104, a transistor105, a transistor106, and a transistor108. Note that the photoelectric conversion device can also be referred to as a photoelectric conversion element. The capacitor can also be referred to as a capacitor or a capacitor element. One electrode of the photoelectric conversion device101is electrically connected to one of a source and a drain of the transistor102. The other of the source and the drain of the transistor102is electrically connected to one of a source and a drain of the transistor103. The one of the source and the drain of the transistor103is electrically connected to one electrode of the capacitor104. The one electrode of the capacitor104is electrically connected to a gate of the transistor105. One of a source and a drain of the transistor105is electrically connected to one of a source and a drain of the transistor108. The other electrode of the capacitor104is electrically connected to one of a source and a drain of the transistor106. The other electrode of the photoelectric conversion device101is electrically connected to a wiring114. A gate of the transistor102is electrically connected to a wiring116. The other of the source and the drain of the transistor103is electrically connected to a wiring115. A gate of the transistor103is electrically connected to a wiring117. The other of the source and the drain of the transistor105is electrically connected to a GND wiring or the like. The other of the source and the drain of the transistor108is electrically connected to a wiring113. The other of the source and the drain of the transistor106is electrically connected to a wiring111. A gate of the transistor106is electrically connected to a wiring112. A gate of the transistor108is electrically connected to a wiring122. Here, a portion where the other of the source and the drain of the transistor102, the one of the source and the drain of the transistor103, the one electrode of the capacitor104, and the gate of the transistor105are electrically connected is referred to as a node N. The wirings114and115can each have a function of a power supply line. For example, the wiring114can function as a high potential power supply line, and the wiring115can function as a low potential power supply line. The wirings112,116,117, and122can function as signal lines for controlling the electrical conduction of the respective transistors. The wiring111can function as a wiring for supplying a potential corresponding to a weight coefficient to the pixel100. The wiring113can function as a wiring for electrically connecting the pixel100and the circuit201. 
Note that an amplifier circuit or a gain control circuit may be electrically connected to the wiring113. As the photoelectric conversion device101, a photodiode can be used. In order to increase the light detection sensitivity under low illuminance conditions, an avalanche photodiode is preferably used. The transistor102can have a function of controlling the potential of the node N. The transistor103can have a function of initializing the potential of the node N. The transistor105can have a function of controlling a current fed by the circuit201depending on the potential of the node N. The transistor108can have a function of selecting a pixel. The transistor106can have a function of supplying the potential corresponding to a weight coefficient to the node N. Note that as illustrated inFIG.3B, the transistor105and the transistor108may be arranged such that the one of the source and the drain of the transistor105is electrically connected to the one of the source and the drain of the transistor108, the other of the source and the drain of the transistor105is connected to the wiring113, and the other of the source and the drain of the transistor108is electrically connected to a GND wiring or the like. In the case where an avalanche photodiode is used as the photoelectric conversion device101, a high voltage is sometimes applied and thus a transistor with a high withstand voltage is preferably used as a transistor connected to the photoelectric conversion device101. As the transistor with a high withstand voltage, a transistor using a metal oxide in its channel formation region (hereinafter, an OS transistor) or the like can be used, for example. Specifically, an OS transistor is preferably used as the transistor102. The OS transistor also has a feature of an extremely low off-state current. When OS transistors are used as the transistors102,103, and106, the charge retention period of the node N can be lengthened greatly. Therefore, a global shutter mode in which charge accumulation operation is performed in all the pixels at the same time can be used without complicating the circuit structure and operation method. Furthermore, while image data is retained at the node N, an arithmetic operation using the image data can be performed a plurality of times. Meanwhile, it is desired that the transistor105has excellent amplifying characteristics. The transistors106and108are preferably transistors having a high mobility capable of high-speed operation because the transistors106and108are repeatedly turned on and off at frequent intervals. Accordingly, transistors using silicon in their channel formation regions (hereinafter, Si transistors) may be used as the transistors105,106, and108. Note that without limitation to the above, an OS transistor and a Si transistor may be freely used in combination. Alternatively, all the transistors may be OS transistors. Alternatively, all the transistors may be Si transistors. Examples of the Si transistor include a transistor including amorphous silicon and a transistor including crystalline silicon (microcrystalline silicon, low-temperature polysilicon, or single crystal silicon). The potential of the node N in the pixel100is determined by capacitive coupling between a potential obtained by adding a reset potential supplied from the wiring115and a potential (image data) generated by photoelectric conversion by the photoelectric conversion device101and the potential corresponding to a weight coefficient supplied from the wiring111. 
That is, a current corresponding to data in which a predetermined weight coefficient is added to the image data flows through the transistor105. As illustrated inFIG.2, the pixels100are electrically connected to each other through the wiring113. The circuit201can perform an arithmetic operation using the sum of the currents flowing through the transistors105of the pixels100. The circuit201includes a capacitor202, a transistor203, a transistor204, a transistor205, a transistor206, and a resistor207. One electrode of the capacitor202is electrically connected to one of a source and a drain of the transistor203. The one of the source and the drain of the transistor203is electrically connected to a gate of the transistor204. One of a source and a drain of the transistor204is electrically connected to one of a source and a drain of the transistor205. The one of the source and the drain of the transistor205is electrically connected to one of a source and a drain of the transistor206. One electrode of the resistor207is electrically connected to the other electrode of the capacitor202. The other electrode of the capacitor202is electrically connected to the wiring113. The other of the source and the drain of the transistor203is electrically connected to a wiring218. The other of the source and the drain of the transistor204is electrically connected to a wiring219. The other of the source and the drain of the transistor205is electrically connected to a reference power supply line such as a GND wiring. The other of the source and the drain of the transistor206is electrically connected to a wiring212. The other electrode of the resistor207is electrically connected to a wiring217. The wirings217,218, and219can each have a function of a power supply line. For example, the wiring218can have a function of a wiring for supplying a reset potential for reading operation. The wirings217and219can function as high potential power supply lines. The wirings213,215, and216can function as signal lines for controlling the electrical conduction of the respective transistors. The wiring212is an output line and can be electrically connected to the circuit302illustrated inFIG.1, for example. The transistor203can have a function of resetting the potential of the wiring211to the potential of the wiring218. The transistors204and205can function as source follower circuits. The transistor206can have a function of controlling reading operation. Note that the circuit201may have another structure as long as the circuit201has a function of operating as a CDS circuit. In one embodiment of the present invention, offset components other than the product of image data (potential X) and a weight coefficient (potential W) are eliminated and an objective WX is extracted. WX can be calculated using data obtained when imaging is performed, data obtained when imaging is not performed, and data obtained by supplying weights to the respective data. The total amount of the currents (Ip) flowing through the pixels100when imaging is performed is kΣ(X−Vth)², and the total amount of the currents (Ip) flowing through the pixels100when weights are supplied is kΣ(W+X−Vth)². The total amount of the currents (Iref) flowing through the pixels100when imaging is not performed is kΣ(0−Vth)², and the total amount of currents (Iref) flowing through the pixels100when weights are supplied is kΣ(W−Vth)². Here, k is a constant and Vth is the threshold voltage of the transistor105.
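The cancellation that the following paragraphs derive analytically can also be checked numerically. The short sketch below is only an illustration of this arithmetic, assuming the quadratic current model given above; the values of k, Vth, X, and W are arbitrary examples, and the NumPy usage and the function name total_current are introduced here solely for the check.

import numpy as np

k, Vth = 0.5, 0.7                        # arbitrary example constant and threshold voltage
X = np.array([0.30, 0.80, 0.55, 0.10])   # example image data retained at the nodes N
W = np.array([0.20, -0.10, 0.40, 0.05])  # example weight coefficients from the wiring 111

def total_current(V):
    # total of the currents through the transistors 105 of one pixel block,
    # modeled as k*(gate potential - Vth)^2 summed over the pixels
    return k * np.sum((V - Vth) ** 2)

data_A = total_current(X) - total_current(W + X)             # imaging: without and with weights
data_B = total_current(np.zeros_like(X)) - total_current(W)  # no imaging: without and with weights

print(data_A - data_B)         # difference between the data A and the data B
print(k * np.sum(-2 * W * X))  # k*sum(-2*W*X): the same value, i.e. the offsets cancel

Both printed values agree, which is the result used in the following difference calculation.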
First, a difference (data A) between data obtained when imaging is performed and data obtained by supplying weights to the data is calculated. The difference is kΣ((X−Vth)²−(W+X−Vth)²)=kΣ(−W²−2W·X+2W·Vth). Next, a difference (data B) between data obtained when imaging is not performed and data obtained by supplying weights to the data is calculated. The difference is kΣ((0−Vth)²−(W−Vth)²)=kΣ(−W²+2W·Vth). Then, a difference between the data A and the data B is calculated. The difference is kΣ(−W²−2W·X+2W·Vth−(−W²+2W·Vth))=kΣ(−2W·X). That is, offset components other than the product of the image data (X) and the weight coefficient (W) can be removed. The circuit201can read out the data A and the data B. Note that the circuit301can perform calculation of difference between the data A and the data B. FIG.4Ais a timing chart illustrating an operation of calculating the difference (data A) between the data obtained when imaging is performed and the data obtained by supplying the weight to the data in the pixel blocks200and the circuit201. For convenience, the timings of changing signals are matched in the chart; however, in reality, the timings are preferably shifted in consideration of the delay inside the circuit. First, in a period T1, the potential of the wiring117is set to “H” and the potential of the wiring116is set to “H”, so that the nodes N in the pixels100have reset potentials. Furthermore, the potential of the wiring111is set to “L” and wirings112_1and112_2(the wirings112in the first and second rows) are set to “H”, so that weight coefficients 0 are written. In a period T2, the potential of the wiring116is kept at “H” and the potential of the wiring117is set to “L”, so that the potential X (image data) is written to the nodes N by photoelectric conversion in the photoelectric conversion devices101. In a period T3, the potentials of wirings122_1and122_2are set to “H”, so that all of the pixels100in the pixel block are selected. At this time, a current corresponding to the potential X flows to the transistor105in each of the pixels100. The wiring216is set to “H”, so that a potential Vr of the wiring218is written to the wiring211. The operation in the periods T1to T3corresponds to obtainment of the data obtained when imaging is performed, and the data is initialized to the potential Vr of the wiring211. In a period T4, the potential of the wiring111is set to a potential corresponding to a weight coefficient W111(a weight added to the pixels in the first row), and the potential of the wiring112_1is set to “H”, so that the weight coefficient W111is added to the nodes N of the pixels100in the first row by capacitive coupling of the capacitors104. In a period T5, the potential of the wiring111is set to a potential corresponding to a weight coefficient W112(a weight added to the pixels in the second row), and the potential of the wiring112_2is set to “H”, so that the weight coefficient W112is added to the nodes N of the pixels100in the second row by capacitive coupling of the capacitors104. The operation in the periods T4and T5corresponds to generation of data in which weights are supplied to the data obtained when imaging is performed. In a period T6, the potentials of the wirings122_1and122_2are set to “H”, so that all of the pixels100in the pixel block are selected. At this time, a current corresponding to the potential W111+X flows to the transistors105in the pixels100in the first row.
A current corresponding to the potential W112+X flows to the transistors105in the pixels100in the second row. Here, the potential of the other electrode of the capacitor202changes in accordance with the current flowing through the wiring113, and an amount Y of change is added to the potential Vr of the wiring211by capacitive coupling. Accordingly, the potential of the wiring211is “Vr+Y”. Here, given that Vr=0, Y is the difference itself, which means that the data A is calculated. The potential of the wiring213is set to “H” and the potential of the wiring215is set to an appropriate analog potential such as “Vbias”, so that the circuit201can output a signal potential in accordance with the data A of the pixel blocks200in the first row by a source follower operation. FIG.4Bis a timing chart illustrating an operation of calculating the difference (data B) between the data obtained when imaging is not performed and the data obtained by adding the weight to the data in the pixel blocks200and the circuit201. Although an operation of consecutively obtaining the data B from the pixel blocks200is described here, the obtainment of the data B and the obtainment of the data A shown inFIG.4may be alternately performed. Alternatively, the data A may be obtained after the data B is obtained. First, in the periods T1and T2, the potential of the wiring117is set to “H” and the potential of the wiring116is set to “H”, so that the nodes N in the pixels100have reset potentials (0). At the end of the period T2, the potential of the wiring117is set to “L” and the potential of the wiring116is set to “L”. That is, in the periods, the potentials of the nodes N are the reset potentials regardless of the operation of the photoelectric conversion devices101. In addition, in the period T1, the potential of the wiring111is set to “L” and the wirings112_1and112_2are brought to “H”, so that weight coefficients 0 are written. This operation is performed during a period in which the potentials of the nodes N are the reset potentials. In the period T3, the potentials of the wirings122_1and122_2are set to “H”, so that all of the pixels100in the pixel block are selected. At this time, a current corresponding to the reset potential flows to the transistor105in each of the pixels100. The wiring216is set to “H”, so that the potential Vr of the wiring218is written to the wiring211. The operation in the periods T1to T3corresponds to obtainment of the data obtained when imaging is not performed, and the data is initialized to the potential Vr of the wiring211. In the period T4, the potential of the wiring111is set to a potential corresponding to the weight coefficient W111(a weight added to the pixels in the first row), and the potential of the wiring112_1is set to “H”, so that the weight coefficient W111is added to the nodes N of the pixels100in the first row by capacitive coupling of the capacitors104. In the period T5, the potential of the wiring111is set to a potential corresponding to the weight coefficient W112(a weight added to the pixels in the second row), and the potential of the wiring112_2is set to “H”, so that the weight coefficient W112is added to the nodes N of the pixels100in the second row by capacitive coupling of the capacitors104. The operation in the periods T4and T5corresponds to generation of data in which weights are supplied to the data obtained when imaging is not performed. 
In the period T6, the potentials of the wirings122_1and122_2are set to “H”, so that all of the pixels100in the pixel block are selected. At this time, a current corresponding to the potential W111+0 flows to the transistors105in the pixels100in the first row. A current corresponding to the potential W112+0 flows to the transistors105in the pixels100in the second row. Here, the potential of the other electrode of the capacitor202changes in accordance with the current flowing through the wiring113, and the amount Z of change is added to the potential Vr of the wiring211. Accordingly, the potential of the wiring211is “Vr+Z”. Here, given that Vr=0, Z is the difference itself, which means that the data B is calculated. The potential of the wiring213is set to “H” and the potential of the wiring215is set to an appropriate analog potential such as “Vbias”, so that the circuit201can output a signal potential in accordance with the data B of the pixel blocks200in the first row by a source follower operation. The data A and the data B output from the circuit201in the above operations are input to the circuit301. Calculation of the difference between the data A and the data B is performed in the circuit301, so that unnecessary offset components other than the product of the image data (potential X) and the weight coefficient (potential W) can be eliminated. The circuit301may have a structure in which the difference is calculated by utilizing a memory circuit and software processing, other than the structure including an arithmetic circuit such as the circuit201. Note that in the above operations, the potential of the wiring211of the circuit201is initialized to the potential “Vr” both in the operation of obtaining the data A and the operation of obtaining the data B. Then, “(Vr+Y)−(Vr+Z)”=“Y−Z” in the following difference calculation, so that the component of the potential “Vr” is eliminated. As described above, the other unnecessary offset components are also eliminated, so that the product of the image data (potential X) and the weight coefficient (potential W) can be extracted. This operation corresponds to the initial operation of a neural network performing inference or the like. Thus, at least one arithmetic operation can be performed in the imaging device before an enormous amount of image data is taken out to the outside, so that a load reduction, higher-speed processing, and reduction in power consumption in an arithmetic operation in the outside, input and output of data, or the like are achieved. Alternatively, as an operation other than the operation described above, the potential of the wiring211of the circuit201may be initialized to different potentials in the operation of obtaining the data A and in the operation of obtaining the data B. For example, the potential of the wiring211is initialized to a potential “Vr1” in the operation of obtaining the data A and to a potential “Vr2” in the operation of obtaining the data B. In this case, “(Vr1+Y)−(Vr2+Z)”=“(Vr1−Vr2)+(Y−Z)” in the following difference calculation. “Y−Z” is extracted as the product of the image data (potential X) and the weight coefficient (potential W) as in the above operation, and “Vr1−Vr2” is added. Here, “Vr1−Vr2” corresponds to a bias used for threshold value adjustment in the arithmetic operation in a middle layer of the neural network. Furthermore, the weight has a function of a filter of a convolutional neural network (CNN), for example, and may additionally have a function of amplifying or attenuating data.
For example, when the weight coefficient (W) in the operation of obtaining the data A is set to the product of data obtained by the filter processing and an amplified amount, the product of the image data and the weight coefficient in the filter processing can be amplified and data corrected to a brighter image can be extracted. The data B is data obtained when imaging is not performed and thus can also be referred to as black level data. Thus, the operation of calculating the difference between the data A and the data B can be an operation of promoting visualization of an image taken in a dark place. That is, luminance correction using a neural network can be performed. As described above, a bias can be generated by the operation in the imaging device in one embodiment of the present invention. Furthermore, a functional weight can be added in the imaging device. Thus, a load in an arithmetic operation performed in the outside or the like can be reduced and the imaging device can be employed for a variety of usages. For example, part of processing in inference of a subject, correction of the resolution of image data, correction of luminance, generation of a color image from a monochrome image, generation of a three-dimensional image from a two-dimensional image, restoration of missing information, generation of a moving image from a still image, correction of an out-of-focus image, or the like can be performed in the imaging device. Note that the adjacent pixel blocks200may share the pixel100. For example, a transistor107capable of producing output in a manner similar to that of the transistor105is provided in the pixel100as illustrated inFIG.5A. A gate of the transistor107is electrically connected to the gate of the transistor105, and one of a source and a drain of the transistor107is electrically connected to a wiring118through a transistor109. A gate of the transistor109can be electrically connected to the wiring122. The wiring118is utilized for electrical connection to the circuit201connected to the adjacent pixel blocks.FIG.5Billustrates a form of connection between the pixels100(pixels100a,100b,100c,100d,100e,100f,100g, and100h) in the adjacent pixel blocks200(pixel blocks200aand200b) and the circuits201(circuits201aand201b) connected to the pixel blocks200. In the pixel block200a, the pixels100a,100b,100c, and100dare electrically connected to the circuit201athrough the wiring113. Furthermore, the pixels100eand100gare electrically connected to the circuit201athrough the wiring118. In the pixel block200b, the pixels100e,100f,100g, and100hare electrically connected to the circuit201bthrough the wiring113. Furthermore, the pixels100band100dare electrically connected to the circuit201bthrough the wiring118. That is, it can be said that the pixel block200aand the pixel block200bshare the pixels100b,100d,100e, and100g. With this form, a network between the pixel blocks200can be dense, improving the accuracy of image analysis and the like. The weight coefficient can be output from the circuit305illustrated inFIG.1to the wiring111, and it is preferable to rewrite the weight coefficient more than once in a frame period. As the circuit305, a decoder can be used. The circuit305may include a D/A converter or an SRAM. A signal potential can be output from the circuit303to the wiring112for selecting the pixel100to which the weight coefficient is input. As the circuit303, a decoder or a shift register can be used.
A signal potential can be output from the circuit304to the wiring122connected to the gate of the transistor108of the pixel100, for example. As the circuit304, a decoder or a shift register can be used. Although the processing of data of the captured image is described above, image data without processing can be extracted in the imaging device of one embodiment of the present invention. In the product-sum operation, pixels in a plurality of rows are preferably selected at a time. Meanwhile, in the case where only imaging data is extracted, data is desirably extracted from pixels in one row. In one embodiment of the present invention, the circuit304for selecting the pixels100has a function of changing the number of rows to be selected. FIG.6illustrates an example of a circuit that can be used as the circuit304. The circuit is a shift register circuit, in which a plurality of logic circuits (SR) are electrically connected. To the logic circuits (SR), signal lines such as a wiring RES, a wiring VSS_RDRS, wirings RPWC_SE[0:3], wirings RCLK[0:3], and a wiring RSP are connected and appropriate signal potentials are input to the respective signal lines, so that selection signal potentials can be sequentially output from the logic circuits (SR). A circuit170is electrically connected to the logic circuits (SR). A plurality of transistors are provided in the circuit170and are connected to signal lines such as wirings SE_SW[0:2] and wirings SX[0:2]. When appropriate signal potentials are input to the respective signal lines, electrical conduction of the transistors is controlled. By the control by the circuit170, the number of rows of pixels to be selected can be changed. One of a source and a drain of one transistor is electrically connected to an output terminal of one logic circuit (SR), and the other of the source and the drain of the transistor is connected to the wiring SE. The wiring SE is electrically connected to the wiring122for selecting the pixel100. A signal potential supplied from the wiring SE_SW[0] can be input to a gate of the transistor connected to the wiring SE[0]. A signal potential supplied from the wiring SE_SW[1] can be input to a gate of the transistor connected to the wiring SE[1]. A signal potential supplied from the wiring SE_SW[2] can be input to a gate of the transistor connected to the wiring SE[2]. Signal potentials supplied from the wirings SE_SW[0:2] can be input to gates of the transistors connected to the wirings SE after the wirings SE[3] in the same order. Moreover, adjacent wirings SE are electrically connected to each other through one transistor, and the wiring SE[0] is electrically connected to a power supply line (VSS) through one transistor. A signal potential supplied from the wiring SX[0] can be input to a gate of the transistor that electrically connects the power supply line (VSS) and the wiring SE[0]. A signal potential supplied from the wiring SX[1] can be input to a gate of the transistor that electrically connects the wiring SE[0] and the wiring SE[1]. A signal potential supplied from the wiring SX[2] can be input to a gate of the transistor that electrically connects the wiring SE[1] and the wiring SE[2]. Signal potentials supplied from the wirings SX[0:2] can be input to gates of the transistors that electrically connect the subsequent adjacent wirings SE in the same order. FIG.7is a timing chart illustrating an operation in which a plurality of rows (three rows) are selected at a time by the circuit illustrated inFIG.6. 
Note that (0) to (161) correspond to timings at which the logic circuits (SR) output signal potentials to the wirings SE. When the potential of the wiring SX[0] is “L”, the potential of the wiring SX[1] is “H”, the potential of the wiring SX[2] is “H”, the potential of the wiring SE_SW[0] is “H”, the potential of the wiring SE_SW[1] is “L”, and the potential of the wiring SE_SW[2] is “L” at the timing (0), electrical conduction of the respective transistors is controlled and “H”, “H”, and “H” are output to the wiring SE[0], the wiring SE[1], and the wiring SE[2], respectively. To the other wirings SE, “L” is output. Thus, three rows can be selected at a time, and a product-sum operation of pixels of three rows and three columns can be performed, for example. When the potential of the wiring SX[0] is “H”, the potential of the wiring SX[1] is “L”, the potential of the wiring SX[2] is “H”, the potential of the wiring SE_SW[0] is “L”, the potential of the wiring SE_SW[1] is “H”, and the potential of the wiring SE_SW[2] is “L” at the timing (1), electrical conduction of the respective transistors is controlled and “L”, “H”, “H”, and “H” are output to the wiring SE[0], the wiring SE[1], the wiring SE[2], and the wiring SE[3], respectively. To the other wirings SE, “L” is output. That is, at the timing (1), a product-sum operation with a stride of 1, in which one-row shift from the timing (0) is made, can be performed. FIG.8is a timing chart illustrating an operation in which one row is selected by the circuit illustrated inFIG.6. In the operation in accordance with the timing chart, the potentials of the wirings SE_SW[0:2] always remain at “H”, and the potentials of the wirings SX[0:2] always remain at “L”. Thus, outputs of the logic circuits (SR) are input to the respective wirings SE without any changes, which enables selection of one row at a time. Note that in the structure illustrated inFIG.2, the circuit201reads out the pixel blocks200performing an arithmetic operation of a weight (filter processing) or the like one by one, and accordingly a long readout time is required in the product-sum operation with a stride of 1 or the like. In other words, in the structure illustrated inFIG.2, the filter processing cannot be performed on the pixel blocks200in the column direction in parallel. In view of the above, as illustrated inFIG.9, a structure may be employed in which a transistor131and a transistor132are provided in the pixel100so that parallel reading can be performed. A gate of the transistor131is electrically connected to the gate of the transistor105. A gate of the transistor132is electrically connected to a wiring123. One of a source and a drain of the transistor131is electrically connected to one of a source and a drain of the transistor132, and the other of the source and the drain of the transistor131is electrically connected to a reference potential line such as a GND wiring. Furthermore, the other of the source and the drain of the transistor108is electrically connected to a wiring113a. The other of the source and the drain of the transistor132is electrically connected to a wiring113b.
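As a behavioral sketch only of the row-selection patterns described for FIG.6 to FIG.8, and not a gate-level model of the shift register or of the circuit 170, the wirings SE can be regarded as a mask in which either one row or several consecutive rows are at “H” and the window is shifted with the stride; the function name selected_rows and its parameters are assumptions introduced for illustration before the parallel readout of FIG.10 is described.

def selected_rows(total_rows, start, window=1):
    # Levels of the wirings SE: "H" for rows whose pixels are selected,
    # "L" otherwise. window=1 corresponds to the single-row selection of
    # FIG.8; window=3 corresponds to FIG.7, where three rows are selected
    # at a time and the window advances with a stride of 1.
    return ["H" if start <= row < start + window else "L" for row in range(total_rows)]

# Timing (0) and timing (1) of FIG.7: SE[0] to SE[2] high, then SE[1] to SE[3] high.
print(selected_rows(8, 0, window=3))
print(selected_rows(8, 1, window=3))
# FIG.8: one row selected at a time.
print(selected_rows(8, 2, window=1))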
FIG.10illustrates connection relations among the plurality of pixels100(the pixel100ato a pixel100j) connected in the vertical direction in five consecutive rows, the wirings122(a wiring122_n−2 to a wiring122_n+2, n is a natural number) electrically connected to the pixels, the wirings123(a wiring123_n−2 to a wiring123_n+2, n is a natural number) electrically connected to the pixels, and the circuits201(the circuit201aand the circuit201b) electrically connected to the pixels. In the structure illustrated inFIG.10, two circuits201are included. The wiring113ais electrically connected to the circuit201aand the wiring113bis electrically connected to the circuit201b. An operation in which parallel reading is performed in the structure illustrated inFIG.10is described with reference to a timing chart illustrated inFIG.11A,FIG.12, andFIG.13. Note that an operation is described here in which convolution filters applied to a pixel block of four pixels, which are illustrated inFIG.11BandFIG.11C, are used, and the pixel blocks to which the filters are applied are sequentially read with a stride of 1. F1 to F4 and F5 to F8 correspond to the weights added to the respective pixels100. Note that only an operation of selecting the pixel100regarding the parallel reading operation is described here. The description made with reference toFIG.4AandFIG.4Bcan be referred to for a detailed operation of the pixels100and the circuit201. In the period T1, a pixel block consisting of the pixel100ato the pixel100dand a pixel block consisting of the pixel100eto the pixel100hare concurrently subjected to reading operation in parallel, which are illustrated inFIG.12. The filter illustrated inFIG.11Bis used for the former pixel block. The filter illustrated inFIG.11Cis used for the latter pixel block. When the wiring122_n−2, the wiring122_n−1, the wiring123_n, and the wiring123_n+1 are brought to “H” in the period T1, the transistors108in the pixel100ato the pixel100dare brought to a conduction state, and a product-sum operation result of the pixel100ato the pixel100dis output from the circuit201a. The transistors132in the pixel100eto the pixel100hare brought to a conduction state, and a product-sum operation result of the pixel100eto the pixel100his output from the circuit201b. In the period T2, a pixel block consisting of the pixel100cto the pixel100fand a pixel block consisting of the pixel100gto the pixel100jare concurrently subjected to reading operation in parallel, which are illustrated inFIG.13. The filter illustrated inFIG.11Bis used for the former pixel block. The filter illustrated inFIG.11Cis used for the latter pixel block. When the wiring122_n−1, the wiring122_n, the wiring123_n+1, and the wiring123_n+2 are brought to “H” in the period T2, the transistors108in the pixel100cto the pixel100fare brought to a conduction state, and a product-sum operation result of the pixel100cto the pixel100fis output from the circuit201a. The transistors132in the pixel100gto the pixel100jare brought to a conduction state, and a product-sum operation result of the pixel100gto the pixel100jis output from the circuit201b. In the period T3, a pixel block consisting of the pixel100eto the pixel100hillustrated inFIG.13and a pixel block consisting of the pixel100iand the pixel100jillustrated inFIG.13and two pixels not illustrated inFIG.13are concurrently subjected to reading operation in parallel.
Through the above operation, the product-sum operation results can be read out in parallel, and the filter processing can be performed at higher speed. Although the unit of a pixel block is 2×2 here, parallel reading can be performed in a similar manner when the unit of a pixel block is 3×3 or more. Moreover, when the number of wirings to which each pixel can selectively output is increased and the wirings are connected to the circuit201, product-sum operation results of three or more pixel blocks can be read out in parallel. Note that the above operation is an example in which selection of pixels is performed every two rows; this can be achieved in such a manner that two shift register circuits which can activate a plurality of selection wirings at the same time, such as the shift register circuit illustrated inFIG.6, are provided. Alternatively, one shift register may be provided when a logic circuit is used which can activate the wiring122_n−2, the wiring122_n−1, the wiring123_n, and the wiring123_n+1 at the same time in the period T1and can activate the wiring122_n−1, the wiring122_n, the wiring123_n+1, and the wiring123_n+2 at the same time in the period T2. FIG.14Ais a diagram explaining signal potentials output from the pixel blocks200. For simple description,FIG.14Aillustrates an example where the pixel array300consists of four pixel blocks200(a pixel block200c, a pixel block200d, a pixel block200e, and a pixel block200f) and each of the pixel blocks200includes four pixels100. Generation of signal potentials will be described taking the pixel block200cas an example, and the pixel blocks200d,200e, and200fcan output signal potentials through similar operations. In the pixel block200c, the pixels100retain their respective image data p11, p12, p21, and p22in the nodes N. Weight coefficients (W111, W112, W121, and W122) are input to the pixels100, and a product-sum operation result h111is output through a wiring113_1(the wiring113in the first column), the circuit201, and a wiring212_1(the wiring212in the first column). Here, h111=p11×W111+p12×W112+p21×W121+p22×W122. Note that the weight coefficients are not limited to being all different from each other, and the same value might be input to some of the pixels100. Concurrently through a process similar to the above, a product-sum operation result h121is output from the pixel block200dthrough a wiring113_2(the wiring113in the second column), the circuit201, and a wiring212_2(the wiring212in the second column); thus, the output from the pixel blocks200in the first row is completed. Note that arrows in the diagram indicate time axes (Time). Subsequently, in the pixel blocks200in the second row, through a process similar to the above, a product-sum operation result h112is output from the pixel block200ethrough the wiring113_1and the circuit201. Concurrently, a product-sum operation result h122is output from the pixel block200fthrough the wiring113_2and the circuit201; thus, the output from the pixel blocks200in the second row is completed. Moreover, weight coefficients are changed in the pixel blocks200in the first row and a process similar to the above is performed, so that h211and h221can be output. Furthermore, weight coefficients are changed in the pixel blocks200in the second row and a process similar to the above is performed, so that h212and h222can be output. The above operation is repeated as necessary. Product-sum operation result data output from the circuits201are sequentially input to the circuits301as illustrated inFIG.14B.
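The readout described above can also be summarized with a small software sketch. It merely restates the product-sum h111=p11×W111+p12×W112+p21×W121+p22×W122 and the sliding of a weight window with a stride of 1; the array layout, the NumPy usage, and the function name block_product_sums are assumptions introduced for illustration and do not correspond to a particular circuit block.

import numpy as np

def block_product_sums(pixels, weights, stride=1):
    # pixels: image data p retained in the pixel array (2D array)
    # weights: weight coefficients applied to one pixel block (2D array)
    # A window the size of the weight block slides over the pixel data with
    # the given stride, giving one product-sum result per window position.
    bh, bw = weights.shape
    return np.array([[float(np.sum(pixels[r:r + bh, c:c + bw] * weights))
                      for c in range(0, pixels.shape[1] - bw + 1, stride)]
                     for r in range(0, pixels.shape[0] - bh + 1, stride)])

p = np.array([[1.0, 2.0],
              [3.0, 4.0]])
W = np.array([[0.1, 0.2],
              [0.3, 0.4]])
# Single block: h111 = 1*0.1 + 2*0.2 + 3*0.3 + 4*0.4 = 3.0
print(block_product_sums(p, W))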
The circuits301may each have a variety of arithmetic functions in addition to the above-described function of calculating a difference between the data A and the data B. For example, the circuits301may each include a circuit that performs arithmetic operation of an activation function. A comparator circuit can be used as the circuit, for example. A comparator circuit outputs a result of comparing input data and a set threshold as binary data. In other words, the pixel blocks200and the circuits301can operate as part of elements in a neural network. Furthermore, in the case where the data output from the pixel blocks200, which corresponds to image data of a plurality of bits, can be binarized by the circuits301, the binarization can be rephrased as compression of image data. The data output from the circuits301(h111′, h121′, h112′, h122′, h211′, h221′, h212′, and h222′) are sequentially input to the circuit302. The circuit302can have a structure including a latch circuit, a shift register, and the like, for example. With this structure, parallel serial conversion is possible, and data input in parallel may be output to a wiring311as serial data, as illustrated inFIG.14B. The connection destination of the wiring311is not limited. For example, it can be connected to a neural network, a memory device, a communication device, or the like. Moreover, as illustrated inFIG.15, the circuit302may include a neural network. The neural network includes memory cells arranged in a matrix, and each memory cell retains a weight coefficient. Data output from the circuit301are input to the cells in the row direction, and the product-sum operation in the column direction can be performed. Note that the number of memory cells illustrated inFIG.15is an example, and the number is not limited. The neural network illustrated inFIG.15includes memory cells320and reference memory cells325which are arranged in a matrix, a circuit340, a circuit350, a circuit360, and a circuit370. FIG.16illustrates an example of the memory cells320and the reference memory cells325. The reference memory cells325are provided in any one column. The memory cells320and the reference memory cells325have similar structures and each include a transistor161, a transistor162, and a capacitor163. One of a source and a drain of the transistor161is electrically connected to a gate of the transistor162. The gate of the transistor162is electrically connected to one electrode of the capacitor163. Here, a point at which the one of the source and the drain of the transistor161, the gate of the transistor162, and the one electrode of the capacitor163are connected is referred to as a node NM. A gate of the transistor161is electrically connected to a wiring WL. The other electrode of the capacitor163is electrically connected to a wiring RW. One of a source and a drain of the transistor162is electrically connected to a reference potential wiring such as a GND wiring. In the memory cell320, the other of the source and the drain of the transistor161is electrically connected to a wiring WD. The other of the source and the drain of the transistor162is electrically connected to a wiring BL. In the reference memory cell325, the other of the source and the drain of the transistor161is electrically connected to a wiring WDref. The other of the source and the drain of the transistor162is electrically connected to a wiring BLref. The wiring WL is electrically connected to a circuit330. As the circuit330, a decoder, a shift register, or the like can be used. 
The wiring RW is electrically connected to the circuit301. Binary data output from the circuit301to a wiring311_1or a wiring311_2is written to each memory cell. The wiring WD and the wiring WDref are electrically connected to the circuit340. As the circuit340, a decoder, a shift register, or the like can be used. Furthermore, the circuit340may include a D/A converter or an SRAM. The circuit340can output a weight coefficient to be written to the node NM. The wiring BL and the wiring BLref are electrically connected to the circuit350and the circuit360. The circuit350is a current source circuit, and the circuit360can have a structure equivalent to that of the circuit201. By the circuit350and the circuit360, a signal potential of a product-sum operation result from which offset components are eliminated can be obtained. The circuit360is electrically connected to the circuit370. The circuit370can also be referred to as an activation function circuit. The activation function circuit has a function of performing calculation for converting the signal potential input from the circuit360in accordance with a predefined activation function. As the activation function, for example, a sigmoid function, a tanh function, a softmax function, a ReLU function, a threshold function, or the like can be used. The signal potential converted by the activation function circuit is output to the outside as output data. As illustrated inFIG.17A, a neural network NN can be formed of an input layer IL, an output layer OL, and a middle layer (hidden layer) HL. The input layer IL, the output layer OL, and the middle layer HL each include one or more neurons (units). Note that the middle layer HL may be composed of one layer or two or more layers. A neural network including two or more middle layers HL can also be referred to as a DNN (deep neural network). Learning using a deep neural network can also be referred to as deep learning. Input data is input to each neuron in the input layer IL. An output signal of a neuron in the previous layer or the subsequent layer is input to each neuron in the middle layer HL. To each neuron in the output layer OL, output signals of the neurons in the previous layer are input. Note that each neuron may be connected to all the neurons in the previous and subsequent layers (full connection), or may be connected to some of the neurons. FIG.17Bshows an example of an operation with the neurons. Here, a neuron N and two neurons in the previous layer which output signals to the neuron N are illustrated. An output x1of a neuron in the previous layer and an output x2of a neuron in the previous layer are input to the neuron N. Then, in the neuron N, a total sum x1w1+x2w2of a multiplication result (x1w1) of the output x1and a weight w1and a multiplication result (x2w2) of the output x2and a weight w2is calculated, and then a bias b is added as necessary, so that the value a=x1w1+x2w2+b is obtained. Then, the value a is converted with an activation function h, and an output signal y=h(a) is output from the neuron N. In this manner, the arithmetic operation with the neurons includes the arithmetic operation that sums the products of the outputs and the weights of the neurons in the previous layer, that is, the product-sum operation (x1w1+x2w2described above). This product-sum operation may be performed using a program on software or may be performed using hardware. In one embodiment of the present invention, an analog circuit is used as hardware to perform a product-sum operation.
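Before turning to the analog implementation, a minimal software counterpart of the neuron operation described above may be helpful. The ReLU used here is only one of the activation functions listed above for the circuit 370, and the function names activation and neuron are assumptions introduced for illustration; the imaging device of this embodiment performs the equivalent product-sum operation with analog circuits.

def activation(a):
    # example activation function h; a sigmoid, tanh, softmax, or threshold
    # function can be used instead, as noted above for the circuit 370
    return max(0.0, a)  # ReLU

def neuron(inputs, weights, bias=0.0):
    # product-sum operation followed by the bias and the activation:
    # a = x1*w1 + x2*w2 + ... + b, then y = h(a)
    a = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(a)

# Two inputs from the previous layer, as in the example with the neuron N.
print(neuron([0.5, -0.2], [0.8, 0.3], bias=0.1))  # h(0.5*0.8 + (-0.2)*0.3 + 0.1) = 0.44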
In the case where an analog circuit is used as the product-sum operation circuit, the circuit scale of the product-sum operation circuit can be reduced, or higher processing speed and lower power consumption can be achieved by reduced frequency of access to a memory. The product-sum operation circuit preferably has a structure including an OS transistor. An OS transistor is suitably used as a transistor included in an analog memory of the product-sum operation circuit because of its extremely low off-state current. Note that the product-sum operation circuit may be formed using both a Si transistor and an OS transistor. This embodiment can be combined with any of the other embodiments and examples as appropriate. Embodiment 2 In this embodiment, structure examples and the like of the imaging device of one embodiment of the present invention are described. FIG.18AandFIG.18Billustrate examples of a structure of a pixel included in the imaging device. The pixel illustrated inFIG.18Ahas a stacked-layer structure of a layer561and a layer562, for example. The layer561includes the photoelectric conversion device101. The photoelectric conversion device101can have a stacked-layer structure of a layer565a, a layer565b, and a layer565cas illustrated inFIG.18C. The photoelectric conversion device101illustrated inFIG.18Cis a pn-junction photodiode; for example, a p+-type semiconductor can be used for the layer565a, an n-type semiconductor can be used for the layer565b, and an n+-type semiconductor can be used for the layer565c. Alternatively, an n+-type semiconductor may be used for the layer565a, a p-type semiconductor may be used for the layer565b, and a p+-type semiconductor may be used for the layer565c. Alternatively, a pin-junction photodiode in which the layer565bis an i-type semiconductor may be used. The pn-junction photodiode or the pin-junction photodiode can be formed using single crystal silicon. The pin-junction photodiode can also be formed using a thin film of amorphous silicon, microcrystalline silicon, polycrystalline silicon, or the like. The photoelectric conversion device101included in the layer561may have a stacked-layer structure of a layer566a, a layer566b, a layer566c, and a layer566das illustrated inFIG.18D. The photoelectric conversion device101illustrated inFIG.18Dis an example of an avalanche photodiode, and the layer566aand the layer566dcorrespond to electrodes and the layers566band566ccorrespond to a photoelectric conversion portion. The layer566ais preferably a low-resistance metal layer or the like. For example, aluminum, titanium, tungsten, tantalum, silver, or a stacked layer thereof can be used. A conductive layer having a high light-transmitting property with respect to visible light is preferably used as the layer566d. For example, indium oxide, tin oxide, zinc oxide, indium tin oxide, gallium zinc oxide, indium gallium zinc oxide, graphene, or the like can be used. Note that a structure in which the layer566dis omitted can also be employed. A structure of a pn-junction photodiode containing a selenium-based material in a photoelectric conversion layer can be used for the layers566band566cof the photoelectric conversion portion, for example. A selenium-based material, which is a p-type semiconductor, is preferably used for the layer566b, and gallium oxide or the like, which is an n-type semiconductor, is preferably used for the layer566c.
A photoelectric conversion device containing a selenium-based material has characteristics of high external quantum efficiency with respect to visible light. In the photoelectric conversion device, electrons are greatly amplified with respect to the amount of incident light (Light) by utilizing the avalanche multiplication. A selenium-based material has a high light-absorption coefficient and thus has advantages in production; for example, a photoelectric conversion layer can be formed using a thin film. A thin film of a selenium-based material can be formed by a vacuum evaporation method, a sputtering method, or the like. As a selenium-based material, crystalline selenium such as single crystal selenium or polycrystalline selenium, amorphous selenium, a compound of copper, indium, and selenium (CIS), a compound of copper, indium, gallium, and selenium (CIGS), or the like can be used. An n-type semiconductor is preferably formed using a material with a wide band gap and a light-transmitting property with respect to visible light. For example, zinc oxide, gallium oxide, indium oxide, tin oxide, or mixed oxide thereof can be used. In addition, these materials have a function of a hole-injection blocking layer, so that a dark current can be decreased. The photoelectric conversion device101included in the layer561may have a stacked-layer structure of a layer567a, a layer567b, a layer567c, a layer567d, and a layer567eas illustrated inFIG.18E. The photoelectric conversion device101illustrated inFIG.18Eis an example of an organic optical conductive film, and the layer567aand the layer567ecorrespond to electrodes and the layers567b,567c, and567dcorrespond to a photoelectric conversion portion. One of the layers567band567din the photoelectric conversion portion can be a hole-transport layer and the other can be an electron-transport layer. The layer567ccan be a photoelectric conversion layer. For the hole-transport layer, molybdenum oxide can be used, for example. For the electron-transport layer, fullerene such as C60 or C70, or a derivative thereof can be used, for example. As the photoelectric conversion layer, a mixed layer of an n-type organic semiconductor and a p-type organic semiconductor (bulk heterojunction structure) can be used. For the layer562illustrated inFIG.18A, a silicon substrate can be used, for example. The silicon substrate includes a Si transistor or the like. With the use of the Si transistor, as well as a pixel circuit, a circuit for driving the pixel circuit, a circuit for reading out an image signal, an image processing circuit, a memory circuit, or the like can be provided. Specifically, some or all of the transistors included in the pixel circuits and the peripheral circuits (the pixels100, the circuits201,301,302,303,304, and305, and the like) described in Embodiment 1 can be provided in the layer562. Furthermore, the pixel may have a stacked-layer structure of the layer561, a layer563, and the layer562as illustrated inFIG.18B. The layer563can include an OS transistor. In that case, the layer562may include a Si transistor. Furthermore, some of the transistors included in the peripheral circuits described in Embodiment 1 may be provided in the layer563. 
With such a structure, components of the pixel circuit and the peripheral circuits can be distributed in a plurality of layers and the components can be provided to overlap with each other or any of the components and any of the peripheral circuits can be provided to overlap with each other, whereby the area of the imaging device can be reduced. Note that in the structure ofFIG.18B, the layer562may be a support substrate, and the pixels100and the peripheral circuits may be provided in the layer561and the layer563. As a semiconductor material used for an OS transistor, a metal oxide whose energy gap is greater than or equal to 2 eV, preferably greater than or equal to 2.5 eV, further preferably greater than or equal to 3 eV can be used. A typical example thereof is an oxide semiconductor containing indium, and a CAAC-OS (C-Axis Aligned Crystalline Oxide Semiconductor), a CAC (Cloud-Aligned Composite)-OS, each of which will be described later, or the like can be used, for example. A CAAC-OS has a crystal structure including stable atoms and is suitable for a transistor that is required to have high reliability, and the like. A CAC-OS has high mobility and is suitable for a transistor that operates at high speed, and the like. In an OS transistor, a semiconductor layer has a large energy gap, and thus the OS transistor has an extremely low off-state current of several yoctoamperes per micrometer (current per micrometer of a channel width). An OS transistor has features such that impact ionization, an avalanche breakdown, a short-channel effect, or the like does not occur, which are different from those of a Si transistor. Thus, the use of an OS transistor enables formation of a circuit having high withstand voltage and high reliability. Moreover, variations in electrical characteristics due to crystallinity unevenness, which are caused in the Si transistor, are less likely to occur in OS transistors. A semiconductor layer in an OS transistor can be, for example, a film represented by an In-M-Zn-based oxide that contains indium, zinc, and M (one or more metals selected from aluminum, titanium, gallium, germanium, yttrium, zirconium, lanthanum, cerium, tin, neodymium, and hafnium). The In-M-Zn-based oxide can be typically formed by a sputtering method. Alternatively, the In-M-Zn-based oxide may be formed by an ALD (Atomic layer deposition) method. It is preferable that the atomic ratio of metal elements of a sputtering target used for forming the In-M-Zn-based oxide by a sputtering method satisfy In≥M and Zn≥M. The atomic ratio of metal elements in such a sputtering target is preferably, for example, In:M:Zn=1:1:1, In:M:Zn=1:1:1.2, In:M:Zn=3:1:2, In:M:Zn=4:2:3, In:M:Zn=4:2:4.1, In:M:Zn=5:1:6, In:M:Zn=5:1:7, or In:M:Zn=5:1:8. Note that the atomic ratio in the formed semiconductor layer may vary from the above atomic ratio of metal elements in the sputtering target in a range of ±40%. An oxide semiconductor with low carrier density is used for the semiconductor layer. For example, for the semiconductor layer, an oxide semiconductor whose carrier density is lower than or equal to 1×10¹⁷/cm³, preferably lower than or equal to 1×10¹⁵/cm³, further preferably lower than or equal to 1×10¹³/cm³, still further preferably lower than or equal to 1×10¹¹/cm³, even further preferably lower than 1×10¹⁰/cm³, and higher than or equal to 1×10⁻⁹/cm³ can be used. Such an oxide semiconductor is referred to as a highly purified intrinsic or substantially highly purified intrinsic oxide semiconductor.
The oxide semiconductor has a low density of defect states and can thus be referred to as an oxide semiconductor having stable characteristics. Note that the composition is not limited to those described above, and a material having the appropriate composition may be used depending on required semiconductor characteristics and electrical characteristics of the transistor (e.g., field-effect mobility and threshold voltage). To obtain the required semiconductor characteristics of the transistor, it is preferable that the carrier density, the impurity concentration, the defect density, the atomic ratio between a metal element and oxygen, the interatomic distance, the density, and the like of the semiconductor layer be set to appropriate values. When silicon or carbon, which is one of elements belonging to Group 14, is contained in the oxide semiconductor contained in the semiconductor layer, oxygen vacancies are increased, and the semiconductor layer becomes n-type. Thus, the concentration of silicon or carbon (the concentration obtained by secondary ion mass spectrometry) in the semiconductor layer is set to lower than or equal to 2×10¹⁸ atoms/cm³, preferably lower than or equal to 2×10¹⁷ atoms/cm³. Alkali metal and alkaline earth metal might generate carriers when bonded to an oxide semiconductor, in which case the off-state current of the transistor might be increased. Therefore, the concentration of alkali metal or alkaline earth metal in the semiconductor layer (the concentration obtained by secondary ion mass spectrometry) is set to lower than or equal to 1×10¹⁸ atoms/cm³, preferably lower than or equal to 2×10¹⁶ atoms/cm³. When nitrogen is contained in the oxide semiconductor contained in the semiconductor layer, electrons serving as carriers are generated and the carrier density increases, so that the semiconductor layer easily becomes n-type. As a result, a transistor using an oxide semiconductor that contains nitrogen is likely to have normally-on characteristics. Hence, the nitrogen concentration (the concentration obtained by secondary ion mass spectrometry) in the semiconductor layer is preferably set to lower than or equal to 5×10¹⁸ atoms/cm³. When hydrogen is contained in the oxide semiconductor contained in the semiconductor layer, hydrogen reacts with oxygen bonded to a metal atom to be water, and thus sometimes forms oxygen vacancies in the oxide semiconductor. When the channel formation region in the oxide semiconductor includes oxygen vacancies, the transistor sometimes has normally-on characteristics. In some cases, a defect in which hydrogen enters oxygen vacancies functions as a donor and generates electrons serving as carriers. In other cases, bonding of part of hydrogen to oxygen bonded to a metal atom generates electrons serving as carriers. Thus, a transistor using an oxide semiconductor that contains a large amount of hydrogen is likely to have normally-on characteristics. A defect in which hydrogen enters oxygen vacancies can function as a donor of the oxide semiconductor. However, it is difficult to evaluate the defects quantitatively. Thus, the oxide semiconductor is sometimes evaluated by not its donor concentration but its carrier concentration. Therefore, in this specification and the like, the carrier concentration assuming the state where an electric field is not applied is sometimes used, instead of the donor concentration, as the parameter of the oxide semiconductor.
That is, “carrier concentration” in this specification and the like can be replaced with “donor concentration” in some cases. Therefore, hydrogen in the oxide semiconductor is preferably reduced as much as possible. Specifically, the hydrogen concentration of the oxide semiconductor, which is obtained by secondary ion mass spectrometry (SIMS), is lower than 1×10²⁰ atoms/cm³, preferably lower than 1×10¹⁹ atoms/cm³, further preferably lower than 5×10¹⁸ atoms/cm³, still further preferably lower than 1×10¹⁸ atoms/cm³. When an oxide semiconductor with sufficiently reduced impurities such as hydrogen is used for a channel formation region of a transistor, stable electrical characteristics can be given. The semiconductor layer may have a non-single-crystal structure, for example. Examples of the non-single-crystal structure include CAAC-OS (C-Axis Aligned Crystalline Oxide Semiconductor) including a c-axis aligned crystal, a polycrystalline structure, a microcrystalline structure, and an amorphous structure. Among the non-single-crystal structures, the amorphous structure has the highest density of defect states, whereas the CAAC-OS has the lowest density of defect states. An oxide semiconductor film having an amorphous structure has disordered atomic arrangement and no crystalline component, for example. Alternatively, an oxide semiconductor film having an amorphous structure has, for example, a completely amorphous structure and no crystal part. Note that the semiconductor layer may be a mixed film including two or more of a region having an amorphous structure, a region having a microcrystalline structure, a region having a polycrystalline structure, a CAAC-OS region, and a region having a single crystal structure. The mixed film has, for example, a single-layer structure or a stacked-layer structure including two or more of the above regions in some cases. The composition of a CAC (Cloud-Aligned Composite)-OS, which is one embodiment of a non-single-crystal semiconductor layer, will be described below. A CAC-OS refers to one composition of a material in which elements constituting an oxide semiconductor are unevenly distributed with a size greater than or equal to 0.5 nm and less than or equal to 10 nm, preferably greater than or equal to 1 nm and less than or equal to 2 nm, or a similar size, for example. Note that a state in which one or more metal elements are unevenly distributed and regions including the metal element(s) are mixed with a size greater than or equal to 0.5 nm and less than or equal to 10 nm, preferably greater than or equal to 1 nm and less than or equal to 2 nm, or a similar size in an oxide semiconductor is hereinafter referred to as a mosaic pattern or a patch-like pattern. Note that an oxide semiconductor preferably contains at least indium. It is particularly preferable that indium and zinc be contained. Moreover, in addition to these, one kind or a plurality of kinds selected from aluminum, gallium, yttrium, copper, vanadium, beryllium, boron, silicon, titanium, iron, nickel, germanium, zirconium, molybdenum, lanthanum, cerium, neodymium, hafnium, tantalum, tungsten, magnesium, and the like may be contained.
For example, of the CAC-OS, an In—Ga—Zn oxide with the CAC composition (such an In—Ga—Zn oxide may be particularly referred to as CAC-IGZO) has a composition in which materials are separated into indium oxide (InOX1, where X1 is a real number greater than 0) or indium zinc oxide (InX2ZnY2OZ2, where X2, Y2, and Z2 are real numbers greater than 0), and gallium oxide (GaOX3, where X3 is a real number greater than 0) or gallium zinc oxide (GaX4ZnY4OZ4, where X4, Y4, and Z4 are real numbers greater than 0), and a mosaic pattern is formed. Then, InOX1or InX2ZnY2OZ2forming the mosaic pattern is evenly distributed in the film. This composition is also referred to as a cloud-like composition. That is, the CAC-OS is a composite oxide semiconductor having a composition in which a region including GaOX3as a main component and a region including InX2ZnY2OZ2or InOX1as a main component are mixed. Note that in this specification, for example, when the atomic ratio of In to an element M in a first region is larger than the atomic ratio of In to the element M in a second region, the first region is regarded as having a higher In concentration than the second region. Note that IGZO is a commonly known name and sometimes refers to one compound formed of In, Ga, Zn, and O. A typical example is a crystalline compound represented by InGaO3(ZnO)m1(m1 is a natural number) or In(1+x0)Ga(1-x0)O3(ZnO)m0(−1≤x0≤1; m0 is a given number). The above crystalline compound has a single crystal structure, a polycrystalline structure, or a CAAC structure. Note that the CAAC structure is a crystal structure in which a plurality of IGZO nanocrystals have c-axis alignment and are connected in the a-b plane without alignment. On the other hand, the CAC-OS relates to the material composition of an oxide semiconductor. The CAC-OS refers to a composition in which, in the material composition containing In, Ga, Zn, and O, some regions that include Ga as a main component and are observed as nanoparticles and some regions that include In as a main component and are observed as nanoparticles are randomly dispersed in a mosaic pattern. Therefore, the crystal structure is a secondary element for the CAC-OS. Note that the CAC-OS is regarded as not including a stacked-layer structure of two or more kinds of films with different compositions. For example, a two-layer structure of a film including In as a main component and a film including Ga as a main component is not included. Note that a clear boundary cannot sometimes be observed between the region including GaOX3as a main component and the region including InX2ZnY2OZ2or InOX1as a main component. Note that in the case where one kind or a plurality of kinds selected from aluminum, yttrium, copper, vanadium, beryllium, boron, silicon, titanium, iron, nickel, germanium, zirconium, molybdenum, lanthanum, cerium, neodymium, hafnium, tantalum, tungsten, magnesium, and the like are contained instead of gallium, the CAC-OS refers to a composition in which some regions that include the metal element(s) as a main component and are observed as nanoparticles and some regions that include In as a main component and are observed as nanoparticles are randomly dispersed in a mosaic pattern. The CAC-OS can be formed by a sputtering method under a condition where a substrate is not heated intentionally, for example. 
Moreover, in the case of forming the CAC-OS by a sputtering method, any one or more selected from an inert gas (typically, argon), an oxygen gas, and a nitrogen gas are used as a deposition gas. Furthermore, the ratio of the flow rate of an oxygen gas to the total flow rate of the deposition gas at the time of deposition is preferably as low as possible, and for example, the ratio of the flow rate of the oxygen gas is preferably higher than or equal to 0% and lower than 30%, further preferably higher than or equal to 0% and lower than or equal to 10%. The CAC-OS is characterized in that no clear peak is observed in measurement using θ/2θ scan by an Out-of-plane method, which is one of X-ray diffraction (XRD) measurement methods. That is, it is found from the X-ray diffraction measurement that no alignment in the a-b plane direction and the c-axis direction is observed in a measured region. In addition, in an electron diffraction pattern of the CAC-OS which is obtained by irradiation with an electron beam with a probe diameter of 1 nm (also referred to as a nanobeam electron beam), a ring-like high-luminance region (ring region) and a plurality of bright spots in the ring region are observed. It is therefore found from the electron diffraction pattern that the crystal structure of the CAC-OS includes an nc (nano-crystal) structure with no alignment in the plan-view direction and the cross-sectional direction. Moreover, for example, it can be confirmed by EDX mapping obtained using energy dispersive X-ray spectroscopy (EDX) that the CAC-OS in the In—Ga—Zn oxide has a composition in which regions including GaOX3as a main component and regions including InX2ZnY2OZ2or InOX1as a main component are unevenly distributed and mixed. The CAC-OS has a composition different from that of an IGZO compound in which the metal elements are evenly distributed, and has characteristics different from those of the IGZO compound. That is, in the CAC-OS, the region including GaOX3or the like as a main component and the region including InX2ZnY2OZ2or InOX1as a main component are separated to form a mosaic pattern. Here, a region including InX2ZnY2OZ2or InOX1as a main component is a region whose conductivity is higher than that of a region including GaOX3or the like as a main component. In other words, when carriers flow through the regions including InX2ZnY2OZ2or InOX1as a main component, the conductivity of an oxide semiconductor is exhibited. Accordingly, when the regions including InX2ZnY2OX2or InOX1as a main component are distributed in an oxide semiconductor like a cloud, high field-effect mobility (μ) can be achieved. By contrast, a region including GaOX3or the like as a main component is a region whose insulating property is higher than that of a region including InX2ZnY2OZ2or InOX1as a main component. In other words, when the regions including GaOX3or the like as a main component are distributed in an oxide semiconductor, leakage current can be suppressed and favorable switching operation can be achieved. Accordingly, when the CAC-OS is used for a semiconductor element, the insulating property derived from GaOX3or the like and the conductivity derived from InX2ZnY2OX2or InOX1complement each other, whereby a high on-state current (Ion) and high field-effect mobility (μ) can be achieved. A semiconductor element using the CAC-OS has high reliability. Thus, the CAC-OS is suitably used as a constituent material of a variety of semiconductor devices. Note that the layer563may include a Si transistor. 
For example, a component included in the pixel circuit can be provided in the layer563. Furthermore, a circuit for driving the pixel circuit, a circuit for reading out an image signal, an image processing circuit, a memory circuit, or the like can be provided in the layer562. In this case, the layer562and the layer563form a stack structure of layers including Si transistors. In addition, when a pn-junction photodiode with silicon for a photoelectric conversion layer is used for the layer561, all of the layers can be formed using Si devices. FIG.19Ais a diagram illustrating an example of a cross section of the pixel illustrated inFIG.18A. The layer561includes a pn-junction photodiode with silicon for a photoelectric conversion layer, as the photoelectric conversion device101. The layer562includes Si transistors, and the transistors102and105included in the pixel circuit are shown as examples inFIG.19A. In the photoelectric conversion device101, the layer565acan be a p+-type region, the layer565bcan be an n-type region, and the layer565ccan be an n+-type region. The layer565bis provided with a region539for connecting a power supply line to the layer565c. For example, the region539can be a p+-type region. The Si transistors illustrated inFIG.19Aeach have a fin-type structure including a channel formation region in a silicon substrate540, andFIG.20Ashows a cross section (an A1-A2cross section inFIG.19A) in the channel width direction. The Si transistors may each have a planar-type structure as illustrated inFIG.20B. Alternatively, as illustrated inFIG.20C, transistors each including a semiconductor layer545of a silicon thin film may be used. The semiconductor layer545can be single crystal silicon (SOI (Silicon on Insulator)) formed on an insulating layer546on the silicon substrate540, for example. FIG.19Aillustrates an example of a structure in which electrical connection between components included in the layer561and components included in the layer562is obtained by a bonding technique. An insulating layer542, a conductive layer533, and a conductive layer534are provided in the layer561. The conductive layer533and the conductive layer534each include a region embedded in the insulating layer542. The conductive layer533is electrically connected to the layer565a. The conductive layer534is electrically connected to the region539. Furthermore, the surfaces of the insulating layer542, the conductive layer533, and the conductive layer534are planarized to have the same level. An insulating layer541, a conductive layer531, and a conductive layer532are provided in the layer562. The conductive layer531and the conductive layer532each include a region embedded in the insulating layer541. The conductive layer531is electrically connected to a power supply line. The conductive layer531is electrically connected to the source or the drain of the transistor102. Furthermore, the surfaces of the insulating layer541, the conductive layer531, and the conductive layer532are planarized to have the same level. Here, main components of the conductive layer531and the conductive layer533are preferably the same metal element. Main components of the conductive layer532and the conductive layer534are preferably the same metal element. Furthermore, it is preferable that the insulating layer541and the insulating layer542be formed of the same component. For example, for the conductive layers531,532,533, and534, Cu, Al, Sn, Zn, W, Ag, Pt, Au, or the like can be used. Preferably, Cu, Al, W, or Au is used for easy bonding. 
In addition, for the insulating layers541and542, silicon oxide, silicon oxynitride, silicon nitride oxide, silicon nitride, titanium nitride, or the like can be used. That is, the same metal material selected from the above is preferably used for the combination of the conductive layer531and the conductive layer533, and the same metal material selected from the above is preferably used for the combination of the conductive layer532and the conductive layer534. Furthermore, the same insulating material selected from the above is preferably used for the insulating layer541and the insulating layer542. With this structure, bonding where a boundary between the layer561and the layer562is a bonding position can be performed. This bonding enables electrical connection between the combination of the conductive layer531and the conductive layer533and between the combination of the conductive layer532and the conductive layer534. In addition, connection between the insulating layer541and the insulating layer542with mechanical strength can be obtained. For bonding the metal layers to each other, a surface activated bonding method in which an oxide film, a layer adsorbing impurities, and the like on the surface are removed by sputtering or the like and the cleaned and activated surfaces are brought into contact to be bonded to each other can be used. Alternatively, a diffusion bonding method in which the surfaces are bonded to each other by using temperature and pressure together can be used, for example. Both methods cause bonding at an atomic level, and therefore not only electrically but also mechanically excellent bonding can be obtained. Furthermore, for bonding the insulating layers to each other, a hydrophilic bonding method or the like can be used; in the method, after high planarity is obtained by polishing or the like, the surfaces of the insulating layers subjected to hydrophilicity treatment with oxygen plasma or the like are arranged in contact with and bonded to each other temporarily, and then dehydrated by heat treatment to perform final bonding. The hydrophilic bonding method can also cause bonding at an atomic level; thus, mechanically excellent bonding can be obtained. When the layer561and the layer562are bonded to each other, the insulating layers and the metal layers coexist on their bonding surfaces; therefore, the surface activated bonding method and the hydrophilic bonding method are performed in combination, for example. For example, a method can be used in which the surfaces are made clean after polishing, the surfaces of the metal layers are subjected to antioxidant treatment and hydrophilicity treatment, and then bonding is performed. Furthermore, hydrophilicity treatment may be performed on the surfaces of the metal layers being hardly oxidizable metal such as Au. Note that a bonding method other than the above-mentioned methods may be used. FIG.19Bis a cross-sectional view in the case where a pn-junction photodiode in which a selenium-based material is used for a photoelectric conversion layer is used for the layer561of the pixel illustrated inFIG.18A. The layer566ais included as one electrode, the layers566band566care included as a photoelectric conversion layer, and the layer566dis included as the other electrode. In this case, the layer561can be directly formed on the layer562. The layer566ais electrically connected to the source or the drain of the transistor102. The layer566dis electrically connected to the power supply line through the conductive layer536. 
Note that in the case where an organic optical conductive film is used for the layer561, the connection mode with the transistor is the same as the above. FIG.21Ais a diagram illustrating an example of a cross section of the pixel illustrated inFIG.18B. The layer561includes a pn-junction photodiode with silicon for a photoelectric conversion layer, as the photoelectric conversion device101. The layer562includes Si transistors, and the transistors105and108included in the pixel circuit are shown as examples inFIG.21A. The layer563includes OS transistors, and the transistors102and103included in the pixel circuit are illustrated as examples. A structure example is illustrated in which electrical connection between the layer561and the layer563is obtained by bonding. The details of an OS transistor are illustrated inFIG.22A. The OS transistor illustrated inFIG.22Ahas a self-aligned structure in which a source electrode705and a drain electrode706are formed through provision of an insulating layer over a stacked layer of an oxide semiconductor layer and a conductive layer and provision of opening portions reaching the semiconductor layer. The OS transistor can include a gate electrode701and a gate insulating film702in addition to a channel formation region, a source region703, and a drain region704, which are formed in the oxide semiconductor layer. At least the gate insulating film702and the gate electrode701are provided in the opening portion. The opening portion may further be provided with an oxide semiconductor layer707. As illustrated inFIG.22B, the OS transistor may have a self-aligned structure in which the source region and the drain region are formed in the semiconductor layer with the gate electrode701as a mask. As illustrated inFIG.22C, the OS transistor may be a non-self-aligned top-gate transistor including a region where the source electrode705or the drain electrode706overlaps with the gate electrode701. Although the transistors102and103each have a structure with a back gate535, they may have a structure without a back gate. As illustrated in a cross-sectional view of the transistor in the channel width direction inFIG.22D, the back gate535may be electrically connected to a front gate of the transistor, which is provided to face the back gate. Note thatFIG.22Dillustrates an example of a B1-B2cross section of the transistor inFIG.22A, and the same applies to a transistor having any of the other structures. Different fixed potentials may be supplied to the back gate535and the front gate. An insulating layer543that has a function of inhibiting diffusion of hydrogen is provided between a region where OS transistors are formed and a region where Si transistors are formed. Hydrogen in the insulating layer provided in the vicinity of the channel formation region of each of the transistors105and108terminates a dangling bond of silicon. Meanwhile, hydrogen in the insulating layer provided in the vicinity of the channel formation region of each of the transistors102and103is a factor of generating a carrier in the oxide semiconductor layer. Hydrogen is confined in one layer using the insulating layer543, whereby the reliability of the transistors105and108can be improved. Furthermore, diffusion of hydrogen from the one layer to the other layer is inhibited, so that the reliability of the transistors102and103can also be improved. 
For the insulating layer543, aluminum oxide, aluminum oxynitride, gallium oxide, gallium oxynitride, yttrium oxide, yttrium oxynitride, hafnium oxide, hafnium oxynitride, yttria-stabilized zirconia (YSZ), or the like can be used, for example. FIG.21Bis a cross-sectional view in the case where a pn-junction photodiode in which a selenium-based material is used for a photoelectric conversion layer is used for the layer561of the pixel illustrated inFIG.18B. The layer561can be directly formed on the layer563. The above description can be referred to for the details of the layers561,562, and563. Note that in the case where an organic optical conductive film is used for the layer561, the connection mode with the transistor is the same as the above. FIG.23is a diagram illustrating an example of the pixel illustrated inFIG.18B, which is different fromFIG.21A. In a structure illustrated inFIG.23, Si devices are provided in all of the layer561, the layer563, and the layer562, and the layers are attached to each other by bonding. The layer561includes a pn-junction photodiode with silicon for a photoelectric conversion layer, as the photoelectric conversion device101. The layer563includes Si transistors provided on the silicon substrate540. The transistors102and103illustrated as examples inFIG.23are some components of the pixel circuit. The layer562includes Si transistors provided on a silicon substrate550. Transistors141and142illustrated as examples inFIG.23are some components of a circuit electrically connected to the pixel circuit. A conductive layer531b, a conductive layer532b, and a conductive layer554are embedded in the insulating layer541provided in the layer563. The conductive layer531b, the conductive layer532b, and the conductive layer554are planarized to be level with the insulating layer541. The conductive layer531bis electrically connected to the conductive layer531a. The conductive layer531aand the conductive layer531beach have a function equivalent to that of the conductive layer531in the structure ofFIG.19A. The conductive layer531aand the conductive layer531bcan be formed using the same material as that of the conductive layer531. The conductive layer531bis electrically connected to the conductive layer533included in the layer561by bonding. The conductive layer532bis electrically connected to the conductive layer532a. The conductive layer532aand the conductive layer532beach have a function equivalent to that of the conductive layer532in the structure ofFIG.19A. The conductive layer532aand the conductive layer532bcan be formed using the same material as that of the conductive layer532. The conductive layer532bis electrically connected to the conductive layer534included in the layer561by bonding. The conductive layer554is electrically connected to a conductive layer551and a conductive layer552. The conductive layer552is electrically connected to a wiring connected to the pixel circuit included in the layer563. The conductive layer551is electrically connected to the circuit included in the layer562. The conductive layer554, the conductive layer551, and the conductive layer552can be formed using the same material as that of the conductive layer531. The conductive layer551includes a region embedded in the silicon substrate540and an insulating layer548, and is planarized to be level with the insulating layer548. Furthermore, the conductive layer551includes a region covered with an insulating layer560to be insulated from the silicon substrate540. 
A conductive layer553includes a region embedded in an insulating layer547provided in the layer562, and is planarized to be level with the insulating layer547. The conductive layer553is electrically connected to the circuit included in the layer562. The conductive layer553can be formed using the same material as that of the conductive layer531. By bonding the insulating layer548included in the layer563and the insulating layer547included in the layer562, the layer563and the layer562are attached to each other to have mechanical strength. Moreover, by bonding the conductive layer551included in the layer563and the conductive layer553included in the layer562, the layer563and the layer562are electrically connected to each other. Note thatFIG.23illustrates the structure in which the conductive layer554and the conductive layer553are connected to each other through the conductive layer551passing through the silicon substrate540; however, the structure is not limited thereto. For example, a structure may be employed in which the conductive layer551passing through the silicon substrate540is not provided and the conductive layer554and the conductive layer553are connected to each other in the outside of the silicon substrate540. In addition to the driver circuit of the pixel circuit, a memory circuit such as a DRAM (Dynamic Random Access Memory), a neural network, a communication circuit, or the like may be provided in the layer562, for example. When any of the circuits is provided to overlap with the pixel circuit, delay can be reduced and imaging, image recognition, and the like can be performed at high speed. As illustrated inFIG.24A, the pixel of one embodiment of the present invention may have a stacked-layer structure of the layer561, the layer563, the layer562, and a layer564.FIG.24Bis a cross-sectional view of an example of the stacked-layer structure. The layer561includes a pn-junction photodiode with silicon for a photoelectric conversion layer, as the photoelectric conversion device101. The layer563and the layer562include OS transistors. The layer564includes Si transistors143and144provided on a silicon substrate590. The OS transistors included in the layer563can be formed over the layer561. A conductive layer538connected to the transistor102and the transistor103is embedded in an insulating layer572provide in the layer563. The conductive layer538is planarized to be level with the insulating layer572. The OS transistors included in the layer562can be formed over the layer564. A conductive layer537connected to the transistor105and the transistor108is embedded in an insulating layer571provided in the layer562. The conductive layer537is planarized to be level with the insulating layer571. The conductive layer537and the conductive layer538can be formed using the same material as that of the conductive layer531. The insulating layer571and the insulating layer572can be formed using the same material as that of the insulating layer541. By bonding the insulating layer572included in the layer563and the insulating layer571included in the layer562, the layer563and the layer562are attached to each other to have mechanical strength. Moreover, by bonding the conductive layer538included in the layer563and the conductive layer537included in the layer562, the layer563and the layer562are electrically connected to each other. 
The structure illustrated inFIG.24AandFIG.24Bis a four-layer structure (a layer including a Si photodiode \ a layer including OS transistors \ a layer including OS transistors \ a layer including Si transistors), which can be formed through one bonding step. An OS transistor can be formed to be stacked over a silicon substrate on which a device is formed, and thus a bonding step can be skipped. Although an example in which both the layer562and the layer563include the transistors included in the pixel circuit is illustrated inFIG.24B, the structure is not limited thereto and one of the layers may include a pixel circuit and the other may include a memory circuit, for example. Furthermore, in addition to the driver circuit of the pixel circuit, a memory circuit such as a DRAM (Dynamic Random Access Memory), a neural network, a communication circuit, a CPU, or the like may be provided in the layer564, for example. Furthermore, part of the circuit included in the layer564may be formed using OS transistors provided in the layer563. Since an OS transistor has an extremely low off-state current, a data retention function of a circuit can be increased when the OS transistor is used as a transistor connected to a data retention portion. Accordingly, the frequency of refresh operation of a memory circuit can be reduced, which can reduce power consumption. A normally-off CPU (also referred to as “Noff-CPU”) can be formed using an OS transistor. Note that the Noff-CPU is an integrated circuit including a normally-off transistor, which is in a non-conduction state (also referred to as an off state) even when a gate voltage is 0 V. In the Noff-CPU, power supply to a circuit that does not need to operate can be stopped so that the circuit can be brought into a standby state. The circuit brought into the standby state because of the stop of power supply does not consume power. Thus, the power usage of the Noff-CPU can be minimized. Moreover, the Noff-CPU can retain data necessary for operation, such as setting conditions, for a long time even when power supply is stopped. The return from the standby state requires only restart of power supply to the circuit and does not require rewriting of setting conditions or the like. In other words, high-speed return from the standby state is possible. As described here, the Noff-CPU can have a reduced power consumption without a significant decrease in operation speed. FIG.25Ais a perspective view illustrating an example in which a color filter and the like are added to the pixel of the imaging device of one embodiment of the present invention. The perspective view also illustrates cross sections of a plurality of pixels. An insulating layer580is formed over the layer561where the photoelectric conversion device101is formed. As the insulating layer580, a silicon oxide film with a high light-transmitting property with respect to visible light can be used, for example. In addition, a silicon nitride film may be stacked as a passivation film. A dielectric film of hafnium oxide or the like may be stacked as an anti-reflection film. A light-blocking layer581may be formed over the insulating layer580. The light-blocking layer581has a function of inhibiting color mixing of light passing through the upper color filter. As the light-blocking layer581, a metal layer of aluminum, tungsten, or the like can be used. The metal layer and a dielectric film having a function of an anti-reflection film may be stacked. 
An organic resin layer582can be provided as a planarization film over the insulating layer580and the light-blocking layer581. A color filter583(color filters583a,583b, and583c) is formed in each pixel. Color images can be obtained, for example, when colors of R (red), G (green), B (blue), Y (yellow), C (cyan), M (magenta), and the like are assigned to the color filters583a,583b, and583c. An insulating layer586having a light-transmitting property with respect to visible light can be provided over the color filter583, for example. As illustrated inFIG.25B, an optical conversion layer585may be used instead of the color filter583. Such a structure enables the imaging device to obtain images in various wavelength regions. For example, when a filter that blocks light having a wavelength shorter than or equal to that of visible light is used as the optical conversion layer585, an infrared imaging device can be obtained. When a filter that blocks light having a wavelength shorter than or equal to that of near infrared light is used as the optical conversion layer585, a far-infrared imaging device can be obtained. When a filter that blocks light having a wavelength longer than or equal to that of visible light is used as the optical conversion layer585, an ultraviolet imaging device can be obtained. Furthermore, when a scintillator is used as the optical conversion layer585, an imaging device that obtains an image visualizing the intensity of radiation, which is used for an X-ray imaging device or the like, can be obtained. Radiation such as X-rays passes through an object and enters the scintillator, and then is converted into light (fluorescence) such as visible light or ultraviolet light owing to a photoluminescence phenomenon. Then, the photoelectric conversion device101detects the light to obtain image data. Furthermore, the imaging device having this structure may be used in a radiation detector or the like. A scintillator contains a substance that, when irradiated with radiation such as X-rays or gamma-rays, absorbs energy of the radiation to emit visible light or ultraviolet light. For example, a resin or ceramics in which Gd2O2S:Tb, Gd2O2S:Pr, Gd2O2S:Eu, BaFCl:Eu, NaI, CsI, CaF2, BaF2, CeF3, LiF, LiI, ZnO, or the like is dispersed can be used. In the photoelectric conversion device101containing a selenium-based material, radiation such as X-rays can be directly converted into charge; thus, a structure that does not require a scintillator can be employed. As illustrated inFIG.25C, a microlens array584may be provided over the color filter583. Light passing through an individual lens of the microlens array584goes through the color filter583directly under the lens, and the photoelectric conversion device101is irradiated with the light. The microlens array584may be provided over the optical conversion layer585illustrated inFIG.25B. Examples of a package and a camera module in each of which an image sensor chip is placed will be described below. For the image sensor chip, the structure of the above imaging device can be used. FIG.26A1is an external perspective view of the top surface side of a package in which an image sensor chip is placed. The package includes a package substrate410to which an image sensor chip450(see FIG.26A3) is fixed, a cover glass420, an adhesive430for bonding them, and the like. FIG.26A2is an external perspective view of the bottom surface side of the package. 
A BGA (Ball grid array) in which solder balls are used as bumps440on the bottom surface of the package is employed. Note that, without being limited to the BGA, an LGA (Land grid array), a PGA (Pin Grid Array), or the like may be employed. FIG.26A3is a perspective view of the package, in which parts of the cover glass420and the adhesive430are not illustrated. Electrode pads460are formed over the package substrate410, and the electrode pads460and the bumps440are electrically connected to each other via through-holes. The electrode pads460are electrically connected to the image sensor chip450through wires470. FIG.26B1is an external perspective view of the top surface side of a camera module in which an image sensor chip is placed in a package with a built-in lens. The camera module includes a package substrate411to which an image sensor chip451is fixed, a lens cover421, a lens435, and the like. Furthermore, an IC chip490(see FIG.26B3) having functions of a driver circuit, a signal conversion circuit, and the like of the imaging device is provided between the package substrate411and the image sensor chip451(see FIG.26B3); thus, the structure as an SiP (System in package) is included. FIG.26B2is an external perspective view of the bottom surface side of the camera module. A QFN (Quad flat no-lead package) structure in which lands441for mounting are provided on the bottom surface and side surfaces of the package substrate411is employed. Note that this structure is only an example, and a QFP (Quad flat package) or the above-mentioned BGA may also be provided. FIG.26B3is a perspective view of the module, in which parts of the lens cover421and the lens435are not illustrated. The lands441are electrically connected to electrode pads461, and the electrode pads461are electrically connected to the image sensor chip451or the IC chip490through wires471. The image sensor chip placed in a package having the above form can be easily mounted on a printed substrate or the like, and the image sensor chip can be incorporated into a variety of semiconductor devices and electronic devices. This embodiment can be combined with any of the other embodiments and examples as appropriate. Embodiment 3 As electronic devices that can include the imaging device of one embodiment of the present invention, display devices, personal computers, image memory devices or image reproducing devices provided with storage media, mobile phones, game machines including portable game machines, portable data terminals, e-book readers, cameras such as video cameras and digital still cameras, goggle-type displays (head mounted displays), navigation systems, audio reproducing devices (car audio players, digital audio players, and the like), copiers, facsimiles, printers, multifunction printers, automated teller machines (ATM), vending machines, and the like are given. Specific examples of these electronic devices are illustrated inFIG.27AtoFIG.27F. FIG.27Ais an example of a mobile phone, which includes a housing981, a display portion982, an operation button983, an external connection port984, a speaker985, a microphone986, a camera987, and the like. The display portion982of the mobile phone includes a touch sensor. A variety of operations such as making a call and inputting text can be performed by touch on the display portion982with a finger, a stylus, or the like. The imaging device of one embodiment of the present invention and the operation method thereof can be used for obtaining an image in the mobile phone. 
FIG.27Bis a portable data terminal, which includes a housing911, a display portion912, a speaker913, a camera919, and the like. A touch panel function of the display portion912enables input and output of information. Furthermore, a character or the like in an image that is captured by the camera919can be recognized and the character can be voice-output from the speaker913. The imaging device of one embodiment of the present invention and the operation method thereof can be used for obtaining an image in the portable data terminal. FIG.27Cis a surveillance camera, which includes a support base951, a camera unit952, a protection cover953, and the like. By providing the camera unit952provided with a rotating mechanism and the like on a ceiling, an image of all of the surroundings can be taken. The imaging device of one embodiment of the present invention and the operation method thereof can be used for obtaining an image in the camera unit. Note that a surveillance camera is a name in common use and does not limit the use thereof. A device that has a function of a surveillance camera can also be called a camera or a video camera, for example. FIG.27Dis a video camera, which includes a first housing971, a second housing972, a display portion973, an operation key974, a lens975, a connection portion976, a speaker977, a microphone978, and the like. The operation key974and the lens975are provided for the first housing971, and the display portion973is provided for the second housing972. The imaging device of one embodiment of the present invention and the operation method thereof can be used for obtaining an image in the video camera. FIG.27Eis a digital camera, which includes a housing961, a shutter button962, a microphone963, a light-emitting portion967, a lens965, and the like. The imaging device of one embodiment of the present invention and the operation method thereof can be used for obtaining an image in the digital camera. FIG.27Fis a wrist-watch-type information terminal, which includes a display portion932, a housing and wristband933, a camera939, and the like. The display portion932is provided with a touch panel for performing the operation of the information terminal. The display portion932and the housing and wristband933have flexibility and fit a body well. The imaging device of one embodiment of the present invention and the operation method thereof can be used for obtaining an image in the information terminal. This embodiment can be combined with any of the other embodiments and examples as appropriate. Example 1 In this example, an imaging device having the structure of one embodiment of the present invention described in Embodiment 1 was prototyped. Results of image processing in the imaging device will be described. A block diagram of the prototyped imaging device is shown inFIG.28. OS transistors were used as transistors included in components (e.g., a pixel, a row driver, a CDS circuit, an I-V converter, and a column selector) of the imaging device. The column selector has analog outputs AOUT[15:0]. The principle of analog product-sum operation in the imaging device is described with reference toFIG.29,FIG.30A, andFIG.30B. When a transistor Tr1of the pixel is on, the drain current of a transistor Tr2satisfies a condition of a saturation region and Id=β(Vgs−Vth)2/2. A constant voltage VBIAS is supplied to a transistor Tr5of the I-V converter, and a resistance value R is constant regardless of the voltage of a read line WX. 
An amount of voltage change that occurs in a charge accumulation portion FD of each pixel when photocharges of a photodiode PD are transferred is Xi, and the voltage of filter data supplied from wirings W[8:0] is Wi. FIG.30A and FIG.30B are timing charts showing the operation of the imaging device. TX is a voltage supplied to a wiring TX connected to a gate of a transistor Tr4 included in the pixel. RS is a voltage supplied to a wiring RS connected to a gate of a transistor Tr3 included in the pixel. SE is a voltage supplied to a wiring SE connected to a gate of the transistor Tr1 included in the pixel. CL is a voltage supplied to a wiring CL connected to a gate of a transistor Tr6 included in the CDS circuit. W is a voltage of the wiring W that supplies filter data. FD is a voltage of the charge accumulation portion FD of the pixel. WX is a voltage of the wiring WX that functions as a read line. CDSOUT is a voltage output from an output wiring CDSOUT of the CDS circuit. In the operation according to FIG.30A, read voltages corresponding to the following two conditions can be obtained: the case where filter data Wi is supplied right after the potential of the charge accumulation portion FD of the pixel is reset to a potential VRS and the case where blank filter data (filter data of all 0 V) is supplied. When a difference between these two voltages is generated in the CDS circuit, a voltage V1 can be obtained. Here, a voltage Va1 shown in FIG.30A can be expressed as Va1=VIV−Σi β(VRS+Wi−Vth)²R/6. In addition, a voltage Vb1 can be expressed as Vb1=VIV−Σi β(VRS−Vth)²R/6. The voltage V1 can be expressed as V1=VCL+Vb1−Va1=VCL+Σi β(2(VRS−Vth)Wi+Wi²)R/6. Note that VIV is a voltage supplied to the transistor Tr5 of the I-V converter. β is a constant. VCL is a voltage supplied to the transistor Tr6 of the CDS circuit. Vth is the threshold voltage of the transistor Tr2. On the other hand, in the operation according to FIG.30B, a voltage V2 can be obtained by performing similar processing after the charge accumulation portion FD is reset and photocharges are transferred to the charge accumulation portion FD. Here, a voltage Va2 shown in FIG.30B can be expressed as Va2=VIV−Σi β(VRS+Xi+Wi−Vth)²R/6. In addition, a voltage Vb2 can be expressed as Vb2=VIV−Σi β(VRS+Xi−Vth)²R/6. The voltage V2 can be expressed as V2=VCL+Vb2−Va2=VCL+Σi β(2(VRS+Xi−Vth)Wi+Wi²)R/6. When a difference between the two obtained voltages (V1, V2) is calculated in an external circuit, a voltage V2−V1=Σi βXiWiR/3 is obtained. Thus, the product-sum operation of the imaging data and the filter data can be performed. An operation of obtaining imaging data and performing a convolutional operation of filter data with the imaging device illustrated in FIG.30, with the use of the above analog product-sum operation, is described with reference to a timing chart of FIG.31A. Note that FIG.31B is a diagram illustrating 3×3 filter positions on four units (one unit is 3×3 pixels). After the charge accumulation portions FD of all of the pixels are reset, 3×3 pixels to be subjected to the product-sum operation are selected with the use of the row driver and a switch control signal potential supplied from a wiring SY. The row driver supplies selection signal potentials to the wirings SE for three adjacent rows at a time, and all of the wirings WX are short-circuited every three adjacent columns by the switch control signal potentials supplied from the wirings SY. Thus, 80 sets of 3×3 pixels are selected at once, and 80 kinds of voltages are input to the CDS circuit.
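For readers who want to check the algebra, the following Python sketch evaluates the expressions for V1 and V2 given above (with Id=β(Vgs−Vth)²/2 as the square-law drain current of Tr2) and confirms numerically that V2−V1 reduces to Σi βXiWiR/3. All component values in the sketch are arbitrary assumptions chosen for illustration; they are not the parameters of the prototyped device.

```python
import numpy as np

# Arbitrary illustrative values (assumptions, not the prototype's parameters).
beta = 1.2e-4   # transconductance parameter of Tr2 (A/V^2)
R    = 1.0e4    # effective resistance of the I-V converter (ohm)
VIV  = 3.3      # voltage supplied to Tr5 of the I-V converter (V)
VCL  = 1.0      # voltage supplied to Tr6 of the CDS circuit (V)
VRS  = 1.5      # reset potential of the charge accumulation portion FD (V)
Vth  = 0.7      # threshold voltage of Tr2 (V)

rng = np.random.default_rng(0)
Wi = rng.uniform(0.0, 0.5, 9)   # filter data for one 3x3 set of pixels
Xi = rng.uniform(0.0, 0.5, 9)   # photo-induced voltage changes of the same set

# First CDS cycle: filter data right after reset, then blank filter data.
Va1 = VIV - np.sum(beta * (VRS + Wi - Vth) ** 2) * R / 6
Vb1 = VIV - len(Wi) * beta * (VRS - Vth) ** 2 * R / 6   # blank data: Wi = 0 for all i
V1  = VCL + Vb1 - Va1

# Second CDS cycle: same sequence after the photocharges are transferred.
Va2 = VIV - np.sum(beta * (VRS + Xi + Wi - Vth) ** 2) * R / 6
Vb2 = VIV - np.sum(beta * (VRS + Xi - Vth) ** 2) * R / 6
V2  = VCL + Vb2 - Va2

# External subtraction yields the product-sum of imaging data and filter data.
expected = np.sum(beta * Xi * Wi) * R / 3
print(V2 - V1, expected)             # the two values agree
assert np.isclose(V2 - V1, expected)
```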
Note that there are 240 read lines connected to the CDS circuit, and every three adjacent lines which are short-circuited by the switch control signal potentials supplied from the wirings SY have the same voltage. At this time, the filter data Wi is supplied, the CDS circuit is reset, and then the blank filter data is supplied, whereby 80 kinds of voltages corresponding to the above voltages V1 can be generated in the CDS circuit. Selection of 3×3 pixels with the use of the row driver and the switch control signal potentials supplied from the wirings SY is sequentially shifted, and the generated 80 kinds of voltages V1 are sequentially read out to the outside. Next, after the pixels are reset and the imaging operation is performed, voltages corresponding to the voltages V2 are read out to the outside by performing an operation similar to the above. Lastly, differences between all V1 and V2 that are read out are calculated in an external circuit, whereby a product-sum operation of all combinations in the shift operation, i.e., the convolutional operation, can be performed. Note that the imaging device can also perform a normal imaging operation. For the normal imaging operation, the voltage VIV and the voltage VBIAS are adjusted such that the transistor in the I-V converter functions as a bias transistor of a source follower, and the row driver sequentially activates the wirings SE one by one. This means that a convolutional operation function can be added without addition of another element to a normal imaging device, which can be said to be advantageous in terms of the mount area of the imaging device. The above imaging device was prototyped using an OS transistor having a channel length of 0.5 μm. FIG.32 is a diagram illustrating a structure of the OS transistor. The OS transistor has a double-gate structure and includes a semiconductor layer (CAAC-IGZO)601, a gate insulating film602, a gate electrode603, a source electrode or drain electrode604, a buffer layer605, a gate insulating film606 on a back gate side, and a gate electrode607 on the back gate side. FIG.33 shows the Id-Vd characteristics (Vg=1, 3, 5, 7 V) of the OS transistor (W/L=0.5 μm/0.5 μm). FIG.34 is a top view photograph of the prototyped imaging device. Table 1 shows the specifications.
TABLE 1
Process: OS transistor with channel length of 0.5 μm
Die size: 8.0 mm × 8.0 mm
Number of pixels: 240 (H) × 162 (V)
Pixel size: 15 μm × 15 μm
Pixel configuration: 4 transistors, 1 capacitor
Output: 16-ch analog voltage
Conversion efficiency: 1.66 μV/h+
Full well capacity: 612 kh+
Read noise: 751 h+ rms
Aperture ratio: 79.8%
Frame rate: 7.5 fps
Power consumption: normal capturing 3.80 mW; convolutional operation 7.06 mW
Multiplication efficiency: 0.805 GOp/s/W
CNN filter size: 3 (H) × 3 (V)
CNN stride: 1
Crystalline selenium, which has high affinity for the OS transistor process, was used for a photoelectric conversion layer used for the photoelectric conversion device PD. FIG.35 shows photocurrent characteristics of the photoelectric conversion device using crystalline selenium. The amount X of voltage change in the photoelectric conversion device PD caused by photocharges was imitated by changing the voltage VRS for resetting the pixel, the filter data (W) was swept with respect to a plurality of voltage change amounts (X), and the multiplication characteristics were measured. FIG.36A shows theoretical values and the measured values of the product-sum operation, and FIG.36B shows integral nonlinearity.
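As a cross-check of the readout sequence described above, the short Python sketch below models in software what the shift-and-read operation computes overall: a 3×3, stride-1 convolution of the pixel voltage changes X with the filter data W, each output being the corresponding V2−V1=Σi βXiWiR/3. The array size matches the 240×162 pixel count in Table 1, but the input values and the example filter are assumptions made only for demonstration.

```python
import numpy as np

def in_sensor_convolution(X, W, beta=1.0, R=3.0):
    """Software model of the analog convolution: every 3x3 window of voltage
    changes X contributes one output V2 - V1 = sum(beta * Xi * Wi) * R / 3."""
    rows, cols = X.shape
    out = np.zeros((rows - 2, cols - 2))
    for r in range(rows - 2):
        for c in range(cols - 2):
            out[r, c] = np.sum(beta * X[r:r + 3, c:c + 3] * W) * R / 3
    return out

# Illustrative inputs (assumptions): random voltage changes for a 162x240
# pixel array and a horizontal-stripe-like 3x3 filter.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 0.5, (162, 240))
W = np.array([[ 0.1,  0.1,  0.1],
              [ 0.0,  0.0,  0.0],
              [-0.1, -0.1, -0.1]])

feature_map = in_sensor_convolution(X, W)
print(feature_map.shape)   # (160, 238): one value per 3x3 window, stride 1
```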
The evaluation subject was one set of 3×3 pixels, and the voltage change amount (X) and the voltage value of the filter data (W) were the same for all the pixels. It was confirmed that the multiplication characteristic of 4-bit accuracy was able to be obtained in the voltage range of X≤0.5 V. Next, a nature image was captured and subjected to a convolutional operation with two kinds of filter data, whereby the feature values of the image were extracted.FIG.37is the image subjected to the operation and is a photographic image of zebras.FIG.38Ashows filter data for extracting horizontal stripes, andFIG.38Bis an image extracted using the filter data.FIG.39Ashows filter data for extracting vertical stripes, andFIG.39Bis an image extracted using the filter data. It was found that the horizontal direction components and the vertical direction components of patterns on the body surfaces of the zebras were able to be extracted by performing a convolutional operation with the filter data for the horizontal stripes and the vertical stripes. Next, with the use of the filter for extracting the horizontal stripes inFIG.38A, the feature value extraction characteristics when an image painted black and white with one straight line as a border was rotated, which is shown inFIG.40A, were evaluated.FIG.40Bis a diagram showing how to calculate the feature values. The feature value was defined as a difference between product-sum operation results of 3×3 pixels with two vertically adjacent pixels as a center in the rotational center on the border. FIG.40Cis a diagram showing theoretical values and the measured values of the feature value extraction. It was found from the relation between the normalized feature value and the rotation angle that the feature value was extracted most clearly at 0° and was able to be extracted at up to approximately 40°. In the above manner, the convolutional operation in the imaging device of the present invention was able to be demonstrated. This example can be combined with any of the other embodiments as appropriate. Example 2 A variety of studies utilizing an AI (Artificial Intelligence) system have been carried out in recent years. For application for autonomous driving of a passenger car or the like, practical implementation of behavior recognition in which an object and a background in an image are separately extracted and the movement of the object is detected has been desired. In this example, described is an example of an experiment in which an object and a background in an image were separately extracted in an attempt to use the imaging device described in Embodiment 1 for object recognition. FIG.41Ais a photograph showing a display device, an imaging device800, and one PC (notebook personal computer)810to which imaging data is input from the imaging device800, which were used for the experiment. InFIG.41A, after the imaging device800reads an image displayed on the display surface of the display device as video, with a segmentation module, a result of detecting an object in the image is displayed as a monochrome image. InFIG.41A, the result of outputting an object in white and a background in black is shown in part of the display screen of the one PC810(the left side of the screen). The segmentation module includes software for generating a plurality of image segments for use in image analysis. 
In this example, the one PC810 is used to perform segmentation on the basis of the learned content by using U-net, which is a kind of convolutional neural network for image processing. Note that segmentation refers to processing for recognizing what object each pixel of an input image displays. This is also referred to as semantic segmentation. FIG.41B is a schematic diagram showing the state of data processing. A first image801 is imaging data shown schematically, and a plurality of pieces of map information802 obtained with the imaging device800 are shown next to the image801. The imaging device800 can perform feature value extraction in the pixels, what is called convolution, and can obtain the plurality of pieces of map information802. That is, the convolutional calculation of a first layer of U-net is performed. With the use of the map information802, the calculation of the second and the following layers of U-net is performed in the PC810. As a result, the probability of what object each pixel displays is output as an output of U-net. In this example, an image is generated in which pixels that are most likely the background are indicated in black and other pixels are indicated in white. That is, image data output to the screen of the PC810, in which an object region803a of the input image is white and a background region803b is black, is obtained. In this way, with the use of the imaging device and the segmentation module, the background region and the object region were able to be distinguished in the imaging data. The imaging device described in Embodiment 1 can extract the feature values in the pixels and obtain a plurality of pieces of map information; thus, the arithmetic processing can be reduced as compared to the conventional device and a result can be obtained in a short time. A CNN model requires a large amount of convolution processing. The convolution processing employs a product-sum operation, and thus the CNN model has a big advantage for an LSI chip that can form a power-saving product-sum operation circuit, in particular, an IC chip (e.g., a Noff-CPU) using transistors containing an oxide semiconductor material. For example, an IC with an AI system (also referred to as an inference chip) is preferably used. The segmentation described in this example can be applied to autonomous driving of a passenger car or the like. This example can be combined with any of the other embodiments as appropriate.
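The division of labor described in this example (the first convolutional layer computed inside the pixel array, the remaining U-net layers computed on the PC) can be summarized by the following Python sketch. The feature maps and the "remaining layers" here are placeholder stand-ins introduced only to show the data flow; they are not the trained network or the actual device interface.

```python
import numpy as np

# Placeholder for the map information 802 returned by the imaging device 800:
# one feature map per in-pixel filter (random data stands in for real output).
rng = np.random.default_rng(3)
sensor_feature_maps = rng.normal(size=(9, 160, 238))   # 9 filters, 160x238 maps

def remaining_unet_layers(feature_maps):
    """Toy stand-in for the second and following U-net layers run on the PC;
    a real system would load trained weights and run the full network."""
    score = feature_maps.mean(axis=0)               # combine the feature maps
    return (score > score.mean()).astype(np.uint8)  # 1 = object (white), 0 = background (black)

mask = remaining_unet_layers(sensor_feature_maps)
print(mask.shape, mask.dtype)   # (160, 238) uint8 binary segmentation mask
```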
REFERENCE NUMERALS 100: pixel,100a: pixel,100b: pixel,100c: pixel,100d: pixel,100e: pixel,100f: pixel,100g: pixel,100h: pixel,100i: pixel,100j: pixel,101: photoelectric conversion device,102: transistor,103: transistor,104: capacitor,105: transistor,106: transistor,107: transistor,108: transistor,109: transistor,111: wiring,112: wiring,112_1: wiring,112_2: wiring,113: wiring,113_1: wiring,113_2: wiring,113a: wiring,113b: wiring,114: wiring,115: wiring,116: wiring,117: wiring,118: wiring,122: wiring,122_n: wiring,122_n−1: wiring,122_n−2: wiring,122_1: wiring,122_2: wiring,123: wiring,123_n: wiring,123_n−2: wiring,131: transistor,132: transistor,141: transistor,142: transistor,161: transistor,162: transistor,163: capacitor,170: circuit,200: pixel block,200a: pixel block,200b: pixel block,200c: pixel block,200d: pixel block,200e: pixel block,200f: pixel block,201: circuit,201a: circuit,201b: circuit,202: capacitor,203: transistor,204: transistor,205: transistor,206: transistor,207: resistor,211: wiring,212: wiring,213: wiring,215: wiring,216: wiring,217: wiring,218: wiring,219: wiring,300: pixel array,301: circuit,302: circuit,303: circuit,304: circuit,305: circuit,311: wiring,311_1: wiring,311_2: wiring,320: memory cell,325: reference memory cell,330: circuit,340: circuit,350: circuit,360: circuit,370: circuit,410: package substrate,411: package substrate,420: cover glass,421: lens cover,430: adhesive,435: lens,440: bump,441: land,450: image sensor chip,451: image sensor chip,460: electrode pad,461: electrode pad,470: wire,471: wire,490: IC chip,531: conductive layer,531a: conductive layer,531b: conductive layer,532: conductive layer,532a: conductive layer,532b: conductive layer,533: conductive layer,534: conductive layer,535: back gate,536: conductive layer,537: conductive layer,538: conductive layer,540: silicon substrate,541: insulating layer,542: insulating layer,543: insulating layer,545: semiconductor layer,546: insulating layer,547: insulating layer,548: insulating layer,550: silicon substrate,551: conductive layer,552: conductive layer,553: conductive layer,554: conductive layer,560: insulating layer,561: layer,562: layer,563: layer,564: layer,565a: layer,565b: layer,565c: layer,566a: layer,566b: layer,566c: layer,566d: layer,567a: layer,567b: layer,567c: layer,567d: layer,567e: layer,571: insulating layer,572: insulating layer,580: insulating layer,581: light-blocking layer,582: organic resin layer,583: color filter,583a: color filter,583b: color filter,583c: color filter,584: microlens array,585: optical conversion layer,586: insulating layer,590: silicon substrate,602: gate insulating film,603: gate electrode,604: drain electrode,605: buffer layer,606: gate insulating film,607: gate electrode,701: gate electrode,702: gate insulating film,703: source region,704: drain region,705: source electrode,706: drain electrode,707: oxide semiconductor layer,800: imaging device,801: image,802: map information,803a: object region,803b: background region,810: PC,911: housing,912: display portion,913: speaker,919: camera,932: display portion,933: housing and wristband,939: camera,951: support base,952: camera unit,953: protection cover,961: housing,962: shutter button,963: microphone,965: lens,967: light-emitting portion,971: housing,972: housing,973: display portion,974: operation key,975: lens,976: connection portion,977: speaker,978: microphone,981: housing,982: display portion,983: operation button,984: external connection port,985: speaker,986: microphone,987: camera.
120,261
11943555
DETAILED DESCRIPTION OF THE EMBODIMENTS The technical solution in the embodiment of the present application will be described with reference to the drawings in the embodiment of the present application. In the description of embodiments of the present application, the terms used in the following embodiments are for the purpose of describing specific embodiments only and are not intended to be limiting to the present application. As used in the description of the present application and the appended claims, the singular expressions “a”, “the”, “above”, “said” and “this” are intended to also include such expressions as “one or more”, unless the context expressly dictates to the contrary. It should also be understood that in the following embodiments of the present application, “at least one” and “one or more” refer to one or more than two (inclusive). The term “and/or” is used to describe the association relationship of associated objects, indicating that there can be three relationships. For example, A and/or B can represent the case where A exists alone, A and B exist at the same time, and B exists alone, where A and B can be singular or plural. The character “/” generally indicates that the related objects are an alternative relationship. References to “one embodiment” or “some embodiments” or the like described in the description are intended to include in one or more embodiments of the present application particular features, structures, or features described in conjunction with the embodiment. Thus, statements “in one embodiment,” “in some embodiments,” “in other embodiments,” “in yet other embodiments,” and the like appearing in differences in the description do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments,” unless otherwise specifically emphasized. The terms “including”, “comprising”, “having” and variations thereof all mean “including but not limited to”, unless otherwise specifically emphasized. The term “connection” includes direct connection and indirect connection, unless otherwise stated. The terms “first”, “second” are for descriptive purposes only and cannot be construed as indicating or implying relative importance or implying the number of the indicated technical features. In embodiments of the present application, the words “exemplary” or “for example” are used as examples, illustrations, or description. Any embodiment or design described as “exemplary” or “for example” in embodiments of the present application should not be construed as being more preferred or advantageous than other embodiments or designs. Rather, the use of the words “exemplary” or “for example” is intended to present related concepts in a concrete manner. In the prior art, the overall circuit configuration of the CMOS image sensor is shown inFIG.1. The circuit configuration consists of a pixel array, a column ADC module, a column memory unit (not shown inFIG.1), a data processing section and a control logic section. The pixel array is composed of several pixel units, and the pixel units convert photocharges into analog voltage quantity after photosensing. The pixel signals of the pixel units in each column of the pixel array are converted from analog to digital by the corresponding ADC in the column ADC module, and then the subsequent data processing is carried out. There are M+1 columns of pixel array inFIG.1, corresponding to M+1 ADCs, which can be ADC (0), ADC (1), . . . , ADC (M), that is, each column in the pixel array corresponds to one ADC. 
At present, the commonly used CIS readout circuit architecture adopts a unilateral parallel ADC readout architecture, as shown in FIG. 2. That is, each ADC corresponds to the output of a column of pixels, and a pitch size (PZ) of each ADC in the column ADC module is exactly the same as a pitch size of each column of pixel units in the pixel array. When a size of the pixel unit is small, for example, PZ<2 μm, it is difficult to dispose the ADC in such a small width due to process limitations. Therefore, in this case, the ADCs in the column ADC module are arranged on both sides of the pixel array, as shown in FIG. 3. However, since the ADCs are arranged on both sides of the pixel array, the deviation in process conditions and power supply on the two sides of the pixel array will lead to a slight difference between the ADCs on the two sides, which is fixed for the same CIS chip. Even if the light is uniformly distributed, there will be a difference between the outputs of the ADCs on the two sides of the pixel array. That is, as shown in FIG. 3, there is a fixed deviation between the pixel signal output through the Gr channel in the even-numbered column and the pixel signal output through the Gb channel in the odd-numbered column. The Gr channel refers to a channel formed by a green pixel unit Gr and a transmission bus connected to the pixel. The Gb channel refers to a channel formed by a green pixel unit Gb and a transmission bus connected to the pixel. The general filter implementation of the CIS is based on a Bayer pattern, that is, an RGB pattern including R (red), B (blue), Gr (green) and Gb (green) components. When normal illumination comes in, different colors of light pass through the color filter, so that R light, G light and B light reach the CIS, respectively. Then the CIS completes the conversion from illumination intensity to digital quantity, and then synthesizes the final image through the corresponding interpolation algorithm. FIG. 4 shows a simple Bayer pattern-based pixel array including a green pixel unit Gr, a green pixel unit Gb, a red pixel unit R and a blue pixel unit B (not labeled in FIG. 4). As shown in FIG. 4, assuming that the red pixel unit R is a pixel point to be interpolated, if the green light component is interpolated to this point, it is necessary to collect the values of four nearby green light components (Gr1, Gr2, Gb1, Gb2) to synthesize the green component of the pixel point. Ideally, the quantity after averaging the surrounding four green components is the interpolation quantity, and the gradient between the pixel point and the surrounding four pixels is 0, that is, R=Gr1=Gr2=Gb1=Gb2. But if there is a difference ΔK between the Gr channel and the Gb channel, Gr1=Gr2=Gb1+ΔK=Gb2+ΔK. In a case of the common bilinear interpolation, R=[(Gr1+Gr2)/2+(Gb1+Gb2)/2]/2=Gb1+ΔK/4, in which case, the gradient of the red pixel unit R from the green pixel unit Gr is |Gr1−R|+|Gr2−R|=3/2*ΔK. The gradient of the red pixel unit R from the green pixel unit Gb is |Gb1−R|+|Gb2−R|=1/2*ΔK. Due to the existence of different gradients, it is inevitable that steps or pseudo-edges appear in flat areas. The interpolation synthesis algorithm of an actual CIS is not as simple as this; the edge will be irregular, and the so-called "maze pattern" will be formed on the image.
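To make the mechanism concrete, the following Python sketch simulates a flat scene whose Gr samples carry a fixed offset ΔK relative to Gb and interpolates the green plane with a plain four-neighbour average. The numbers (G0, DELTA_K, the 8×8 tile and the particular Bayer parity) are illustrative assumptions rather than values from this description, and the simple average stands in for whatever interpolation kernel an actual CIS uses; even so, the interpolated green plane of a perfectly flat scene acquires a periodic step of ΔK, which is the seed of the maze pattern described above.

```python
import numpy as np

G0, DELTA_K = 1000.0, 8.0     # flat green level and the Gr/Gb channel offset (illustrative)
H = W = 8                     # small Bayer tile

# Build a flat scene seen through the Bayer filter: Gr samples sit DELTA_K
# above Gb samples, exactly as when the two green channels are digitized by
# ADC banks on opposite sides of the array.
bayer = np.full((H, W), G0)
for r in range(H):
    for c in range(W):
        if r % 2 == 0 and c % 2 == 0:
            bayer[r, c] = G0 + DELTA_K     # Gr site (assumed at even row, even column)
        elif r % 2 == 1 and c % 2 == 1:
            bayer[r, c] = G0               # Gb site

# Interpolate the green plane: keep native greens, average the four green
# neighbours at the red/blue sites (a plain bilinear demosaic step).
green = np.full((H, W), np.nan)
for r in range(1, H - 1):
    for c in range(1, W - 1):
        if (r + c) % 2 == 0:               # native green site
            green[r, c] = bayer[r, c]
        else:                              # R or B site
            green[r, c] = (bayer[r - 1, c] + bayer[r + 1, c]
                           + bayer[r, c - 1] + bayer[r, c + 1]) / 4.0

inner = green[1:-1, 1:-1]
print("peak-to-peak of interpolated green plane:", inner.max() - inner.min())
# With DELTA_K = 0 the plane is perfectly flat, so the artificial structure
# comes purely from the Gr/Gb deviation, not from the scene.
```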
In order to eliminate the maze pattern caused by the deviation, the present invention provides a pixel processing circuit, which is applied to an image sensor of a bilateral parallel ADC readout architecture. The circuit comprises a plurality of pixel units arranged in a Bayer array, an Analog-to-Digital Converter (ADC) module comprising a plurality of analog-to-digital converters, and a plurality of switch selection modules. In a preferred embodiment as shown in FIG. 5, the number of the switch selection modules is set to be half of the number of the analog-to-digital converters. In other embodiments, the number of the switch selection modules can be unrelated to that of the analog-to-digital converters. In the embodiment as shown in FIG. 5, the plurality of analog-to-digital converters comprises first analog-to-digital converters located on a first side of the pixel array and second analog-to-digital converters located on a second side of the pixel array opposite to the first side. Each of the switch selection modules is configured to have a first unit disposed between the first side of the pixel array and one of the first analog-to-digital converters, and a second unit disposed between the second side of the pixel array and one of the second analog-to-digital converters. The switch selection modules are configured to switch the communication between the pixel units and the first and second analog-to-digital converters on the opposite first and second sides of the pixel array, such that signals of green pixel units are read by the first analog-to-digital converters located at the first side of the pixel array, such as ADC (0), ADC (2), ADC (4), . . . ADC (n−1), and signals of remaining color pixel units are read by the second analog-to-digital converters located at the second side of the pixel array, such as ADC (1), ADC (3), ADC (5), . . . ADC (n). By controlling the state of the switch selection modules, the present invention achieves the ADC conversion of the pixel signals passing through the Gr channel and the Gb channel on the same side of the pixel array, ensuring that the pixel signals of the Gr channel and the Gb channel do not have a value deviation, thereby avoiding the phenomenon of "maze pattern" after image interpolation and ensuring image quality.
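As a minimal illustration (not the patent's control logic itself), the routing rule stated above can be expressed as a small lookup: green samples are steered to the first-side ADC bank and the remaining colour of each row to the second-side bank, regardless of which physical column the green pixel sits in. The row/column parity chosen for the Bayer mapping below is an assumption made only for the sake of the example.

```python
# Hypothetical Bayer parity: first row holds B/Gb, second row holds Gr/R.
BAYER = {(0, 0): "B", (0, 1): "Gb",
         (1, 0): "Gr", (1, 1): "R"}

def adc_side(row: int, col: int) -> str:
    """Which ADC bank reads the pixel at (row, col) under the switch selection rule."""
    colour = BAYER[(row % 2, col % 2)]
    return ("first-side ADCs (0, 2, 4, ...)" if colour in ("Gr", "Gb")
            else "second-side ADCs (1, 3, 5, ...)")

for r in range(2):
    for c in range(2):
        print((r, c), BAYER[(r % 2, c % 2)], "->", adc_side(r, c))
```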
Optionally, the switch selection module comprises a plurality of switching circuits at least including a first switching circuit, a second switching circuit, a third switching circuit and a fourth switching circuit; wherein a number of columns of the pixel array is even, the first switching circuit and the second switching circuit are located at the first side of the pixel array and the third switching circuit and the fourth switching circuit are located at the second side of the pixel array; the pixel units in each of the odd-numbered columns of the pixel array are respectively electrically connected to one end of the first switching circuit of the switch selection module, and are respectively electrically connected to one end of the third switching circuit of the switch selection module; the pixel units in each of the even-numbered columns of the pixel array are respectively electrically connected to one end of the second switching circuit of the switch selection module, and are respectively electrically connected to one end of the fourth switching circuit of the switch selection module; wherein the other ends of the adjacent first switching circuit and second switching circuit are electrically connected to a common first analog-to-digital converter such as ADC (0), the other ends of the adjacent third switching circuit and fourth switching circuit are electrically connected to a common second analog-to-digital converter such as ADC (1), and each of the switching circuits is electrically connected to only one analog-to-digital converter. Further, optionally, the first switching circuit includes at least one switch, the second switching circuit includes at least one switch, the third switching circuit includes at least one switch, and the fourth switching circuit includes at least one switch. Optionally, in an embodiment of the present invention, each column of pixel units is connected to a corresponding switching circuit through an output bus. The output bus branches can be connected to a common bus via the switching circuits S0, S1, S2 and S3 as shown in FIG. 5. That is to say, each column of pixel units is connected to an output bus branch, which is electrically connected to an end of each of the switching circuits S0, S1, S2 and S3. And the output bus branches can be connected to a common bus such as PIX_OUT (0), PIX_OUT (2), . . . , PIX_OUT (n−1), and PIX_OUT (1), PIX_OUT (3), . . . , PIX_OUT (n).
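The wiring just described can be summarised schematically as follows; the 1-based column numbering and the ADC labels are illustrative only, and each entry simply records which switching circuit connects which column to which shared converter.

```python
def wiring_for_column_pair(pair_index: int) -> dict:
    """Map one odd/even column pair to its switching circuits and shared ADCs."""
    odd_col, even_col = 2 * pair_index + 1, 2 * pair_index + 2   # 1-based column numbers
    first_side_adc = f"ADC({2 * pair_index})"        # shared by S0 and S1
    second_side_adc = f"ADC({2 * pair_index + 1})"   # shared by S3 and S2
    return {
        ("S0", odd_col):  first_side_adc,    # odd column, first side
        ("S1", even_col): first_side_adc,    # even column, first side
        ("S3", odd_col):  second_side_adc,   # odd column, second side
        ("S2", even_col): second_side_adc,   # even column, second side
    }

print(wiring_for_column_pair(0))   # columns 1 and 2 share ADC(0) and ADC(1)
print(wiring_for_column_pair(1))   # columns 3 and 4 share ADC(2) and ADC(3)
```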
Optionally, the pixel unit includes a photodiode, a transfer transistor, a reset transistor, a source-follower transistor, and a row selection transistor; one end of the photodiode is grounded, and the other end of the photodiode is connected to a drain of the transfer transistor; a gate of the transfer transistor is configured for accessing a TX signal, and a source of the transfer transistor is connected to a drain of the reset transistor and a gate of the source-follower transistor; a source of the reset transistor is configured for accessing a power supply signal, and a gate of the reset transistor is configured for accessing an RX signal; a source of the source-follower transistor is configured for accessing the power supply signal, a drain of the source-follower transistor is connected to a source of the row selection transistor, a gate of the row selection transistor is configured for accessing an SEL signal, and a drain of the row selection transistor is connected to the output bus of the pixel unit; wherein the drain of the reset transistor is connected to the source of the row selection transistor to obtain a row selection control signal for transferring the pixel signal through the row selection transistor. Optionally, the analog-to-digital converter includes a comparator and a counter electrically connected to the comparator. As shown in FIG. 5, a pixel processing circuit applied to an image sensor of a bilateral parallel ADC readout architecture includes a pixel array composed of pixel units, an ADC module composed of n+1 analog-to-digital converters, and a plurality of switch selection modules, where n is an odd number greater than or equal to 1. Each of the switch selection modules includes a first switching circuit S0, a second switching circuit S1, a third switching circuit S3 and a fourth switching circuit S2. The ADC module includes first analog-to-digital converters disposed on a first side of the pixel array and second analog-to-digital converters disposed on an opposite second side of the pixel array; wherein the number of columns of the pixel array is n+1; the pixel units in each of the odd-numbered columns of the pixel array are respectively electrically connected to one end of the first switching circuit S0 and electrically connected to one end of the third switching circuit S3. That is to say, the pixel units in each of the odd-numbered columns are electrically connected to only one first switching circuit S0 and only one third switching circuit S3. And the pixel units in each of the even-numbered columns of the pixel array are respectively electrically connected to one end of the second switching circuit S1, and are respectively electrically connected to one end of the fourth switching circuit S2. That is, the pixel units in each of the even-numbered columns are electrically connected to only one second switching circuit S1 and only one fourth switching circuit S2. And the other ends of the adjacent first switching circuit S0 and second switching circuit S1 are electrically connected to a common first analog-to-digital converter such as ADC (0), the other ends of the adjacent third switching circuit S3 and fourth switching circuit S2 are electrically connected to a common second analog-to-digital converter such as ADC (1), and each of the first switching circuit S0, the second switching circuit S1, the third switching circuit S3 and the fourth switching circuit S2 is electrically connected to only one analog-to-digital converter.
The ADC module includes: ADC (0), ADC (1), ADC (2), ADC (3), . . . ADC (n−1), ADC (n), and the first analog-to-digital converters of ADC (0), ADC (2), . . . , ADC (n−1), and the second analog-to-digital converters of ADC (1), ADC (3), . . . ADC (n) are respectively arranged on the two sides of the pixel array. Based on the pixel processing circuit provided by the embodiment, the present invention also provides a reading method of the pixel processing circuit, which includes: performing row-by-row reading and quantization processing for pixel units in a pixel array according to a row selection control signal, including: adjusting a conduction state of each switch selection module so that pixel signals of green pixel units and pixel signals of remaining color pixel units in each row of the pixel array are respectively read by the analog-to-digital converters located at different sides of the pixel array, while pixel signals of the green pixel units in different rows of the pixel array are read by the analog-to-digital converters located at a same side of the pixel array; the remaining color pixel units being red pixel units and/or blue pixel units. In one possible embodiment, the conduction state of the switch selection module includes: at a time of a read phase, a first switching circuit and a fourth switching circuit of the switch selection module being in a same conduction state, a second switching circuit and a third switching circuit being in the same conduction state, the first switching circuit and the third switching circuit being in opposite conduction states, and the conduction state of the switch selection module being opposite when reading odd-numbered rows and even-numbered rows of the pixel array. The readout process of pixel signals related to different colors in the pixel array is explained by the operation principle of the first switching circuit S0, the second switching circuit S1, the third switching circuit S3 and the fourth switching circuit S2. The pixel array in FIG. 5 includes green pixel units Gr, green pixel units Gb, red pixel units R, and blue pixel units B arranged in a Bayer pattern. When a state of a control signal of the first switching circuit S0 is "high", the first switching circuit S0 is turned on. When the state of the control signal of the first switching circuit S0 is "low", the first switching circuit S0 is turned off. The second switching circuit S1, the fourth switching circuit S2, and the third switching circuit S3 are turned on and off in the same way. As shown in FIG. 5, the first row of pixel units of the pixel array includes green pixel units Gb and blue pixel units B. When the TX signal is "low" and the transfer transistor is turned off, firstly the first switching circuit S0 and the fourth switching circuit S2 are controlled to be turned off, the second switching circuit S1 and the third switching circuit S3 are turned on, a reset signal of the blue pixel unit B is read out as B_VRST by ADC (1), and a reset signal Gb_VRST of the green pixel unit Gb is read out by ADC (0). After that, the states of the first switching circuit S0, the second switching circuit S1, the fourth switching circuit S2 and the third switching circuit S3 are controlled to remain unchanged, the TX signal becomes "high", the transfer transistor is turned on, an integration signal Gb_VSIG of the green pixel unit Gb is read out by the ADC (0), and an integration signal B_VSIG of the blue pixel unit B is read out by the ADC (1).
It can be seen that the pixel signal passing through the Gb channel electrically connected to the second switching circuit S1 is read out by the ADC (0). Then, in the same way, the pixel signals of the first row of pixel units continue to be read, the pixel signals of the green pixel units Gb are respectively read by the first analog-to-digital converters of ADC (0), ADC (2), ADC (4), . . . ADC (n−1), while the pixel signals of the blue pixel units B are respectively read by the second analog-to-digital converters of ADC (1), ADC (3), ADC (5), . . . ADC (n). When the pixel signals of the green pixel units of a current row have been read out, the switch selection module needs to be controlled and adjusted so that the conduction states of the first switching circuit S0, the second switching circuit S1, the fourth switching circuit S2 and the third switching circuit S3 are reversed, so that the pixel signals of the green pixel units of a next row are also read out by the first analog-to-digital converters. Specifically, when reading the second row of pixel units, the first switching circuit S0 and the fourth switching circuit S2 are controlled to be turned on, the second switching circuit S1 and the third switching circuit S3 are turned off, the reset signal of the red pixel unit R is read out as R_VRST by ADC (1), and the reset signal Gr_VRST of the green pixel unit Gr is read out by ADC (0). After that, the states of the first switching circuit S0, the second switching circuit S1, the fourth switching circuit S2 and the third switching circuit S3 are controlled to remain unchanged, the TX signal becomes "high", the transfer transistor is turned on, an integration signal Gr_VSIG of the green pixel unit Gr is read out by the ADC (0), and an integration signal R_VSIG of the red pixel unit R is read out by the ADC (1). Then, in the same way, the pixel signals of the second row of pixel units continue to be read, and the signals of the green pixel units Gr are respectively read by the first analog-to-digital converters of ADC (0), ADC (2), ADC (4), . . . ADC (n−1), while the signals of the red pixel units R are respectively read by the second analog-to-digital converters of ADC (1), ADC (3), ADC (5), . . . ADC (n). The above timing control method can ensure that the pixel signals of the Gr channel and the Gb channel are read out by ADCs on the same side of the pixel array, and the difference between the Gr channel and the Gb channel caused by readout through different ADC arrays, as in the traditional bilateral parallel ADC architecture in FIG. 3, will not occur. Thus, in this embodiment, the red pixel units and the blue pixel units can be read out by the second analog-to-digital converters such as ADC (1), ADC (3), ADC (5), . . . ADC (n). It should be noted that signals of the remaining color pixel units connected to a same switch selection module can be read out by a common analog-to-digital converter such as ADC (1), ADC (3), ADC (5), . . . ADC (n).
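A behavioural sketch of the row-by-row switch control described above is given below. It only tracks which switching circuits conduct and which ADC bank therefore receives the green and the non-green samples of each row; the row-index convention is an assumption, and actual voltages, timing margins and the later reset/integration subtraction are omitted.

```python
# Conduction-state rule: S0/S2 share one state, S1/S3 the opposite state, and
# the pattern flips on every row, so green samples always reach the first-side
# ADCs and R/B samples the second-side ADCs.
def switch_states(row_index: int) -> dict:
    s0_on = (row_index % 2 == 1)   # assumption: rows counted from 0; the second row turns S0/S2 on
    return {"S0": s0_on, "S2": s0_on, "S1": not s0_on, "S3": not s0_on}

def read_row(row_index: int) -> None:
    states = switch_states(row_index)
    green, other = ("Gb", "B") if row_index % 2 == 0 else ("Gr", "R")
    for phase in ("reset level VRST", "integration level VSIG"):
        # The ADC assignment stays the same for both phases of one row.
        print(f"row {row_index}, {phase}: {green} -> first-side ADC, "
              f"{other} -> second-side ADC, switches={states}")

for r in range(4):
    read_row(r)
```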
In other embodiments, the pixel processing circuit comprises a switch selection module and a plurality of analog-to-digital converters. The plurality of analog-to-digital converters comprise at least one first analog-to-digital converter such as ADC (0) located on a first side of the pixel array and at least one second analog-to-digital converter such as ADC (1) located on a second side of the pixel array opposite to the first side. The switch selection module is configured to have a first unit disposed between the first side of the pixel array and the at least one first analog-to-digital converter such as ADC (0), and a second unit disposed between the second side of the pixel array and the at least one second analog-to-digital converter such as ADC (1), wherein the switch selection module is configured to switch the communication between the pixel units and the first and second analog-to-digital converters on the opposite first and second sides of the pixel array, such that a signal or signals of green pixel units are read by the first analog-to-digital converter such as ADC (0) that is located at the first side of the pixel array, and a signal or signals of remaining color pixel units are read by the second analog-to-digital converter such as ADC (1) that is located at the second side of the pixel array. That is to say, a signal or signals of green pixel units can be read out by the common first analog-to-digital converter such as ADC (0) that is located at the first side of the pixel array, while a signal or signals of remaining color pixel units such as the red pixel units and/or the blue pixel units can be read out by the common second analog-to-digital converter such as ADC (1) that is located at the second side of the pixel array. The basic structure and operation timing of pixel units are supplemented below. FIG. 6 shows the circuit configuration of a CIS standard four-transistor pixel unit, which is composed of a photodiode PD, a transfer transistor Mtg, a reset transistor Mrst, a source-follower transistor Msf, and a row selection transistor Msel. One end of the photodiode PD is grounded, and the other end of the photodiode PD is connected to the drain of the transfer transistor Mtg. A gate of the transfer transistor Mtg is configured for accessing a TX signal, and a source of the transfer transistor Mtg is connected to a drain of the reset transistor Mrst and a gate of the source-follower transistor Msf. A source of the reset transistor Mrst is configured for accessing a power supply signal, and a gate of the reset transistor Mrst is configured for accessing an RX signal. A source of the source-follower transistor Msf is configured for accessing the power supply signal, a drain of the source-follower transistor Msf is connected to a source of the row selection transistor Msel, a gate of the row selection transistor Msel is configured for accessing an SEL signal, and a drain of the row selection transistor Msel is connected to the output of the pixel array. The PD generates photoelectrons proportional to the intensity of the incident light. The function of Mtg is to transfer photoelectrons in the PD. When the TX signal is at high potential, the Mtg is turned on to transfer photoelectrons in the PD to a floating node FD. The Mrst plays a role in resetting the FD when the RX signal is at high potential. The Msf is an amplifier transistor. When the SEL signal is at high potential, the Msel is turned on, and the Msf and the Msel form a path between the current source and ground, at which time the output of the Msf follows the change of potential of the FD.
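The following numerical sketch walks through the signal chain just described for a single pixel; the floating-diffusion capacitance, the source-follower gain, the reset potential and the electron count are illustrative assumptions and not values taken from this description.

```python
Q_E   = 1.602e-19   # elementary charge [C]
C_FD  = 2.0e-15     # assumed floating-diffusion capacitance [F]
A_SF  = 0.85        # assumed source-follower (Msf) gain, slightly below 1
V_RST = 2.8         # assumed FD potential right after reset [V]

n_electrons = 5000                        # photoelectrons transferred from the PD
v_fd_drop   = n_electrons * Q_E / C_FD    # FD potential drops as the charge arrives
v_fd_signal = V_RST - v_fd_drop           # FD potential after charge transfer

# PIX_OUT follows the FD potential through the source follower in both phases.
vrst_out = A_SF * V_RST
vsig_out = A_SF * v_fd_signal
print(f"VRST = {vrst_out:.3f} V, VSIG = {vsig_out:.3f} V, "
      f"difference = {vrst_out - vsig_out:.3f} V  (proportional to the photocharge)")
```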
FIG. 7 shows the operation timing of the CIS standard four-transistor pixel unit, including phases of resetting (Rst), exposing (Exp) and signal reading (Read). In the Rst phase, the TX signal and the RX signal are at "high" potential, both the Mtg and the Mrst are turned on, the FD is reset and its potential is pulled up to the power supply voltage VDD. Then the RX signal and the TX signal change to "low" potential, and the pixel enters the Exp phase, where the PD senses light and accumulates electrons. Then, in the Read phase, the SEL signal is at "high" potential, the RX signal is at "high" potential first and is then pulled to "low" potential after the FD is reset, and the TX signal is kept at "low" potential. At this time, the Msf is controlled by the potential of the FD and outputs the reset potential VRST through the PIX_OUT. After that, the TX signal is pulled to the "high" potential, and electrons on the PD are transferred to the FD. At this time, the Msf is controlled by the potential of the FD and outputs the integrated potential VSIG through the PIX_OUT. The difference between VRST and VSIG is the analog voltage corresponding to the photoelectrons on the PD. The VRST and VSIG potentials are converted into digital quantities by the analog-to-digital converter (ADC) circuit and subtracted to obtain the actual digital quantity corresponding to the photoelectrons on the PD. If the ADC is 12 bits and the ADC reference voltage range is VREF, the final output is DOUT=(VRST−VSIG)×2^12/VREF. Based on the pixel processing circuit provided by any of the above embodiments, the embodiment of the present application provides an image sensor, including: the pixel processing circuit described in any one of the above embodiments; a control logic module for controlling the pixel processing circuit to process a pixel signal; and a data processing module for obtaining a signal processed by the pixel processing circuit. The above is only the detailed description of the embodiments of the present application, but the scope of protection of the embodiments of the present application is not limited thereto, and any change or replacement within the technical scope disclosed in the embodiments of the present application should be covered within the scope of protection of the embodiments of the present application. Therefore, the scope of protection of the embodiments of this application shall be subject to the scope of protection of the claims.
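As a worked example of the quantization formula given above, the following snippet evaluates DOUT for a 12-bit ADC; the voltage values are illustrative.

```python
def dout(vrst: float, vsig: float, vref: float, bits: int = 12) -> int:
    """Digital result after converting VRST and VSIG and subtracting them."""
    return round((vrst - vsig) * (2 ** bits) / vref)

# Illustrative numbers: a 0.34 V swing with a 1.0 V reference gives about 1393 codes.
print(dout(vrst=2.380, vsig=2.040, vref=1.0))
```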
27,617
11943556
The drawings are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes. The dimensions and the relative dimensions do not necessarily correspond to actual reductions to practice of the invention. Any reference signs in the claims shall not be construed as limiting the scope. In the different drawings, the same reference signs refer to the same or analogous elements. DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. Directional terminology such as top, bottom, front, back, leading, trailing, under, over and the like in the description and the claims is used for descriptive purposes with reference to the orientation of the drawings being described, and not necessarily for describing relative positions. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration only, and is in no way intended to be limiting, unless otherwise indicated. It is, hence, to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other orientations than described or illustrated herein. It is to be noticed that the term "comprising", used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps. It is thus to be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the scope of the expression "a device comprising means A and B" should not be limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments. Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment.
Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention. Furthermore, while some embodiments described herein include some, but not other, features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those in the art. In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. In the context of the present invention, a photocharge that is integrated by a photosensitive element, such as a photodiode, upon illumination is understood as a definite amount of electrical charge, which may be expressed in terms of a fundamental charge, for instance as a plurality of photogenerated electrons. In a first aspect, the present invention relates to a stacked image sensor, for instance a vertically stacked CMOS image sensor, which provides image or video frame acquisition at increased dynamic range and also increased internal data readout and processing rates relative to an external readout speed of the device, i.e. the speed at which units of data (row or full frame) are transferred off the image sensor chip. If a conventional image sensor device runs at its maximum external frame rate, it also runs at its lowest internal row time, i.e. the time necessary to address a row of pixels, read out the addressed row of pixels, perform analog-to-digital conversion (A/D conversion) of the read out pixel data and transfer a complete row of pixel data off the chip (external readout). In a stacked image sensor according to the invention, running at the maximum external frame rate does not stand in the way of implementing pixel row readout operations, as well as further data processing operations using the read out pixel data, internally at a much higher speed. An exemplary way of exploiting the faster running internal pixel row readout and pixel data processing operations in embodiments of the invention is to operate the image sensor device in a dual or multiple subframe exposure mode, in which pixel row data relating to the multiple subframe exposures is combined into a high dynamic range (HDR) image frame. FIG. 1 shows a stacked image sensor 100 as a layered structure in which an upper first substrate 101 is vertically stacked onto a lower second substrate 103. The first substrate 101 of the image sensor 100 comprises an array of pixels 102 and can be configured to work under front-illumination conditions or back-illumination conditions. The pixel array 102 is organized into a plurality of pixel subarrays 102-1 to 102-4, for instance by vertically dividing the pixel array into independently addressable blocks of pixel rows, wherein each block contains a plurality of consecutive pixel rows of the pixel array. Blocks of contiguous pixel rows are only one specific example of dividing the pixel array 102 into a set of pixel subarrays.
Other ways of assigning individual pixel rows of the pixel array to one of the pixel subarrays exist, for example a random assignment of the individual pixel rows to one of the plurality of subarrays or an interleaved assignment according to which contiguous pixel rows of the pixel array are assigned to different pixel subarrays, e.g. the i-th pixel row being allocated to the n-th pixel subarray out of N pixel subarrays, where n=(i mod N). An interleaved assignment of pixel rows to a set of pixel subarrays, and the resulting interleaved connection of pixel rows to different readout blocks, may have the additional advantage of preserving a uniform rolling shutter effect over the whole pixel array. The second substrate 103 contains control and readout circuitry for selecting, controlling, and reading out pixel rows of the pixel array, as well as for processing the read out pixel data from the pixel rows of the array. The readout circuitry is organized into a plurality of blocks 104-1 to 104-4, referred to as readout blocks, which correspond in number to the pixel subarrays. More specifically, each readout block is matched with and electrically connected to exactly one of the pixel subarrays. The electrical connections between pixel subarrays and readout blocks may be fixed connections, which are easier to implement, or reconfigurable connections, which provide more flexibility, e.g. reconfiguration of the way pixel subarrays are connected to the readout blocks in case vertical and/or horizontal windowing is applied to the pixel array of the image sensor, to optimize imaging speed. This association between pixel subarrays and readout blocks enables parallel control of each pixel subarray, concurrent readout of k different pixel rows associated with k different pixel subarrays and also parallel processing of the read out pixel row data. A typical number k of pixel subarrays and corresponding readout blocks may be k=2, . . . , 8, 16, . . . , 128. Since more than one pixel row is read out at the same time, the image sensor is configured for parallel operation. As explained in more detail hereinbelow, readout operations and data processing operations performed on the read out pixel row data are pipelined in respect of each pixel subarray-readout block pair. By way of example, the image sensor may comprise 16 readout blocks and a pixel array with several thousands of pixel rows, e.g., 6 k pixel rows, in which two vertically adjacent pixels are shared (i.e., 2×1 vertical sharing). During a first row time, the first readout block reads the pixel data of row 0, the second readout block reads the pixel data of row 2, the third readout block reads the pixel data of row 4, etc. Next, during a second row time, the first readout block reads the pixel data of row 1, the second readout block reads the pixel data of row 3, the third readout block reads the pixel data of row 5, etc. During a third row time, the first readout block reads the pixel data of row 32, the second readout block reads the pixel data of row 34, the third readout block reads the pixel data of row 36, etc. and, during a fourth row time, the first readout block reads the pixel data of row 33, the second readout block reads the pixel data of row 35, the third readout block reads the pixel data of row 37, etc. This process continues until the last pixel row of the pixel array has been read out.
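The two row-to-readout-block mappings mentioned above can be sketched as follows. The constants (16 readout blocks, 2×1 vertical sharing, the 32-row stride of the worked example) follow the text, while the closed-form expressions themselves are an illustrative reconstruction rather than a mandated implementation.

```python
N_BLOCKS = 16   # readout blocks in the worked example

def interleaved_block(row: int, n_blocks: int = N_BLOCKS) -> int:
    """Interleaved assignment: the i-th pixel row goes to subarray/block (i mod N)."""
    return row % n_blocks

def shared_pair_block(row: int, n_blocks: int = N_BLOCKS) -> int:
    """Assignment matching the 2x1-shared example: rows 0,1 -> block 0, rows 2,3 -> block 1, ..."""
    return (row % (2 * n_blocks)) // 2

def row_read_by_block(block: int, row_time: int, n_blocks: int = N_BLOCKS) -> int:
    """Row read by a given block during a given row time in the 2x1-shared example."""
    return 2 * n_blocks * (row_time // 2) + 2 * block + (row_time % 2)

# First four row times, first three readout blocks:
# expected 0,2,4 / 1,3,5 / 32,34,36 / 33,35,37 as in the example above.
for t in range(4):
    print(f"row time {t}:", [row_read_by_block(b, t) for b in range(3)])
print("block of row 37:", shared_pair_block(37), "| block of row 5 (interleaved):", interleaved_block(5))
```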
The control circuitry preferably comprises a plurality of row drivers 105-1 to 105-4, matched in number with the plurality of readout blocks 104-1 to 104-4, and control logic (not shown) for controlling the plurality of row drivers, e.g. controlling the sequencing of row control signals (e.g. row select control signal, reset control signal, charge transfer gate control signal) driven by the row drivers. Owing to the vertical stacking of the two substrates 101 and 103, row drivers 105-1 to 105-4 may be located on the second substrate 103 and mainly extend in a (pixel) row parallel direction x, i.e. in the direction of increasing pixel column numbers, as shown in FIG. 1, so that they are underneath the pixel array and overlapped by pixel rows of the pixel array when viewed from the top of the image sensor (e.g. illumination side). Alternatively, the row drivers may be located on the second substrate 103 and mainly extend in a direction of increasing (pixel) row numbers y so that they are to the left and/or right of the pixel rows and substantially free of overlap by pixel rows of the pixel array when viewed from the top of the image sensor. In yet alternative embodiments of the invention, the row drivers may be located on the first substrate 101, e.g., to the left and/or right of the pixel rows. Providing the row drivers on the second substrate has the advantage that row addressing noise, which can interfere with the pixel signals, can be reduced and further that a larger surface area of the first substrate is available for imaging. Although shown as individual blocks in FIG. 1, one or more (e.g., all) row drivers 105-1 to 105-4 may be further subdivided in the x-direction, meaning that multiple row drivers are used to drive a single pixel row. Such a subdivision has the benefit that the readout access time can be reduced (reduction of RC time constant) compared to conventional row drivers that are exclusively arranged to the left/right of the pixel rows. The second substrate 103 may also comprise additional circuitry, such as serialization circuitry and I/O drivers, configured for generating and outputting a data stream relating to an image frame. Vertical stacking of the first and second substrates 101, 103 can be achieved by die-to-die bonding, or die-to-wafer or wafer-to-wafer bonding with subsequent wafer dicing. The first and second substrate are electrically interconnected, e.g. via through silicon vias (TSVs) or direct or hybrid bonding techniques (e.g. copper-to-copper interconnects), such that a set of pixel row data signals, relating to a plurality of pixel rows located in respective pixel subarrays of the first substrate, is read out concurrently by the corresponding set of readout blocks in the second substrate. Each pixel row data signal is transferred on column bit lines from the first substrate to the second substrate, wherein column bit lines are understood as extending through the interconnect layer between the first and second substrate. FIG. 2 shows a possible circuit architecture of an active pixel that is configured to have two different charge-to-voltage conversion gains, hereinafter referred to as a dual gain pixel or dual conversion gain pixel. The pixel 200 comprises a photoelectric element 201, preferably a pinned photodiode, a buffered charge-voltage converter 202, and a transfer gate 203 connected between the photoelectric element and the charge-voltage converter.
The buffered charge-voltage converter includes a floating diffusion node 202-1 as a first charge accumulation element of the charge-voltage converter, a source-follower transistor 202-2 having a gate terminal electrically connected to the floating diffusion node 202-1, and a combination of gain switch 202-3 and gain capacitance 202-4 as a second charge accumulation element of the charge-voltage converter. The pixel 200 further includes a reset transistor 204, connected between the positive voltage supply VDD and the floating diffusion node 202-1, for resetting the floating diffusion node to a predetermined voltage level close to VDD each time the reset transistor is switched on by a corresponding reset control signal VRST, thereby erasing the previously stored pixel data on the floating diffusion node. Furthermore, the source-follower transistor 202-2 is controllably connectable to a bit line 206 via a row select control signal VRS that is applicable to the gate of a row selection transistor 205 of the pixel. Although the row selection transistor 205 is connected between the source-follower transistor and the voltage supply VDD in FIG. 2, it is understood by those skilled in the art that different arrangements of the row selection transistor are possible, for example arrangements in which the row selection transistor is connected between the source-follower transistor 202-2 and VOUT on the bit line 206. When the pixel 200 is illuminated, the photoelectric element 201 starts integrating a photocharge which is generated in response to the received amount of irradiation. The integrated photocharge, or at least a portion thereof, is transferred to the floating diffusion node 202-1 of the buffered charge-to-voltage converter 202 upon activation of the transfer gate 203 by a suitable charge transfer control signal VTX, i.e., a transfer pulse. Control circuitry of the image sensor (not part of the pixel circuitry) sets an amplitude, e.g. voltage amplitude, of the transfer pulse such that either a partial transfer of the generated photocharge to the floating diffusion node, or floating diffusion node and connected gain capacitance, is triggered, or a complete transfer of the generated photocharge to the floating diffusion node, or floating diffusion node and connected gain capacitance, takes place. In general, a higher amplitude of the transfer pulse leads to a more pronounced decrease of the potential barrier separating the charge well associated with the photoelectric element and the charge well associated with the floating diffusion node (with or without the connected gain capacitance) and therefore causes more photocharge carriers to be transferred away from the photoelectric element. The capacitance associated with the floating diffusion node allows temporary storage of the transferred photocharge and converts the deposited photocharge into a voltage signal that is sensed by the source-follower transistor 202-2. When the row select transistor 205 is switched on, i.e., when the pixel is selected for readout, a current set by an external current source starts to flow on the corresponding bit line 206 and through the row select and the source follower transistor. The voltage VOUT at the source terminal of the source-follower transistor directly follows the voltage signal applied to its gate terminal. If the gain switch 202-3 is open (e.g., low voltage at VCG), a first charge-to-voltage conversion gain is determined by the capacitance value of the floating diffusion node.
If the gain switch 202-3 is toggled into a closed state (e.g., high voltage at VCG), then a portion of the photocharge originally stored at the floating diffusion node flows onto the additional gain capacitance 202-4. The additional gain capacitance and the floating diffusion node capacitance are now connected in parallel, resulting in a larger overall capacitance available for the storage of the transferred photocharge. This in turn leads to a drop of the voltage signal that is sensed at the gate terminal of the source-follower transistor and directly translates into a lower, second charge-to-voltage conversion gain. In an alternative embodiment, the additional gain capacitance can also be connected to the positive voltage supply instead of ground, or the additional gain capacitance may correspond to the floating diffusion node of a neighboring pixel. In the latter alternative, the pixels are thus configured to dynamically share their floating diffusion nodes with at least one neighboring pixel of a different row, wherein the shared floating diffusion node of a pixel is temporarily connected to the neighboring pixel and acts as the additional gain capacitance. This has the advantage that a more compact design can be obtained, which does not require a separate pixel component for the gain capacitance. Row control circuitry of the image sensor, of which the pixel 200 forms part, is configured to control the gain switch 202-3 via a dedicated conversion gain control signal VCG applicable to the control terminal of that gain switch. The pixel 200 can thus be controlled to apply a first or a second charge-to-voltage conversion gain with respect to the integrated photocharge during readout. The charge-to-voltage converter is buffered, because the readout is non-destructive, i.e., the transferred and stored photocharge is not destroyed or altered by the readout action. FIG. 3 illustrates the different circuit components composing each readout block 104 of the second substrate, while the plurality of readout blocks is configured for parallel operation. Every readout block 104 comprises at least one analog-to-digital conversion unit 106 for sampling and digitizing pixel row data of the corresponding pixel subarray, pixel memory logic 107 for processing samples of the digitized pixel row data, and a pixel memory unit 108 for buffering the processed samples of digital pixel row data which are output by the pixel memory logic 107. The dataflow between the components, or the components and peripheral I/O circuitry, is indicated by arrows. In particular, intermediate pixel row data stored in the pixel memory unit 108 can be accessed by the pixel memory logic (PML) 107 in order to conditionally combine the currently processed sample of digital pixel row data with a previously processed sample of digital pixel row data that has been buffered in the pixel memory unit 108. Such a combination of two processed samples of digital pixel row data can then be written back to the pixel memory unit 108, from which it is either accessed by the peripheral I/O circuitry, e.g. in cases where the buffered combination of processed samples of digital pixel row data constitutes the final sample to be transferred off the image sensor chip, or is again accessed by the PML, e.g. in cases where the buffered combination of processed samples of digital pixel row data constitutes an intermediate result not yet ready for output.
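A short numerical sketch of the two conversion gains described above is given below; the capacitance values and electron count are illustrative assumptions, the point being only that connecting the gain capacitance in parallel with the floating diffusion lowers the charge-to-voltage conversion gain.

```python
Q_E    = 1.602e-19   # elementary charge [C]
C_FD   = 1.5e-15     # assumed floating-diffusion capacitance [F]
C_GAIN = 6.0e-15     # assumed additional gain capacitance [F]

cg_high = Q_E / C_FD               # gain switch open: high conversion gain [V per electron]
cg_low  = Q_E / (C_FD + C_GAIN)    # gain switch closed: capacitances in parallel

n_e = 2000                         # transferred photoelectrons (illustrative)
print(f"high gain: {cg_high * 1e6:.1f} uV/e-, signal = {n_e * cg_high * 1e3:.1f} mV")
print(f"low  gain: {cg_low  * 1e6:.1f} uV/e-, signal = {n_e * cg_low  * 1e3:.1f} mV")
```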
In the latter case, the PML may use the retrieved, intermediate combination of processed samples of digital pixel row data and the currently processed sample of digital pixel row data to calculate an updated or final combination of processed samples of digital pixel row data. In embodiments of the invention, the PML may combine two or more samples of digital pixel row data by adding or subtracting the samples. Moreover, processing operations performed by the PML on a sample of digital pixel row data may include scaling a sample of digital pixel row data, e.g. prior to combining it with another sample, pixel-wise comparing a sample of digital pixel row data to a threshold value, computing a change of representation for the sample of digital pixel row data (e.g. from Gray code to binary), or combinations thereof. The processing functionality of the PML is not limited to the given examples and may be extended to perform additional processing operations on the obtained samples of digital pixel row data, which may depend on the particular application or mode of operation for the image sensor. Although embodiments of the invention are not restricted to readout blocks without an analog column bit line signal amplification stage, which can be a part of the A/D conversion unit, it is preferable to directly obtain digital samples of the pixel data in a selected row of pixels by directly connecting an A/D converter to the column bit lines, without any intervening amplifier. This has the advantage of reducing the conversion time and makes it possible to obtain shorter unit time slots, which in turn enable an increased number of subframe exposures to be taken within a single full frame period. It is also possible to further reduce the conversion time by reducing the bit resolution of the A/D converters (ADC) in the conversion unit, at the cost of reading out less accurate pixel data. Typical embodiments of the invention implement ADCs with 12 bit resolution. In terms of readout speed and given the goal of achieving multiple subframe exposures in a single full frame period, fast ADCs are preferable, e.g., SAR-ADCs if the efficient use of silicon area is not critical. However, embodiments may also use other ADC types, e.g., single-sloped ADCs if a more energy-efficient and/or area-efficient implementation is sought, e.g. in embodiments which implement per-pixel ADCs. Ramp-based ADC architectures, e.g., including single-sloped ADCs, have a simple layout and, compared to other ADC architectures, use a smaller design area per pixel column. They are preferred in embodiments of the invention in which bit resolution can be traded for speed of operation, e.g. via the number of clock cycles for the ADC counters, whereby a flexible and dynamic adjustment of the bit resolution per subframe (e.g. 12.7 bits, not necessarily an integer) and hence of the number of subframe exposures to be accommodated in a full frame period of the image sensor is obtainable. The pixel memory units of all readout blocks act as a global frame buffer. In particular embodiments of the invention, this global buffer has a data buffering capacity that is smaller than the size of a full image frame, e.g., it can only hold a portion of a full image frame as generated by the image sensor. This is possible, for instance, if pixel row data is retrieved fast enough from the global buffer for off-chip transfer such that buffer overflow is prevented.
It is then possible to map different pixel rows to a same location in the pixel memory unit without losing the pixel row data. The intermediate storage of pixel row data in the pixel memory units typically spans time intervals larger than the combined time for selecting the row of pixels and converting the pixel data of the selected row in the A/D conversion unit, but less than a full frame period. Preferably, the pixel memory units 108 on the second substrate of the image sensor chip are provided as blocks of SRAM unit cells, e.g., used as a memory bank of a banked SRAM memory. The memory units on the second substrate, e.g., the SRAM memory units, are preferably managed independently from each other on a per-readout block level. The memory unit corresponding to each readout block may be subdivided into even smaller memory subunits, e.g., similar to the subdivision of the row drivers. Per-readout block level managed pixel memory units, or subdivisions thereof, are advantageous in view of their smaller physical size and address space, which makes read/write operations to the pixel memory units faster. It may also prove useful for yield reasons, e.g., the memory unit corresponding to each readout block could be built with some redundancy so as to allow the independent handling of memory defects. FIG. 4 is a flow diagram which illustrates the parallelized pipelined architecture of the stacked image sensor during image or video acquisition. Pipelining encompasses the following stages related to each pixel row of the image sensor: pixel row reset, pixel row exposure, pixel row data readout and analog-to-digital conversion, a complete fetch-process-write (F-P-W) cycle performed on the digitized pixel row data by the pixel memory logic (PML), write-back of the digitally processed pixel row data to the pixel memory unit for intermediate storage, and access to the pixel memory to produce a global I/O stream of pixel row data when transferring one frame (or consecutive frames) of processed image data off the sensor chip. As described above, the digital processing applied by the PML may comprise the conditional combining of two samples of digital pixel row data, e.g., if the current sample of digital pixel row data supplied by the A/D conversion unit satisfies a predetermined or programmable condition, e.g. surpassing a threshold value. In such cases, the current sample of digital pixel row data supplied by the A/D conversion unit and a previously obtained sample of digital pixel row data buffered in the pixel memory unit are loaded into the PML during the fetch cycle and the combination of the two samples (e.g. sample addition and, optionally, a subsequent compression of the sum) is performed during the process cycle of the PML. The result of the processing operation is then written back to the pixel memory unit during the write cycle of the PML. For example, in the operating mode that uses partial transfers of the integrated photocharge for all but the last subframe exposure (partial transfer mode), the pixel readings from the low gain channel are combined (e.g., cumulative sum) under the condition that the pixel readings to be added do not relate to the last subframe exposure. The intermediate partial sums are stored in the pixel memory unit. The low gain pixel reading from the last subframe exposure is only added to the partial sum if the condition of the corresponding high gain pixel reading surpassing a threshold is fulfilled.
Then the updated partial sum becomes the final sum and is used as an output of the image sensor device. If the corresponding high gain pixel reading does not surpass the threshold, then only the high gain pixel reading is used as an output. Alternatively, if compression is enabled (e.g. operating the image sensor device in the partial transfer mode with compression), then an output in compressed format is obtained as a combination (e.g. applied compression algorithm) of the high gain pixel reading and the preceding partial sum of all the low gain pixel readings (i.e. pertaining to all but the last subframe exposure). The compressed output data can be sent immediately off-chip and, therefore, does not need to be written back into the pixel memory unit. As a further example, if the image sensor device is operated in the full transfer mode, i.e. transferring the integrated photocharge in full at the end of each subframe exposure, then the pixel readings in the low gain channel and the high gain channel are summed separately over the number of subframe exposures (e.g. by updating independent partial sums for the low gain and high gain channel respectively). If compression is enabled in this operating mode, e.g., full transfer mode with compression, then the partial sums for the low gain channel and the high gain channel can be input to a compression algorithm at the end of each subframe exposure and only the compressed partial sum needs to be written back to the pixel memory unit. This has the advantage of lowering storage requirements but necessitates additional computation for the decompression during readback. FIG. 4 further illustrates that pipelining is applied to the sequence of pixel rows contained in one pixel subarray, whereas different pipelines work in parallel in respect of different pixel subarrays. In other words, a separate pipeline is implemented for each pixel subarray and corresponding readout block, yielding K-way pixel row data acquisition and processing for a total of K independent pixel subarray/readout block combinations. For the purpose of streaming the processed frame data off the image sensor chip, the accesses to pixel memory are time-interleaved so that pixel row data pertaining to different pixel subarrays do not overlap. The electronic rolling shutter of the image sensor works well in conjunction with the fully pipelined architecture as the (partial) exposure of pixel rows and the reading out of pixel row data after (partial) exposure is carried out sequentially. For the sake of clarity, the flow diagram of FIG. 4 includes only two pixel subarrays, comprising three pixel rows each. In typical embodiments of the invention, there can be more than two readout blocks and pixel subarrays present, e.g., between two and sixteen, and each pixel subarray typically contains hundreds of pixel rows. FIG. 5 is a flow diagram that illustrates the pipelining of pixel row data in the case of multiple subframe exposures, in this example two subframe exposures SF1 and SF2 that are of equal sub-exposure time and contiguous in time. Contiguity in time is beneficial for reduced rolling shutter distortions in the final images. For the sake of clarity, FIG. 5 only shows the data pipelining for a single pixel subarray and corresponding readout block; as mentioned earlier in this application, embodiments of the invention provide for multiple pipelines working in parallel with regard to the plurality of pixel subarrays and corresponding readout blocks.
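For a single pixel position, the conditional combining described above for the partial transfer mode and the separate accumulation of the full transfer mode can be summarised by the following behavioural sketch; variable names, the example readings and the threshold are illustrative, and the real pixel memory logic operates on whole rows of digitized samples rather than on Python lists.

```python
def partial_transfer_output(low_gain, high_gain_last, threshold):
    """low_gain: low-gain readings of all subframes (partial transfers plus the
    final complete transfer); high_gain_last: high-gain reading of the last
    subframe. Returns the value used as the sensor output for this pixel."""
    partial_sum = sum(low_gain[:-1])       # accumulated over all but the last subframe
    if high_gain_last > threshold:         # bright pixel: keep the skimmed charge as well
        return partial_sum + low_gain[-1]  # the updated partial sum becomes the final sum
    return high_gain_last                  # dark pixel: the high-gain reading alone

def full_transfer_output(low_gain, high_gain):
    """Full transfer mode: low- and high-gain readings are summed separately."""
    return sum(low_gain), sum(high_gain)

print(partial_transfer_output([120, 130, 125, 900], high_gain_last=3500, threshold=2000))  # 1275
print(partial_transfer_output([3, 2, 4, 5],         high_gain_last=300,  threshold=2000))  # 300
print(full_transfer_output([120, 130, 125, 140], [900, 950, 930, 980]))                    # (515, 3760)
```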
Embodiments of the invention are also not limited to two subframe exposures (for instance, three, four, or more than four subframe exposures may compose a full frame), and subframe exposures do not have to be equal in duration. Subframe exposures do not have to be contiguous in time, provided that the image sensor is operated in a mode that does not use partially transferred photocharges. After termination of the first subframe exposure SF1, pixel data of each row K, K+1, . . . , K+4 is read out and converted into digital pixel row data in step 502-1, processed by the PML in step 503-1 and written to the pixel memory unit in step 504-1. At this point, the processed pixel row data is stored in the pixel memory unit and is not yet used as an output, e.g., as part of a final image frame that is transferred off the image sensor chip. The following steps are then repeated for the second subframe exposure SF2: pixel data of each row K, K+1, . . . , K+4 relating to the second subframe exposure SF2 is read out after subframe exposure termination, converted into digital pixel row data in step 502-2, processed by the PML in step 503-2, and written to the pixel memory unit in step 504-2. However, processing by the PML in step 503-2 now includes the conditional use of the previously obtained sample of digital pixel row data as an additional input operand, wherein the previously obtained sample has been buffered in the pixel memory unit during the time elapsed between the end of step 504-1 and the start of step 503-2. After processing by the PML of the two samples of digital pixel row data relating to subframe exposures SF1 and SF2 is completed in step 503-2, the processing result (e.g. combination of the two samples, e.g. sum or difference) is written back to the pixel memory unit in step 504-2 and subsequently read out therefrom in step 505 for the purpose of transferring the processed row of pixel data, as part of the final full image frame, off the image sensor chip. It can be seen in FIG. 5 (e.g., dotted vertical lines for guidance) that although multiple pixel rows are processed in parallel, the different pipeline stages are temporally balanced such that each pipeline stage only operates on the pixel data of a single row of pixels. In particular, the data path for the pixel row data is organized such that no two rows are addressed simultaneously for the readout and conversion of their pixel data. In embodiments of the invention, the overall exposure period for a full image frame of the image sensor, i.e. the sum of all subframe exposure periods, is controllable via the reset control signal, e.g. by controlling the moment in time relative to the full frame period at which the photoelectric elements of a pixel row are reset and exposed to incident light thereafter. The ratio between the first subframe exposure period and the second subframe exposure period is controllable via the readout control signals in respect of the first subframe exposure. More specifically, the first subframe exposure period ends and the second subframe exposure period immediately begins as soon as a pixel row has been selected for readout and a transfer pulse has been applied to the transfer gates of the pixels of that row, which induces a partial transfer of the already generated photocharges in the respective pixels.
In contrast thereto, the second subframe exposure period ends as soon as a pixel row has been selected for the second time within the same frame interval for readout and a transfer pulse of larger magnitude compared to the transfer pulse relating to the first subframe exposure has been applied to the transfer gates of the pixels of that row, whereby a complete transfer of all the remaining photocharges in the respective pixels is initiated. If in embodiments of the invention more than two subframe exposures are taking place, then the applied row select signal and transfer pulse at the end of each but the last one subframe exposure determine the duration of that subframe exposure. Moreover, the amplitude of the transfer pulse to be applied in respect to each but the last one subframe exposure is adapted to cause only a partial transfer of the photocharge present in the pixels' photoelectric elements, whereas it is increased in respect of the last subframe exposure such that a complete transfer of the remaining photocharge is triggered. Preferably, the amplitude of the transfer pulse is kept constant for each but the last one subframe exposure. Furthermore, embodiments are not restricted to solely reading out the pixel's buffered photocharge signal, referred to as a pixel's signal level, i.e. the voltage signal generated by the buffered charge-voltage converter in response to the transferred photocharge present on one or both charge accumulation elements, but preferably include the further readout of the pixel's buffered reset signal at both high and low conversion gain, referred to as the pixel's high gain and low gain reset level, i.e. the voltage signal generated by the buffered charge-voltage converter in response to the residual charge that is still present on the first charge accumulation element, or first and second charge accumulation element, after having reset the pixel. This has the advantage that correlated double sampling (CDS) can be performed by the readout blocks of the image sensor. FIG.6is a timing diagram which describes in more detail the timing and time resources required by each pipeline stage. For a better understanding of the present figure, the exemplary timing diagram only considers sixteen pixel rows (Row 0 to Row F) per pixel subarray. Embodiments of the invention may contain many more pixel rows per pixel subarray, e.g., hundreds of pixel rows or even more than one thousand pixel rows. According to the timing diagram ofFIG.6, each full frame period, e.g., Frame 0, Frame 1, etc., is divided into a plurality of unit time slots, e.g. the time slots labelled ‘0’ or ‘1’ in the first line of the diagram. It is noted that consecutive unit time slots are assigned to either an even position, labelled as ‘0’, or an odd position, labelled as ‘1’. The even and odd time slot positions are associated with a first and a second rolling shutter operation respectively. Importantly, the control sequences for the first and second rolling shutter, i.e., reset and readout select, are time-interleaved with row control signals relating to the first and second rolling shutter operation being supplied only during the even time slots and odd time slots respectively. The unit time slot which marks the start of each subframe exposure in respect of a particular pixel row of the pixel subarray is labelled by the letter ‘S’, while the unit time slot which marks the end of that subframe exposure in that row is carrying the letter ‘E’.
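The correlated double sampling mentioned above amounts to subtracting the buffered reset level from the corresponding signal level for each gain channel; a minimal sketch is given below, assuming for clarity that the subtraction is done on digital samples (in practice it may equally be performed in the analog domain before conversion), and using purely illustrative values.

    # Minimal CDS sketch (values and names are illustrative; the subtraction may
    # in practice be carried out in the analog domain before A/D conversion).
    def cds(signal_level, reset_level):
        """Correlated double sampling: remove the pixel reset (kTC) component."""
        return signal_level - reset_level

    # Reset and signal levels read for both conversion gains of one pixel:
    high_gain_cds = cds(signal_level=760, reset_level=520)
    low_gain_cds = cds(signal_level=575, reset_level=512)
    print(high_gain_cds, low_gain_cds)   # 240 63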
The start of the first subframe exposure may correspond to the falling edge of a row-wise applied reset control signal that steps, one by one, through the pixel rows composing the subarray and resets the photoelectric elements of the pixels in that row to a predetermined voltage level. In contrast thereto, the second or any further subframe exposure, if contiguous in time with the preceding subframe exposure, does not require delivery of an extra reset signal to the pixels' photoelectric elements in order to start, but begins seamlessly after the preceding subframe exposure has ended with a partial transfer of the photocharge generated in the photoelectric element. It is observed that this does not exclude the delivery of a reset signal to only the first and second charge accumulation element of each pixel, which removes the previously transferred photocharge and thus makes room for another subframe exposure reading. Contrary to a full or complete transfer of the photocharge generated during a conventional exposure interval, a partial transfer only skims off the portion of the photocharge present in the potential well associated with the photoelectric element that exceeds a programmable threshold potential (e.g., threshold voltage level). The programmable threshold is determined by the magnitude of the transfer pulse that is supplied to the transfer gates of the pixels. As can be seen in the timing diagram, a first rolling shutter sequence is starting from Row 0 and advances incrementally up to Row F, wherein a next row is selected at every second unit time slot. As a result, the first rolling shutter control sequences and the associated first subframe exposure are always timed to be aligned with the even unit time slots (i.e., group ‘0’). Likewise, the second rolling shutter control sequences and the associated second subframe exposure are always occupying the odd unit time slots (i.e., group ‘1’), whereby any interference between the two concurrently running electronic rolling shutters is avoided. Here, an interference between two or more rolling shutters that operate in parallel on the pixel rows of a pixel subarray is understood as an attempt to select and read out the pixel row data (signal level or reset level) of two different rows of the same subarray simultaneously. In the present timing diagram a double subframe exposure SF0, SF1per frame period is chosen, but a larger number of subframe exposures can be accommodated within the full frame period. For instance, four subframe exposures may compose the total exposure time within a single full frame acquired by the image sensor, in which case unit time slots are assigned positional numbers ‘0’ to ‘3’ (e.g. position modulus four) and each positional group ‘0’ to ‘3’ is associated with the row control signals relating to only one out of four time-interleaved rolling shutter sequences. The duration of the unit time slot is typically determined by the pipeline stage with the largest latency. In the present embodiment, for example, the unit time slot corresponds to the combined duration of settling time of the signals present on the bit lines, sample and hold, and the time required to perform analog-to-digital conversion by the A/D conversion unit in respect of a fast sequential measurement of the pixel signal levels both at high conversion gain and low conversion gain. 
If CDS is applied, the signals present on the bit lines include both the pixel reset level and the pixel signal level, meaning that the unit time slot is the sum of settling time, sample and hold time, and the time for A/D conversion in respect of fast sequential CDS measurement in the high gain readout channel and the low gain readout channel. Nonetheless, for the purpose of implementing image sensors at even higher speed, more pipelining may be added in the readout path and the unit time slot may be subdivided or redefined in order to realistically reflect the presence of the added pipeline stages. A fast sequential measurement of the high and low gain pixel signal levels can be performed by reducing the resolution of the ADC component in the A/D conversion unit, e.g., two 12 bit conversions can be performed in the same time as a single 14 bit conversion. Alternatively, the A/D conversion unit may comprise two parallelly working ADCs instead of a single ADC allocated to the pixel rows inFIG.6. Within each unit time slot, the A/D conversion unit is thus capable of converting the pixel data of exactly one row of pixels in the pixel subarray into the digital domain. The row of pixels that is undergoing A/D conversion during a particular unit time slot is indicated by its row number in the pixel subarray (e.g., numerals ‘0’ to ‘F’ in the ADC line ofFIG.6). The converted pixel data is available precisely one time slot after the corresponding subframe exposure period has ended (e.g., indicated by letter ‘E’). There can be moments at which the A/D conversion unit is idle and does not perform any pixel row data conversion (e.g., blank unit time slots in the ADC line inFIG.6). Each subframe exposure period SF0, SF1as well as the full frame period can thus be expressed as an equivalent number of unit time slots. For example, in the embodiment referred to inFIG.6, the first subframe exposure SF0lasts for fifteen time slots, the second subframe exposure lasts for thirteen time slots, and each full frame (Frame 0, Frame 1) is composed of thirty-two time slots. The shorter the unit time slot is relative to the full frame period, and the shallower the pipeline depth is, the more subframe exposures can be accommodated in a single full frame period, which is considered as fixed over time. For instance, the shortest possible single frame period is fixed by the maximum achievable external I/O rate at which preprocessed HDR frame data can be transmitted from the image sensor chip to external devices, e.g., external storage devices such as external RAM or hard drive. A typical value for the maximum achievable external I/O rate in embodiments of the present invention may be 120 fps for double subframe exposure mode of operation, but depends on other factors too, e.g., the number of pixel rows and readout blocks and the ADC bit resolution. For the example inFIG.6, this means that a pixel row's worth of frame data is transferred every 520.8 μs, corresponding to two consecutive time slots in the readout row (last row inFIG.6) containing the same numeral and thus referring to data obtained from the same pixel row, while one unit time slot lasts only for 260.4 μs. These values are given for illustrative purposes and do not necessarily reflect frame data rates and unit time slot durations of actually manufactured image sensors, which may comprise many more pixel rows as compared to the example ofFIG.6.
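The quoted numbers can be verified with a short calculation; the values below are simply the illustrative figures given for the example ofFIG.6 and are not a specification.

    # Arithmetic check for the illustrative numbers quoted above (FIG. 6 example).
    unit_slot_s = 260.4e-6              # duration of one unit time slot
    slots_per_frame = 32                # Frame 0, Frame 1 each span 32 slots
    frame_period_s = slots_per_frame * unit_slot_s
    row_output_s = 2 * unit_slot_s      # one pixel row of output occupies two slots
    print(round(frame_period_s * 1e3, 2))   # ~8.33 ms full frame period
    print(round(1.0 / frame_period_s))      # ~120 frames per second off-chip rate
    print(round(row_output_s * 1e6, 1))     # ~520.8 us per row of transferred data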
For example, an image sensor with megapixel resolution may have a unit time slot duration of about 15 μs, which allows for a maximum internal subframe rate of 240 fps at 14 bit ADC resolution (ADC resolution can be traded for speed or higher number of subframes). This exemplary image sensor thus supports outputting preprocessed HDR frame data to external devices (i.e., off-chip) at an I/O rate of 120 fps, in case of double subframe exposure mode, and 60 fps, in case of quadruple subframe exposure mode, each at 14 bit ADC resolution. As indicated inFIG.6, the final frame data read from the on-chip pixel memory may be transmitted in a compressed format. Here, compression relates to the fact that the combined pixel signal levels for high and low conversion gain settings of the pixel, which leads to the improved dynamic range of the image sensor, may occasionally exceed the available bit depth of the pixel memory. In such events, a scaling operation is performed prior to writing back to the pixel memory the combination of high gain and low gain pixel signal levels, whereby the scaled signal combination again fits into the available bit depth of the pixel memory. For example, a combination of a 12 bit wide high gain pixel signal level and a 12 bit wide low gain pixel signal level may result in a 13 bit wide combined HDR value, which does not fit into a 12 bit wide pixel memory without causing overflow. In these cases, the combined HDR value is rescaled (scaling factor <1) such that it fits again into the 12 bit wide pixel memory. Instead of a simple scaling operation, a more detailed compression algorithm may be used that combines pixel reading for the high gain channel and the low gain channel differently in different signal subranges, e.g., exploiting varying noise properties in the different signal subranges to alter a precision of the high gain and/or low gain pixel readings. Compressed output data can be sent off-chip at a rate twice as high as compared to the high gain data and the low gain data being sent off-chip separately, e.g., one time slot per row of pixels versus two time slots in the last line ofFIG.6. As can be further seen from the timing diagram inFIG.6, the processed high gain and low gain pixel row data is written to different addresses of the pixel memory, thereby allowing separate readout of pixel data obtained for either high conversion gain or low conversion gain in operation modes of the image sensor that do not use the dual gain functionality of the pixels, e.g. simple low gain or high gain operation without extended DR. Such operation modes can also include different HDR modes of the image sensor which do not rely on partial transfer of the generated photocharge within a single full frame period. Examples include offline or online blending of multiple subframe exposures or multi-frame exposures with different exposure times and/or conversion gains into one HDR image frame (single frame or multi-frame exposure bracketing), which can take place on the image sensor chip or on external data processing means. The case of combining multiple subframe exposures or multiple full frame exposures with at least two exposure time settings into a HDR image frame is also known as multiple-exposure operation and can be performed on an image sensor chip according to the invention, additionally or alternatively to the dual gain conversion by the pixels, to obtain HDR image frames.
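For the 13 bit example given above, the write-back scaling could, for instance, take the form of a simple power-of-two rescale; the helper below is a sketch under that assumption and does not represent the actual compression algorithm of the image sensor.

    # Illustrative sketch of the write-back scaling described above (assumed to be
    # a simple power-of-two rescale; the actual compression may differ).
    def fit_to_pixel_memory(combined_value, value_bits, memory_bits):
        if value_bits <= memory_bits:
            return combined_value                 # already fits, no scaling needed
        shift = value_bits - memory_bits          # scaling factor 2**(-shift) < 1
        return combined_value >> shift            # e.g. 13 bit value -> 12 bit word

    # A 13 bit combination of 12 bit high gain and 12 bit low gain samples:
    print(fit_to_pixel_memory(0b1101001101100, 13, 12))   # fits a 12 bit memory line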
Different exposure time settings result in different but fully deterministic slopes and knee points between slopes in the linearity plot of the image sensor (digital number as a function of illumination). Hence, in image frames with large intra-scene dynamic range, the pixel output signals obtained through unequal exposure time settings can be easily re-linearized, internally or externally off-chip, to yield a linear HDR signal without calibration. The ratio of subframe exposure periods of a pair of successive subframes, together with the current conversion gain, controls the change in the response slope. FIG.11illustrates the different response slopes and knee points for a single conversion gain (e.g., only low gain) and a total of four subframe exposures per full image frame. In this example, the exposure periods of the subsequent subframes are decreasing, e.g., as Texp1=128 Trow, Texp2=32 Trow, Texp3=8 Trow and Texp4=2 Trow for a full exposure time of 10.2 ms, where a row time (Trow) is equal to about 60 μs. The corresponding sensor output after linearization is shown inFIG.12. The location of the knee points on the vertical axis (raw response) may be fully programmable, e.g., by introducing a programmable clipping block in the signal path for clipping the pixel signals in the digital domain. As a particular example of combined multi-frame bracketing and multi-exposure, one can cite the mode of operation in which a first full frame with increased DR is acquired via two subframe exposures at low conversion gain and with large exposure time ratio, and a second full frame with increased DR is acquired via two subframe exposures at high conversion gain and also with a large exposure time ratio. Eventually, the first and second full frames can be composed offline into a final image frame with even further increased DR. In contrast to the separate storage locations for high and low conversion gain pixel data, processed digital pixel data obtained in respect of two different pixel rows in the same subarray, e.g. Row 0 and Row 8, Row 1 and Row 9, etc., is mapped to the same address of the pixel memory in order to save memory capacity requirements and associated chip area. Moreover, in embodiments of the invention in which the HDR image frame is generated as a result of combining multiple subframe exposures with partial photocharge readout and different conversion gain settings, described in more detail hereinbelow, only one storage location (e.g., by address or line) in pixel memory for the high conversion gain and the low conversion gain pixel row data is needed. Therefore, the allocation of two lines of pixel memory per pixel row in the subarray inFIG.6is seen as optional and a more area- and energy-efficient solution may only allocate a single line of pixel memory per pixel row and subarray. In the latter case, the overall storage capacity of the pixel memory can be seen to be smaller than the storage size associated with a full frame of image data. According toFIG.6, immediately after A/D conversion has completed in respect of a pixel row selected for readout, i.e., exactly one unit time slot later, the processed pixel row data for high and low conversion gain is stored in the pixel memory while the A/D conversion unit proceeds with the subsequently selected row of pixels.
Here the assumption is made that the processing of converted, digital pixel row data and the write operation of the processed pixel row data can be performed within one unit time slot, because these two steps have individual latencies that are shorter than one unit time slot. More specifically, the pixel row data (e.g. pixel signal level minus the pixel reset level in case of CDS) that has been obtained for the low conversion gain channel of the pixels after a first partial transfer may be written to the pixel memory unconditionally, as soon as A/D conversion of the pixel row has completed at the end of the first subframe exposure SF0in each frame. During intermediate subframe exposures, occurring between the first and the last subframe exposure if more than two subframe exposures are programmed, each subsequent low gain pixel row data (e.g. pixel signal level minus the pixel reset level in case of CDS), obtained in respect of each further partial transfer, may be subject to processing by the PML and the result of this processing is then written back to the pixel memory. In addition to managing the dataflow from and to the pixel memory, the processing by the PML may comprise performing basic clipping and/or scaling operations on the digitized pixel data (e.g., after CDS). In preferred embodiments, the processing by the PML also comprises conditionally combining, e.g., conditionally adding, the processed or unprocessed (raw) pixel row data of the current subframe to the buffered pixel row data in the pixel memory. In such a case, the buffered previous data is fetched from the pixel memory by the PML, wherein the fetching may be performed while the pixel row data of the current subframe is still undergoing A/D conversion. The condition to be satisfied for the PML to carry out the combining of pixel row data relating to different subframes may involve the step of comparing the pixel data to a first programmable threshold level TLG. Depending on the result of this comparison, the pixel data of the current subframe is combined with the pixel data of a previous subframe that is buffered in the pixel memory, e.g., when the pixel data is lower than the threshold TLG, or is discarded, e.g. when the pixel data is larger than the threshold TLG. Discarding of pixel data may occur, for instance, if the image sensor device is operated in short intermediate subframe exposure mode, in which only the pixel data of the shortest subframe exposure is kept. The event of discarding pixel data may be monitored during each full frame exposure period to select only the pixel data of the shortest intermediate subframe for output. If the pixel memory does not yet contain valid data for the current image frame (i.e., comprising the plurality of subframes), the combining step may be replaced by a direct write to the pixel memory. Alternatively, the pixel memory can be initialized to a default value, e.g., zeros, at the beginning of each new image frame. Eventually, the processed or raw pixel row data related to the high gain channel and the low gain channel, e.g. the pixels' CDS signals for the high gain channel and the low gain channel, is sent to the sensor output interface directly at the end of the last subframe exposure (e.g. after the full photocharge transfer in subframe SF1inFIG.6), or is conditionally processed by the PML, depending on which output format has been selected. 
The conditional processing by the PML for the last subframe may include the following steps: The high gain CDS signal of each pixel is compared to a second programmable threshold value, THG, and if it is lower than the threshold THG, the high gain CDS signals of the pixels in one row are written to the pixel memory. In this case, the previously stored pixel data for that row is overwritten. Alternatively, the previously stored pixel data for that row could be fetched by the PML and combined with the currently processed high gain pixel data, e.g. by compressing high gain pixel row data and low gain pixel row data into a single row of data, and the result of this PML operation is stored in the pixel memory as the final result for the pixel data of that row in the current frame. If, instead, the threshold value THG is exceeded, the high gain data for the pixel row may be discarded and the low gain data for that pixel row is used by the PML instead, e.g., by combining it (e.g. adding and optionally also compressing) with the previously stored pixel row data after fetching from the pixel memory through the PML. The second full frame in the timing diagram ofFIG.6also comprises two subframe exposures, identical to the first full frame. In the present embodiment, the time interval between the end of the last subframe exposure SF1of the first full frame and the beginning of the first subframe exposure SF0of the second full frame, during which no rolling shutter exposure takes place, is chosen as short as possible compared to a full frame period, with the result of obtaining the longest possible overall frame exposure. Indeed, the two subframe exposures extend over almost the entire full frame period (e.g. 87.5%) and cannot be extended further while still avoiding interframe interference of the electronic rolling shutters, e.g. avoiding that the second rolling shutter of the first frame and the first rolling shutter of the second frame attempt to simultaneously read out different rows of the subarray, i.e. two unit time slots labelled ‘E’ being exactly coincident in time. The moments in time tA and tB at which a delayed, second electronic rolling shutter starts stepping through the rows of the subarray before an earlier, first electronic shutter has ended, are situated at the end and near the mid-point of the frames respectively. This is a further indicator of the long overall frame exposure and the nearly balanced exposure times of the two subframe exposures. However, embodiments of the invention are not limited to long frame exposures, but can equally be adjusted to perform shorter frame exposure, for instance to better adapt the image sensor to the distortion-free capturing of fast moving objects. A detailed timing diagram for the shortest possible overall frame exposure consisting of two contiguous subframe exposures SF0, SF1of equal subframe-exposure periods (three unit time slots each) is shown inFIG.7for a pixel subarray with the same number of rows, unit time slot duration and full frame period as inFIG.6. Furthermore, full frames may be acquired continuously in embodiments of the invention, or a predetermined number of consecutive frames may be acquired, as indicated inFIG.6andFIG.7. In embodiments of the invention, each new full frame generally starts with a reset operation on the photoelectric elements of the pixel row the first time it is selected in the new frame.
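The conditional PML processing described above for the intermediate and the final subframe exposures can be summarised by the following per-pixel sketch; the function names, the strict comparisons and the optional compression hook are illustrative assumptions rather than the exact on-chip logic.

    # Hedged sketch of the PML decisions described above (per pixel, illustrative).
    def pml_intermediate(low_gain, buffered, t_lg, have_valid_data):
        """Non-final subframe: conditionally combine with the buffered partial sum."""
        if not have_valid_data:
            return low_gain                  # first subframe: direct write to memory
        if low_gain < t_lg:
            return buffered + low_gain       # combine with previously buffered data
        return buffered                      # above TLG: discard the current reading

    def pml_final(high_gain, low_gain, buffered, t_hg, compress=None):
        """Last subframe: keep high gain data or fall back to the low gain sum."""
        if high_gain < t_hg:
            return compress(high_gain, buffered) if compress else high_gain
        return buffered + low_gain           # THG exceeded: use low gain data only

    partial = pml_intermediate(low_gain=180, buffered=0, t_lg=1000, have_valid_data=False)
    print(pml_final(high_gain=420, low_gain=95, buffered=partial, t_hg=3000))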
In contrast toFIG.6, the moments in time tA and tB at which a delayed, second electronic rolling shutter starts stepping through the rows of the subarray before an earlier, first electronic shutter has ended, are both situated near the end of the frames and are separated only by the very short subframe exposure period. FIG.13shows a timing diagram for the quadruple subframe exposure operation (partial or full transfer mode) of an image sensor device according to the invention. Subframe exposures SF0to SF3are contiguous in time and have equal subframe-exposure periods. In the partial transfer mode, only the fourth subframe exposure SF3is read out with the high gain configuration of the pixels. The readout pipeline is analogous to the ones described hereinabove, e.g., as inFIG.6andFIG.7. In the above-described embodiments, the first and the second threshold (TLG and THG) preferably are programmable, e.g., by software or by user directly, as a function of overall exposure time for the image frame to be acquired and the number of subframes, but may be fixed values in other embodiments. The first and the second threshold (TLG and THG) are generally constant across the pixel array, but they do not have to be of the same value, e.g. TLG and THG can differ in magnitude, and they are typically independent of the potential barriers set by the partial and the complete charge transfer pulses applied to the pixel transfer gates. In the above-described embodiments, the readout, A/D conversion, digital data processing and pixel memory write stages were performed on entire rows of image sensor pixels. However, depending on the trade-off between speed, area, power consumption for an image sensor according to the invention, blocks of pixel-related data (e.g. reset levels and signal levels) do not necessarily have to correspond to pixel rows, but could be performed on pixel groups (e.g. a row segment or group of columns in a row, e.g. even/odd channels) or even on a pixel-by-pixel basis instead. In embodiments of the invention, a state machine may be used that triggers, for each unit time slot, the A/D conversion of pixel data as well as the processing of digitized pixel data in the parallelly running PML. Once triggered by the state machine, the A/D conversion unit may use its own clock, which typically is the fastest running clock of the components comprised by the control and readout circuitry. Other dedicated clocks with their specific clock speeds may be used in the components of the control and readout circuitry, e.g., a sequencer clock, a clock for regulating memory access, etc., which generally run at a lower speed than the clock of the A/D conversion unit, e.g. a factor 5 to 10 slower. It is also possible to operate the image sensor in a LOFIC-like mode (lateral overflow integration capacitor), in addition to the multiple subframe and partial readout operation already mentioned. In the LOFIC-like mode, the photocharge of an oversaturated photodiode spills over into the second charge accumulation element, or the combination of first and second charge accumulation element. The bias voltages of the transfer gate and the gain switch are set accordingly. In the LOFIC-like mode, especially adapted to high illumination conditions, the low gain channel of each pixel is read out twice: the overflow charge on the sense node is read out first (defining the overflow level), followed by a reset operation and reset reading of the sense node (e.g. 
voltage reset with respect to the connected first and second charge accumulation element) and another readout of the sense node in respect of the fully or partially transferred photocharge from the photosensitive element (defining the photodiode/signal level in the low gain channel). Correlated double sampling (CDS) is preferably performed for the photodiode level in the low gain channel, using the reset reading for pixel noise correction. Digital double sampling may be implemented for the overflow level, using again the reset reading for correction. Alternatively, a reset operation and corresponding reset reading may be performed at the start of each new image frame and used to implement true CDS for the overflow level. In the LOFIC-like mode, the low gain pixel data may first be compared to a threshold value for the low gain channel (TLG) before the pixel data is written to the pixel memory unit at the end of each but the final subframe exposure. If the low gain pixel data is lower than the threshold value TLG, then the overflow signal level is ignored, e.g. by setting it to zero to avoid noise or dark current contributions, or else if the low gain pixel data is greater than the threshold value TLG, then the overflow signal and the pixel signal for the low gain channel are summed directly and the partial sum for the respective subframe exposure is stored in the pixel memory unit. Direct summation is possible since the overflow signal and the low gain pixel signal are both obtained with respect to the same low conversion gain setting of the pixel. For the final subframe exposure only, the pixel is read out in the high gain channel, between the readout operations for the overflow signal and the low gain pixel signal respectively. Only if the high gain pixel data is less than a threshold value for the high gain channel (THG), is the high gain pixel data retained for the sensor output operation, e.g., sent off-chip separately. Otherwise, the high gain pixel data is ignored for the purpose of image sensor data output, or is transmitted in compressed format together with the low gain pixel data. As for the non-final subframe exposures, the low gain pixel data may first be compared to the threshold value for the low gain channel (TLG) also for the final subframe exposure, to decide whether the overflow signal should be added to the low gain pixel data, before adding the result to the partial sum read back from the pixel memory unit. In a second aspect, the present invention relates to an operating method of an image sensor according to the first aspect, which yields image frames with an increased dynamic range. The method exploits the fact that all but the last one subframe exposure of a plurality of subframe exposures, each having an exposure time smaller than a full frame period, can be read out partially and all the subframe exposures combined to effectively increase the full well capacity of the photoelectric elements and to limit the increase in the readout noise associated with multiple exposures. Moreover, the image sensor does not saturate even in high illumination conditions. In conjunction thereto, the conversion gain of the image sensor pixels is switched between a high conversion gain and a low conversion gain to obtain an optimum signal-to-noise ratio under either low-light or strong-light exposure conditions relative to each pixel.
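As a compact illustration of the LOFIC-like combination rule described above, the sketch below applies the TLG decision to the overflow reading and the low gain reading of one pixel; the names and numbers are assumptions made for illustration only.

    # Hedged sketch of the LOFIC-like combination rule described above
    # (per pixel, non-final subframe; names and values are illustrative).
    def lofic_low_gain_value(overflow_level, low_gain_signal, t_lg):
        if low_gain_signal < t_lg:
            overflow_level = 0     # ignore overflow to avoid noise/dark current
        return overflow_level + low_gain_signal   # same conversion gain: direct sum

    print(lofic_low_gain_value(overflow_level=240, low_gain_signal=1500, t_lg=1200))
    print(lofic_low_gain_value(overflow_level=240, low_gain_signal=300, t_lg=1200))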
Conventional image sensors using multiple exposures with a full transfer of the photocharge generated in the photoelectric element of the pixel require the floating diffusion node to have the same associated full well capacity (FWC) as the photoelectric element. This limits the FWC of the photoelectric element, e.g., PPD, if a pixel with good charge-to-voltage conversion gain is sought. In embodiments of the invention, inducing only a partial transfer of the generated photocharge from the photoelectric element to the charge accumulation elements overcomes this limitation and the FWC of the photoelectric element can be made larger than the FWC associated with the first charge accumulation element, and possibly also larger than the combined FWC associated with the first and second charge accumulation element. In addition thereto, embodiments of the invention limit the noise related to the multiple readout of each pixel by only transferring a portion of the generated photocharge from the photoelectric element to the charge accumulation element at the end of each one but the last subframe exposure and initiating a complete transfer of the remaining photocharge exclusively for the last subframe exposure. Whereas conventional methods relying on the addition of N subframe exposure readings (full transfer of the integrated photocharge) increase the resulting readout noise by a factor of sqrt(N), the readout noise (e.g. dark noise) occurs only once at the end of the last subframe exposure in embodiments of the invention, when the high conversion gain is applied to read out pixel signal levels under low-light conditions. Under such conditions the intermediately generated photocharge is not affected by the partial transfer operation. In case of higher illumination conditions, a part of the intermediately generated photocharge is transferred and converted in the low conversion gain setting of the pixel and consecutive readouts at low conversion gain are added to a final result. In such cases, the high gain path is not used, thus benefiting from the larger FWC associated with the low conversion gain setting of the pixel. FIG.8toFIG.10illustrate the inventive method for three different illumination conditions: low-light, high-light and oversaturating condition. For each of the three illumination conditions, the image sensor is operated accordingly to generate HDR image frames from multiple subframe exposures. For the purpose of illustration, only two subframe exposures of approximately equal exposure duration are assumed, but embodiments of the invention can use more than two subframe exposures. Likewise, the end of the first subframe exposure being programmed (through the rolling shutter sequences) to happen at or near the mid-point of the composite full frame exposure time (i.e., the sum of all subframe exposure times) is not essential to the invention. For instance, the ratio of a first and a second subframe exposure time can very well be 1:9, even for subframe exposures that are not contiguous in time. It is also noted that the (composite) full frame exposure time ‘Texp’ can be smaller than the full frame period, depending on the exposure settings of the image sensor. As shown in the preceding figures, a delayed reset signal can erase the photocharge in the photoelectric element at the start of the first subframe exposure, which can be delayed with respect to the start of the corresponding full frame time interval. Reference is now made to the low-light illumination conditions inFIG.8.
After an initial reset of the pixel photoelectric element (first vertical solid line at 0% of the frame exposure time Texp), the photocharge does not build up quickly enough to be affected by the partial transfer pulse applied at mid-exposure (second vertical solid line at about 50% of Texp) to the pixel transfer gate. As a result, none of the generated photocharge is transferred to the first and connected second charge accumulation element (e.g., floating diffusion node and additional gain capacitance) when the pixel is switched into the low gain configuration during readout. The high gain channel is not used for the first subframe. After correlated double sampling (that is, subtracting the pixel reset level from the pixel signal level prior to A/D conversion), the converted pixel data delivered at the output of the A/D conversion unit is therefore zero. This value is unconditionally written to the pixel memory, without any further processing by the PML. At the end of the second subframe exposure, the reset levels for the low gain and high gain settings of the pixel are read out. This is followed by a readout of the signal level in the high gain configuration after a complete transfer of the photocharge from the photoelectric element to the first charge accumulation element of the charge-voltage-converter has been performed (third vertical solid line at 100% of Texp). Then the conversion gain of the pixel is switched to the low gain channel and the signal level for the same photocharge, this time present on the first and second charge accumulation element, is determined. Correlated double sampling is applied both for the high gain channel and the low gain channel, to cancel the kTC noise of the respective gain channel, and the A/D conversion unit converts the reset noise-corrected signal levels (pixel data) for the high gain and the low gain into the digital domain. It is noted that all these readings happen in the same unit time slot and are performed in the following order: reset level for low gain, reset level for high gain, signal level for high gain, signal level for low gain. Next, the PML compares the pixel data relative to the high gain setting to a threshold value for the high gain, THG. If the pixel data is lower than the THG threshold, which is indeed the case for the low-light illumination condition depicted inFIG.8, the high gain pixel data is stored in the pixel memory and replaces the data currently stored therein. If, however, the high gain pixel data had exceeded the high gain threshold value THG, then the high gain pixel data would have been discarded and only the low gain data would have been taken into account. It follows that for the low-light illumination conditions referred to inFIG.8only the high gain pixel data is retained and stored as the final data to be output by the image sensor chip. In alternative embodiments of the invention, instead of discarding the high gain pixel data obtained for the last subframe if the second threshold value THG is exceeded, it is possible to combine the high gain pixel data with the low gain data into a single output word. For instance, a 13 bit wide data word for the low gain may be combined with a 14 bit wide data word for the high gain to obtain a single compressed output word, e.g., 16 bit wide. A level-dependent compression algorithm may internally amplify the low gain signals to approximately match the high conversion gain.
For very low illumination, only the high gain data is included in the compressed output word, whereas for very high illumination only the low gain data is included. In the intermediate region, e.g. the transition between very low and very high illumination, the number of bits of the high gain data that is retained in the compressed output data word is reduced step by step, while the number of significant bits of the low gain data in the output word is progressively increased. Moreover, inevitable pixel-to-pixel and sensor-to-sensor variations in the ratio between the high conversion gain and the low conversion gain, leading to differences between the high gain sample and the low gain sample for each pixel, can be taken into account by replacing shot noise dominated bits in the high gain data word by this difference information. The generation of compressed output words has the benefit that the total I/O bandwidth can be reduced (i.e. minimizing the amount of data to be sent off-chip), while the respective low gain and high gain image data is still available with regard to an enlarged range of illumination levels. It is also advantageous for a smooth signal transition from high gain image data to low gain image data in the externally (e.g. off-chip) reconstructed image frame (e.g. applying a corresponding decompression operation to the compressed output word). The threshold level THG can be relatively low compared to the full well capacity associated with the first charge accumulation element. As little as a few tens or a few hundreds of electrons may be left in the photoelectric element after a partial transfer for the noise on the pixel's signal level to be dominated by the shot noise limit, which justifies the readout in the low gain configuration of the pixel for stronger signal levels, but encourages the readout in the high gain configuration of the pixel for weaker signal levels at the end of the final subframe exposure. The threshold value THG may be provided externally by the user, programmed into the image sensor device by the user, or set to a default value, and it determines when to use the high gain pixel data or the low gain pixel data as the output. It generally reflects the measured or expected amount of shot noise, and possibly noise contributions from variations in the transfer pulse amplitude, above which applying a high readout gain does not lead to a significantly more advantageous signal-to-noise ratio as compared to the low readout gain. Finding a good value for the amplitude of the partial transfer pulses is typically the result of balancing two conflicting requirements: on the one hand, the integrated photocharge remaining in the photosensitive element after all partial transfers is preferably sufficiently large so as to be dominated by the intrinsic shot noise when the readout channel is switched to the low gain, but on the other hand, not too much of the integrated photocharge should remain in the photosensitive element after each non-final subframe exposure, in order to not overly limit the pixel's FWC during the following subframe exposure. Still with reference to the low-light illumination conditions depicted inFIG.8, an alternative way of providing pixel data at the image sensor output may comprise sending one full image frame of pixel data pertaining to the high gain readout channel and, independently, sending another full image frame of pixel data pertaining to the low gain readout channel.
The two image frames (high gain and low gain) may be combined off-chip, e.g. in an image blending unit (HDR synthesizer) of a camera comprising the image sensor or in another device. This provides the user with more flexibility when combining the low gain image and the high gain image into a single HDR image. In this alternative output format, the high gain pixel data is sent to the sensor's I/O circuitry directly and, therefore, is not stored in the pixel memory. The low gain pixel data currently stored in the pixel memory (preceding subframe exposure) is not replaced and is available for further processing thereof, e.g. for combination with the low gain pixel data of current (last) subframe exposure. Turning now to the bright-light illumination conditions under consideration inFIG.9, one notices that the photocharge generated up to mid-exposure, i.e. the end of the first subframe exposure, is affected by the partial transfer pulse that is delivered to the pixel transfer gate. As a result, after having read out the reset level relative to the first and second charge accumulation element when the pixel is configured to have a low conversion gain, a portion of the so far generated photocharge is transferred to the charge accumulation elements and the induced signal voltage is read out. Again, correlated double sampling is applied and the reset noise-corrected signal level is converted by the A/D conversion unit. The obtained digital value of the low gain pixel data is written to the pixel memory. At the end of the second subframe exposure, the complete transfer of the photocharge remaining in the photoelectric element is effected and the reset and signal levels of the pixel are read out as for the low-light conditions referred to inFIG.8. Furthermore, correlated double sampling is applied to obtain the high gain and low gain pixel data after A/D conversion. In the case of bright-light illumination conditions depicted inFIG.9, upon comparison of the high gain pixel data with the high gain threshold value THG by the PML, the THG threshold value is exceeded, which leads to the decision to discard the high gain pixel data. Therefore, only the low gain pixel data is taken into account by the PML. More specifically, the PML fetches the low gain pixel data relating to the preceding, first subframe from the pixel memory, adds the fetched, previous low gain pixel data and the currently supplied low gain pixel data, and writes the result of the addition back to the pixel memory. In embodiments of the invention, the threshold value for the high gain channel, THG, and the threshold value for the low gain channel, TLG, preferably are programmable values that can be changed by the user or are determined as a function of the number of subframes in each full frame, the pulse amplitude of the partial charge transfer pulses applied to the transfer gates, and the ratios between the individual subframe exposure times. In exemplary embodiments of the invention, the pixels' FWC associated with the high gain channel may be about 10 ke− and the FWC associated with the low gain channel may be about 40 ke−. Partial transfer pulses (TX) may be selected to leave a photocharge between 500 e− and 1000 e− in each of the pixels' photodiodes. Having regard to oversaturating illumination conditions,FIG.10illustrates that the generated photocharge exceeds the well capacity (upper horizontal line) associated with the photoelectric element, e.g.
the PPD, which causes the photocharge in excess to spill from the photoelectric element into the charge well associated with at least the first charge accumulation element of the pixel's buffered charge-voltage converter, e.g. connected first and second charge accumulation element. The presence of an overflow charge on the first charge accumulation element, or on the interconnected first and second charge accumulation element, may be detected for each pixel prior to the delivery of the reset pulse at the end of each subframe exposure. Hence, an oversaturation regime for one or more pixels of the image sensor can be detected. For the illumination conditions referred to inFIG.10, only the pixel row data (reset-noise corrected) relating to the low conversion gain setting of the pixel is used in all the subframe exposures, because the complete photocharge transfer from the saturated photoelectric element to the first charge accumulation element always causes the THG value to be exceeded. In consequence, only the pixel data relating to the low conversion gain configuration of the pixel is added by the PML for all the subframe exposures comprised by the full image frame, and the resulting partial or final sum is optionally compressed, before the intermediate or final data value is written into the pixel memory. Optionally, in the method described above, the overflow level of the photocharge-receiving charge accumulation element(s) is determined after each subframe exposure prior to the delivery of the reset pulse marking the start of sampling a new signal and/or reset level. Additional, unoccupied pixel memory locations and/or pixel memory locations marked as invalid for external readout may store the determined pixel overflow level in addition to the pixel signal level and/or pixel reset level. The so obtained and stored overflow level for each pixel may be processed by the PML too. For instance, the PML may fetch the cumulative overflow level relating to preceding subframe exposures from the pixel memory and add it to the overflow determined in respect of the current subframe exposure, e.g., a further intermediate subframe exposure or the final subframe exposure in an image frame. The cumulative overflow level after the final subframe exposure may then be added to the final low gain pixel data (e.g., cumulative sum of low gain pixel data over all the subframe exposures in the image frame). Alternatively, in the event that an overflow of photocharge is detected, the photoelectric element and the charge accumulation elements may simply be reset without processing and/or storing the specific overflow level. The detection of photocharge overflow events may still be useful to inform the user, or internal components of the image sensor, that an image frame was acquired under overexposure conditions. Although embodiments of the invention preferably implement correlated double sampling to reduce reset noise, the inventive method does not rely on correlated double sampling and the signal level may be sampled directly, without sampling a preceding reset level. In case that the overflow level of a pixel is also determined as described above, digital double sampling may be implemented to correct the overflow level for the residual reset noise present on the charge accumulation element(s) after the initial or each subsequent reset of the pixel in each image frame. Furthermore, more than two subframe exposures may take place during a full frame interval of the image sensor.
For instance, at least three subframe exposures may occur during the full frame interval, of which the first two consecutive subframe exposures are situated in the mid-portion relative to the cumulative exposure time across all subframes in one full frame. In such embodiments, the partial exposure time allocated to the second subframe exposure may be so short as to allow the capturing of very bright signals by the image sensor. In the exemplary embodiments described above, if the PPD as photoelectric element can store 40 ke− (full well capacity, FWC), then the FWC can be effectively increased to approximately 80 ke− (ignoring the determination of the overflow level, otherwise it would be approximately 160 ke−) over the duration of the cumulative (i.e. total or full) exposure time of the image frame consisting of two substantially equal subframe exposures, provided that only a very small percentage of the integrated photocharge remains in the PPD after the partial transfer. Under very bright illumination conditions even the extended FWC of about 80 ke− is insufficient to prevent the pixel output from saturating. In contrast thereto, if the second out of the three or more subframe exposures is chosen short relative to the full exposure time, e.g. about one eighth of the full exposure time of the image frame, then the pixel photocharge that can be collected during this relatively short, second subframe exposure will be larger than about 80/8=10 ke− and have an associated SNR of more than 40 dB, assuming similar bright illumination conditions as in the previous case of a double-exposure (i.e. beyond saturation level). As the FWC is not reached during the short, second subframe exposure, it is possible to extend the DR of the image sensor even further by selecting the pixel data originating from the short subframe exposure as the only pixel data relevant for the output. More specifically, the critical level for the photocharge integrated during the first subframe exposure, e.g., about 80*7/8/2=35 ke− in the present example, defines the low-gain threshold level TLG for the digitized pixel data above which saturation conditions are detected in respect of the first or last subframe exposure. Accordingly, the pixel data pertaining to the shortest subframe exposure period (i.e., the second in this case) is selected as the relevant output data if the low-gain threshold level TLG is exceeded. The ratio of subframe exposure periods can be accurately calculated in all embodiments of the invention and the acquired pixel data can thus be linearized. Embodiments of the invention in which a short intermediate subframe exposure is provided benefit from even higher DR values (e.g., a short exposure of about one eighth of the full exposure time adds approximately 18 dB of DR). Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like elements throughout. Although the terms “first,” “second,” “third,” etc., may be used herein to describe various elements, components, regions, layers, and/or sections, these terms are only used to distinguish one element, component, region, layer, or section, from another region, layer, or section. Thus, a first element, component, region, layer, or section, discussed below may be alternatively termed a second element, component, region, layer, or section, without departing from the scope of this disclosure. Additionally, when an element is referred to as being “on,” “connected to,” “coupled to,” or “adjacent to,” another element, the element may be directly on, connected to, coupled to, or adjacent to, the other element, or one or more other intervening elements may be present. FIG.1is a block diagram of an image processing system according to some example embodiments. Referring toFIG.1, an image processing system10may include an image sensor module100and an image processing device200. For example, the image processing system10may include and/or be included in, e.g., a personal computer, an Internet of things (IoT) device, and/or a portable electronic device. The portable electronic device may include, e.g., a laptop computer, a cellular phone, a smartphone, a tablet personal computer (PC), a personal digital assistant (PDA), an enterprise digital assistant (EDA), a digital still camera, a digital video camera, an audio device, a portable multimedia player (PMP), a personal navigation device (PND), an MP3 player, a handheld game console, an e-book, a wearable device, and/or the like. The image processing system10may be mounted on and/or in an electronic device, such as a drone and/or an advanced driver's assistance system (ADAS); and/or on (and/or in) an electronic device provided as a component of an electronic vehicle, furniture, a manufacturing facility, a door, and/or various kinds of measuring equipment. The image sensor module100may sense an object as an image, process the sensed image and/or store the sensed image in a memory, and store the processed image in the memory. In some embodiments, the image sensor module100may include an image sensor110, a memory120, a signal processor130, and an interface140. In some example embodiments, the image sensor module100may be implemented in and/or include a plurality of semiconductor chips. However, embodiments are not limited thereto, and the image sensor module100may be implemented in and/or include a single semiconductor chip. The image sensor module100may capture an external object and generate image data. For example, the image sensor110of the image sensor module100may convert an optical signal of an object, which is incident through a lens LS, into an electrical signal. The image sensor110may include a pixel array, in which a plurality of pixels are arranged in two (or more) dimensions, and output image data, which includes a plurality of pixel values corresponding to the respective pixels of the pixel array. The pixel array may include a plurality of row lines, a plurality of column lines, and a plurality of pixels arranged in a matrix, each of the pixels being connected to one of the row lines and one of the column lines. Each of the pixels may include at least one photoelectric conversion element (and/or photosensitive device). 
The photoelectric conversion element may sense light and convert the light into a photocharge. For example, the photoelectric conversion element may include a photosensitive device, such as an inorganic photodiode, an organic photodiode, a Perovskite photodiode, a phototransistor, a photogate, a pinned photodiode, which includes an organic or inorganic material, and/or the like. In some example embodiments, each of the pixels may include a plurality of photoelectric conversion elements. In some example embodiments, image data generated by the image sensor110may include raw image data, which includes a plurality of pixel values resulting from analog-to-digital conversion of a plurality of pixel signals output from the pixel array, and/or image data obtained by pre-processing the raw image data. In some example embodiments, the image sensor110may include a drive and read circuit, which controls the pixel array and converts pixel signals received from the pixel array into pixel values. For example, the drive and read circuit may include a row driver, a readout circuit, a ramp signal generator, a timing controller, and/or the like. The drive and read circuit may generate raw image data including pixel values corresponding to the received pixel signals. In some example embodiments, the image sensor110may further include a processing logic, which pre-processes raw image data. The image sensor110may transmit the raw image data and/or pre-processed image data to the memory120and/or the signal processor130. The memory120may include a memory bank122, a processor-in-memory (PIM) circuit124, and a control logic126. The memory bank122may include a plurality of banks Bank1 through BankN. Each of the banks Bank1 through BankN may include a memory cell array including a plurality of memory cells. A bank may be variously defined. For example, a bank may be defined as a configuration including memory cells and/or a configuration including memory cells and at least one peripheral circuit. The memory120may store image data generated by the image sensor110and/or image data processed by the signal processor130. In some example embodiments, the memory bank122may store the image data received from the image sensor110and/or the signal processor130in at least one of the banks Bank1 through BankN. The memory bank122may read image data stored therein under the control of the image sensor module100and transmit the image data to the signal processor130and/or the interface140. The memory120may perform a calculation process on the image data, e.g., received from the image sensor110and/or the image data stored therein, using the PIM circuit124. In some example embodiments, the PIM circuit124may perform calculation processes related with various kinds of image processing operations using processing elements PEs. In some example embodiments, the PIM circuit124may perform various image processing operations, such as an operation using, e.g., an image enhancement algorithm, a classification operation, and/or a segmentation operation, e.g., on image artifacts of image data. For example, operations using an image enhancement algorithm may include white balancing, de-noising, de-mosaicking, re-mosaicking, lens shading, and/or gamma correction, but are not limited thereto.
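As one hypothetical example of the image enhancement operations listed above, the sketch below applies per-channel white balance gains to a small tile of pixel data; the function, the gain values and the tile layout are assumptions made for illustration and are not taken from the disclosed PIM implementation.

    # Hypothetical white balancing kernel of the kind a processing-in-memory
    # circuit might run on image data held in a bank (illustrative only).
    def white_balance(tile, gains):
        """tile: list of (r, g, b) pixels; gains: (r_gain, g_gain, b_gain)."""
        r_g, g_g, b_g = gains
        return [(min(int(r * r_g), 255), min(int(g * g_g), 255), min(int(b * b_g), 255))
                for r, g, b in tile]

    print(white_balance([(100, 120, 90), (200, 180, 160)], gains=(1.2, 1.0, 1.4)))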
According to some example embodiments, the PIM circuit 124 may perform, as an example image processing operation, a pattern density detection operation, in which the pattern of a plurality of image regions of image data is analyzed and pattern density data of the image data is generated (a toy per-region density computation is sketched below). The PIM circuit 124 may additionally or alternatively perform an optical flow detection operation, in which a plurality of frames of image data are analyzed and optical flow data indicating the time-sequential motion of an object between the frames is generated. In some example embodiments, the image processing operation may be implemented by neural network-based tasks, and the PIM circuit 124 may perform at least some neural network-based calculation processes. For example, in some embodiments, the PIM circuit 124 may include (and/or be included in) a neural network processing unit (NPU). A neural network may include a neural network model based on at least one selected from an artificial neural network (ANN), a convolution neural network (CNN), a region with CNN (R-CNN), a region proposal network (RPN), a recurrent neural network (RNN), a stacking-based deep neural network (S-DNN), a state-space dynamic neural network (S-SDNN), a deconvolution network, a deep belief network (DBN), a restricted Boltzmann machine (RBM), a fully convolutional network, a long short-term memory (LSTM) network, a classification network, a plain residual network, a dense network, a hierarchical pyramid network, and/or the like. However, the kinds of neural network models are not limited to those above. A method, performed by the PIM circuit 124, of performing a neural network-based calculation process will be described in detail with reference to FIG. 2. In some example embodiments, the processing elements PEs of the PIM circuit 124 may read image data from the banks Bank1 through BankN of the memory bank 122 and perform an image processing operation (e.g., at least one of the image processing operations described above) on the image data. The memory 120 may store image data, on which a calculation process has been performed using the PIM circuit 124, and/or calculation data, which has been generated through the calculation process, in the memory bank 122. The memory 120 may also provide the image data that has undergone a calculation process and/or the calculation data to the signal processor 130. The memory 120 may also output the image data that has undergone a calculation process and/or the calculation data to an external device of the image sensor module 100 through the interface 140. The control logic 126 may control the memory bank 122 and the PIM circuit 124. In some example embodiments, the control logic 126 may decode a command and an address, which are provided to the memory 120, and control the memory bank 122 and the PIM circuit 124 such that a memory operation is performed according to a decoding result. For example, commands provided to the memory 120 may include a command related to a memory operation, such as a data write or read operation, and a command related to a calculation operation. The control logic 126 may include (and/or be included in) processing circuitry, such as hardware including logic circuits; a hardware/software combination such as a processor executing software; and/or a combination thereof.
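The pattern density detection operation mentioned above is characterized only functionally (per-region analysis of how much pattern each image region contains). The following sketch is one toy way to compute such a per-region density map, assuming that "density" can be approximated by the mean gradient magnitude per block; the block size and the gradient-based definition are assumptions, not the disclosed algorithm.

```python
import numpy as np

def pattern_density_map(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Toy pattern-density detector: mean gradient magnitude per image region.

    Assumption: "pattern density" is approximated by how much local detail
    (gradient energy) each block-sized image region contains.
    """
    gy, gx = np.gradient(image.astype(np.float32))          # row-wise and column-wise gradients
    grad_mag = np.abs(gx) + np.abs(gy)
    h, w = image.shape
    hb, wb = h // block, w // block
    regions = grad_mag[:hb * block, :wb * block].reshape(hb, block, wb, block)
    return regions.mean(axis=(1, 3))                         # one density value per image region
```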
The control logic126may control the memory bank122to perform a memory operation of writing or reading data on a storage region corresponding to an address according to the decoding result and/or control the PIM circuit124to perform a calculation operation based on data written to a storage region corresponding to an address. The memory120may include dynamic random access memory (DRAM) such as double data rate (DDR) synchronous DRAM (SDRAM), low power DDR (LPDDR), synchronous DRAM (SDRAM), graphics DDR (GDDR), Rambus DRAM (RDRAM), and/or the like. However, example embodiments are not limited thereto. For example, a memory device may include non-volatile memory such as flash memory, magnetic RAM (MRAM), ferroelectric RAM (FeRAM), phase-change RAM (PRAM), and/or resistive RAM (ReRAM). The memory120may correspond to a single semiconductor chip and/or may correspond to a single channel in a memory device, which includes a plurality of channels each having an independent interface. The memory120may correspond to a memory module and/or correspond to a single memory chip mounted on a module board when a memory module may include a plurality of chips. The signal processor130may perform a calculation process on image data received from the image sensor110and/or the memory120. For example, the signal processor130may include a central processing unit (CPU), a microprocessor, and/or a microcontroller unit (MCU). In some example embodiments, the signal processor130may perform calculation processes related with various kinds of image processing operations. For example, like the PIM circuit124of the memory120, the signal processor130may perform various image processing operations, such as white balancing, de-noising, de-mosaicking, re-mosaicking, lens shading, gamma correction, a classification operation, a segmentation operation, and/or the like. For example, in some example embodiments, the signal processor130may receive image data, on which a certain image processing operation has been performed by the memory120, and perform other image processing operations on the received image data. For example, the signal processor130may receive image data, on which de-noising has been performed by the memory120, and perform at least one selected from white balancing, de-mosaicking, re-mosaicking, lens shading, gamma correction, a classification operation, and/or a segmentation operation on the received image data. In some embodiments, the signal processor130may receive image data from the image sensor110and perform various image processing operations on the image data. Thereafter, the signal processor130may transmit processed image data to the memory120. The memory120may store the image data received from the signal processor130. The image sensor module100may output image data through the interface140. For example, in some example embodiments, the interface140may output image data stored in the memory120and/or image data processed by the signal processor130. The image sensor module100may also output calculation data, which results from a calculation operation of the memory120, through the interface140. For example, the interface140may include a mobile industry processor interface (MIPI) based camera serial interface (CSI). The kind of the interface140is not limited thereto and may be implemented according to various protocol standards. The image processing device200may include an interface210and a processor220. 
The image processing device 200 may receive image data and/or calculation data from the image sensor module 100 through the interface 210. For example, the interface 210 may be connected to and/or configured to communicate with the interface 140. Like the interface 140, the interface 210 may include a MIPI but is not limited thereto. The image processing device 200 may store the image data and/or the calculation data in a memory (not shown). The processor 220 may perform various image processing operations, for example, based on the image data and/or the calculation data received through the interface 210. According to the present example embodiment, as an example of an image processing operation, the processor 220 may perform object detection on at least one object included in an image and/or perform segmentation thereon for object detection. For example, in some example embodiments, the processor 220 may receive pattern density data and optical flow data from the image sensor module 100 together with raw image data and/or pre-processed image data. The processor 220 may detect and/or segment an object included in an image by analyzing the image data using the pattern density data and the optical flow data. The processor 220 may include and/or be included in various calculation processing devices such as a CPU, a graphics processing unit (GPU), an application processor (AP), a digital signal processor (DSP), a field-programmable gate array (FPGA), a neural network processing unit (NPU), an electronic control unit (ECU), an image signal processor (ISP), and/or the like. According to the present example embodiment, the image sensor module 100 may increase the calculation speed of an image processing operation by performing the calculation process using the memory 120. For example, since the bandwidth between the memory bank 122 and the PIM circuit 124 in the memory 120 is usually higher than the bandwidth between the memory 120 and the signal processor 130, the calculation speed may be increased when the calculation process is performed using the memory 120. When the calculation speed is increased, a neural network calculation having more layers may be performed during the same time as before. Accordingly, the accuracy of the calculation operation of the image sensor module 100 may be increased. For example, in some example embodiments, the image sensor module 100 provides pattern density data and optical flow data, which are generated using the memory 120, to the image processing device 200; and the image processing device 200 performs object detection based on the pattern density data and the optical flow data received together with image data. Accordingly, the calculation speed of the object detection operation of the image processing system 10 may be increased. In the embodiment of FIG. 1, the PIM circuit 124 may include various numbers and arrangements of processing elements PEs. For example, each processing element may be arranged in correspondence to one bank or to at least two banks. Although it is illustrated in FIG. 1 that the memory bank 122 is separated from the PIM circuit 124 for ease of understanding and illustration, the memory bank 122 and the PIM circuit 124 may at least partially be merged with each other. An example of this will be described in detail with reference to FIG. 11. FIG. 2 illustrates an example of a neural network structure. The PIM circuit 124 in FIG. 1 may be applied to the implementation of at least part of the structure of a neural network NN of FIG. 2.
Referring to FIG. 2, the neural network NN may include a plurality of layers, for example, first through n-th layers L1 through Ln. The neural network NN with such a multilayer architecture may be referred to as a deep neural network (DNN) and/or a deep learning architecture. Each of the first through n-th layers L1 through Ln may include a linear layer and/or a non-linear layer. In some example embodiments, at least one linear layer may be combined with at least one non-linear layer, thereby forming a single layer. For example, the linear layers may include a convolution layer and a fully-connected layer, and the non-linear layers may include a pooling layer and an activation layer. For example, the first layer L1 may correspond to a convolution layer, the second layer L2 may correspond to a pooling layer, and the n-th layer Ln may correspond to a fully-connected layer as an output layer. The neural network NN may further include an activation layer and/or may further include other layers performing other kinds of calculations. Each of the first through n-th layers L1 through Ln may receive, as an input feature map, an image frame and/or a feature map generated in a previous layer and may generate an output feature map and/or a recognition signal REC by performing a calculation on the input feature map. At this time, the feature map refers to data which represents various features of input data. For example, first through n-th feature maps FM1, FM2, FM3, and FMn may have a two-dimensional and/or a three-dimensional matrix (e.g., a tensor) form, which includes a plurality of feature values. The first through n-th feature maps FM1 through FMn may have a width W (e.g., a column), a height H (e.g., a row), and a depth D, which may respectively correspond to the x-axis, the y-axis, and the z-axis in a coordinate system. At this time, for example, the depth D may be referred to as the number of channels. The first layer L1 may generate the second feature map FM2 by performing a convolution on the first feature map FM1 and a weight map WM. The weight map WM may have a two-dimensional and/or three-dimensional matrix form including a plurality of weights. The weight map WM may be referred to as a filter and/or a kernel. The weight map WM may filter the first feature map FM1. The depth (e.g., the number of channels) of the weight map WM may be the same as the depth (e.g., the number of channels) of the first feature map FM1. A convolution may be performed on the same channels in both the weight map WM and the first feature map FM1. The weight map WM may be shifted on the first feature map FM1 by traversing the first feature map FM1 using a sliding window. During a shift, each weight included in the weight map WM may be multiplied by the feature values in the area where the weight map WM overlaps the first feature map FM1, and the products may be added. One channel of the second feature map FM2 may be generated by performing a convolution on the first feature map FM1 and the weight map WM. Although only one weight map WM is shown in FIG. 2, a plurality of weight maps WM may be convolved with the first feature map FM1 so that a plurality of channels of the second feature map FM2 may be generated. For example, the number of channels of the second feature map FM2 may correspond to the number of weight maps. The second layer L2 may generate the third feature map FM3 by changing a spatial size of the second feature map FM2 through pooling. The pooling may be referred to as sampling and/or downsampling.
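The sliding-window convolution of layer L1 described above can be summarized with a short sketch. It is a single-channel, "valid"-padding simplification in Python/NumPy that ignores the channel depth D and the case of multiple weight maps; the function name is illustrative.

```python
import numpy as np

def conv2d_single_channel(fm1: np.ndarray, wm: np.ndarray) -> np.ndarray:
    """Slide the weight map WM over FM1; at each position, multiply the weights
    by the overlapped feature values and sum the products into one output value."""
    h, w = fm1.shape
    kh, kw = wm.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(fm1[i:i + kh, j:j + kw] * wm)
    return out

fm1 = np.random.rand(8, 8).astype(np.float32)     # one channel of FM1
wm = np.random.rand(3, 3).astype(np.float32)      # one 3x3 weight map (kernel)
fm2_channel = conv2d_single_channel(fm1, wm)      # one channel of FM2
```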
A two-dimensional pooling window PW may be shifted on the second feature map FM2 in units of the size of the pooling window PW, and a maximum value among feature values (or an average of the feature values) in an area, in which the pooling window PW overlaps the second feature map FM2, may be selected. As such, the third feature map FM3 may be generated by changing the spatial size of the second feature map FM2. The number of channels of the third feature map FM3 may be the same as the number of channels of the second feature map FM2. The n-th layer Ln may combine features of the n-th feature map FMn and categorize a class CL of the input data. The n-th layer Ln may also generate the recognition signal REC corresponding to the class CL. The structure of the neural network NN is not limited to the example described above. Some of the first through n-th layers L1 through Ln may be omitted from the neural network NN, or an additional layer may be added to the neural network NN. According to some example embodiments, the processing elements PEs of the PIM circuit 124 in FIG. 1 may constitute and/or correspond to at least one of the convolution layer, the fully-connected layer, the pooling layer, and the activation layer of the neural network NN. For example, some of the processing elements PEs of the PIM circuit 124 may be configured to perform a convolution calculation on image data read from the banks Bank1 through BankN, and some of the processing elements PEs of the PIM circuit 124 may be configured to perform a pooling calculation on a convolution result. According to some example embodiments, the processing elements PEs of the PIM circuit 124 may be embodied as a neural network model, which is trained to perform a pattern density detection operation and/or an optical flow detection operation. FIG. 3 is a block diagram illustrating the configuration of a memory, according to some example embodiments. In some example embodiments, the memory 120 of FIG. 3 may correspond to the memory 120 in FIG. 1. Referring to FIG. 3, the memory bank 122 may transmit image data IDT to the PIM circuit 124. In some example embodiments, the memory bank 122 may receive image data IDT (e.g., from the image sensor 110 in FIG. 1), divide the image data IDT (e.g., into a certain size), and store the image data IDT (e.g., in at least one of the banks Bank1 to BankN). The memory bank 122 may read the image data IDT from at least one of the banks Bank1 through BankN and transmit the image data IDT to the PIM circuit 124. In some example embodiments, the PIM circuit 124 may include a first processing element group PEG1 performing a first calculation operation and a second processing element group PEG2 performing a second calculation operation. Each of the first processing element group PEG1 and the second processing element group PEG2 may include at least one processing element PE. In some example embodiments, the first calculation operation and the second calculation operation may correspond to image processing operations on the image data IDT. The first processing element group PEG1 may generate first calculation data ODT1 by performing the first calculation operation based on the image data IDT. The second processing element group PEG2 may generate second calculation data ODT2 by performing the second calculation operation based on the image data IDT.
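Returning to the pooling of layer L2 described above (the pooling window PW shifted in units of its own size, keeping the maximum or average feature value in each window), a minimal sketch of non-overlapping max pooling follows; average pooling would simply replace `max` with `mean`.

```python
import numpy as np

def max_pool(fm2: np.ndarray, pw: int = 2) -> np.ndarray:
    """Shift a pw x pw pooling window over FM2 in steps of pw and keep the maximum value."""
    h, w = (fm2.shape[0] // pw) * pw, (fm2.shape[1] // pw) * pw   # drop edge remainders
    blocks = fm2[:h, :w].reshape(h // pw, pw, w // pw, pw)
    return blocks.max(axis=(1, 3))

fm2 = np.random.rand(8, 8).astype(np.float32)
fm3 = max_pool(fm2)       # spatial size reduced; the number of channels is unchanged
```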
In some example embodiments, each of the first processing element group PEG1 and the second processing element group PEG2 may be embodied as a neural network module performing the first and/or second calculation operations. In some example embodiments, the first calculation data ODT1 and the second calculation data ODT2 may be stored in the memory bank 122 and/or in a device external to the memory 120 (e.g., the image processing device 200 in FIG. 1). According to some example embodiments, the first processing element group PEG1 may be embodied as a neural network module, which is trained to perform an optical flow detection operation as the first calculation operation. The first processing element group PEG1 may generate optical flow data as the first calculation data ODT1 by performing an optical flow detection operation on the image data IDT. In some embodiments, the optical flow detection operation needs a plurality of frames of the image data IDT. Accordingly, the image sensor module 100 in FIG. 1 may store a plurality of frames of the image data IDT, which are generated at different time points, in the memory bank 122; and the first processing element group PEG1 may receive the frames of the image data IDT from the memory bank 122 and perform an optical flow detection operation. The second processing element group PEG2 may be embodied as a neural network module, which is trained to perform a pattern density detection operation as the second calculation operation. The second processing element group PEG2 may generate pattern density data as the second calculation data ODT2 by performing a pattern density detection operation on the image data IDT. Although it is illustrated in FIG. 3 that the memory bank 122 is separated from the PIM circuit 124 for ease of understanding, the memory bank 122 and the PIM circuit 124 may at least partially be merged with each other. An example of this will be described in detail with reference to FIG. 11. FIG. 4A is a block diagram for describing operations of an image sensor module and an image processing device, according to some example embodiments. The image sensor module 100 and the image processing device 200 in FIG. 4A may respectively correspond to the image sensor module 100 and the image processing device 200 in FIG. 1. Referring to FIG. 4A, the image sensor module 100 may include an optical flow module 125, which corresponds to a neural network model trained to perform an optical flow detection operation, and a pattern density module 126, which corresponds to a neural network model trained to perform a pattern density detection operation. The optical flow module 125 may correspond to the first processing element group PEG1 in FIG. 3, and the pattern density module 126 may correspond to the second processing element group PEG2 in FIG. 3. The optical flow module 125 may receive the image data IDT from the memory bank 122 and generate optical flow data OF by performing an optical flow detection operation on the image data IDT. The pattern density module 126 may receive the image data IDT from the memory bank 122 and generate pattern density data PD by performing a pattern density detection operation on the image data IDT. The image sensor module 100 may transmit the image data IDT, the optical flow data OF, and the pattern density data PD to the image processing device 200. Referring to FIG. 4A, the processor 220 of the image processing device 200 may include a depth information module 222 generating depth information and an object detection module 224 performing object detection.
As a non-limiting example, the depth information module222and the object detection module224may be embodied as neural network models. The depth information module222may generate depth information DI regarding the image data IDT based on the optical flow data OF and the pattern density data PD, which are received from the image sensor module100. For example, the depth information DI may include a depth value corresponding to a distance from the image sensor110to an object included in the image data IDT. In some example embodiments, the depth information module222may generate the depth information DI regarding the image data IDT based on a characteristic by which the farther the object, the higher the pattern density and a characteristic by which the farther the object, the less the position change over time. The depth information module222may provide the depth information DI to the object detection module224. The object detection module224may receive the image data IDT and the optical flow data OF from the image sensor module100and the depth information DI from the depth information module222. The object detection module224may perform object detection on the image data IDT, e.g., based on the optical flow data OF and the depth information DI. The object detection module224may generate object information OI (which may include various kinds of information about an object detected in the image data IDT) as an object detection result. For example, the object information OI may include three-dimensional (3D) information (which may include a 3D bounding box surrounding an object), the shape of the object, a distance to the object, and/or the position of the object; and/or two-dimensional (2D) information including, e.g., an edge of the object. A method, performed by the object detection module224, of generating the object information OI will be described in detail with reference toFIGS.7through10below. Although it is described that the object detection module224performs object detection based on the image data IDT, the optical flow data OF, and the depth information DI in the embodiment ofFIG.4A, other various kinds of information (e.g., the pattern density data PD) may be additionally considered when the object detection is performed. Although it is illustrated and described that the processor220includes both the depth information module222and the object detection module224in the embodiment ofFIG.4A, embodiments are not limited thereto. For example, the image processing device200may include the processor220including the depth information module222and a separate (e.g., second) processor including the object detection module224. FIG.4Bis a block diagram for describing operations of an image sensor module and an image processing device, according to some example embodiments.FIG.4Billustrates a modified embodiment ofFIG.4A. Hereinafter, redundant descriptions given above with reference toFIG.4Aare omitted. Referring toFIG.4B, a processor220aof an image processing device200amay include an object detection module224aconfigured to perform an object detection operation. The object detection module224amay receive the image data IDT, the optical flow data OF, and the pattern density data PD from the image sensor module100. The object detection module224amay perform object detection on the image data IDT based on the optical flow data OF and/or the pattern density data PD. The object detection module224amay generate the object information OI regarding the image data IDT. 
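Returning to the two cues the depth information module 222 of FIG. 4A is described as relying on (a farther object shows a higher pattern density and a smaller position change over time), the sketch below combines exactly those cues in the simplest possible way. The equal weighting, the normalization, and the per-pixel formulation are illustrative assumptions, not the disclosed method.

```python
import numpy as np

def estimate_relative_depth(optical_flow: np.ndarray, pattern_density: np.ndarray) -> np.ndarray:
    """Toy relative-depth estimate from the two cues described above.

    optical_flow:    H x W x 2 array of per-pixel (dx, dy) motion between frames.
    pattern_density: H x W array of per-pixel (upsampled per-region) density values.
    Larger output values stand for farther points (higher depth values).
    """
    flow_mag = np.linalg.norm(optical_flow, axis=-1)
    motion_cue = 1.0 / (flow_mag + 1e-6)                  # less motion over time -> farther
    density_cue = pattern_density.astype(np.float32)      # higher pattern density -> farther
    motion_cue /= motion_cue.max()
    density_cue /= max(float(density_cue.max()), 1e-6)
    return 0.5 * motion_cue + 0.5 * density_cue           # relative depth in [0, 1]
```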
An example method of generating the object information OI will be described in detail with reference toFIGS.7through10below. InFIGS.4A and4B, each of the depth information module222and the object detection module224and/or224amay be implemented by firmware and/or software and may be loaded to a memory (not shown) of the image processing device200and/or200aand then executed by the processor220or220a. However, embodiments are not limited thereto. Each of the depth information module222and the object detection module224or224amay be implemented by hardware or a combination of software and hardware. As described above, the image sensor module100may generate the optical flow data OF and the pattern density data PD with respect to the image data IDT using the memory120that performs a calculation process. The image processing device200and/or200amay perform object detection based on the image data IDT, the optical flow data OF, and/or the pattern density data PD, which are received from the image sensor module100. FIG.5is a block diagram for describing operations of a plurality of image sensor modules and an image processing device, according to some example embodiments. The example embodiments illustrated inFIG.5are based on a modified embodiment ofFIG.4A. Hereinafter, redundant descriptions given above with reference toFIG.4Aare omitted. According to some example embodiments, there may be a plurality of image sensor modules. For example, referring toFIG.5, there may be a first image sensor module100_1and a second image sensor module100_2. In the example embodiments, the first and second image sensor modules100_1and100_2may be adjacent to each other and capture images in similar directions to each other and may be referred to as a stereo camera. For example, the first image sensor module100_1may capture a left-eye image and the second image sensor module100_2may capture a right-eye image. Each of the first and second image sensor modules100_1and100_2may be the same as and/or similar to the image sensor module100inFIG.4A. In some example embodiments, the first image sensor module100_1may include a first image sensor110_1and a first PIM circuit120_1. The first image sensor110_1may generate first image data IDT1, and the first PIM circuit120_1may generate first optical flow data OF1 and first pattern density data PD1 based on the first image data IDT1. The second image sensor module100_2may include a second image sensor110_2and a second PIM circuit120_2. The second image sensor110_2may generate second image data IDT2, and the second PIM circuit120_2may generate second optical flow data OF2 and second pattern density data PD2 based on the second image data IDT2. Referring toFIG.5, a processor220bof an image processing device200bmay include a depth information module222band an object detection module224b. The depth information module222bmay receive the first and second optical flow data OF1 and OF2 and the first and second pattern density data PD1 and PD2 from the first and second image sensor modules100_1and100_2and generate the depth information DI based on the received data. For example, the first optical flow data OF1 and the first pattern density data PD1 may correspond with a left-eye image (e.g., the first image data IDT1), and the second optical flow data OF2 and the second pattern density data PD2 correspond with a right-eye image (e.g., the second image data IDT2). The depth information module222bmay generate the depth information DI. 
For example, the depth information module 222b may determine a difference between the left-eye image and the right-eye image, wherein the difference decreases the farther the object is from the first and second image sensor modules 100_1 and 100_2. The depth information module 222b may provide the depth information DI to the object detection module 224b. The object detection module 224b may receive image data (e.g., the first image data IDT1) and/or optical flow data (e.g., the first optical flow data OF1) from at least one of the first image sensor module 100_1 and/or the second image sensor module 100_2; and the depth information DI from the depth information module 222b. The object detection module 224b may generate the object information OI by performing object detection on the first image data IDT1 based on the first optical flow data OF1 and the depth information DI. However, the example embodiments are not limited thereto. For example, the object detection module 224b may receive the second image data IDT2 and the second optical flow data OF2 from the second image sensor module 100_2 and the depth information DI from the depth information module 222b and perform object detection. FIGS. 6A and 6B are diagrams for describing a method of generating optical flow data according to a shooting mode of an image sensor module, according to some example embodiments. The following description of FIGS. 6A and 6B refers to the PIM circuit 124 of the memory 120 in FIG. 1 together with the optical flow module 125 in FIG. 4A. In some example embodiments, the image sensor module 100 may provide the functions of a normal shooting mode and/or a burst shooting mode. In the normal shooting mode, an object is shot in a reference time unit and image data is generated. In the burst shooting mode, an object may be shot multiple times in succession in the reference time unit and a plurality of pieces of image data are generated. Referring to FIG. 6A, when the image sensor module 100 operates in the normal shooting mode, a single frame of image data may be generated in the reference time unit (e.g., every 1/60 second). For example, an image sensor (e.g., the image sensor 110 of FIG. 1 and/or the first and second image sensors 110_1 and 110_2 of FIG. 5) may capture a plurality of frames (e.g., at least the first through fourth frames Frame1 through Frame4), and the PIM circuit 124 may generate the first optical flow data OF1 and the second optical flow data OF2 using, e.g., the first through fourth frames Frame1 through Frame4. For example, when the PIM circuit 124 performs an optical flow detection operation using three frames, the PIM circuit 124 may generate the first optical flow data OF1 using the first through third frames Frame1 through Frame3 and generate the second optical flow data OF2 using the second through fourth frames Frame2 through Frame4. For example, in some example embodiments, it may take three-sixtieths (3/60) of a second (i.e., 1/20 of a second) for the PIM circuit 124 to generate a piece of optical flow data. Referring to FIG. 6B, when the image sensor module 100 operates in the burst shooting mode, a plurality of (e.g., three) frames of image data may be generated in the reference time unit (e.g., every 1/60 second). The PIM circuit 124 may generate first through fourth optical flow data OF1 through OF4 using the plurality of frames (e.g., first through twelfth frames Frame1 through Frame12). For example, the PIM circuit 124 may perform an optical flow detection operation using three frames.
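The three-frame sliding window described above (Frame1 through Frame3 yielding OF1, Frame2 through Frame4 yielding OF2, and so on) can be made explicit with a short sketch; the helper name and the use of frame labels instead of real image buffers are illustrative.

```python
from typing import List, Sequence, Tuple

def frame_windows(frames: Sequence[str], window: int = 3) -> List[Tuple[str, ...]]:
    """Group captured frames into overlapping windows, one window per optical flow result."""
    return [tuple(frames[i:i + window]) for i in range(len(frames) - window + 1)]

# Normal shooting mode: one new frame per 1/60 s, hence one new window per 1/60 s,
# but each window still spans 3/60 s (= 1/20 s) of capture time.
print(frame_windows(["Frame1", "Frame2", "Frame3", "Frame4"]))
# [('Frame1', 'Frame2', 'Frame3'), ('Frame2', 'Frame3', 'Frame4')]
```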
For example, it may take 1/60 of a second for the PIM circuit124to generate a piece of optical flow data. As described above, when the image sensor module100supports the function of the burst shooting mode, the calculation speed of the optical flow detection operation may be increased. In addition, a neural network model (e.g., the optical flow module125inFIG.4A) trained to perform an optical flow detection operation may be embodied in a simpler structure as the difference between pieces of image data input to the neural network model decreases. Accordingly, when optical flow detection is performed using a plurality of frames shot in the burst shooting mode, the optical flow module125inFIG.4A, including a simpler structure, may be included in the PIM circuit124. Although it has been illustrated and described that the PIM circuit124performs an optical flow detection operation using three frames in the embodiments ofFIGS.6A and6B, embodiments are not limited thereto. For example, the PIM circuit124may perform an optical flow detection operation using fewer or more than three frames. FIG.7is a block diagram illustrating the configuration of an object detection module, according to some example embodiments. For example, an object detection module300ofFIG.7may be the same as or similar to the object detection module224,224a, and/or224binFIGS.4A,4B, and/or5. Referring toFIG.7, the object detection module300may include a pre-processor310, a mask generator320, a masking unit330, a feature extractor340, and a detector350. The pre-processor310may receive and down sample the image data IDT and generate and output a plurality of pyramid images PI. The pre-processor310may generate a first pyramid image by down sampling the width and length of the image data IDT by a certain (and/or otherwise determined) factor, and a second pyramid image may be generated by down sampling the first pyramid image by the certain factor. For example, the pre-processor310may generate the pyramid images PI, which are derived from the image data IDT and have sizes gradually reduced from the size of the image data IDT. The mask generator320may receive the depth information DI, and generate (and output) a plurality of pieces of mask data MK for the pyramid images PI based on the depth information DI. The mask data MK may be used to mask the remaining region of an image excluding a meaningful region. In an example embodiment, the meaningful region of each pyramid image PI may be different according to resolution. For example, relatively high resolution is required to detect a distant object, whereas relatively low resolution may be sufficient for detection of a near object. Accordingly, as the number of times of down sampling performed to generate the pyramid images PI decreases (e.g., as the resolution increases) the meaningful region may include an image region corresponding to a distant object. Contrarily, as the number of times of down sampling performed to generate the pyramid images PI increases (e.g., as resolution decreases) the meaningful region may include an image region corresponding to a near object. In an example embodiment, the mask generator320may identify depth values corresponding to the resolution of each pyramid image PI based on the depth information DI and generate the mask data MK including the depth values. 
For example, the mask generator320may identify high depth values when the resolution of the pyramid image PI is high and may identify low depth values when the resolution of the pyramid image PI is low. Accordingly, a region including the depth values may correspond to the meaningful region of the pyramid image PI. Alternatively, the mask generator320may be configured to receive the pattern density data PD instead of the depth information DI and generate and output the mask data MK for the pyramid images PI based on the pattern density data PD. A pattern density has a characteristic of being higher when an object is near than when the object is far away. Accordingly, a high pattern density may correspond to a high depth value, and a low pattern density may correspond to a low depth value. The masking unit330may receive the pyramid images PI and the pieces of the mask data MK and generate and output a plurality of masked images IMK by applying the pieces of the mask data MK to the pyramid images PI. In some example embodiments, the masking unit330may identify the remaining region excluding the meaningful region of each pyramid image PI based on a piece of the mask data MK corresponding to the pyramid image PI, and generate a masked image IMK by masking the remaining region. The feature extractor340may receive the masked images IMK and output a plurality of pieces of feature data FD for the masked images IMK. For example, the feature data FD may include the feature map, the class CL, and/or the recognition signal REC described above with reference toFIG.2. In some embodiments, the feature data FD may be constituted of various types of data including a feature of an unmasked region of each pyramid image PIM and/or the image data IDT. The detector350may receive the pieces of the feature data FD, identify, and/or detect at least one object included in the image data IDT based on the pieces of the feature data FD, and generate the object information OI including various kinds of information about the object. In some example embodiments, the detector350may additionally receive the optical flow data OF and may detect an object based on the optical flow data OF and the pieces of the feature data FD. Because the optical flow data OF includes information about a time-sequential motion of an object, the detector350may increase the accuracy of object detection by using the optical flow data OF. FIGS.8A and8Bare diagrams illustrating mask data according to some example embodiments.FIG.8Ais an example diagram illustrating the mask data MK generated based on the depth information DI, andFIG.8Bis an example diagram illustrating the mask data MK generated based on the pattern density data PD. Referring toFIG.8A, the mask generator320may identify depth values, which respectively belong to certain depth ranges, based on the depth information DI and generate one of a plurality of mask data (e.g., first through fourth mask data MK1 through MK4) based on the depth values in each depth range. For example, in some example embodiments, the mask generator320may identify first depth values in a first depth range, which has a relatively high average value, from the depth information DI and generate the first mask data MK1 including the first depth values. The mask generator320may identify second depth values in a second depth range, which has a lower average value than the first depth range, from the depth information DI and generate the second mask data MK2 including the second depth values. 
The mask generator 320 may identify third depth values in a third depth range, which has a lower average value than the second depth range, from the depth information DI and generate the third mask data MK3 including the third depth values. The mask generator 320 may identify fourth depth values in a fourth depth range, which has a lower average value than the third depth range, from the depth information DI and generate the fourth mask data MK4 including the fourth depth values. Referring to FIG. 8B, the mask generator 320 may identify density values, which respectively belong to certain density ranges, based on the pattern density data PD and generate one of the first through fourth mask data MK1 through MK4 based on the density values in each density range. For example, in some example embodiments, the mask generator 320 may identify first density values in a first density range, which has a relatively high average value, from the pattern density data PD and generate the first mask data MK1 including the first density values. The mask generator 320 may identify second density values in a second density range, which has a lower average value than the first density range, from the pattern density data PD and generate the second mask data MK2 including the second density values. The mask generator 320 may identify third density values in a third density range, which has a lower average value than the second density range, from the pattern density data PD and generate the third mask data MK3 including the third density values. The mask generator 320 may identify fourth density values in a fourth density range, which has a lower average value than the third density range, from the pattern density data PD and generate the fourth mask data MK4 including the fourth density values. FIG. 9 is a diagram for describing an operation of generating masked images, according to some example embodiments. Referring to FIG. 9, first to fourth masked images IMK1 to IMK4 may be generated by respectively applying the first through fourth mask data MK1 through MK4 to first through fourth pyramid images PI1 through PI4. For example, the masking unit 330 may mask the remaining region of each of the first through fourth pyramid images PI1 through PI4, excluding a meaningful region thereof, based on a corresponding one of the first through fourth mask data MK1 through MK4. In some example embodiments, the first through fourth mask data MK1 through MK4 may be applied according to the resolution of each of the first through fourth pyramid images PI1 through PI4. For example, the masking unit 330 may apply the first mask data MK1, which is constituted of depth values (and/or density values) of which the average is high, to the first pyramid image PI1 having a high resolution. The masking unit 330 may apply the fourth mask data MK4, which is constituted of depth values (and/or density values) of which the average is low, to the fourth pyramid image PI4 having a low resolution. The first masked image IMK1 is constituted of a first region C1 corresponding to a portion of the first pyramid image PI1, the second masked image IMK2 is constituted of a second region C2 corresponding to a portion of the second pyramid image PI2, the third masked image IMK3 is constituted of a third region C3 corresponding to a portion of the third pyramid image PI3, and the fourth masked image IMK4 is constituted of a fourth region C4 corresponding to a portion of the fourth pyramid image PI4.
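The flow from pyramid images (FIG. 7) through depth-range masks (FIGS. 8A and 8B) to masked images (FIG. 9) can be summarized with a short sketch. The downsampling factor, the concrete depth-range boundaries, the binary-mask representation, and the zeroing of masked pixels are all assumptions chosen to keep the sketch self-contained; the description only requires that MK1 covers the highest depth (or density) values and is applied to the highest-resolution pyramid image, down to MK4 for the lowest.

```python
import numpy as np
from typing import List, Tuple

def build_pyramid(image: np.ndarray, levels: int = 4, factor: int = 2) -> List[np.ndarray]:
    """Pre-processor sketch: derive pyramid images PI1..PI4 by repeated downsampling."""
    pyramid = [image.astype(np.float32)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h = (prev.shape[0] // factor) * factor
        w = (prev.shape[1] // factor) * factor
        pyramid.append(prev[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3)))
    return pyramid

def depth_range_masks(depth: np.ndarray, ranges: List[Tuple[float, float]]) -> List[np.ndarray]:
    """Mask generator sketch: one binary mask per depth range (MK1..MK4), highest range first."""
    return [(depth >= lo) & (depth < hi) for lo, hi in ranges]

def apply_mask(pyramid_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Masking unit sketch: keep the meaningful region and zero out the remaining region.
    Assumes the mask has been resampled to the pyramid image's resolution."""
    return np.where(mask, pyramid_image, 0.0)

image = np.random.rand(64, 64).astype(np.float32)   # image data IDT
depth = np.random.rand(64, 64).astype(np.float32)   # depth information DI (toy values)
pi1, pi2, pi3, pi4 = build_pyramid(image)
mk1, mk2, mk3, mk4 = depth_range_masks(depth, [(0.75, 1.01), (0.5, 0.75), (0.25, 0.5), (0.0, 0.25)])
imk1 = apply_mask(pi1, mk1)                          # masked image IMK1 (highest resolution)
```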
Each of the first through fourth regions C1 through C4 may correspond to a portion of a corresponding one of the first through fourth pyramid images PI1 through PI4, which is not masked by a corresponding one of the first through fourth mask data MK1 through MK4. FIG. 10 is a diagram for describing operations of the feature extractor 340 and the detector 350, according to some example embodiments. Referring to FIG. 10, the feature extractor 340 may include first through fourth feature extractors 340_1 through 340_4. The first through fourth feature extractors 340_1 through 340_4 may respectively receive the first to fourth masked images IMK1 to IMK4, and generate (and output) first through fourth feature data FD1 through FD4 with respect to the first to fourth masked images IMK1 to IMK4. For example, the first feature extractor 340_1 may generate the first feature data FD1 based on the first masked image IMK1, the second feature extractor 340_2 may generate the second feature data FD2 based on the second masked image IMK2, the third feature extractor 340_3 may generate the third feature data FD3 based on the third masked image IMK3, and the fourth feature extractor 340_4 may generate the fourth feature data FD4 based on the fourth masked image IMK4. Although it is illustrated and described that the first through fourth feature extractors 340_1 through 340_4 are separate, embodiments are not limited thereto. In some example embodiments, the detector 350 may receive the first through fourth feature data FD1 through FD4 respectively from the first through fourth feature extractors 340_1 through 340_4 and perform object detection based on the first through fourth feature data FD1 through FD4. FIG. 11 is a block diagram of a portion of a memory, according to some example embodiments. The memory 400 of FIG. 11 may correspond to (and/or be included in) the memory 120 in FIG. 1. Referring to FIG. 11, the memory 400 may include a bank group 410, a processing element group 420, and a local bus 430. In some example embodiments, the bank group 410 may include a plurality of banks (e.g., first through fourth banks Bank1 through Bank4), and the processing element group 420 may include a plurality of processing elements (e.g., first through fourth processing elements PE1 through PE4) respectively corresponding to the plurality of banks. The processing element group 420 may further include a fifth processing element PE5 independent of the bank group 410. In some example embodiments, the first through fourth banks Bank1 through Bank4 may be connected to the first through fourth processing elements PE1 through PE4 according to the corresponding relationship therebetween. For example, referring to FIG. 11, the first bank Bank1 may be connected to the first processing element PE1, the second bank Bank2 may be connected to the second processing element PE2, the third bank Bank3 may be connected to the third processing element PE3, the fourth bank Bank4 may be connected to the fourth processing element PE4, etc. In a storage operation of the memory 400, the bank group 410 may store data transmitted through the local bus 430. In some example embodiments, the memory 400 may receive image data from the image sensor 110 in FIG. 1, and at least one of the first through fourth banks Bank1 through Bank4 may store at least part of the image data. For example, the image data may be divided into a certain size and stored in at least one of the first through fourth banks Bank1 through Bank4.
In a calculation operation of the memory400, each of some processing elements (e.g., the first through fourth processing elements PE1 through PE4) of the processing element group420may perform a calculation operation based on data stored in a bank corresponding thereto in the bank group410. For example, referring toFIG.11, the first processing element PE1 may perform a calculation operation based on data stored in the first bank Bank1, the second processing element PE2 may perform a calculation operation based on data stored in the second bank Bank2, the third processing element PE3 may perform a calculation operation based on data stored in the third bank Bank3, and the fourth processing element PE4 may perform a calculation operation based on data stored in the fourth bank Bank4. At this time, the first through fourth processing elements PE1 through PE4 may perform calculation operations in parallel. For example, in some example embodiments, each of the first through fourth processing elements PE1 through PE4 may perform a convolution calculation in a neural network calculation based on image data stored in a corresponding bank, though the example embodiments are not limited thereto. In some example embodiments, a processing element (e.g., the fifth processing element PE5) independent of the bank group410in the processing element group420may perform a calculation operation based on the calculation results of the processing elements described above. For example, the fifth processing element PE5 may perform a pooling calculation in the neural network calculation based on the calculation results of the first through fourth processing elements PE1 through PE4, though the example embodiments are not limited thereto. The fifth processing element PE5 may receive the calculation results of the first through fourth processing elements PE1 through PE4 through the local bus430and perform a pooling calculation based on the calculation results. In some example embodiments, the calculation results of the processing element group420may be stored in the bank group410. For example, the calculation results of the first through fourth processing elements PE1 through PE4 may be respectively stored in the first through fourth banks Bank1 through Bank4. The calculation result of the fifth processing element PE5 may be stored in at least one of the first through fourth banks Bank1 through Bank4. Locations in which the calculation result of the processing element group420is stored, are not limited to those described above and may be set independently of the corresponding relationship between processing elements and banks. For example, the calculation result of the first processing element PE1 may be transmitted to and stored in the second bank Bank2 through the local bus430. Although it is illustrated in the embodiment ofFIG.11that the first through fourth banks Bank1 through Bank4 are respectively connected to the first through fourth processing elements PE1 through PE4, embodiments are not limited thereto. For example, at least one of the first through fourth banks Bank1 through Bank4 may be configured as an independent bank that is not connected to a processing element. Data stored in the independent bank may also be transmitted to the first through fourth processing elements PE1 through PE4 through the local bus430. 
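The parallel per-bank calculation by PE1 through PE4 and the follow-on pooling by the fifth processing element PE5, described above, can be mimicked on a host as follows. The thread pool, the 3x3 averaging kernel, and the 2x2 max pooling are stand-ins chosen only for illustration; they are not the circuits of FIG. 11.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def pe_convolution(bank_tile: np.ndarray) -> np.ndarray:
    """Stand-in for the convolution each of PE1..PE4 performs on its own bank's tile."""
    kernel = np.ones((3, 3), dtype=np.float32) / 9.0        # illustrative 3x3 kernel
    h, w = bank_tile.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(bank_tile[i:i + 3, j:j + 3] * kernel)
    return out

def pe5_pooling(partial_result: np.ndarray) -> np.ndarray:
    """Stand-in for PE5: 2x2 max pooling of one per-bank convolution result."""
    h, w = (partial_result.shape[0] // 2) * 2, (partial_result.shape[1] // 2) * 2
    blocks = partial_result[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

# Image data divided into four tiles, one per bank; PE1..PE4 work in parallel.
bank_tiles = [np.random.rand(32, 32).astype(np.float32) for _ in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    conv_results = list(pool.map(pe_convolution, bank_tiles))
pooled_results = [pe5_pooling(r) for r in conv_results]      # results may be written back to the banks
```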
In some example embodiments, the first through fourth processing elements PE1 through PE4 may receive the data from the independent bank through the local bus 430 and immediately perform a calculation process. Alternatively, the data of the independent bank may be stored, through the local bus 430, in the first through fourth banks Bank1 through Bank4 connected to the first through fourth processing elements PE1 through PE4, and the first through fourth processing elements PE1 through PE4 may read the data of the independent bank from the first through fourth banks Bank1 through Bank4 and perform a calculation process. To perform the calculation operation described above, the control logic 126 in FIG. 1 may control the memory bank 122 in FIG. 1 and the PIM circuit 124 in FIG. 1 based on address information and calculation order information. For example, the control logic 126 may read data from an independent bank based on address information regarding the independent bank and transmit the data to the first processing element PE1. The first processing element PE1 may also be set to perform a calculation process on first image data read from the first bank Bank1. Accordingly, the control logic 126 may control the first processing element PE1 (e.g., to perform a calculation process on the data of the independent bank) before or after a calculation operation is performed on the first image data read from the first bank Bank1, based on the calculation order information. In the embodiment of FIG. 11, the number of banks included in the bank group 410 and the number of processing elements included in the processing element group 420 are just examples, and embodiments are not limited thereto. Fewer or more banks and/or processing elements may be included. Although it is illustrated and described in the embodiment of FIG. 11 that the memory 400 includes a processing element, e.g., the fifth processing element PE5 in FIG. 11, performing a pooling calculation, embodiments are not limited thereto. For example, the memory 400 may not include a processing element that performs (and/or that is dedicated to performing) a pooling calculation. FIG. 12 is a block diagram illustrating the structure of a memory, according to some example embodiments. The memory 500 of FIG. 12 may correspond to the memory 120 in FIG. 1 and/or the memory 400 of FIG. 11. FIG. 12 illustrates the structure of a bank and a processing element, which are connected to each other in the memory 500, and may be applied to, for example, the structure of the first bank Bank1 and the first processing element PE1 in FIG. 11. Referring to FIG. 12, the memory 500 may include a memory cell array 510, an address buffer 520, a row decoder 530, a column decoder 540, a sense amplifier 550, an input/output (I/O) gating circuit 560, a processing element 570, a data I/O circuit 580, and a control logic 590. The memory cell array 510 may include a plurality of memory cells arranged in a matrix of rows and columns. The memory cell array 510 may include a plurality of word lines WL and a plurality of bit lines BL, wherein the word lines WL and the bit lines BL are connected to the memory cells. For example, each of the word lines WL may be connected to a row of memory cells, and each of the bit lines BL may be connected to a column of memory cells. The address buffer 520 receives an address ADDR. The address ADDR includes a row address RA, which addresses a row of the memory cell array 510, and a column address CA, which addresses a column of the memory cell array 510.
The address buffer520may transmit the row address RA to the row decoder530and the column address CA to the column decoder540. The row decoder530may select one of the word lines WL connected to the memory cell array510. The row decoder530may decode the row address RA received from the address buffer520, select a word line WL corresponding to the row address RA, and activate the word line WL. The column decoder540may select some of the bit lines BL of the memory cell array510. The column decoder540may generate a column selection signal by decoding the column address CA received from the address buffer520and select bit lines BL corresponding to the column selection signal through the I/O gating circuit560. The sense amplifier550may be connected to the bit lines BL of the memory cell array510. The sense amplifier550may sense a voltage change of bit lines BL and amplify (and output) the voltage change. The bit lines BL sensed and amplified by the sense amplifier550may be selected through the I/O gating circuit560. The I/O gating circuit560may include read data latches, which store data of bit lines BL selected by a column selection signal, and a write driver, which writes data to the memory cell array510. The data stored in the read data latches may be provided to a data pad DQ through the data I/O circuit580. Data provided to the data I/O circuit580through the data pad DQ may be written to the memory cell array510through the write driver. The data pad DQ may be connected to a local bus (e.g., the local bus430inFIG.11) of the memory500. The processing element570may be between the I/O gating circuit560and the data I/O circuit580. The processing element570may perform a calculation operation based on data read from the memory cell array510or data received from the data I/O circuit580. The data I/O circuit580and the processing element570may include (and/or be included in) hardware including logic circuits; a hardware/software combination such as a processor executing software; and/or a combination thereof. For example, the processing element570may include and/or be included in an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), and programmable logic unit, a microprocessor, application-specific integrated circuit (ASIC), etc. The processing element570may write a calculation result to the memory cell array510and/or provide the calculation result to the data pad DQ through the data I/O circuit580. The control logic590may receive a clock signal CLK (and/or a command CMD), and generate control signals CTRLS for controlling the operation timing, a memory operation, and/or a calculation operation of the memory500. The control logic590may read data from the memory cell array510and write data to the memory cell array510using the control signals CTRLS. The control logic590may also control the processing element570to perform a calculation process using the control signals CTRLS. Although it is illustrated and described in the embodiment ofFIG.12that the control logic590controls the memory operation and calculation operation of the memory500, the example embodiments are not limited thereto. For example, the memory500may include a separate element, e.g., a processing controller, which generates control signal for controlling the calculation operation of the memory500. It is illustrated and described in the embodiment ofFIG.12that the memory500includes the processing element570, but the example embodiments are not limited thereto. 
For example, when there is no processing element connected to a bank, the processing element 570 may be omitted from the embodiment of FIG. 12. FIG. 13 is a detailed diagram illustrating the structure of a memory, according to some example embodiments. In detail, FIG. 13 illustrates the structure of the memory 500 of FIG. 12. Hereinafter, redundant descriptions given above with reference to FIG. 12 are omitted. Referring to FIGS. 12 and 13, the memory 500 may further include various elements related to a calculation operation. For example, the processing element 570 may include ALUs respectively corresponding to a plurality of bit lines BL1 through BLK of the memory cell array 510. Each of the ALUs may include a plurality of multiplication circuits (e.g., first through third multiplication circuits MC1, MC2, and MC3) and a plurality of addition circuits (e.g., first and second addition circuits AC1 and AC2). For example, the plurality of multiplication circuits (e.g., the first through third multiplication circuits MC1, MC2, and MC3) may respectively perform multiplications of weights and pieces of data read from a corresponding bit line and adjacent bit lines and output a plurality of multiplication results. For example, referring to FIG. 13, the second multiplication circuit MC2 may perform a multiplication of a second weight and data read from a bit line corresponding to an ALU and output a second multiplication result. The first multiplication circuit MC1 may perform a multiplication of a first weight and data read from a bit line on the left of the corresponding bit line and output a first multiplication result. The third multiplication circuit MC3 may perform a multiplication of a third weight and data read from a bit line on the right of the corresponding bit line and output a third multiplication result. In some example embodiments, the first through third weights may be the same as or different from each other. Data read from a corresponding bit line and adjacent bit lines may correspond to data stored in a read data latch Latch1 through the sense amplifier 550. The first addition circuit AC1 may perform an addition of the first through third multiplication results of the first through third multiplication circuits MC1, MC2, and MC3 and output a first addition result. The second addition circuit AC2 may perform an addition of the first addition result and the data read from the corresponding bit line and output a second addition result. At this time, the data read from the corresponding bit line may correspond to data that is transmitted from the memory cell array 510 without passing through the sense amplifier 550 and the read data latch Latch1. As described above, a calculation operation using an ALU is performed using data received from adjacent bit lines as well as data of the bit line corresponding to the ALU and may thus be applied to a convolution calculation. The method described above may be extended to an embodiment of receiving data from adjacent bit lines that are included in different banks. In some example embodiments, among the ALUs of the first and second banks Bank1 and Bank2, which are adjacent to each other, an ALU located at an edge of one bank may be connected through a data line to an ALU located at the facing edge of the adjacent bank.
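The per-bit-line calculation just described (MC1 through MC3 multiplying the left, corresponding, and right bit-line data by three weights, AC1 summing the three products, and AC2 adding the corresponding bit-line data to that sum) can be expressed numerically as below. The zero-padding of the edge columns is an assumption made only to keep the sketch self-contained; as the surrounding text describes, the edge ALUs may instead receive data from the adjacent bank over a data line.

```python
import numpy as np

def alu_row_mac(data: np.ndarray, w1: float, w2: float, w3: float) -> np.ndarray:
    """Per-column ALU sketch for FIG. 13: a 3-tap multiply-accumulate plus a pass-through add."""
    left = np.concatenate(([0.0], data[:-1]))             # data from the bit line on the left
    right = np.concatenate((data[1:], [0.0]))             # data from the bit line on the right
    first_addition = w1 * left + w2 * data + w3 * right   # MC1..MC3 followed by AC1
    return first_addition + data                          # AC2 adds the corresponding bit-line data

row = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
print(alu_row_mac(row, w1=0.25, w2=0.5, w3=0.25))
```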
For example, an ALU in the right edge of the first bank Bank1 may be connected to an ALU in the left edge of the second bank Bank2 through a data line. The data line may connect the first (or third) multiplication circuit MC1 (or MC3) of each ALU to an adjacent bit line. The data I/O circuit580may include a calculated data latch Latch2, which stores the second addition result output from the second addition circuit AC2, and a data selector, which selects data to be provided to the data pad DQ. The calculated data latch Latch2 may store the second addition result output from the second addition circuit AC2. In some example embodiments, the data selector may include at least one multiplexer. AlthoughFIGS.12and13illustrate the structure of a bank and a processing element that are connected to each other, the example embodiments are not limited thereto. For example, when a bank is not connected to a processing element, the processing element570inFIGS.12and13and the calculated data latch Latch2 of the data I/O circuit580may be omitted. FIG.14is a flowchart of an operating method of an image sensor module, according to some example embodiments. The operating method ofFIG.14may be, for example, performed using the image sensor module100inFIG.1. Referring toFIGS.1and14, the image sensor module100may obtain image data (e.g., through the image sensor110) in operation S100. The image sensor module100may store the image data (e.g., in a plurality of banks of the memory120) in operation S200. In some example embodiments, the image sensor module100may divide the image data into a plurality of image regions and store the image regions in a plurality of banks. For example, the image sensor module100may store a first image region of the image data in a first bank and a second image region of the image data in a second bank. The image sensor module100may generate optical flow data with respect to the image data, using processing elements of the memory120, in operation S300. The image sensor module100may also generate pattern density data with respect to the image data, using the processing elements of the memory120, in operation S400. In some example embodiments, the generating the optical flow data and/or the pattern density data may correspond to a neural network-based calculation processing operation. The image sensor module100may output the image data, the optical flow data, and the pattern density data in operation S500. In some example embodiments, the image sensor module100may output the image data, the optical flow data, and the pattern density data to the image processing device200. The image processing device200may perform object detection on the image data based on the optical flow data and the pattern density data. FIG.15is an exploded perspective view of an image sensor module, andFIG.16is a plan view of the image sensor module. Referring toFIGS.15and16, an image sensor module100amay have a stack structure of a first chip CH1, a second chip CH2, and a third chip CH3. A pixel core (e.g., at least one photoelectric conversion element and a pixel circuit) of each of a plurality of pixels included in a pixel array of the image sensor110inFIG.1may be formed in the first chip CH1. A driver and read circuit including a logic circuit (e.g., a row driver, a readout circuit, a ramp generator, and/or a timing controller) may be formed in the second chip CH2. The memory120inFIG.1may be formed in the third chip CH3. 
The first through third chips CH1, CH2, and CH3 may be electrically connected to one another through a connecting member or a through via. However, the example embodiments are not limited thereto. For example, the image sensor module100amay be implemented in a single semiconductor chip. As shown inFIG.16, the first through third chips CH1, CH2, and CH3 may respectively include a pixel array, a logic circuit, and the memory120inFIG.1in the central portions thereof and each include a peripheral region in the outer edge thereof. Through vias TV extending in a third direction (e.g., a Z direction) may be arranged in the peripheral region of each of the first through third chips CH1, CH2, and CH3. The first chip CH1 may be electrically coupled to the second chip CH2 through the through vias TV. Wirings extending in a first direction (e.g., an X direction) or a second direction (e.g., a Y direction) may be formed in the peripheral region of each of the first through third chips CH1, CH2, and CH3. FIG.17is a block diagram of an electronic device according to some example embodiments. Referring toFIG.17, an electronic device1000may include an application processor1100, a camera module1200, a working memory1300, a storage1400, a display device1600, a user interface1700, and a wireless transceiver1500. The application processor1100may include a system-on-chip (SoC), which generally controls operations of the electronic device1000and runs an application program, an operating system, and/or the like. The application processor1100may provide image data received from the camera module1200to the display device1600and/or store the image data in the storage1400. The image sensor module100described with reference toFIGS.1through16may be applied to the camera module1200. The camera module1200may include a memory1210performing a calculation process. The memory1210may perform calculation processing (e.g., pattern density detection and optical flow detection) on image data, which is stored in a bank of the memory1210, using a PIM circuit. The processor220,220a, and/or220b(described with reference toFIGS.1through16) may be applied to the application processor1100. The application processor1100may receive image data and calculation data (e.g., pattern density data and optical flow data) from the camera module1200and perform additional calculation processing (e.g., object detection) on the image data based on the calculation data. In an example embodiment, when the electronic device1000is an autonomous vehicle, the application processor1100may control the driving unit of the autonomous vehicle based on object information obtained by performing object detection. The working memory1300may include volatile memory (such as DRAM or static RAM (SRAM)), and/or non-volatile resistive memory (such as FeRAM, RRAM, and/or PRAM). The working memory1300may store programs and/or data, which are processed or executed by the application processor1100. The storage1400may include non-volatile memory such as NAND flash memory and/or resistive memory. For example, the storage1400may be provided as a memory card such as a multimedia card (MMC), an embedded MMC (eMMC), a secure digital (SD) card, and/or a micro SD card. The storage1400may store image data received from the camera module1200or data processed or generated by the application processor1100. The user interface1700may include various devices, such as a keyboard, a button key panel, a touch panel, a fingerprint sensor, and a microphone, which may receive a user input. 
The user interface1700may receive a user input and provide a signal corresponding to the user input to the application processor1100. Though illustrated as separate, in some embodiments, the display device1600and the user interface1700may be at least partially merged. For example, the display device1600and the user interface1700may be (and/or include) a touch screen. The wireless transceiver1500may include a transceiver1510, a modem1520, and an antenna1530. While the inventive concepts have been particularly shown and described with reference to embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims.
73,455
11943558
The same reference numerals refer to the same parts throughout the various figures. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular embodiments, procedures, techniques, etc. in order to provide a thorough understanding of the present technology. However, it will be apparent to one skilled in the art that the present technology may be practiced in other embodiments that depart from these specific details. It is known that video recordings are made up of a series of frames or groups of pictures displayed at a speed rate to create motion. These frames of images or video can be characterized as digital frame data, which can be buffered in the playing back of the video. The frame rate (expressed in frames per second or fps) is the frequency (rate) at which these consecutive frames appear on a display. This applies equally to film and video cameras, computer graphics, and motion capture systems. Frame rate may also be called the frame frequency, and be expressed in hertz. Real-time recording and/or playback of video is typically performed at a rate of thirty (30) fps. It is desirable in several situations to speed up or slow down the playback of the video. This is typically conducted while keeping the recording and playback frames per second at 30 fps in order to maintain compatibility with existing components, such as the display devices, etc. For example, if a viewer wanted to speed up the playback of a video by a certain percentage from the standard real-time playback speed while keeping 30 fps, the information or data of a larger number of frames would have to be played back in a time segment allotted for 30 frames. A scheme to achieve this is to skip one frame, from the recorded video, out of every predetermined number of frames so that the appropriate number of frames of video are displayed at 30 fps. It is noted that these known systems and methods are provided as a post-recording process, which skips frames from a 30 fps recording. The recording is initially written to memory in real time at 30 fps, with no special effects. The present technology solves the problem of requiring “post production editing” to insert the time modification special effects, which can be time and resource costly, especially for amateur filmmakers. Furthermore, the present technology solves the problem of pre-setting the motion recording speed to either fast motion or slow motion, where the user cannot adjust the motion recording speed in real time during the recording process. Even still further, the present technology solves the problem of presetting the motion recording speed where a user cannot adjust the motion recording speed continuously and vary it from fast motion to slow motion in real time during the recording process. The present technology also alleviates and solves the issue of requiring hardware support on every device. By using a software algorithm to simulate slow motion, it is not device dependent, and the resulting file is much smaller than hardware-supported slow motion video. While the above-described devices fulfill their respective, particular objectives and requirements, the aforementioned devices or systems do not describe a real time video special effects system and method that allows creating special effects in video recordings while recording is in progress. 
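As a non-limiting, hedged illustration of the arithmetic behind the frame-skipping scheme just described (the function name and the use of integer speed factors are assumptions made only for this sketch), the following Python fragment shows how many recorded frames must be dropped per output second when the playback speed is increased while the display remains at 30 fps:

```python
def frames_to_drop_per_second(speed_factor, fps=30):
    recorded_frames_covered = fps * speed_factor  # recorded frames that one output second must represent
    return recorded_frames_covered - fps          # frames that cannot fit and are therefore skipped

print(frames_to_drop_per_second(2))  # 30 of 60 -> every other frame is skipped
print(frames_to_drop_per_second(3))  # 60 of 90 -> two of every three frames are skipped
```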
The present technology additionally overcomes one or more of the disadvantages associated with the prior art by adding or removing frames from the frame strip provided by the camera in real time. Still further, there is no known interface for the user to change the speed of recording and the duration to apply the special effects in real time while recording is in progress. Furthermore, the scene has to be relatively fixed, with the camera not panning or following the action. The algorithm associated with this known system uses a motion sensor while the camera remains steadily fixed on a scene and the subject has to traverse the scene while the rest of the scene remains fixed. The present technology can utilize a graphical user interface associated with the electronic device that modifies the frames from a camera in real time prior to recording or saving. A need exists for a new and novel real time video special effects system and method that can be used for creating special effects in video recordings while recording is in progress. In this regard, the present technology substantially fulfills this need. In this respect, the real time video special effects system and method according to the present technology substantially departs from the conventional concepts and designs of the prior art, and in doing so provides an apparatus primarily developed for the purpose of creating special effects in video recordings while recording is in progress. Users of the present technology can in “real time” produce videos that contain the special effect of user controlled variable time modification, aka fast motion or slow motion, by using the user interface programmed into the device's apps that run on their supported operating systems, and other embedded devices. The produced video is taken in one-shot, with all of the time modification commands entered in real time while recording. For exemplary purposes, the present technology can utilizes a set video frame rate to 30 fps, resulting in 30 frames per second while recording. In some or all embodiments of the present technology, a user can utilize a fast forward option of the present technology, which results in dropping frames according to the set fast forward rate (like 1×, 2×, 3×, etc.). If the user sets 2× fast forward video then the present technology can append the 1st frame in writer and skips the 2nd frame, then write the 3rd frame, and then skip the 4th frame and so on. The resultant video that is recorded is at the predefined fast forward speed in real time while retaining a 30 fps. In some or all embodiments, a user can utilize a slow motion option of the present technology, which results in appending a same frame twice thereby repeating this frame so the final video that is recorded is in slow motion. For example, if the user sets 2× slow video then the present technology can append the 1st frame in writer, and the same frame append to the next time/frame slot. The resultant video that is recorded is at the predefined slow motion speed in real time while retaining a 30 fps. The present technology allows the user to control the recording device's (and any other video recording device) recording speed and other camera settings while recording through the use of the custom user interface, such that when the user plays the video immediately after the present technology algorithm has processed the commands, the playback speed of the scenes correspond with the commands during recording. 
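The writer behavior described above can be sketched as follows (a minimal, non-authoritative Python model assuming a fixed 30 fps stream; the helper names write_2x_fast and write_2x_slow and the list-of-frames representation are illustrative only):

```python
def write_2x_fast(frames):
    # append frame #1 to the writer, skip frame #2, append frame #3, skip frame #4, ...
    return [f for i, f in enumerate(frames) if i % 2 == 0]

def write_2x_slow(frames):
    # append each frame to the writer, then append the same frame again in the next slot
    out = []
    for f in frames:
        out.append(f)
        out.append(f)
    return out

raw = ["#1", "#2", "#3", "#4"]
print(write_2x_fast(raw))   # ['#1', '#3']
print(write_2x_slow(raw))   # ['#1', '#1', '#2', '#2', '#3', '#3', '#4', '#4']
```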
The present technology accomplishes this with software simulation without having to increase the recording device's frame rate and is not device dependent and works across all platforms. An additional aspect of the present technology can be to increase the frame rate of the recording device while recording is in progress. This requires application programming interface (API) access to limited number of supported hardware and there is no industry standard API, which limits the number of supported devices. The display shows the current time recording rate, from normal speed to 3× faster, or −3× slower (can be 4×, 5× or more). The user can control the recording rate by utilizing the interface. Numerous advantages exist with the present technology, such as an easy to use custom user interface, wherein the user can add special effects of time modification into the video in real time while recording is in progress. This is an advantage over existing technology because the user can produce a video with the special effects (variable fast and slow motion recording speeds) while recording of that video is in progress. This reduces the time and costs to produce videos with these kinds of special effects by not requiring a separate video editing software and or paying a video editor to edit and produce a comparable video. User can enjoy viewing the videos they created with the special effects immediately once they have completed recording and brief processing time for the device to process adding the special effects and automatically producing a new video with the special effects implemented. Another advantage of user's manual control of the special effect in real time is that the user can pan along with the movement of the scene, and capture the peak moment of the action and use continuously variable slow/fast motion at just the right time and for as long as desired, and then return back to normal speed as the user is recording. Still another advantage is that the present technology is not hardware dependent for the slow or fast motion special effect to work. The software algorithm simulates the slow or fast motion. Even still another advantage is that with the manual user interface, the camera does not have to remain stationary while pointing at a stationary scene for an AI software to determine the “action” to apply the special effects thereto. Another advantage is that the present technology can accept input from, but not limited to, a remote camera feed, a joystick, a retina scanner, a body suit controller, on-screen subject gestures and a tactile user interface. In some or all embodiments, the present technology can also be applied to add time modifying special effects to pre-existing videos. The user can control the time variable in the playback by using the same familiar easy to use left-right on a compatible device, such as a smartphone or tablet, to control and modify the values for the playback time value, from −3× to 4× in this case. It can be appreciated that there are additional abilities to the factor of time modification once digital processing technology has advanced sufficiently to be able to interpolate data and images in between frames captured one the video. When the user slides towards the 4×, the recorded speed is played back faster than normal, up to 4× faster. When the user slides towards the −3×, the recorded speed is played back slower than normal, up to 3× slower. 
In some or all embodiments, the raw video data can include data such as, but not limited to, streaming video data, video, audio, depth, object identification, histogram, and combination thereof. In some or all aspects, the processing unit can be configured or configurable to preclude the raw video data from being written to the memory unit from the camera, such that the present technology can intercept the raw video data. In some or all embodiments, the input can be one or more desired speed rate values that the modified speed rate is based on. Where the modified speed rates can be one of less than the native speed rate or greater than the native speed rate. If the modified speed rate is less than the native speed rate, then the processing unit can be configured or configurable to add at least one frame to the raw video data to create the modified video data. If the modified speed rate is greater than the native speed rate, then the processing unit can be configured or configurable to remove at least one frame from the raw video data to create the modified video data. If the input is not a request to change the native speed, then the processing unit can be configured or configurable to keep all the frames from the raw video data and write the raw video data to memory. In some or all embodiments, the interface can be a graphical user interface including a portion configured or configurable to generate the input that is associated with the native speed rate or the modified speed rate. The graphical user interface can be configured or configurable to display the output video recording data in real time with receiving the raw video data from the camera. The output video recording data can be configured or configurable to include a combination of the raw video data and the modified video data, with a transitioning between the raw video data and the modified video data being dependent on the input. It can be appreciated that the interface can be a joystick or can utilize a joystick. In yet another aspect, the interface can be operable associated with at least one computer-readable storage media storing instructions that, when executed by the processing unit or a processor of a computer system, causes the processing unit to direct the raw video data from the camera to the processing unit and as well as to the memory unit in real time with receiving the raw video data from the camera, and to write the raw video data from the processing unit to the memory unit or apply at least one algorithm to the raw video data to create the modified video data and write the modified video data from the processing unit to the memory unit. According to yet another aspect of the present technology, the present technology can be a method of recording a video at one or more speed rates in real time with receiving the raw video data from the camera. The method can include the steps of receiving, by at least one processing unit, raw video data at a native speed rate from a camera in real time with capturing images at least in part corresponding with the raw video data from the camera, and receiving an input from at least one interface that is operably associated with the processing unit. 
The method can include determining, by the processing unit, if the input is associated with changing the native speed rate of the raw video data and if so modifying the raw video data to create modified video data at one or more modified speed rates that are different to the native speed rate in real time with receiving the raw video data from the camera. The method can further include writing, by the processing unit, output video recording data to at least one memory, wherein the output video recording data is one of the raw video data at the native speed rate, the modified video data at the modified speed rate, and a combination of the raw video data and the modified video data. Some or all embodiments of the present technology can include determining if the modified speed rate is less than the native speed rate, and if so then modifying the raw video data can include adding at least one new frame to the raw video data to create the modified video data. In some or all embodiments, the method can include adding the new frame by copying at least one raw frame to create the new frame, and adding the new frame to the raw video data adjacent to the raw frame. In some or all embodiments, the new frame to be added can be a plurality of new frames each being a copy of at least one raw frame from the raw video data, with the new frames being added to the raw video data adjacent to the raw frame that was copied. In some or all embodiments, the method can include adding the new frame by frame blending at least two raw frames to create the new frame, and adding the new frame to the raw video data between the two raw frames. In some or all embodiments, the new frame(s) to be added can be a plurality of new frames each being a blend of at least two raw frames from the raw video data, with the new frames being added to the raw video data between the raw frames that was blended. In some or all embodiments, each of the new frames can be added to the raw video data adjacent to the raw frame or adjacent to a second raw frame of the raw video data. Some or all embodiments can include the step of determining if the modified speed rate is greater than the native speed rate, and if so then modifying the raw video data can include removing at least one first raw frame from the raw video data to create the modified video data. In some or all embodiments, the removing of the first raw frame can include selecting the first raw frame to be removed, and then removing the first raw frame from the raw video data to create the modified frame. In some or all embodiments, the interface can be a graphical user interface including a portion configured or configurable to generate the input that is associated with the native speed rate or the modified speed rate, and wherein the interface is configured or configurable to display the output video recording data. Some or all embodiments can include the output video recording data being a combination of the raw video data and the modified video data. With the modified video data configured or configurable to include multiple subsets each having a speed rate dependent on the input. Where a transitioning between the raw video data and any one of the subsets or between any of the subsets is dependent on the input, and wherein the output video recording data is displayed in the graphical user interface in real time with receiving the raw video data from the camera. 
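As a non-limiting sketch of the blend-based frame addition described above (assuming, only for illustration, that a blended frame is a weighted average of two adjacent raw frames represented as numeric arrays; the description itself does not specify the blending weights), the following Python fragment inserts one or two blended frames between each pair of raw frames:

```python
import numpy as np

def slow_motion_blend(frames, copies_per_gap):
    # copies_per_gap=1 approximates the -2x pattern (#1, #1a, #2, #2a, ...);
    # copies_per_gap=2 approximates the -3x pattern (#1, #1a, #1b, #2, ...).
    out = []
    for cur, nxt in zip(frames, frames[1:]):
        out.append(cur)
        for k in range(1, copies_per_gap + 1):
            t = k / (copies_per_gap + 1)           # position of the blended frame between the two raw frames
            out.append((1 - t) * cur + t * nxt)    # blended frame written between the raw frames it came from
    out.append(frames[-1])
    return out

raw = [np.full((2, 2), v, dtype=float) for v in (0.0, 10.0, 20.0)]
print(len(slow_motion_blend(raw, 1)))  # 5 frames from 3 raw frames
print(len(slow_motion_blend(raw, 2)))  # 7 frames from 3 raw frames
```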
In some or all embodiments, the present technology can include an extreme slow motion subroutine at constant high recoding fps. This subroutine can be utilized for slow motion speed ranges greater than or equal to −8×, by passing through an unchanged video stream or make copies of each frame a predetermined number of times. In some or all embodiments, the present technology can include a segment time compression and expansion subroutine that provides an algorithm for slow motion and fast motion by speeding up or slowing down the playback time during video processing after the recording has stopped. This subroutine can set the device's recording and/or playback fps, and set video segment playback fps to equal the recording fps using an algorithm that utilizes in part the segment playback fps and record fps. In some or all embodiments, the present technology can include a variable playback speed record subroutine that provides an algorithm for slow motion and fast motion by speeding up or slowing down the playback frame rate while video recording is in progress. This algorithm can produce a normal video with the fast/slow motion commands embedded in the video's metadata. In some or all embodiments, the present technology can include a variable playback speed playback subroutine that provides an algorithm for playing a video file with slow motion and fast motion special effects by speeding up or slowing down the playback frame rate while video playback is in progress. Some or all embodiments can include the graphical user interface being configured or configurable by the processing unit to revert from playing at the modified playing speed on the graphical user interface the video being captured to playing the video being captured at the normal speed. In some or all embodiments, the graphical user interface can be configured or configurable by the processing unit to revert from playing at the modified speed on the graphical user interface the video being captured to playing the video being captured at the normal playing speed in response to a user input received by the graphical user interface. In some or all embodiments, the graphical user interface can be configured or configurable by the processing unit to seamlessly change the playing speed on the graphical interface of the video being recorded from the normal playing speed to a modified playing speed. In some or all embodiments, the graphical user interface can be displayed on a display of the electronic device, and the graphical user interface can include multiple regions with a first region being configured or configurable to display the video being captured at the normal playing speed, and a second region being configured or configurable to display the video being captured at the modified playing speed. Some or all embodiments of the graphical user interface can include a first affordance including at least one selectable value from a plurality of values. In some or all embodiments, the selectable value can be selected by a gesture on the display of the electronic device selected from the group consisting of a tap, a multiple tap, a touch holding, a sliding, a pinch, and a touch holding and sliding. In some or all embodiments, the plurality of values of the first affordance can include varying speed rates associated with slow motion speed, fast motion speed and normal speed. 
In some or all embodiments, the graphical user interface can include a second affordance configured or configurable to provide a second input to the processing unit and usable in determining a change in zoom factor of the raw video data. In some or all embodiments, the first affordance can be a slide bar associated with the varying speed rates, or the second affordance can be a slide bar indicating associated with varying zoom factors. In some or all embodiments, the second affordance can be displayed in the graphical user interface in an orientation different to an orientation of the first affordance. In some or all embodiments, at least one of the first affordance and the second affordance is in part arranged over the video display region. Some or all embodiments of the graphical user interface can include a second video display region configured to display a second video feed that can be different to the video feed displayed in the display region and can be one of the raw video data at the native speed rate, the modified video data at the modified speed rate, and a combination of the raw video data and the modified video data. In some or all embodiments, the graphical user interface can include a record affordance configured or configurable to provide at least one record input receivable and usable by the processing unit in at least determining if a recording operation is to be started or stopped. The record affordance can have a generally circular configuration with a first annular region configured or configurable to display a time lapse indication of the captured raw video data. Some or all embodiments of the graphical user interface can include one or more additional affordances configured or configurable to provide at least one additional input receivable and usable in initiating additional operations by the processing unit. In some or all embodiments, the additional operations are selected from the group consisting of a flash, a hands free operation, a timer, a mute operation, a rear camera operation, a setting operation associated with the electronic device, a setting operation associated with the camera, an editing operation, a scene filter operation, an “Augmented Reality” (AR) filter operation, adding music operation, a filter operation, a writing operation, and a transmission operation. There has thus been outlined, rather broadly, features of the present technology in order that the detailed description thereof that follows may be better understood and in order that the present contribution to the art may be better appreciated. Numerous objects, features and advantages of the present technology will be readily apparent to those of ordinary skill in the art upon a reading of the following detailed description of the present technology, but nonetheless illustrative, embodiments of the present technology when taken in conjunction with the accompanying drawings. As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods and systems for carrying out the several purposes of the present technology. It is, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not depart from the spirit and scope of the present technology. 
Even still another object of the present technology is to provide a real time video special effects system and method for creating special effects in video recordings while recording is in progress. This allows a user to control the speed rate of the video prior to and while recoding is in progress in real time while acquiring the video from the camera. These together with other objects of the present technology, along with the various features of novelty that characterize the present technology, are pointed out with particularity in the claims annexed to and forming a part of this disclosure. For a better understanding of the present technology, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated embodiments of the present technology. Whilst multiple objects of the present technology have been identified herein, it will be understood that the claimed present technology is not limited to meeting most or all of the objects identified and that some or all embodiments of the present technology may meet only one such object or none at all. Referring now to the drawings, and particularly toFIGS.1-43, some or all embodiments of the real time video special effects system and method of the present technology are shown and generally designated by the reference numeral10. As a general outline, the system comprises a camera configured to capture video of a real world scene or any video remote video feed, including video games, a graphical user interface, at least one memory; and at least one processing unit operably connected or connectable to the camera, the graphical user interface and the at least one memory. The at least one processing unit is configured to: play on the graphical user interface at normal speed the video being captured; and change the video playing speed on the graphical interface of the video being captured from the normal playing speed to a modified playing speed in response to a user input received by the graphical user interface. Referring now to some or all embodiments in more detail, new and novel real time video special effects system and method10of the present technology for creating special effects in video recordings while recording is in progress is illustrated and will be described with reference toFIG.1. More particularly, the real time video special effects system and method10can include a camera12, an image processor or processing unit14, a user interface30associated with the processing unit, a storage or memory unit18, a display unit20. At least one RAM memory and/or at least one non-volatile long term memory can be operably connected or connectable with the processing unit14. It can be appreciated that the camera12can be any device capable of capturing images and/or video, and can be associated or integrated with a microphone16. The image processing unit14is in operable communication with the camera12, microphone16, the memory unit18and/or the display unit20. The image processing unit14intercepts the raw video data from the camera12and/or microphone16, processes the raw video data in real time in possible accordance with at least one algorithm, and then records output/final video recording data in the memory unit18and/or displays the output/final video recording data in the display unit20. 
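A highly simplified, non-limiting sketch of this interception path is shown below (Python; the class name ImageProcessingUnit, the on_frame method, and the use of plain lists in place of the memory unit18and display unit20are illustrative assumptions, not the actual implementation):

```python
class ImageProcessingUnit:
    # Stand-in for processing unit 14: it receives every raw frame, optionally
    # applies a speed-modifying effect, and only then records and displays it.
    def __init__(self, memory, display, effect=None):
        self.memory, self.display, self.effect = memory, display, effect

    def on_frame(self, frame):
        frames_out = self.effect(frame) if self.effect else [frame]
        for f in frames_out:
            self.memory.append(f)   # output/final video recording data (memory unit 18)
            self.display.append(f)  # and/or real-time display (display unit 20)

memory, display = [], []
ipu = ImageProcessingUnit(memory, display, effect=lambda f: [f, f])  # e.g. a -2x slow motion effect
for frame in ["#1", "#2", "#3"]:
    ipu.on_frame(frame)
print(memory)  # ['#1', '#1', '#2', '#2', '#3', '#3']
```

The point of the sketch is only that raw frames reach the memory unit18and display unit20through the processing unit14rather than directly from the camera12, which is what allows the speed-modifying algorithms described below to run in real time.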
It can be appreciated that the system10can be configured or configurable as a complete video system of an electronic device having one or more video cameras12, one or more display devices20, and one or more integrated circuits or processors. Alternatively, it can be appreciated that the imaging processing unit14can be configured or configurable as a module or integrated circuit chip embedded in the electronic device or with a component of the electronic device. Further in the alternative, the system10can be configured or configurable as a video data processing device such as, but not limited to, a graphics processing unit (GPU), digital signal processor (DSP), Active Server Pages (ASP), central processing unit (CPU), accelerated processing unit (APU), Application Specific Integrated Circuit (ASIC). Even further in the alternative, the system10can be configured or configurable as software or programming code as part of an operating system or application running on or controlling the electronic device or camera. The electronic device including the camera12, microphone16and display unit20can be, but not limited to, smart phones, smart watches, tablets, notebooks, desktop computers, laptops, DVD players, televisions digital cameras (point and shoot, single-lens reflex, video cameras, high end audio/visual gear), eyewear, drones, gimbals and other stabilizers, selfie sticks, closed circuit video monitoring system, dash cam for cars, endoscopes, microscopes, telescopes, camera and/or display embedded circuits, wearables, “Internet of Things” (IoT), and the like. With reference toFIG.2, the processing unit14can be configured or configurable to receive an input of a user selection of a requested recording speed. The raw video data from the camera12can be diverted to the imaging processing unit14, where the program and/or algorithm modifies or retains the raw frames contained in the raw video data from the camera12. The raw frames in the data stream are either modified or retained by the imaging processing unit14in real time, and then passed to the memory unit18and/or display unit20. Examples of operations of the imaging process unit using frame adding, frame blending and frame dropping are illustrated inFIGS.3A-G. When actuated or while in operation, the imaging processing unit14intercepts the raw video data22from the camera12, which includes a series of frames #1-#n at a native frame rate for proper presentation by the display unit20. For exemplary purposes, the frame rate shown inFIG.3Acan be 30 fps. The imaging processing unit14receives the raw frames22and then can modify or retain the raw frames dependent on one or more inputs signals received by the imaging processing unit14. If the imaging processing unit14receives no input signals requesting an adjustment of the frame speed rate, then all the raw frames contained in the raw video data22are passed through to other components such as the memory unit of the electronic device, as best illustrated inFIG.3A. In some or all embodiments, if the imaging processing unit14receives a special effect input signal associated with a fast motion recording operation, which represents a speed up or fast forward displaying at 2× the native frame rate, then the imaging processing unit14appropriately modifies the raw video data22. Upon which, the raw frames22are processed using algorithm wherein every second frame is dropped, as best illustrated inFIG.3B. 
Raw frame #1 can be appended in writer, raw frame #2 can be skipped/dropped, then raw frame #3 can be written, and then raw frame #4 can be skipped/dropped, and so on until a modified or output video recording data24is generated in 2× fast motion speed. This process is conducted in real time, and the fast motion output video is recorded in place of the raw video data22, and/or displayed in real time. In some or all embodiments, if the imaging processing unit14receives a special effect input signal associated with a fast motion recording operation, which represents a speed up or fast forward displaying at 3× the native frame rate, then the imaging processing unit14appropriately modifies the raw video data22. Upon which, the raw frames22are processed using algorithm wherein every second and third frames are dropped, as best illustrated inFIG.3C. Raw frame #1 can be appended in writer, the raw frames #2 and #3 can be skipped/dropped, then raw frame #4 can be written, then raw frames #5 and #6 can be skipped/dropped, and then raw frame #7 can be written, and so on until a modified or output video recording data24is generated in 3× fast motion speed. This process is conducted in real time, and the fast motion output video is recorded in place of the raw video data22, and/or displayed in real time. For example, if the imaging processing unit14receives a special effect input signal associated with a slow motion recording operation, which represents a slowdown or slow motion displaying at −2× the native frame rate. Upon which, the raw frames22are processed using algorithm wherein every frame is duplicated/repeated, as best illustrated inFIG.3D. Raw frame #1 can be appended in writer, then raw frame #1 is duplicated and written, then raw frame #2 is written, then raw frame #2 is duplicated and written, then raw frame #3 is written, and then raw frame #3 is duplicated and written, and so on until a modified or output video recording data24is generated in −2× slow motion speed. This process is conducted in real time, and the slow motion output video is recorded in place of the raw video data22, and/or displayed in real time or immediately after recording has stopped and the post recording algorithm has completed processing the commands entered while recording. In some or all embodiments, if the imaging processing unit14receives a special effect input signal associated with a slow motion recording operation, which represents a slowdown or slow motion displaying at −3× the native frame rate, then the imaging processing unit14appropriately modifies the raw video data22. Upon which, the raw frames are processed using algorithm wherein every frame is duplicated/repeated at least twice, as best illustrated inFIG.3E. Raw frame #1 can be appended in writer, then raw frame #1 is duplicated twice and each is written, then raw frame #2 is written, then raw frame #2 is duplicated twice and each is written, then raw frame #3 is written, and then raw frame #3 is duplicated twice and each written, and so on until a modified or output video recording data24is generated in −3× slow motion speed. This process is conducted in real time, and the slow motion output video is recorded. In some or all embodiments, if the imaging processing unit14receives a special effect input signal associated with a slow motion recording operation, which represents a slowdown or slow motion displaying at −2× the native frame rate. 
Upon which, the raw frames22are processed using an algorithm wherein new frames are created as a result of “blending” two adjacent frames, as best illustrated inFIG.3F. Raw frame #1 can be appended in writer, then raw frame #1 is “blended” with raw frame #2 to create 1 new frame, #1a, and then #1a is written, then raw frame #2 is written, then raw frame #2 is “blended” with raw frame #3 to create 1 new frame, #2a, and then #2a is written, then raw frame #3 is written, then raw frame #3 is “blended” with raw frame #4 to create 1 new frame, #3a, and then #3a is written, and so on until a modified or output video recording data24is generated in −2× slow motion speed. This process is conducted in real time, and the slow motion output video is recorded in place of the raw video data22, and/or displayed in real time or immediately after recording has stopped and the post recording algorithm has completed processing the commands entered while recording. In some or all embodiments, if the imaging processing unit14receives a special effect input signal associated with a slow motion recording operation, which represents a slowdown or slow motion displaying at −3× the native frame rate, the raw frames22are processed using an algorithm wherein new frames are created as a result of “blending” two adjacent frames, as best illustrated inFIG.3G. Raw frame #1 can be appended in writer, then raw frame #1 is “blended” with raw frame #2 to create 2 new frames, #1a & #1b, and then #1a & #1b are written, then raw frame #2 is written, then raw frame #2 is “blended” with raw frame #3 to create 2 new frames, #2a & #2b, and then #2a & #2b are written, then raw frame #3 is written, then raw frame #3 is “blended” with raw frame #4 to create 2 new frames, #3a & #3b, and then #3a & #3b are written, and so on until a modified or output video recording data24is generated in −3× slow motion speed. This process is conducted in real time, and the slow motion output video is recorded in place of the raw video data22, and/or displayed in real time or immediately after recording has stopped and the post recording algorithm has completed processing the commands entered while recording. It can be appreciated that additional fast and/or slow motion operations can be performed with greater fast motion or slow motion speeds than those described above. It can further be appreciated that a combination of fast motion and slow motion speeds can be applied to a single raw video data stream in real time, thus creating output/final video recording data containing portions of the native speed rate, fast motion speed, slow motion speed, or any combination thereof. With reference toFIGS.4and5, a companion software application can be associated with and/or executed by the image processing unit14or an electronic computing device, machine or system2that is operably associated with the image processing unit14.FIG.4is a diagrammatic representation of the image processing unit14incorporated with an integrated circuit chip26, which can be embedded with an example machine or component thereof, such as the camera12, in the form of the electronic device2, within which a set of instructions for causing the component or electronic device to perform any one or more of the methodologies discussed herein may be executed. Integrated circuit chip26containing the image processing unit14can be configured or configurable to include firmware for its operation. 
It can be appreciated that the integrated circuit chip26can be embedded with the camera12, the display unit20, or other components of the electronic device2. It can be appreciated that remote controls connected to the electronic device or camera through Bluetooth® or other protocols can be utilized. The integrated circuit chip26can include a computer or machine-readable medium on which is stored one or more sets of instructions and data structures (e.g., instructions) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions are configured or configurable for operation of the image processing unit14, which can receive operational instructions from the interface or GUI. The device2can further include a number of different input (including simultaneous input from multiple feeds) and/or output (I/O) systems such as, but not limited to, a touchscreen and GUI, sonar or subsonic transmitter, receiver and/or transceiver, voice command, Bluetooth®, remote controller, on-screen gesture command or infrared. The device2can further record video or images from the video recording device to a memory/storage system such as, but not limited to, an internal memory, an external memory, external solid-state drive (SSD) or the cloud. FIG.5is a diagrammatic representation of the image processing unit14incorporated with the electronic device2within which a set of instructions for causing the electronic device to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the electronic device2operates as a standalone device or may be connected (e.g., networked) to other devices. In a networked deployment, the electronic device may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The electronic device may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as an Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single electronic device is illustrated, the term “device” shall also be taken to include any collection of devices that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example electronic device2includes a processor or multiple processors (e.g., CPU, GPU, or both), and a main memory and/or static memory, which communicate with each other via a bus. In other embodiments, the electronic device2may further include a video display (e.g., a liquid crystal display (LCD)). The electronic device2may also include an alpha-numeric input device(s) (e.g., a keyboard), a cursor control device (e.g., a mouse), a voice recognition or biometric verification unit (not shown), a drive unit (also referred to as disk drive unit), a signal generation device (e.g., a speaker), a universal serial bus (USB) and/or other peripheral connection, and a network interface device. In other embodiments, the electronic device2may further include a data encryption module (not shown) to encrypt data. 
The image processing unit14can be a module operably associated with the drive unit, with the drive unit including a computer or machine-readable medium on which is stored one or more sets of instructions and data structures (e.g., instructions) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the memory and/or within the processors during execution thereof by the electronic device2. The memory and the processors may also constitute machine-readable media. The instructions may further be transmitted or received over a network via the network interface device utilizing any one of a number of well-known transfer protocols (e.g., Extensible Markup Language (XML)). While the machine-readable medium is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the device and that causes the device to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware. It is appreciated that the software application is configured or configurable to be stored in any memory of the electronic device2or on a remote computer in communication with the electronic device2. The software application is configured or configurable to include the interface capable of allowing a user to define a custom frame speed rate of the video to be recorded without changing the default frame speed rate provided by the camera12. Referring now in more detail to methods for controlling a special effects operation of live video capturing data in real time. As outlined, in some or all embodiments, the methods comprise capturing a video of a real world scene with the camera; playing at normal speed on a graphical user interface the video being captured; changing the playing speed on the graphical user interface of the video being recorded from the normal playing speed to a modified playing speed in response to a user interface input received by the graphical user interface. Reference will now be made in more detail to specific processes according to some or all embodiments for controlling special effects operations of live video capturing data in real time. A possible process of the software application or interface is illustrated inFIGS.6,7and24. The interface and/or software application allows the user to select a predefined video recording speed rate in real time, without altering the raw speed rate provided by the camera. 
This makes the present technology not camera or device dependent.FIG.6illustrates an overall process of the present technology including the user interface, the device, and the subroutines associated with the overall process. Referring toFIG.7, the process of the present technology is described which determines if any special effects option has been requested for raw video data stream from the camera. For exemplary purposes, the special effects can be the changing of video speed rate by modifying of frames in the raw video data. The process can be configured or configurable to initiate subroutines and/or subprocesses to assist in the overall process. The present technology software application is initiated and the user interface is provided to the user. An initial step can be for the user to opens the App50. After which, step51allows the user to go into Camera Settings and selects either to use the electronic device's camera or a remote video feed. The process then proceeds to step52wherein the user starts the recording process. The camera or electronic device receives a “start” command53to initiate audio/video recording. It can be appreciated that the camera “start” command can be initiated by the present technology software application, a camera application, any other application associated with the electronic device or with a remote device in communication with the electronic device or camera. Step54in the process can be to determine if the user has appropriate permission to proceed with the process. At step56, permission attributes can be obtained from separate user settings, profiles, databases, keys, accounts, and the like. The permission attributes can be obtained from a user database58. Step60determines if the user has the appropriate permission, and if the user does not have the appropriate permission, then the process is stopped or ends (step94). If the user does have appropriate permission then the process proceeds to step62, which will get the device's supported settings, including a maximum recording frame rate frames per second (fps). Then the process sets the local or remote device's recording fps based on user permission and device support in step64, and then opens an input stream from the device in step66. Once the raw data input stream from the camera is communicated to the image processing unit, then the process will then determine if the video data stream from the camera is opened in step68, while receiving information from step62. This request can be utilized to check if image processing unit is receiving the raw video data from the camera. The raw video data stream may include an integral or peripheral microphone, and can be passed to the image processing unit and not to the memory unit or video recording device. If the process determines that the input stream is not opened, then the process is stopped or ends (step94). If the input stream is open, then the process proceeds to step70to determine if the raw video data should be saved/recorded. If the raw video data is to be saved, then the process proceeds to step72to initiate a new parallel process utilizing the write video stream subroutine as illustrated in instance1inFIG.8. Additional input data from the recording device (step74) can be saved with the raw video data. 
If it was determined in step70that the raw video data is not to be saved, then the process proceeds to step76to determine whether the video input stream is open, and if it is open the process then determines if a special effect command has been entered by the user (step82). The special effect command can be entered in this process by way of the interface. If the user entered a special effect command, then step84is initiated to apply a special effects subroutine, as best illustrated inFIG.9. Additional input data from the recording device (step86) can be included with the application of special effects in step84. If the user has not entered a request for a special effect in step82, such as a change in video speed rate, then step88is initiated, which applies other commands, such as input data from the recording device (step90) and/or input processed video and audio data with special effects (step92). Step88can include other information from step84. If the video input stream is determined to be closed in step76, then the process stops or ends (step94). If necessary, step78can initiate a new parallel process utilizing the write video stream subroutine as illustrated in instance2inFIG.8. Additional processed video and audio data (step80) can be saved with the video data from step78. The new parallel process of step78can be initiated separately and independently from steps84and/or88. After step78the process proceeds back to step76. This process can write the raw video stream using the write video stream subroutine after the raw video data stream has either been processed using the apply special effects subroutine or been retained as the raw video data stream. Referring toFIG.8, the write video stream subroutine is described, which provides the process to write/save/record the video data stream to one or more internal memories, to one or more removable memories in communication with the electronic device, to one or more external devices, and/or to upload to one or more cloud devices or accounts. The present technology process determines in sequence which device or devices the video data stream is to be written to, and whether the user has appropriate permission for each of the steps associated with the write video stream subroutine. If the user does have the appropriate permission to write to that particular device or devices, then the process writes the video data stream to that particular device or devices in accordance with any user preferences. This subroutine starts (step104) upon initiation by a command from the process inFIG.7. This subroutine then proceeds to obtain the user's preferences and permissions (step102) from the process inFIG.7or a database (steps104and106). After step102, this subroutine acquires the raw video data stream from the camera as an input (step108). The raw video data stream can be an audio/video stream from the electronic device, the camera and/or the microphone, as per step110, and/or an audio/video stream from the device's RAM memory and/or non-volatile long term memory, as per step112. After acquisition of the raw video data stream, step114of this subroutine is initiated, which determines if the user has permission to write to internal memory. If the user does have the appropriate permission and if the user preferences allow for a write/copy action to internal memory (step116), then a new process is started at step118which writes the video data stream to the internal memory. 
If the user does not have permission to write to the internal memory from step 114, or if the user preferences in step 116 do not allow the write/copy action, or after starting the process in step 118, then this subroutine continues to determine if the user has permission to write to removable memory (step 120). If the user does have the appropriate permission and if the user preferences allow a write/copy action to removable memory (step 122), then a new process is started at step 124 which writes the video data stream to the removable memory. If the user does not have permission to write to the removable memory from step 120, or if the user preferences in step 122 do not allow such an action, or after starting the process in step 124, then this subroutine continues to determine if the user has permission to write to external devices (step 126). If the user does have the appropriate permission and if the user preferences allow a write/copy action to external devices (step 128), then a new process is started at step 130, which writes the video data stream to the external devices. If the user does not have permission to write to the external devices from step 126, or if the user preferences in step 128 do not allow such an action, or after the process started in step 130 is completed, then this subroutine continues to determine if the user has permission to write to the cloud (step 132). If the user does have the appropriate permission and if the user preferences allow a write/copy action to the cloud (step 134), then a new process is started at step 136 which writes the video data stream to the cloud. If the user does not have permission to write to the cloud from step 132, or if the user preferences from step 134 do not allow such an action, or after starting the process in step 136, then this subroutine stops or ends (step 138). Referring to FIG. 9, the apply special effects subroutine is described, which determines if a special effects option has been requested and determines the specific operation of the special effects request. This subroutine starts (step 140) upon initiation by a command from the process in FIG. 7. After starting, this subroutine acquires the raw video data stream from the camera as an input (step 142). The raw video data stream can be an audio/video stream from the electronic device, the camera and/or the microphone, as per step 146. After acquisition of the raw video data stream, step 148 is initiated, which determines if the current speed is equal to the normal or native speed, such as, but not limited to, by checking whether the Recording_fps is greater than the Playback_fps. If the user has made a speed change request, then step 150 initiates an advanced slow motion subroutine, as best illustrated in FIG. 12. After the completion of step 150, this subroutine stops or ends (step 168). If the user has not made a speed change request such that the new speed is not set to normal, such as if the Recording_fps is not greater than the Playback_fps or if the Recording_fps is equal to the Playback_fps, then this subroutine proceeds to step 152, which determines if the current speed is equal to the normal or native speed. If the user has made a speed change request or if the user has set the speed back to normal from a previously modified speed setting, then this subroutine continues to step 154 to write the video stream to the RAM memory and/or non-volatile long term memory buffer, as per FIG. 3A.
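The permission-gated write targets of FIG. 8 described above can be summarized as a simple cascade. The following Python sketch assumes hypothetical permission and preference flags and writer callables for each destination; in the flow chart each write is launched as its own parallel process, whereas a sequential loop is shown here only for brevity.

# Compact sketch of the write video stream subroutine of FIG. 8, assuming
# hypothetical permission/preference flags and writer callables.
def write_video_stream(stream, permissions, preferences, writers):
    """Fan the stream out to every destination the user may and wants to use."""
    # Steps 114-136: check each destination in sequence and start a write process.
    for target in ("internal", "removable", "external", "cloud"):
        if permissions.get(target) and preferences.get(target):
            writers[target](stream)   # e.g. steps 118, 124, 130, 136
    # Step 138: stop or end.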
After step 154 is completed, the subroutine proceeds to step 164 to return the video buffer (RAM memory and/or non-volatile long term memory) to a calling function, which can be a step to determine if the video stream is open, or this subroutine stops or ends (step 168). If the user has not made a speed change request such that the new speed is not set to normal, this subroutine will then proceed to step 156, which determines if the speed change request is faster or slower than the normal speed of the raw video data stream. This can be accomplished by determining if the current speed is greater than normal. If the current speed is greater than the normal speed, then this subroutine will initiate a speed up subroutine (step 158), as best illustrated in FIG. 10. After the speed up subroutine is completed, this subroutine will then initiate step 164 to return the video buffer (RAM memory and/or non-volatile long term memory) to the calling function. If the requested current speed is not greater than the normal speed, then this subroutine continues to step 160 to determine if the current speed is to be less than normal. If the current speed is less than the normal speed, then this subroutine will initiate a slowdown subroutine (step 162), as best illustrated in FIG. 13. After the slowdown subroutine is completed, or if the current speed is not to be less than normal, then this subroutine will initiate step 164 to return the video buffer (RAM memory and/or non-volatile long term memory) to the calling function. Referring to FIG. 10, the speed up subroutine is described, which determines if a frame dropping option and/or other plugins are required. This subroutine starts (step 170) upon initiation by a command from the apply special effects subroutine (FIG. 9, step 158). After starting, this subroutine acquires the raw video data stream from the camera and/or from streamed input from a remote video feed as an input (step 172). The raw video data stream can be an audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step 174. After acquisition of the raw video data stream, step 176 of this subroutine is initiated, which determines if the video data input stream from the camera is open. If it is not open, then this subroutine proceeds to step 189, which stops or ends this subroutine. If the input stream is open, then this subroutine determines if frame dropping is required (step 178), and if required then continues to step 180, which initiates a frame dropping subroutine as best illustrated in FIG. 11. If frame dropping is not required from step 178, or after the frame dropping subroutine of step 180 is completed, then this subroutine proceeds to step 181 to determine if the use of time compression or expansion is requested, and if required then continues to step 182, which initiates a time compression and expansion subprocess as best illustrated in FIG. 20. If time compression and/or expansion is not required from step 181, or after the time compression and/or expansion subprocess of step 182 is completed, then this subroutine proceeds to step 183 to determine if the use of variable FPS playback is requested, and if required then continues to step 184, which initiates a variable FPS playback subprocess as best illustrated in FIG. 21.
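The dispatch logic of the apply special effects subroutine of FIG. 9 can be expressed compactly. The following Python sketch passes in placeholder callables standing in for the subroutines of FIGS. 10, 12 and 13; the callable names and the numeric speed convention (1.0 = normal) are assumptions made only for illustration.

# Hedged sketch of the apply special effects dispatch of FIG. 9.
def apply_special_effects(frames, recording_fps, playback_fps, speed,
                          advanced_slow_motion, speed_up, slow_down):
    if recording_fps > playback_fps:          # step 148
        return advanced_slow_motion(frames)   # step 150, FIG. 12
    if speed == 1.0:                          # step 152: normal/native speed
        return list(frames)                   # step 154: pass frames to the buffer
    if speed > 1.0:                           # step 156: faster than normal
        return speed_up(frames, speed)        # step 158, FIG. 10
    return slow_down(frames, speed)           # steps 160/162, FIG. 13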
If frame variable FPS playback is not required from step183or after the variable FPS playback subprocess of step184is completed, then this subroutine proceeds to step185to determine if other plugins or applications are requested. In the case that other plugins or application is requested, then this subroutine proceeds to step186to execute the other plugins or applications and apply their functions to the raw video stream from step178or modified video stream from any of steps180,182and/or184. For example, other plugins or applications can be, but not limited to, smoothing technology and the like. These other plugins or applications can be integrated with the present technology software application, or can be remote from the present technology but accessible and operable with present technology software application. In the case the user does not request the use of other plugins or applications from step185or after the other plugin process of step186is completed, then this subroutine will continue to step188to return data to a calling function that loops back to step176to determine if the video input stream is open. Step188can receive video/audio streams from RAM memory and/or non-volatile long term memory (step187). It can be appreciated that this apply special effects subroutine includes a looped subprocess including steps178,180,185,186and188until the input stream is determined to not be open in step176. With reference toFIG.11, the frame dropping subroutine is described which determines if and which frames are dropped to simulate the requested fast motion video. An exemplary case for this subroutine can be if the Record_fps is equal to the Playback_fps. This subroutine starts (step190) upon initiation by a command from the speed up subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step192). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step194. After acquisition of the raw video data stream, step196of this subroutine is initiated which determines if the video data input stream from the camera is open. If step196determines that the input stream is not open, then this subroutine proceeds to step198, which returns data to a calling function being step180inFIG.10. Step198can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step200). After the step198is completed, then this subroutine stops or ends (step202). While the input stream is open from step196, this subroutine determines if the speed equals 2 times faster than normal (step204). If so then step206is initialized which will drop the next frame, as perFIG.3B. After which, this subroutine proceeds to step220to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step220, this subroutine returns to step196. If the speed does not equal 2 times faster than normal (step204), then this subroutine determines if the speed equals 3 times faster than normal (step208). If so then step210is initialized which will drop the next 2 frames, as perFIG.3C. After which, this subroutine proceeds to step220and then returns to step196. If the speed does not equal 3 times faster than normal (step208), then this subroutine determines if the speed equals 4 times faster than normal (step212). 
If so then step214is initialized which will drop the next 3 frames. After which, this subroutine proceeds to step220and then returns to step196. If the speed does not equal 4 times faster than normal (step212), then this subroutine will sequentially continue to determine if the speed equals “n” times faster than normal (step216). If so then each “nth” step will initialize a drop the next (n−1) frames action (step218). After which, this subroutine proceeds to step220and then returns to step196. It can be appreciated that this frame dropping subroutine determines if a frame should or should not be dropped on a frame-by-frame basis. The result is a modified video stream with specific frames removed to simulate a fast motion video of predetermined speed. This modified video stream is then written/saved to memory in real time. It can be appreciated that this frame dropping subroutine includes a looped subprocess including steps204-220until the input stream is determined to not be open in step196. Referring toFIG.12, the advanced slow motion subroutine is described which determines if a frame adding option or other plugins are required. This subroutine starts (step222) upon initiation by a command from the apply special effects subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step224). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step246. After acquisition of the raw video data stream, step248of this subroutine is initiated which determines if the video data input stream from the camera is open. If step248determines that the input stream is not open, then this subroutine proceeds to step270, which stops this subroutine. While the input stream is open from step248, this subroutine determines if frame adding is required (step250), and if required then continues to step252that initiates a frame adding subroutine, as best illustrated inFIG.13. If frame adding is not required from step250or after the frame adding subroutine from step252is completed, then this subroutine proceeds to step254to determine if an increase in frames rate recording speed is required. If so, then this subroutine continues to step256, which initiates a variable frame rate subroutine or an increase frame rate subroutine, as best illustrated inFIG.14. If increase in frames rate recording speed is not required from step254or after the variable frame rate subroutine from step256is completed, then this subroutine proceeds to step258to determine if a constant high frames rate recording speed is to be used. If so, then this subroutine proceeds to step260, which initiates a constant high frame rate subroutine, as best illustrated inFIG.15. If frame constant high frames rate recording speed is not required from step258or after the constant high frames rate recording speed subroutine of step260is completed, then this subroutine proceeds to step261to determine if the use of time compression or expansion is requested, and if required then continues to step262that initiates a time compression and expansion subprocess as best illustrated inFIG.20. 
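The frame dropping rule of FIG. 11 described above reduces to keeping one frame out of every n for an n-times fast motion request. A minimal Python sketch, assuming frames are simply items in a list:

# Minimal sketch of the frame dropping rule of FIG. 11: for an n-times
# fast motion request, keep one frame and drop the next (n - 1) frames.
def drop_frames(frames, speed_factor):
    """Return every speed_factor-th frame to simulate fast motion."""
    if speed_factor <= 1:
        return list(frames)                       # normal speed: unchanged
    return [f for i, f in enumerate(frames) if i % speed_factor == 0]

# Example: 2x keeps frames 0, 2, 4, ...; 4x keeps frames 0, 4, 8, ...
assert drop_frames(list(range(8)), 2) == [0, 2, 4, 6]
assert drop_frames(list(range(8)), 4) == [0, 4]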
If frame time compression and/or expansion is not required from step261or after the time compression and/or expansion subprocess of step262is completed, then this subroutine proceeds to step263to determine if the use of variable FPS playback is requested, and if required then continues to step264that initiates a variable FPS playback subprocess as best illustrated inFIG.22. If frame variable FPS playback is not required from step263or after the variable FPS playback subprocess of step264is completed, then this subroutine proceeds to step265to determine if other special effects enhancement is requested. In the case that other special effects enhancement is requested, then this subroutine proceeds to step267, which can execute the other special effects subroutine and apply their functions to the raw or modified video stream. This other special effects subroutine can be integrated with the present technology software application, or can be remote from the present technology but accessible and operable with present technology software application. In the case the user does not request the use of other special effects enhancement from step265or after the other special effects subroutine from step267is completed, then this subroutine will continue to step266to return data to a calling function that loops back to step248to the determine if the video input stream is open. It can be appreciated that other processed audio/video data can be part of the data returned to the calling function, as per step268. It can be appreciated that this advanced slow motion subroutine includes a looped subprocess including steps250-266until the input stream is determined to not be open in step248. With reference toFIG.13, the frame adding subroutine associated with the slowdown subroutine ofFIG.12is described which determines if and which frames are added to simulate the requested slow motion video. This subroutine assumes that recording fps=playback fps. This subroutine starts (step272) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step274). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step276. After acquisition of the raw video data stream, step274of this subroutine is initiated which determines if the video data input stream from the camera is open. If step278determines that the input stream is not open, then this subroutine proceeds to step298, which returns data to a calling function being step252inFIG.12. Step298can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step300). After step298is completed, then this subroutine stops or ends (step302). While the input stream is open from step278, this subroutine determines the type of frame adding to utilize in step280, either simple frame copying (step281) or a more CPU intensive frame blending (step282). If the user has selected frame copying, then the process proceeds to step281and the algorithm and its description are unchanged. However, if the user selected “Frame Blending” and their hardware supports it, then the process proceeds to step282and the algorithm can include new or additional steps. 
It can be appreciated that if frame copying was selected during step280then for each of the speed “checks”, logically, the process will proceed along the left algorithm path. It can be further appreciated that if frame blending was selected during step280then for each of the speed “checks”, logically, the process will proceed along the right algorithm path. The subroutine continues to determine if the speed equals 2 times slower than normal (step283). If so, for the frame copying path, then step284is initialized which will copy the frame 1 time for a total of 2 of the identical frames, as perFIG.3D. After which, this subroutine proceeds to step296to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step296, this subroutine returns to step278. For the frame blending path, then step285is initialized which will blend the current frame with the next frame for a total of 1 new “blended” frame, as perFIG.3F. After which, this subroutine proceeds to step296. If the speed does not equal 2 times slower than normal (step283), then this subroutine determines if the speed equals 3 times slower than normal (step286). If so, for the frame copying path, then step287is initialized which will copy the frame 2 times for a total of 3 of the identical frames, as perFIG.3E. After which, this subroutine proceeds to step296and then returns to step278. For the frame blending path, then step288is initialized which will blend the current frame with the next frame for a total of 2 new “blended” frames, as perFIG.3G. After which, this subroutine proceeds to step296. If the speed does not equal 3 times slower than normal (step286), then this subroutine determines if the speed equals 4 times slower than normal (step289). If so, for the frame copying path, then step290is initialized which will copy the frame 3 times for a total of 4 of the identical frames. After which, this subroutine proceeds to step296and then returns to step278. For the frame blending path, then step291is initialized which will blend the current frame with the next frame for a total of 3 new “blended” frames. After which, this subroutine proceeds to step296. If the speed does not equal 4 times slower than normal (step289), then this subroutine will continue to determine if the speed equals “n” times slower than normal (step292). If so, for the frame copying path, then each “nth” step will copy the frame (n−1) times for a total of “n” of the identical frames. After which, this subroutine proceeds to step296and then returns to step278. For the frame blending path, then step295is initialized which will blend the current frame with the next frame for a total of (n−1) new “blended” frames. After which, this subroutine proceeds to step296. It can be appreciated that this frame adding subroutine includes a looped subprocess including steps280-296until the input stream is determined to not be open in step278. With reference toFIG.14, an example of the variable high recording fps subroutine (120 FPS) associated with the variable frame rate subroutine ofFIG.12is described. This variable frame rate subroutine can be utilized for simulating slow motion, such as but limited to, slow motion range=recording speed/playback fps=120 fps/30 fps=4. This subroutine starts (step304) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step306). 
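The frame adding subroutine of FIG. 13 described above can be sketched as follows, with frames represented as numeric values so that blending can be shown. The linear cross-fade used for the blended frames is an assumption, since the flow chart does not prescribe particular blend weights.

# Sketch of the frame adding subroutine of FIG. 13. Frame copying repeats
# each frame (n - 1) times; frame blending inserts (n - 1) interpolated
# frames between the current frame and the next one.
def add_frames(frames, n, blend=False):
    out = []
    for i, cur in enumerate(frames):
        out.append(cur)
        nxt = frames[i + 1] if i + 1 < len(frames) else cur
        for k in range(1, n):
            if blend:
                out.append(cur + (nxt - cur) * k / n)   # blended in-between frame
            else:
                out.append(cur)                          # identical copy
    return out

# 2x slower by copying: each frame appears twice.
assert add_frames([10, 20], 2) == [10, 10, 20, 20]
# 2x slower by blending: one blended frame between neighbouring frames.
assert add_frames([10, 20], 2, blend=True) == [10, 15.0, 20, 20.0]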
The raw video data stream can be audio/video stream from the electronic device, the camera and/or the microphone, as per step308. After acquisition of the raw video data stream, step310of this subroutine is initiated to set the device's recording frame rate, for example to Recording_Frame_Rate=120 fps. After which, step312sets the device's playback frame rate, for example to Playback_Frame_Rate=30 fps. Step314of this subroutine is initiated which determines if the video data input stream from the camera is open. If step314determines that the input stream is not open, then this subroutine proceeds to step332, which returns data to a calling function being step256inFIG.12. Step332can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step334). After step332is completed, then this subroutine stops or ends (step336). While the input stream is open from step314, this subroutine determines if the recording speed equals “−4×” (step316), which can be a slow motion range of 4. If so then step318is initialized which sets the recording frame rate to 120 fps. After which, this subroutine proceeds to step330to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step330, this subroutine returns to step314. If the recording speed does not equal “−4×” (step316), then this subroutine determines if the recording speed equals “−3×” (step320). If so then step322is initialized which sets the recording frame rate to 90 fps. After which, this subroutine proceeds to step330to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step330, this subroutine returns to step314. If the recording speed does not equal “−3×” (step320), then this subroutine determines if the recording speed equals “−2×” (step324). If so then step326is initialized which sets the recording frame rate to 60 fps. After which, this subroutine proceeds to step330to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step330, this subroutine returns to step314. If the recording speed does not equal “−2×” (step324), then this subroutine will set the recording frame rate to 30 fps (step328), which can be a recording speed equal to or less than “normal”. After which, this subroutine proceeds to step330to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step330, this subroutine returns to step314. It can be appreciated that this variable high recording fps subroutine includes a looped subprocess including steps316-330until the input stream is determined to not be open in step314. With reference toFIG.15, an example of the constant frame rate slow motion subroutine associated with the constant high frame rate subroutine ofFIG.12is described. This constant frame rate slow motion subroutine can be utilized for simulating slow motion. This subroutine starts (step340) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step342). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step346. 
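The recording frame rate selection of FIG. 14 amounts to a small lookup keyed on the requested slow motion factor, with the playback rate held at 30 fps. A minimal sketch:

# Sketch of the variable recording frame rate selection of FIG. 14
# (playback fixed at 30 fps): the requested slow motion factor picks the
# sensor recording rate directly.
def recording_fps_for_speed(slow_motion_factor, playback_fps=30):
    table = {4: 120, 3: 90, 2: 60}                        # steps 316-326: -4x, -3x, -2x
    return table.get(slow_motion_factor, playback_fps)    # step 328: normal or less

assert recording_fps_for_speed(4) == 120
assert recording_fps_for_speed(1) == 30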
After acquisition of the raw video data stream, step 348 of this subroutine is initiated, which gets the video stream's recording frame rate (recording fps), and then continues to step 350, which gets the video stream's playback frame rate (playback fps). With the recording and playback frame rates acquired, this subroutine then determines if the recording_fps=playback_fps*2 (step 352). If so, then it proceeds to step 354 to initiate a constant high frame rate subroutine at 60 fps, as best illustrated in FIG. 16. After which, this subroutine stops or ends (step 368). If it is not found that the recording_fps=playback_fps*2, then this subroutine proceeds to step 356 to determine if the recording_fps=playback_fps*4. If so, then it proceeds to step 358 to initiate a constant high frame rate subroutine at 120 fps, as best illustrated in FIG. 17. After which, this subroutine stops or ends (step 368). If it is not found that the recording_fps=playback_fps*4, then this subroutine proceeds to step 360 to determine if the recording_fps=playback_fps*8. If so, then it proceeds to step 362 to initiate a constant high frame rate subroutine at 240 fps, as best illustrated in FIG. 18. After which, this subroutine stops or ends (step 368). If it is not found that the recording_fps=playback_fps*8, then this subroutine proceeds to step 364, which is generic for all other cases and initiates a constant high frame rate subroutine at a higher fps. After which, this subroutine stops or ends (step 368). With reference to FIG. 16, an example of the constant high recording fps subroutine (60 FPS) associated with the constant high frame rate subroutine of FIG. 15 is described. This constant high frame rate subroutine can be utilized for simulating slow motion, such as, but not limited to, slow motion range=recording speed/playback fps=60 fps/30 fps=2. "Slow motion range" is defined as the multiple factor by which a slow motion effect can be created with the record and playback fps settings such that the algorithm does not have to use "frame adding" of any type. This subroutine starts (step 370) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step 372). The raw video data stream can be an audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step 374. After acquisition of the raw video data stream, step 376 of this subroutine is initiated, which sets the device's recording frame rate, for example to Recording_Frame_Rate=60 fps. After which, step 378 sets the device's playback frame rate, for example to Playback_Frame_Rate=30 fps. Step 380 of this subroutine is initiated, which determines if the video data input stream from the camera is open. If step 380 determines that the input stream is not open, then this subroutine proceeds to step 398, which returns data to a calling function, being step 354 in FIG. 15. Step 398 can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step 400). After step 398 is completed, this subroutine stops or ends (step 402). While the input stream is open from step 380, this subroutine determines if the recording speed equals “−4×” (step 382). If so, then step 384 is initialized, which copies each frame in the stream 2 times for a total of 3 identical frames as per FIG. 3E, or blended frames as per FIG. 3G.
After which, this subroutine proceeds to step396to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step396, this subroutine returns to step380. If the recording speed does not equal “−4×” (step382), then this subroutine determines if the recording speed equals “−3×” (step386). If so then step388is initialized which copies each frame in the stream 1 time for a total 2 identical frames as perFIG.3Dor blended frames as perFIG.3F. After which, this subroutine proceeds to step396to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step396, this subroutine returns to step380. If the recording speed does not equal “−3×” (step386), then this subroutine determines if the recording speed equals “−2×” (step390). If so then step392is initialized which passes thru an unchanged video stream. After which, this subroutine proceeds to step396to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step396, this subroutine returns to step380. If the recording speed does not equal “−2×” (step390), then this subroutine will drop 1 of 2 frames (1/2) (step394) for a recording speed equal to “normal”. After which, this subroutine proceeds to step396to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step396, this subroutine returns to step380. It can be appreciated that this constant high recording fps subroutine (60 FPS) includes a looped subprocess including steps382-396until the input stream is determined to not be open in step380. With reference toFIG.17, an example of the constant high recording fps subroutine (120 FPS) associated with the constant high frame rate subroutine ofFIG.15is described. This constant high frame rate subroutine can be utilized for simulating slow motion, such as but limited to, slow motion range=recording speed/playback fps=120 fps/30 fps=4. This subroutine starts (step404) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step406). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step408. After acquisition of the raw video data stream, step410of this subroutine is initiated which sets the device's recording frame rate, for example to Recording_Frame_Rate=120 fps. After which, step412sets the device's playback frame rate, for example to Playback_Frame_Rate=30 fps. Step414of this subroutine is initiated which determines if the video data input stream from the camera is open. If step414determines that the input stream is not open, then this subroutine proceeds to step448, which returns data to a calling function being step358inFIG.15. Step448can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step450). After step448is completed, then this subroutine stops or ends (step452). While the input stream is open from step414, this subroutine determines if the recording speed equals “−8×” (step416). If so then step418is initialized which copies the frame 4 times for a total 5 identical frames or blended frames. After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. 
If the recording speed does not equal “−8×” (step416), then this subroutine determines if the recording speed equals “−7×” (step420). If so then step422is initialized which copies the frame 3 times for a total 4 identical frames or blended frames. After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. If the recording speed does not equal “−7×” (step420), then this subroutine determines if the recording speed equals “−6×” (step424). If so then step426is initialized which copies the frame 2 times for a total 3 identical frames as perFIG.3Eor blended frames as perFIG.3G. After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. If the recording speed does not equal “−6×” (step424), then this subroutine determines if the recording speed equals “−5×” (step428). If so then step430is initialized copies the frame 1 time for a total 2 identical frames as perFIG.3Dor blended frames as perFIG.3F. After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. If the recording speed does not equal “−5×” (step428), then this subroutine determines if the recording speed equals “−4×” (step432). If so then step434is initialized which passes thru an unchanged video stream. After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. If the recording speed does not equal “−4×” (step432), then this subroutine determines if the recording speed equals “−3×” (step436). If so then step438is initialized which drops 1 of 4 frames (1/4) (step438). After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. If the recording speed does not equal “−3×” (step436), then this subroutine determines if the recording speed equals “−2×” (step440). If so then step442is initialized which drops 2 of 4 frames (2/4) (step442). After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. If the recording speed does not equal “−2×” (step440), then this subroutine will drop 3 of 4 frames (3/4) (step444) for a recording speed equal to “normal”. After which, this subroutine proceeds to step446to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step446, this subroutine returns to step414. It can be appreciated that this constant high recording fps subroutine (120 FPS) includes a looped subprocess including steps416-446until the input stream is determined to not be open in step414. With reference toFIG.18, an example of the constant high recording fps subroutine (240 FPS) associated with the constant high frame rate subroutine ofFIG.15is described. This constant high frame rate subroutine can be utilized for simulating slow motion, such as but limited to, slow motion range=recording speed/playback fps=240 fps/30 fps=8. This subroutine starts (step454) upon initiation by a command from the slowdown subroutine. 
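The constant 120 fps mapping of FIG. 17 described above can be sketched as follows. Frame blending is omitted and identical copies are used, and the speed values are taken as positive magnitudes of the “−n×” settings; both simplifications are assumptions for illustration.

# Sketch of the constant 120 fps recording mapping of FIG. 17
# (playback 30 fps, slow motion range 4): speeds beyond the range add
# copies of each frame, speeds inside the range drop a share of frames.
def process_120fps_stream(frames, slow_factor):
    copies = {8: 4, 7: 3, 6: 2, 5: 1}        # steps 416-430: extra copies per frame
    keep_of_4 = {4: 4, 3: 3, 2: 2, 1: 1}     # steps 432-444: keep k of every 4 frames
    out = []
    if slow_factor in copies:
        for f in frames:
            out.extend([f] * (copies[slow_factor] + 1))   # identical copies (blending could be substituted)
    else:
        k = keep_of_4.get(slow_factor, 1)
        out = [f for i, f in enumerate(frames) if i % 4 < k]
    return out

# -8x: every frame becomes 5 frames; normal (1x): keep 1 of every 4 frames.
assert len(process_120fps_stream(list(range(4)), 8)) == 20
assert process_120fps_stream(list(range(8)), 1) == [0, 4]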
After starting, this subroutine acquires the raw video data stream from the camera as an input (step456). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step458. After acquisition of the raw video data stream, step460of this subroutine is initiated which set the device's recording frame rate, for example to Recording_Frame_Rate=240 fps. After which, step462sets the device's playback frame rate, for example to Playback_Frame_Rate=30 fps. Step464of this subroutine is initiated which determines if the video data input stream from the camera is open. If step464determines that the input stream is not open, then this subroutine proceeds to step498, which returns data to a calling function being step362inFIG.15. Step498can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step500). After step498is completed, then this subroutine stops or ends (step502). While the input stream is open from step464, this subroutine determines if the recording speed equals “−8×” (step466). If so then step468is initialized which passes thru an unchanged video stream. After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. If the recording speed does not equal “−8×” (step466), then this subroutine determines if the recording speed equals “−7×” (step470). If so then step472is initialized which drops 1 frame out of every 8 frames (1/8). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. If the recording speed does not equal “−7×” (step470), then this subroutine determines if the recording speed equals “−6×” (step474). If so then step476is initialized which drops 1 frame out of every 4 frames (2/8). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. If the recording speed does not equal “−6×” (step474), then this subroutine determines if the recording speed equals “−5×” (step478). If so then step480is initialized which drops 3 frame out of every 8 frames (3/8). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. If the recording speed does not equal “−5×” (step478), then this subroutine determines if the recording speed equals “−4×” (step482). If so then step484is initialized which drops 1 frame out of every 2 frames (4/8). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. If the recording speed does not equal “−4×” (step482), then this subroutine determines if the recording speed equals “−3×” (step486). If so then step488is initialized which drops 5 frame out of every 8 frames (5/8). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. 
If the recording speed does not equal “−3×” (step486), then this subroutine determines if the recording speed equals “−2×” (step490). If so then step492is initialized which drops 3 frame out of every 4 frames (6/8). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. If the recording speed does not equal “−2×” (step490), then this subroutine will drop 7 frame out of every 8 frames (7/8) (step494). After which, this subroutine proceeds to step496to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step496, this subroutine returns to step464. It can be appreciated that this constant high recording fps subroutine (240 FPS) includes a looped subprocess including steps466-496until the input stream is determined to not be open in step464. With reference toFIG.19, an example of an extreme slow motion at constant high recording fps subroutine (240 FPS) associated with the constant high frame rate subroutine ofFIG.15is described. This constant high frame rate subroutine can be utilized for simulating extreme slow motion, such as but limited to, slow motion range of −8× to −128× speed. Constant High Recording FPS with Frame Adding Subroutine ofFIG.19illustrates an exemplary flow chart algorithm for the combination of high frames per second recording rate, “normal” playback frames per seconds, and frame adding to boost the slow motion special effect. This subroutine further illustrates speeds that are >=−8× and perfect multiples of 2, with speeds slower than −8× being best illustrated inFIG.18. This subroutine starts (step510) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step512). The raw video data stream can be audio/video stream from the electronic device, the camera and/or the microphone, as per step514. After acquisition of the raw video data stream, step516of this subroutine is initiated which set the device's recording frame rate, for example to Recording Frame Rate=240 fps. After which, step518sets the device's playback frame rate, for example to Playback Frame Rate=30 fps. Step520of this subroutine is initiated which determines if the video data input stream from the camera is open. If step520determines that the input stream is not open, then this subroutine proceeds to step544, which returns data to a calling function being step358inFIG.15. Step544can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step546). After step544is completed, then this subroutine stops or ends (step548). While the input stream is open from step520, this subroutine determines if the recording speed equals “−8×” (step522). If so then step524is initialized which passes thru an unaltered/unchanged video stream. After which, this subroutine proceeds to step542to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step542, this subroutine returns to step520. If the recording speed does not equal “−8×” (step522), then this subroutine determines if the recording speed equals “−16×” (step526). If so then step528is initialized which copies each frame 1 times for a total of 2 identical frames as perFIG.3Dor blended frames as perFIG.3F. After which, this subroutine proceeds to step542to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). 
After step 542, this subroutine returns to step 520. If the recording speed does not equal “−16×” (step 526), then this subroutine determines if the recording speed equals “−32×” (step 530). If so, then step 532 is initialized, which copies each frame 2 times for a total of 3 identical frames as per FIG. 3E, or blended frames as per FIG. 3G. After which, this subroutine proceeds to step 542 to write frame(s) to the buffer (RAM memory and/or non-volatile long term memory). After step 542, this subroutine returns to step 520. If the recording speed does not equal “−32×” (step 530), then this subroutine determines if the recording speed equals “−64×” (step 534). If so, then step 536 is initialized, which copies each frame 3 times for a total of 4 identical frames or blended frames. After which, this subroutine proceeds to step 542 to write frame(s) to the buffer (RAM memory and/or non-volatile long term memory). After step 542, this subroutine returns to step 520. If the recording speed does not equal “−64×” (step 534), then this subroutine determines if the recording speed equals “−128×” (step 538). If so, then step 540 is initialized, which copies each frame 4 times for a total of 5 identical frames or blended frames. After which, this subroutine proceeds to step 542 to write frame(s) to the buffer (RAM memory and/or non-volatile long term memory). After step 542, this subroutine returns to step 520. It can be appreciated that this constant high recording fps subroutine (240 FPS) includes a looped subprocess including steps 520-542 until the input stream is determined to not be open in step 520. With reference to FIG. 20, an example of a segment time compression and expansion subroutine is illustrated and will be described, which provides a flow chart algorithm for slow motion and fast motion by speeding up or slowing down the playback time during video processing after the recording has stopped. Frame adding/dropping can be performed in the time compression/expansion algorithm to simulate the slow motion special effect. Video files that are created with this algorithm/subroutine can be played normally in all video players and require no metadata. This is an alternative to the other video files created by the present technology. This subroutine starts (step 550) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the camera as an input (step 552). The raw video data stream can be an audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step 554. After acquisition of the raw video data stream, step 556 of this subroutine is initiated, which sets the device's recording FPS. After which, step 558 sets the playback FPS to less than or equal to (<=) the recording FPS. Step 560 of this subroutine is initiated, which determines if the video data input stream from the camera is open. If step 560 determines that the input stream is not open, then this subroutine proceeds to step 576. Step 576 can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step 578). After step 576 is completed, this subroutine stops or ends (step 580). While the input stream is open from step 560, this subroutine determines if the speed is less than "normal" (step 562). If so, then step 564 is initialized, which sets the video segment fps to equal the recording fps divided by the speed (Segment FPS=Record_FPS/Speed).
After which, this subroutine proceeds to step574to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step574, this subroutine returns to step560. If the speed is not less than “normal” (step562), then this subroutine determines if the speed equals “normal” (step566). If so then step568is initialized which sets video segment fps to equal the recording fps (Segment FPS=Record_FPS). After which, this subroutine proceeds to step574to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step574, this subroutine returns to step560. If the recording speed does not equal “normal” (step566), then this subroutine determines if the speed is greater than “normal” (step570). If so then step572is initialized which sets video segment fps to equal the recording fps times the speed (Segment FPS=Record_FPS*Speed). After which, this subroutine proceeds to step574to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step574, this subroutine returns to step560. It can be appreciated that this segment time compression and expansion subroutine includes a looped subprocess including steps560-574until the input stream is determined to not be open in step560. An example of the segment time compression and expansion subroutine is best illustrated inFIG.21, which illustrates the results of the algorithm inFIG.20. The top bar represents the video segments582in seconds per video segment in a continuous recording. The recording video segments582, in seconds, are process by the segment time compression and expansion subroutine. The segments582are created when the user/AI changes the speed variable. The time special effects are applied to the raw video segment, and written into the processed video stream RAM, where each segment is either compressed, expanded or unchanged. The resultant playback video segments584are then provided in seconds per video segment corresponding to the recording segments time in seconds. With reference toFIG.22, an example of a variable playback speed record subroutine is illustrated and will be described, which provides a flow chart algorithm for slow motion and fast motion by speeding up or slowing down the playback frame rate while video recording is in progress. This algorithm can produce a normal video with the fast/slow motion commands embedded in the video's metadata. The metadata is data embedded in the video file that does not show up in the video recording. This subroutine starts (step590) upon initiation by a command from the slowdown subroutine inFIG.12(step264). After starting, this subroutine acquires the raw video data stream from the camera as an input (step592). The raw video data stream can be audio/video stream from the local electronic device including the camera and/or microphone, from a remote device including the camera and/or the microphone, or from other audio/video feeds, as per step594. After acquisition of the raw video data stream, step596of this subroutine is initiated which set the device's recording FPS. After which, step598sets the playback FPS to less than or equal to (<=) the recording FPS. Step600of this subroutine is initiated which determines if the video data input stream from the camera is open. If step600determines that the input stream is not open, then this subroutine proceeds to step616. Step616can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step618). 
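The per-segment rule of FIGS. 20 and 21 can be written directly from the flow chart. In the sketch below, the direction of the speed change and its magnitude are passed separately, since the flow chart writes slow motion as “−n×” and fast motion as “n×”; this encoding is an assumption made only for illustration.

# Sketch of the segment FPS rule of FIGS. 20-21: each recorded segment is
# given its own container frame rate so that ordinary players reproduce the
# slow/fast motion without any metadata.
def segment_fps(record_fps, speed_factor, direction):
    """direction: 'slow', 'normal' or 'fast'; speed_factor: magnitude n."""
    if direction == "slow":                    # steps 562/564
        return record_fps / speed_factor       # e.g. 30 fps recording, -2x -> 15 fps container
    if direction == "fast":                    # steps 570/572
        return record_fps * speed_factor       # e.g. 30 fps recording, 2x -> 60 fps container
    return record_fps                          # steps 566/568

# A segment recorded at 30 fps and tagged -2x becomes a 15 fps segment that
# plays back for twice as long, which is the kind of expansion illustrated
# by the segment bars of FIG. 21.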
After step616is completed, then this subroutine stops or ends (step620). While the input stream is open from step600, this subroutine determines if the speed is less than “normal” (step602). If so then step604is initialized which sets the segment playback fps to equal the recording fps divided by the speed for that video section (Segment FPS=Record_FPS/Speed). After which, this subroutine proceeds to step614to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step614, this subroutine returns to step600. If the speed is not less than “normal” (step602), then this subroutine determines if the speed equals “normal” (step606). If so then step608is initialized which sets the segment playback fps to equal the recording fps for that video section (Segment FPS=Record_FPS). After which, this subroutine proceeds to step614to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step614, this subroutine returns to step600. If the recording speed does not equal “normal” (step606), then this subroutine determines if the speed is greater than “normal” (step610). If so then step612is initialized which sets the segment playback fps to equal the recording fps times by the speed for that video section (Segment FPS=Record_FPS*Speed). After which, this subroutine proceeds to step614to write frame(s) to buffer (RAM memory and/or non-volatile long term memory). After step614, this subroutine returns to step600. It can be appreciated that this segment time compression and expansion subroutine includes a looped subprocess including steps600-614until the input stream is determined to not be open in step600. With reference toFIG.23, an example of a variable playback speed playback subroutine is illustrated and will be described, which provides a flow chart algorithm for playing a video file with slow motion and fast motion special effects by speeding up or slowing down the playback frame rate while video playback is in progress. An application employing the algorithm inFIG.23is required to play a video produced by the algorithm inFIGS.20and21. This application must be capable of decoding the information in the metadata and/or an accompanying “video project file” and applying the speed up and slowdown commands to the playback frame rate while the video is playing. A video project contains the video file plus an accompanying file that has the special effects commands to be executed, that a custom player can decode and apply in real-time playback. If the video is played with an incompatible player, then the speed up and slowdown special effects commands in the metadata are ignored and the video plays continuously in the same speed. This subroutine starts (step622) upon initiation by a command from the slowdown subroutine. After starting, this subroutine acquires the raw video data stream from the video project file residing in device's memory as an input (step624). The raw video data stream can be audio/video stream in the video project file from the electronic device, or remote video project files, as per step626. After acquisition of the raw video data stream, step628of this subroutine is initiated which gets the video's metadata, record FPS, playback FPS and variable playback log. After which, step630extracts the playback speed (Speed) for each section of the video with the time special effects applied to the section fromFIG.20. Step632of this subroutine is initiated which determines if the video data input stream from the camera is open. 
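A sketch of the variable playback speed record subroutine of FIG. 22 described above: the frames are written unchanged and a per-section playback rate is stored as metadata. The metadata layout shown is an assumption for illustration only.

# Sketch of FIG. 22: write raw frames and keep per-section playback rates
# as metadata alongside the video.
def record_with_speed_metadata(sections, record_fps):
    """sections: list of (frames, direction, speed_factor); direction in {'slow','normal','fast'}."""
    video, metadata = [], []
    start = 0
    for frames, direction, factor in sections:
        if direction == "slow":
            fps = record_fps / factor        # steps 602/604
        elif direction == "fast":
            fps = record_fps * factor        # steps 610/612
        else:
            fps = record_fps                 # steps 606/608
        video.extend(frames)                 # frames are written unchanged
        metadata.append({"start_frame": start, "frame_count": len(frames),
                         "playback_fps": fps})
        start += len(frames)
    return video, metadata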
If step 632 determines that the input stream is not open, then this subroutine proceeds to step 648. Step 648 can receive data of the video/audio frames from RAM memory and/or non-volatile long term memory (step 650). After step 648 is completed, this subroutine stops or ends (step 652). While the input stream is open from step 632, this subroutine determines if the speed is less than "normal" (step 634). If so, then step 636 is initialized, which sets the segment playback fps to equal the recording fps divided by the speed for that video section (Segment FPS=Record_FPS/Speed). After which, this subroutine proceeds to step 646 to write frame(s) to the buffer (RAM memory and/or non-volatile long term memory). The subroutine then proceeds to step 647, which displays the audio/video feed from the RAM buffer, and after which continues to step 632. If the speed is not less than "normal" (step 634), then this subroutine determines if the speed equals "normal" (step 638). If so, then step 640 is initialized, which sets the segment playback fps to equal the recording fps for that video section (Segment FPS=Record_FPS). After which, this subroutine proceeds to step 646 to write frame(s) to the buffer (RAM memory and/or non-volatile long term memory). After step 646, this subroutine returns to step 632. If the speed does not equal "normal" (step 638), then this subroutine determines if the speed is greater than "normal" (step 642). If so, then step 644 is initialized, which sets the segment playback fps to equal the recording fps times the speed for that video section (Segment FPS=Record_FPS*Speed). After which, this subroutine proceeds to step 646 to write frame(s) to the buffer (RAM memory and/or non-volatile long term memory). After step 646, this subroutine continues to step 647 to display the audio/video (A/V) feed from the RAM buffer, and then returns to step 632. It can be appreciated that this variable playback speed playback subroutine includes a looped subprocess including steps 632-647 until the input stream is determined to not be open in step 632. A possible method of using the present technology is illustrated in FIG. 24. A user can launch application software (App) on a device capable of running the App, utilizing a user interface of the present technology. The App can open in an image composition screen, which can be a default setting. Favorite or predetermined settings can optionally be selectable by the user. Device settings are applied and the device is in a ready state, while optionally still in the image composition screen. The user can then start recording, utilizing the device's camera, a remote camera or a remote video stream, by touching or activating a "Record" icon associated with the App or user interface. Optionally, the user can touch and hold the Record icon or button continuously to continue recording. One aspect can be that the icon or a button associated with the icon can be animated to indicate a live recording is active. While the recording is in progress, the user can enter special effect commands, such as to zoom in or zoom out. The video being displayed by the device is configured or configurable to show the zooming in or out special effect associated with the video in real time. While the recording is in progress, the user can also enter special effect commands to create slow motion and/or fast motion. One aspect is that there is no difference in the speed of the display of the live video on the device.
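A compatible player implementing FIG. 23 reads the per-section playback fps from the metadata, or from an accompanying video project file, and paces each section accordingly. A simplified Python sketch follows, with the display reduced to a callable and the pacing done by sleeping between frames; a real player would drive a display queue instead.

# Sketch of the variable playback speed playback subroutine of FIG. 23.
import time

def play_with_metadata(video, metadata, show_frame=print):
    for section in metadata:
        fps = section["playback_fps"]
        first, count = section["start_frame"], section["frame_count"]
        for frame in video[first:first + count]:
            show_frame(frame)                # step 647: display A/V from the buffer
            time.sleep(1.0 / fps)            # hold each frame for 1/fps seconds

A player that does not understand the metadata simply ignores it and presents every frame at the recording rate, so the video plays continuously at the same speed, as noted above.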
The user can end recording by releasing or removing the touching of the Record icon or button. After which, the App stops recording, displays a “Review” screen, completes processing the special effect, and provides an option to save or autosave the processed video. After saving, the newly produced video can be viewed by the device or a remote device after the processing has been completed. The video can play continuously and restart after ending. The App can provide a suite of editing tools that can be utilized to further edit or modify the raw or processed video. Optionally, the video can be edited to fine tune the slow motion and fast motion effects, along with other custom design elements, and post the video. This process can be repeated until a desired video result is created. The App completes processing any new elements in the video and play back to the user after each edit is completed. This process can be repeated until a desired video result is created. After processing the video and/or any additional editing to the video is complete, the App can save a final video or edit. The App can save the final video to the device's internal memory, to an external memory and/or to the cloud. The App can further provide an option allowing the user to post the final video to social media platform. The App can upload the final video onto additional platforms and/or clouds, and display the composition screen allowing the user to start recording a new video. With reference toFIG.25, at least a portion of the interface30is described. The interface30can be, but not limited to, a GUI interface capable of providing a screen for device optimized parameters or variables. The GUI30can be configured or configurable to include a recording start/stop control32provided anywhere on the screen, and a speed selection region34provided anywhere on the screen that can provide a default or predefined frame rate speed that is used to manipulate the frame in the raw video data from the camera12. The speed selection region34can include a speed selection affordance or control indicator35that can travel along the speed selection region34via control by the user to indicate the current or selectable speed. The GUI interface30can also include regions anywhere on the screen for controlling zoom36, zoom and fast motion speed rate38, and/or zoom and slow motion speed rate40. In some or all embodiments, a user can touch and/or hold the start/stop control32to initiate, stop or pause the recording function of the camera. Additionally, a user can interact with the speed selection region34by touching the region with a finger or stylus-like device, and making a sliding motion along the region in any direction. The processing unit can be configured or configurable to interpret this sliding movement as a special effect input command. For example, sliding from a central area of the speed selection region34toward the right could change the speed rate from the native speed rate to 2×, 3×, 4×, “n”× the native speed rate, depending on how far right the sliding motion travels on the speed selection region34. Sliding from the central area of the speed selection region34toward the left could change the speed rate from the native speed rate to −2×, −3×, −4×, −“n”× the native speed rate, depending on how far left the sliding motion travels on the speed selection region34. In some or all embodiments, a user could control the zoom function of the camera by making a vertical sliding motion from a lower region of the GUI toward an upper region. 
A user could further control a combination of zoom and speed rate by making a curving vertical sliding motion from a lower region of the GUI toward an upper right or left region, depending if a zoom and fast motion or slow motion effect is desired. Alternatively, the GUI interface30can be configured or configurable to include areas, icons or windows where functions, attributes, operations, settings and/or characteristics of the camera and/or display can be controlled. Examples of these functions, attributes, operations, settings and/or characteristics can be, but not limited to, flash, hands free, timer, mute, selfie, broadcast, sharing, filters, media, stop/start recording, and the like. The GUI interface30can be configured or configurable to be used to preset a minimum, a maximum or a range of the speed rate of the raw video. Still further and optionally, the GUI interface30can be configured or configurable to include areas, icons or windows providing editing options to the video data stream. The editing options can include, but not limited to, adding text, adding drawings, adding sounds, face filters, adding decorations, creating a video loop, adding a cover, and the like. The GUI interface30can be configured or configurable to include a display of the output video recording data, which can be the raw video data and/or the modified video data, or the edited video data stream can be displayed. It can be appreciated that the output video recording data displayed by the GUI interface30can be dynamically changing in real time due to changes in the input. Thus, the present technology can display, in real time, a seamless transition between the raw video data, any number of the modified video data or subsets thereof, while the camera acquires the video and while recording is in progress. The modified video data can include any number of fast to slow motion subsets, and these subsets can be in combination with subsets of the raw video data. The displaying of the raw video data and/or any number of modified video data is accomplished live or in real time as the camera is capturing images associated with the raw video data. It can be appreciated that the present technology renders the displayed output video data as the camera captures the images and while the output video is written to memory. Consequently, allowing the user to move, pan, zoom, etc. the camera while still capturing the video and at the same time applying and displaying any number of special effects to the raw video data. In some or all embodiments, the user of the device implementing the present technology and GUI30is able to access operational functions of the present technology and/or device and/or camera and/or saved video by entering login credentials associated with a user account. FIG.26illustrates an embodiment of the GUI30of the present technology utilized on an electronic device displaying an implementation of GUI of the present technology on a touch screen.FIG.26is an exemplary “Camera View” of the device employing the GUI30while recording in normal “1×” speed. In this normal speed setting, the raw video stream from the camera is not changed and displayed in real time in the GUI30. In some or all embodiments of the present technology, the screen shot or GUI30can include a number of icons or actuatable elements representing various functions or affordances that the user can select. These affordances change icons as different “states” settings are selected for each affordance by the user. 
Affordances utilizable in the present technology or GUI30can be object's properties that show the possible actions users can take with it, thereby suggesting how they may interact with that object. Affordances can be deliberately constrained to enable only the correct or desired actions when actuated. The affordances utilized in the present technology can include cues to suggest actions that are possible by an interface element. The affordances utilizable in the present technology or GUI30can be, but not limited to, any actuatable element in the realm of icons, buttons, dropdown menus, actuatable regions, images, cursor actuatable elements, or touch dependent inputs. In some or all embodiments of the present technology, any of the affordances can be displayed, activated, manipulated, deactivated or hidden depending on a touch or touch release by the user on the display, which can be a touch sensitive screen or pad. These affordances can be, but not limited to: a “Flash” affordance700, which when activated (e.g. via a tap gesture), enables the user of the device to select a flash or light of the device to be on, off or automatically activated depending on light levels detected by or inputted into the device implementing the present technology; a “Hands Free” affordance702, which when activated (e.g. via a tap gesture), enables the user of the device to control aspects of the present technology utilizing gestures on the device, remote control units, speech recognition, and/or a preprogrammed sequence or scheme so that the user can initiate continuously recording without requiring the user to constantly touch with the device (A “Hands-On” mode means the user must touch the record button continuously to continue recording. Once the user releases the record button, recording stops); a “Timer” affordance704, which when activated (e.g. via a tap gesture), enables the user of the device to start and/or stop recording at a predetermined time(s) of day and/or for a predetermined time duration(s); a “Mute” affordance706, which when activated (e.g. via a tap gesture), enables the user of the device to mute or deactivate a microphone associated with the device and/or camera; a “Selfie” or “Rear” affordance708, which when activated (e.g. via a tap gesture), enables the user of the device to switch to a rearward facing or secondary camera associated with the device implementing the present technology; a “Setting” affordance710, which when activated (e.g. via a tap gesture), enables the user of the device to control operational settings of the GUI, device and/or camera; a “Go Live” affordance712, which when activated (e.g. via a tap gesture), enables the user of the device to transmit the video feed from the present technology to a remote device or server; a “Friend” affordance714, which when activated (e.g. via a tap gesture), enables the user of the device to search and/or invite friends or contacts to make a social connection; a “Media” affordance716, which when activated (e.g. via a tap gesture), opens a media folder that enables the user of the device to open and load videos from a folder created in memory of the device or a remote device or a cloud storage; a “Face Filters” affordance718, which when activated (e.g. via a tap gesture), enables the user of the device to initiate a subprocess or a third-party application that applies filtering with “Augmented Reality” (AR) functions to the video; a “Scene Filters” affordance720, which when activated (e.g. 
via a tap gesture), enables the user of the device to initiate a subprocess or a third-party application that applies filtering functions to the video; and/or an “Upgrades” affordance722, which when activated (e.g. via a tap gesture), enables the user of the device to upgrade aspects of the present technology and/or memory storage. It can be appreciated that additional icons, functions or affordances can be implemented with or on the GUI. Any number of the icons or affordances700-722can be positioned or positionable in predetermined or customizable locations in the GUI30. The recording start/stop control32can be provided as a button anywhere on the screen that allows the user to start, stop and/or pause the recording of video (e.g. via a tap or touch holding gesture), and the speed selection region which can be a slide bar34can be provided anywhere on the screen as a slide bar with circles and/or other shapes and markers indicating selectable playback speeds of the portion of the video in playback. The slide bar34enables the user to control the special effects aspect of the video (e.g. via a sliding gesture). The current speed indicator inFIG.26is set at “1×” indicating the record speed is “normal”. This speed factor is inputted into step82of the process illustrated inFIG.7. In this example, since the user has not entered a special effects command (speed factor “1×” or “normal”), then the process would proceed to step88, dependent in part of preceding steps. If the user activated any of the additional operation functions700-722, then these inputs are determined by step88, and the appropriate or corresponding parallel processes are initiated in step78. The record button32, the speed selection button35, the speed selection region34, the zoom level indicator/controller748, and any icons can be activated utilizing the touchscreen of the user device. InFIG.26, the video feed displayed in a first region of the GUI30is a live video feed from the respective camera or a remote video feed. Any editing or modified video stream from any initiated operation functions700-722can be displayed in one or more additional regions of the GUI30. These display regions in the GUI30can be separate and independent regions, can in part overlap, or can be overlaid. In some or all implementations, the video feed displayed in any of the regions may be previously recorded video footage. In other implementations, the video displayed in any of the regions of the GUI30can be, for example, any position on an event timeline associated with the displayed video feed. The timeline can be manipulated by the user by sliding a timeline bar causing the present technology to display the video feed from that point in time forward in any of the regions. Additionally, the raw video stream and/or editing video stream can be saved to an appropriate memory indicated by the user using the GUI30. The memory or memory devices selected by the user using the GUI30is inputted into the write video stream subroutine inFIG.8and the video stream(s) are written or copied appropriately. FIG.27illustrates an exemplary embodiment “Camera View” of the electronic device employing the GUI30of the present technology while recording in slow motion “−2×” speed. In this slow motion speed setting, the frame adding subroutine is utilized and the apparent playback speed is twice as slow as a normal video. 
In the “Hands-Free” mode example, the user can tap a desired speed marker or slide an indicator to a desired speed marker located on the speed selection region34. In “Hands-On” mode, the user can press and hold the “Record” button32and slide his finger to the left and the button follows directly under the user's finger, so that the button is vertically above the “−2×” affordance label in this example. It can be appreciated that the speed selection affordance or indicator35can automatically move along the speed selection region34to follow the movement of the “Record” button32. In some or all embodiments, a window724can be implemented in the GUI30that displays the raw video stream, while a majority of the GUI30displays the slow motion video stream. In the alternative, it can be appreciated that the window724can display the slow motion video stream, while the majority of the GUI30displays the raw video stream. In another alternative, it can be appreciated that the window724can display the slow motion video stream or a still frame “cover” image of the video stream, while the majority of the GUI30displays the live video stream. The current speed indicator inFIG.27is set at “−2×” indicating the record speed is slow motion. This speed factor is inputted into step82of the process illustrated inFIG.7. In this example, the user has entered a special effects command (speed factor “−2×” or “slow motion”), then the process would proceed to step84wherein the process would initiate the special effects subroutine inFIG.9. If the user activated any of the additional operation functions700-722, then these inputs are determined and the appropriate or corresponding parallel processes are initiated in step78. With the speed factor set to “−2×” using the GUI30, the apply special effects subroutine is initiated which determines if the input from the GUI30represents a fast motion command (step156inFIG.9) or a slow motion command (step160inFIG.9), or go to advanced slow motion subroutine command (step150inFIG.9). The process then initiates the appropriate subroutines corresponding to the input by the user on the slide bar34. In this example, the frame adding subroutine illustrated inFIG.13would be initiated. As the raw video stream is modified per the initiated subroutine, the GUI30displays in real time the resultant slow motion video via the device's display. The raw video stream can also be displayed via the GUI30, in conjunction with the resultant slow motion video. Additionally, the resultant slow motion video and/or the raw video stream can be saved to an appropriate memory indicated by the user using the GUI30. The memory or memory devices selected by the user using the GUI30is inputted into the write video stream subroutine inFIG.8and the video stream(s) are written or copied appropriately. FIG.28illustrates an exemplary embodiment “Camera View” of the device employing the GUI30of the present technology while recording in fast motion “3×” speed. In this fast motion speed setting, the frame dropping subroutine or time compression subroutines is utilized and the apparent playback speed is three times as fast as a normal video without frame dropping. In this example, a “Hands-Free” mode can be utilized where the user can tap a desired speed marker or slide an indicator to a desired speed marker located on the speed selection region34. 
In a “Hands-On” mode, the user can press and hold the record button32to record continuously and slide his finger left and right to indicate desired speed and the speed affordance or indicator35located on the speed selection region34moves accordingly. In some or all embodiments, the user can utilize a “One-Touch” mode to manipulate the video's time. In this mode, recording operation can be initiated by touching the screen, and taking a finger off the screen will stop recording operation. Alternatively, recording is in operation while touching the screen. Exemplary operation can include: moving the touching finger to the left of a middle of the screen will slow down video's time; moving the touching finger to the middle of screen returns video's time to normal speed; moving the touching finger to the right left of the middle of the screen will speed up video's time; the touching finger can quickly go from extreme left to extreme right (and vice-versa); moving the touching finger up will initiate a zoom in (telephoto) operation; moving the touching finger down will initiate a zoom out (wide angle) operation; and adjusting other settings separately live, such as but not limited to, flash700, mute706, etc., with other finger while recording is in progress and while the touching finger is on the screen. Still further, some or all embodiments can include a “Multiple Touch” mode that allows the user to individually select functions through user interface whilst video is being recorded is shown in the user interface. In some or all embodiments, the window724can be implemented in the GUI30that displays the raw video stream, while the majority of the GUI30displays the fast motion video stream. In the alternative, it can be appreciated that the window724can display the fast motion video stream, while the majority of the GUI30displays the raw video stream. In another alternative, it can be appreciated that the window724can display the fast motion video stream, while the majority of the GUI30displays the live video stream. In another alternative, it can be appreciated that the window724can display the still frame “cover image” for the fast motion video stream, while the majority of the GUI30displays the live video stream. The current speed indicator inFIG.28is set at “3×” indicating the record speed is fast motion. This speed factor is inputted into step82of the process illustrated inFIG.7. In this example, the user has entered a special effects command (speed factor “3×” or “fast motion”), then the process would proceed to step84wherein the process would initiate the special effects subroutine inFIG.9. If the user activated any of the additional operation functions700-722, then these inputs are determined and the appropriate or corresponding parallel processes are initiated in step78. With the speed factor set to “3×” using the GUI30, the apply special effects subroutine is initiated which determines if record fps=playback fps and if the input from the GUI30represents a fast motion command (step156inFIG.9) or a slow motion command (step160inFIG.9). The process then initiates the appropriate subroutines corresponding to the input by the user on the slide bar34. In this example, the speed up subroutine illustrated inFIG.10would be initiated. If record fps>playback fps and if the input from the GUI30represents a fast motion command or a slow motion command, the process then initiates the appropriate subroutines corresponding to the input by the user on the slide bar34. 
In this case, the speed up subroutine illustrated inFIG.12, step262initiates subroutine illustrated inFIG.20. As the raw video stream is modified per the initiated subroutine, the GUI30displays in real time the resultant fast motion video via the device's display. The raw video stream can also be displayed via the GUI30, in conjunction with the resultant slow motion video. Additionally, the resultant fast motion video and/or the raw video stream can be saved to an appropriate memory indicated by the user using the GUI30. The memory or memory devices selected by the user using the GUI30is inputted into the write video stream subroutine inFIG.8and the video stream(s) are written or copied appropriately. FIG.29illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the user has stopped recording and the system displays a review screen for the user to review and edit the captured video. The GUI can highlight the icons by removing the background image example. In some or all embodiments, the review screen can contain a number of icons representing various functions or affordances that the user can select. These icons, functions or affordances can be, but not limited to: a “Display Cover” affordance726which displays the still frame “cover image” of the video, a “Text” affordance728, which when activated (e.g. via a tap gesture), enables the user of the device to add text to the video; a “Draw” affordance730, which when activated (e.g. via a tap gesture), enables the user of the device to add images, clipart's and/or draw to the video; a “Sound” affordance732, which when activated (e.g. via a tap gesture), enables the user of the device to add sound or music to the video; the “Face Filter” affordance718; a “Decor” affordance734, which when activated (e.g. via a tap gesture), enables the user of the device to add decorations such as stickers and emoji's to the video; a “Loop” affordance736, which when activated (e.g. via a tap gesture), enables the user of the device to create loop sequence of a selected segment of the video; a “Cover” affordance738, which when activated (e.g. via a tap gesture), enables the user of the device to use a frame or segment of the video as a cover page for the video a “Tag” affordance739, which when (e.g. via a tap gesture), enables the user of the device to identify and tag other users who may nor may not appear in the video, and add “hashtags” for search engine optimization; a Media” affordance716which when activated (e.g. via a tap gesture), enables the user of the device to save the video to a folder on the device or in the cloud; a “Notes” affordance740, which when activated (e.g. via a tap gesture), enables the user of the device to save the video to a “Notes” folder or application associated with the device; a “Project” affordance, which when activated (e.g. via long press “a touch and hold” gesture on the “Notes” affordance), enables the user of the device to save the video to a “Projects” folder or application associated with the device for collaboration between other users; a “Chat” affordance742, which when activated (e.g. via a tap gesture), enables the user of the device to send the video to a contact or friend; a “Feed” affordance744, which when activated (e.g. 
via a tap gesture), enables the user of the device to post the video to the user's channel's timeline in the social media aspect of the app, which can also be configured to post to the user's Web or RSS feed; and/or a “Story” affordance746, which when activated (e.g. via a tap gesture), enables the user of the device to post the video to the user's story or social media page within the app or shared externally to other social media apps like Instagram®, Facebook®, Twitter®, etc. In some or all embodiments, when the Notes affordance740is pressed, a list of icons or “Projects” folders appear, each representing an available project the user can post the video to. For example, the user can add decorations in the video, as well as set other properties for social media upload into the cloud. The user can elect to save the videos in the user's “Media” folder, save to the user's “Notes” location, save to the user's “Projects” location, send the video to a “Chat” contact or group, post to their “Feed”, or post to their “Story”. The system saves the story and takes appropriate action, utilizing any one of the subroutines and/or subprocesses associated with the present technology. FIG.30illustrates an exemplary embodiment “Screen Shot” ofFIG.29where the user has stopped recording and the system displays the review screen for the user to review the captured video. It can be appreciated that multiple windows724can be utilized, each displaying a different edited video stream or still frame cover image of the edited video stream. FIG.31illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays a composition screen before recording has started. The speed range can be displayed from “−3×” to “3×” normal speed, but additional speeds and ranges can be utilized and displayed. In some or all embodiments, the composition screen can include a zoom setting region748, which can control a zoom function of the camera (e.g. via a sliding and/or tap gesture or pinch-to-zoom). The zoom setting region748can be a zoom slid bar having an indicator moveable by the user by way of touching and sliding the indicator to a desired zoom operation. For example, the zoom slid bar748can be a vertically orientated slide bar located on the left or right side of the GUI30. It can be appreciated that any gesture interface can be utilized in place of the exemplary slide bar. As the user slides his finger from top to bottom and back on the zoom slide bar748, the “zoom factor” adjusts zoom in and zoom out accordingly as illustrated. In another example, using the “pinch-to-zoom”, the user uses a multi-touch gesture to quickly zoom in and out, while the “zoom factor” adjusts up and down accordingly. The record button32can be located in a lower middle section of the GUI, with the “time speed” side bar34located therebelow. It is appreciated that the GUI30is not limited to specific locations of the record button32, speed slide bar34and any of the icons as illustrated herewith. The record button32, speed slide bar34and any of the icons can be located anywhere in the GUI, and can also be reconfigured, sized and/or moved by the user. For example, the user can provide a touch and hold gesture to any of the affordances, which thus enables the user to move or resize that selected affordance. InFIG.31, the user has tapped the “1×” speed marker or slid the indicated to the “1×” speed marker, which means the raw video is being displayed at normal speed with no special effects. 
It can be appreciated that the indicator “1×” can be substituted with other speed indicators such as, but not limited to, “Normal”. The user can selectively set the location of the record button32before recording commences, to set the zoom748and the speed factors34for the device once recording starts. As the user move the moveable record button, the zoom and speed factors move accordingly. As the user slides his finger side to side on the speed slide bar34, the “time speed” adjust faster or slower accordingly as illustrated. FIG.32illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays a recording screen while recording has started. In this example, the zoom factor is set to zero “0”, and the speed factor is set to fast motion “2×” being two times faster than normal speed. In some or all embodiments, while the recording operation is active, the present technology can enlarge the record button32to be visible even underneath the user's finger. A radial progressive bar can be utilized with the record button32to indicate recording has started and/or a time duration of the recording. The record button32can be colored inside to assist in viewing by the user, and it can be appreciated that the size, configuration and/or color of the record button32can be configurable by the user. In the alternative, the record button32can be moved to a location adjacent to the selected speed factor (e.g. via a touch holding gesture). In this present example, above the highlighted “2×” in the “speed scale”34. The record button32can be configurable to follow the user's finger movements as long as the user is touching the screen. The selected recording FPS, playback FPS and/or speed factor can be displayed in the GUI, as illustrated by the indicator “240I-I-” and “Fast 2×”750in the center near the top of the GUI. The FPS and/or speed factor indicator can be animated or blinking prominently to alert the user of the FPS and/or recording speed. In another embodiment, the indicator750is the maximum time length for the video segment. In some or all embodiments, the GUI30can also include “speed guidelines”752utilized and displayed vertically in dashed lines. The guidelines752are configured or configurable to guide the user's finger or pointing device to indicate when the user's touch point is approaching and then crossing the boundary for speed change. Upon the user sliding or tapping to the desired speed factor, the application program of the present technology initiates the appropriate subroutine and/or necessary algorithm to create the fast or slow motion special effect associated with the selected speed factor received by the GUI. FIG.33illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays a “Slow Motion Resolution” settings screen. In this example, the slow motion resolution is the slow motion factor supported by hardware, without frame adding. In some or all embodiments, the GUI30can include a scrollable selection754of multiple speed factor values. The selectable speed factor values in scrollable selection754(e.g. via a slide gesture) are the settings for the maximum video quality that the device supports. The selected speed factor can be highlighted to indicated which speed factor selected. 
FIG.34illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays an alternative advanced “Slow Motion Resolution” settings screen. In this example, the GUI30can display and utilize multiple slide bars, each controlling a different aspect or operation (e.g. via a slide gesture). When the value of one of the slides is adjusted, the corresponding values of the other slides change accordingly. In some or all embodiments, the slide bars can be horizontally stacked or vertically spaced. The slide bars can be associated with a “Video Resolution” affordance756, which when activated (e.g. via a slide gesture), enables the user to set a resolution size of the resulting video. The higher the resolution, the bigger the file, and the larger the bandwidth required to serve the files. Revenue can be received by charging users a fee relating to the desired resolution. With higher video resolution, higher rates can be charged for hosting and bandwidth costs. Another slide bar can be associated with a “Max Slow Motion” affordance758, which when activated (e.g. via a slide gesture), enables the user to set the maximum slow motion speed factor. As Video Resolution increases, the Maximum Slow Motion Effect (Max SlowMo) decreases and the Record Frames Per Second (Record FPS) decreases proportionally. Playback Frames Per Second (Playback FPS) is an independent variable and remains unchanged. Another slide bar can be associated with a “Record Frames Per Second” affordance760, which when activated (e.g. via a slide gesture), enables the user to set the recording FPS. The record FPS is the rate of which frames are captured by the camera. The higher the frame rate, the higher the slow motion effect with respect to a constant Playback FPS. As Record FPS increases, Max SlowMo increases and Video Resolution decreases proportionally. As Record FPS decreases, Max SlowMo decreases and Video Resolution increases proportionally. As the user adjust the Record FPS758higher and lower, the values for the Max SlowMo and Video Resolution automatically adjust accordingly. Playback Frames Per Second (Playback FPS) is unchanged. The user can manually override and adjust the Video Resolution and Max SlowMo to lower the maximum selected by the software. Still another slide can be associated with a “Playback Frames Per Second” affordance762, which when activated (e.g. via a slide gesture), enables the user to set the playback FPS. The Playback FPS is the rate of which frames are played by the device. The higher the Playback FPS, the lower the slow motion effect with respect to a constant Record FPS. The Playback FPS can be independent set without affecting either Recording Frames Per Second or Video Resolution. As Playback FPS increases, Max SlowMo decreases proportionally. As Playback FPS decreases, Max SlowMo increases proportionally. As the user adjust the Playback FPS762higher and lower, the values for the Max SlowMo automatically adjust accordingly. Record FPS and Video Resolution are unchanged. As Video Resolution decreases, the Max SlowMo increases and the Record FPS increases proportionally. Playback Frames Per Second (Playback FPS) is unchanged. As the user adjust the Video Resolution756higher and lower, the values for the Max SlowMo and Record FPS automatically adjust accordingly. Playback FPS is unchanged. User can select to create the original footage in high resolution but upload a lower resolution video to save on bandwidth and storage costs. 
The user has the option to save the high resolution original video to the local device, and/or upload to cloud for storage. Once uploaded, video files of high resolution can be resized into the proper format to optimize speed and size for the viewing device. The maximum slow motion effect (Max Slow Motion758) is a ratio of Record FPS to Playback FPS. The maximum slow motion effect uses existing frames only to create the slow motion effect when played in “real time” given the Playback FPS. It does not use frame adding or other digital enhancements or interpolated and extrapolated frames. Max SlowMo is the maximum end of the range of usable slow motion effect that is available for the user. The user may choose to use a smaller slow motion range that is less than the Max SlowMo value. Max SlowMo=Record FPS/Playback FPS The user can set the Playback FPS762independently of all other variables. In this example, keeping the Playback FPS constant illustrates the function of the feature. As the Max SlowMo increases, Record FPS increases and Video Resolution decreases proportionally. As the Max SlowMo decreases, Record FPS decreases and Video Resolution increases proportionally. As the user adjust the Max SlowMo758higher and lower, the values for the Record FPS and Video Resolution automatically adjust accordingly. Playback Frames Per Second (Playback FPS) is unchanged. For example, recording fps=120, playback fps=30. Maximum slow motion effect=4 times slower than normal speed. The GUI30can further include an “Optimize” affordance764, which when activated (e.g. via a slide gesture), enables the user to optimize the camera and/or playback settings to maximize the best video quality that the device can deliver. The user can select to optimize for video quality, file size, maximum slow motion effect, and combinations thereof. The values in the Optimize764operations can be the settings for the maximum video quality and minimum size that the device supports. These are the “limits” for the range of values that are available for the user to select from. To assist in understanding the utilization of the GUI30implementing at least in part some of the subroutines of the present technology, the following examples are provided, assume the following device supported recording frame rates:8K at 240 fps4 k at 480 fps2K at 960 fps1080 at 1920 fps The UI automatically selects the values from the sets of values based on optimize routine selected. Optionally, the selected values are automatically highlighted and aligned vertically (left, middle, right) side of the screen. Example 1 User sets the following values:Video Resolution=8KPlayback FPS=30Optimize for Quality The UI automatically selects:Max SlowMo=8×Record FPS=240 Example 2 User sets the following values:Video Resolution=4K.Playback FPS=30Optimize for Quality The UI automatically selects:Max SlowMo=16 selected from set of selectable values {16×, 32×}Record FPS=480 {240, 480} While Video Resolution is “locked in” at 4K: If user selects Record FPS=240 then Max SlowMo automatically sets to 32×. If user selects Record FPS=480 then Max SlowMo automatically sets to 16×. If user selects Max SlowMo=32×, then Record FPS automatically sets to 240. If user selects Max SlowMo=16×, then Record FPS automatically sets to 480. User can manually override and set Record FPS to 240 to decrease file size but with a 50% loss in frame resolution. 
Example 3 User sets the following values:Video Resolution=4K.Playback FPS=30Optimize for Size The UI automatically selects:Max SlowMo=32 selected from set of selectable values {16×, 32×}Record FPS=240 {240, 480} While Video Resolution is “locked in” at 4K: If user selects Record FPS=480 then Max SlowMo automatically sets to 16×. If user selects Record FPS=240 then Max SlowMo automatically sets to 32×. If user selects Max SlowMo=16×, then Record FPS automatically sets to 480. If user selects Max SlowMo=32×, then Record FPS automatically sets to 240. User can manually override and set Record FPS to 480 to increase frame resolution but increase file size by 100% before compression. Example 4 User sets the following values:Max SlowMo=32×Playback FPS=30Optimize for Quality The UI automatically selects:Video Resolution=2K {480, 720, 1080, 2 k}Record FPS=240 {240, 480, 960} Example 5 User sets the following values:Max SlowMo=64×Playback FPS=30Optimize for Quality The UI automatically selectsVideo Resolution=1080 {480, 720, 1080}Record FPS=1920 {240, 480, 960, 1920} Example 6: Continuing with Example 5 User sets the following values:Playback FPS=60Optimize for Quality The UI automatically selectsMax SlowMo=32×Video Resolution=1080 {480, 720, 1080} Record FPS=1920 {240, 480, 960, 1920} Example 7: Continuing with Example 6 User sets the following values:Playback FPS=120Optimize for Quality The UI automatically selectsMax SlowMo=16×Video Resolution=1080 {480, 720, 1080}Record FPS=1920 {240, 480, 960, 1920} Example 8: Continuing with Example 7 User sets the following values:Playback FPS=240Optimize for Quality The UI automatically selectsMax SlowMo=8×Video Resolution=1080 {480, 720, 1080}Record FPS=1920 {240, 480, 960, 1920} FIG.35illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays an alternative “Slow Motion Resolution” settings screen. In this example, the GUI30has all of the same features as the embodiment illustrated inFIG.34. The difference is in the presentation of the controls to the end user. All functions are accessible through both embodiments. In some or all embodiments, the UI automatically selects the values from the sets of values based on optimize routine selected. Optionally, the selected values are automatically highlighted and aligned in the same row at the top, middle or bottom of the screen. In this example inFIG.35, the GUI30can display and utilize multiple scrollable sections, with each being associated with “Video Resolutions”, “Max Slow Motion”, “Record FPS” and “Playback FPS” affordances. Each affordance can be activated by moving the scroll to the desired value (e.g. via an up-down slide gesture). The slide bars can be horizontally stacked or vertically spaced. The scrollable sections can highlight the selected value, respectively. FIG.36illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays a “Slow Motion Scale” screen. In some or all embodiments, the user can set custom settings for how the Slow Motion Scale control appears on the app and it's programmatic behavior in response to the user's input (e.g. via a left-right slide gesture). In this example, a slide bar or number line766represents the slow motion factor levels available for the user. The range displayed is from “Normal” to “7×”. 
There can be a “>” symbol besides the last scale value, in this case“7×”, to represent that there are additional slow motion multipliers available but not displayed. The user can then scroll through the available slow motion multipliers and select how much or how little to limit the range of slow motion factor while recording. The user can pinch at the number line and include more of the range of the slow motion to include in the live recording screen. The user can set the orientation of the button to move right or left on the line766to control the speed. As exemplary illustrated, “Normal” is on the left and “Max” is on the right. The user would then slide his/her finger on the recording control from left to right to increase the slow motion factor. A “Reverse” affordance768can be utilized and displayed on the GUI, which when activated (e.g. via a tap gesture), enables the user to reverse the display of the slide bar766. If the user selects the “Reverse” option, then “Normal” would be on the right side, and “Max” is on the left. The user's motion is to slide from right to left on the line766to increase the slow motion factor. FIG.37illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays an alternative Slow Motion Scale screen. In this example, the GUI30has all of the same features as the embodiment illustrated inFIG.36. The difference is in the presentation of the slide bar or line766to the end user. In this example, the slide bar766is orientated in a vertical configuration along a left or right side of the GUI30. All functions are accessible through both embodiments. The user can set custom settings for how the Slow Motion Scale control appears on the app and it's programmatic behavior in response to the user's input. In some or all embodiments, there can be a “{circumflex over ( )}” symbol above the last scale value, in this case“11×”, to represent that there are additional slow motion multipliers available but not displayed. The user can then scroll through the available slow motion multipliers and select how much or how little to limit the range of slow motion factor while recording. FIG.38illustrates an exemplary embodiment “Screen Shot” of the device employing the GUI30of the present technology while the system displays a “Camera View” or an “Editing View” screen. The GUI30can be configured or configurable to include a recording start/stop affordance or control32provided anywhere on the screen, and a speed selection region34provided anywhere on the screen that can provide a default or predefined frame rate speed that is used to manipulate the frame in the raw video data from the camera or from recorded video. The speed selection region34can include a speed rate/control affordance or indicator35that can travel along the speed selection region34via control by the user to indicate the current or selectable speed. The GUI interface30can also include regions anywhere on the screen for controlling zoom, zoom and fast motion speed rate, and/or zoom and slow motion speed rate. The GUI30can include vertically oriented time guidelines770that extend vertically up from each of the speed rate indicators or the speed selection region34displayed on the GUI30or the display of the device utilizing the GUI. The speed rate indicators34can be, but not limited to, −2×, −3×, −4×, −“n”×, 1×, 2×, 3×, 4× or “n”×. 
It can be appreciated that the time guidelines770can, in the alternative, extend horizontally across a section of the GUI30. The time guidelines770can be displayed while in camera live one-touch record mode or in-app one-touch edit mode. The time guidelines770can be utilized to assist a user in determining which speed rate is currently be applied, which speed rate is next, which speed rate is nearest a finger touching the display, and/or as a composition guide for assisting in placing scene elements within a photo and video frame. The time guidelines770can be a different color, brightness and/or line type or line style from each other, thereby providing each speed rate a unique time guideline. The GUI30can include a video display region776, and one or more affordances configured or configurable to provide at least one input receivable and usable by the processing unit in operation of the present technology. The affordances can be speed rate affordance associated with changing the speed rate of the video data. The speed rate affordances can be, but not limited to, associated with the recording start/stop affordance or control32, the speed rate indicators34, and/or the speed rate/control affordance35. The GUI30or the processing unit of the present technology can determine if the input is associated with changing a first or native speed rate of the video data, and if so to modify at least one frame in the video data to create modified video data at a modified speed rate that is different to the first speed rate in real time. It is difficult for the user to “eyeball” the distance their fingers are from the next speed rate indicator or setting as the user moves their finger left or right to engage the fast or slow motion one touch live recording and editing features of the present technology. The user's finger may have a tendency to drift right or left as they zoom in and out. Without the on-screen time guidelines770, the user must rely solely on their judgement on the placement of the main elements on the scenery of the photograph or video. In any of the embodiments of the GUI of the present technology, the user can return to a previous screen or proceed to the next screen by a sliding gesture across the screen in a left or right direction, or by a tap gesture on an icon or affordance indicating the direction of screen progression. The time guidelines770can be a visual guide so the user knows how much further is required to slide the finger or pointing device to engage the next setting for fast and slow motion. The time guidelines770can also serve as a “track” for the user to slide the finger to zoom in and zoom out while recording a video. The time guidelines770can always be on or displayed when the GUI30is operated, as illustrated inFIG.38. Referring toFIG.39, any of the time guidelines770can be activated, turned on, modified, displayed, deceived and/or turned off when recording starts, when a finger touches the display or a pointing device is activated, when the touching finger or pointing device is moved, when the speed rate affordance is moved, or when the speed rate/control affordance35is moved. It can be appreciated that the recording start/stop control32can be displayed anywhere in the GUI30at a location where the finger makes contact with the screen or the point device is activated, or can be always displayed. Still further, the time guidelines770can include at least one selectable value selected by a gesture on the display of the electronic device. 
The gesture can be any one or any combination of a tap, a multiple tap, a touch holding, a sliding, a pinch, and a touch holding and sliding In this exemplary operation as shown inFIG.39, the present technology is recording at 1× or native speed rate. The user touches the screen with a finger, thereby displaying the recording start/stop control32at the point of finger contact. The time guideline770nearest the finger contact point is determined by the process of the present technology and its time guideline770is activated or displayed. To engage the fast and slow motion operations of the present technology, the user can slides a finger or pointing device to the right and left, with the nearest time guideline770being displayed, thereby providing the user with a visual indication of how much further is required to slide the finger or pointing device to engage the next speed rate setting for fast and slow motion. The use of the time guidelines770can prevent the user from unwanted changing of the speed rate, or can confirm the changing of the speed rate. In an exemplary operation, if the present technology is currently recorded at a slow motion speed rate of −2×, and the user slides a finger or moves a pointing device toward the left nearing the −3× speed rate indicator, then the time guideline770associated with the −3× speed rate will be displayed. The displaying of the −3× time guideline770can be displayed when the finger is at a predetermined distance to the −3× speed rate indicator and/or a predetermined distance away from the −2× speed rate indicator. In the alternative, the GUI30or the processing unit can intuitively extend an imaginary line or region vertically from the speed rate indicator or the speed rate affordance on the slide bar34. The time guideline770can be automatically displayed when a touch or pointing device input32is at predetermined distance form this imaginary line when the touch input32is anywhere on the display. The user may slide the speed rate/control affordance35left or right along the slide bar or to a location associated with one of the speed rate indicators34to change the speed rate setting, or the user can touch the screen to activate the speed rate affordance32and then slide the finger and consequently the speed rate affordance32left or right to change the speed rat setting. During any of these instances, when the speed rate affordance is at a predetermined distance from one or more of the speed indicators alone or associated with the slide bar34, or an imaginary line extending vertically from the speed indicators, then a time guideline770can be displayed for that speed indicator. The present technology can calculate this value by determining the known coordinates of the speed rate indicators displayed in the GUI30, determining the point of contact of the finger or point of activation by the pointing device, and then determining a linear or radial distance between the speed rate indicator coordinates and the finger contact point or the pointing device activation point. The process can then determine which speed rate indicator is nearest the finger contact point or the pointing device activation point, and then display that time guideline770. The process may convert the speed rate coordinates, the finger contact point and/or the pointing device activation point into a common vector format. Referring toFIG.40, once a new speed rate has been set or activated, the GUI30can display the current speed rate with a current speed indicator772that appears on the screen. 
The current speed indicator772can be flashing, changing color, different color from the guidelines770and/or speed rate indicators, animated or any other characteristic to gain the user's attention. The current speed indicator772can be displayed anywhere on the screen or in the GUI30, and can change to the speed rate currently being used. The current speed indicator772can also display a speed rate nearest the speed rate affordance32,35being moved by a finger or pointing device, thereby providing additional visual indication or warning of a potential change in speed rate. The time guideline770can further assist in the visual indication of how far the finger or pointing device is from the next time guideline770, speed rate indicator or speed setting. This can be accomplished by displaying a distance between the nearest time guideline770and the speed rate affordance32,35. Another way to accomplish this is to have any one of the nearest time guideline770or the speed rate affordance32,35flash at a rate dependent on the distance between the nearest time guideline770and the speed rate affordance32,35. Referring toFIG.41, a finger time guideline774can be displayed on the screen or the GUI30. The finger time guideline774can extend vertically up from the finger contact point, the pointing device activation point, or the speed rate affordance32,35. The finger time guideline774can be a different characteristic, shape, color, brightness and/or line type or line style to that of the time guidelines770. The finger time guideline774can further assist in the visual indication of how far the finger or pointing device is from the next time guideline770, speed rate indicator or speed setting. This can be accomplished by displaying a distance between the nearest time guideline770and the finger time guideline774. Another way to accomplish this is to have any one of the nearest time guideline770or the finger time guideline774flash at a rate dependent on the distance between the nearest time guideline770and the finger time guideline774. Referring toFIG.42, in the alternative and exemplary, the time setting slide bar34can be vertically orientated, alone or in combination with a horizontally orientated slide bar34. The user may slide a finger vertically along the vertically oriented slide bar34to change the time speed setting of the video being played. While utilizing the vertical slide bar34, one or more horizontal time guidelines770can be displayed on the screen or the GUI30. It can be appreciated that the speed rate affordance32can be utilized in a vertical direction to control the speed rate setting, alone or in combination with the vertical slide bar34. Still further, the finger time guideline774can extend horizontally from the finger contact point, the pointing device activation point, or the speed rate affordance32,35. The horizontal finger time guideline774can be a different characteristic, shape, color, brightness and/or line type or line style to that of the horizontal time guidelines770. Multiple horizontal time guidelines770can be displayed, alone or in combination with vertical time guidelines, to assist in positioning or centering the object in the video display region776or in a field-of-view. It can be appreciated that the time guidelines770can be orientated at any angle on the GUI30, and can even be arcuate to combine the changing of a time speed rate and a zoom function. 
As illustrated inFIG.43, in the alternative and exemplary, one or more zoom affordances can be associated, linked or utilized with one or more of the time guidelines770. The zoom affordances can be usable in controlling or determining a change in zoom factor of the video data. In this situation, the user can touch or point at any one of the time guidelines770, and then slide the finger or move the pointing device up or down along the time guideline770to control a zoom-in or zoom-out function. During this up or down movement, if the finger or pointing device drifts off the time guideline770toward an adjacent new time speed setting region, then that time guideline associated with the adjacent time speed setting region can be activated. Thus, alerting the user that a change in time speed rate may be occur if the user keeps drifting toward that newly activated time guideline. To avoid the time guidelines770and/or finger guideline774from becoming an annoyance or distraction, these guidelines can be configured to disappear after a predetermined time of being first displayed, when the finger contact point or pointing device activation point has moved a predetermined distance from along a horizontal or vertical axis, or if the finger or pointing device provides multiple sequential contacts or activations. Therefore, in a possible embodiment of the invention, the system further comprises processing means, e.g., a controller, for activating and deactivating the time guidelines770and/or finger guideline774on the GUI30or display of the electronic device depending on a threshold of the distance to or from a speed rate indicator or setting34. Any or all of the time guidelines770can be activated automatically when a user raises the camera or display from a horizontal to a vertical position, when the GUI30is in operation. Furthermore, any or all of the time guidelines770can be automatically rotated so they are substantially in a vertical orientation when the electronic device displaying the GUI30is rotated between a portrait and landscape orientation. In some or all embodiments, the time guidelines770can be in any geometric shape, such as but not limited to, a square, a rectangle, an oval, a circle, a triangle or polygon. The guidelines770can be configured according to a parameter, which is configurable by a user. The parameter can be any one or any combination of color, pattern, length, thickness, flashing, brightness, shape, orientation, and display time. The guidelines770and/or its geometric shape can be configured or configurable to represent a field-of-view of the camera associated with the GUI30, thereby providing the user with a specific reference area for positioning the object being recorded or edited. In the exemplary, at least part of two or more guidelines770can be displayed in the video display region776of the GUI30in a spaced apart relationship, thereby assisting the user in centering an objected being recorded or viewed within a field-of-view of a camera or the video feed. The guidelines770and/or the finger guideline774can be implemented or implementable in or with an electronic device, a video system, a computer system, a video interface system, a graphical user interface, a non-transitory computer readable medium and/or a method utilizing any of the above. 
In the exemplary, some features of the guidelines770, the current speed indicator772and/or the finger guideline774can be:in a utilization for composition aide;to display how close the software application of the present technology is to switching to the next time speed rate;in a utilization as a track to guide user while zooming;to display the current recording and/or playback speed indicator on-screen while in one-touch recording mode;to display the current recording and/or playback speed indicator on-screen while in one-touch editing mode;the user can “lock” the time speed rate while zooming to ensure no accidental change in the time speed rate;a wider device screen can display more time speed rate options with or without vertical bars; and/orthe electronic device in landscape mode can display more time speed rate options with or without vertical bars. In some or all embodiments, the present technology can include artificial intelligence (AI) to identify sections in the video that can be compressed or expanded the appropriate amounts so that viewing the resulting video is more discernible or indiscernible to the viewer from the original video to a) meet the project requirements; and b) “emotioneering” effect in the appropriate content category. Emotioneering refers to a vast body of techniques which can create, for a player or participant, a breadth and depth of emotions in a game or other interactive experience, and which can immerse a game player or interactive participant in a world or a role. In some or all embodiments, the guidelines770, the current speed indicator772and/or the finger guideline774can be displayed in 2-D or 3-D, and/or can be implemented and viewed in augmented reality or virtual reality mode. In an exemplary Normal and/or 360 Live Record Mode, AI can in real-time scan and analyze the scene being recorded to make to higher accuracy of automatically adjusting the guidelines770in real time, moving them either closer together or further apart, or even bending the distance so that they are no longer parallel lines and may even intersect, depending on the scene's likelihood for the user to use more or less of the fast or slow motion speeds. This operation may be useful in 3-D Mode and/or/with the 360 Mode. In Normal and/or 360 Edit Mode, the AI can pre-scan and analyze a previously recorded video to make to higher accuracy of automatically adjusting the guidelines770in real time, moving them either closer together or further apart, or even bending the distance so that they are no longer parallel lines and may even intersect, depending on the scene's likelihood for the user to use more or less of the fast or slow motion speeds. This operation may be useful in 3-D Mode and/or/with the 360 Mode. The present technology can incorporate augmented reality, which can interact with the guidelines770. In some or all embodiments, the speed selection region34and/or the speed rate/control affordance or indicator35can be an artifact of an operating system utilizing the present technology that is displayed by default on certain electronic devices utilizing the operating system. In some or all embodiments, the speed selection region34and/or the speed rate/control affordance or indicator35may be omitted or the functionality may be different. 
For example, when a camera of a 360 video pans closely around a turn or an object, the relatively large size on the screen would actually display recognizable people's faces, especially as video resolutions continue to get higher with no practical limit in sight. The present technology can be implemented as a premium feature in a mobile software application on the iOS and Android mobile platforms. Industry-standard best-practice software development operations ("Dev Ops") can be deployed to implement further embodiments of the present technology. When in use, the guidelines770can appear as per the user setting for the camera live record mode and in-app edit mode screens. The guidelines770can be displayed in the screens while in camera live one-touch record mode or in-app one-touch edit mode. The guidelines770can help the user as a composition guide for placing scene elements within the photo and video frame, like placement of the main subject in the center or using the rule of thirds and other compositional standards. In the alternative, it can be appreciated that the first or native speed rate of the video data can be modified or changed to the modified speed rate when a finger touches the display screen or GUI30, and then reverted from the modified speed rate to the first or native speed rate when the finger is taken off the screen, or vice versa. This operation can be accomplished by the processing unit while the video data is being played on the GUI30in real time utilizing any of the video modification processes described with the present technology. This operation can further be accomplished with a pointing device instead of a finger touch, in that the speed rate is changed from the first or native speed rate to the modified speed rate when the pointing device is activated or aimed at the display screen, and then reverts to the first or native speed rate when the pointing device is deactivated or aimed away from the display screen. Alternatively, the GUI30can be configured or configurable to utilize additional user feedback associated with the device implementing the present technology. This feedback can use vibration frequency and intensity, and 3D tactile output, to indicate the zoom, speed factors, and/or other operational factors. In use, it can now be understood that a user could initiate a camera operation using an electronic device that includes or is operably associated with the present technology software application, or the user could initiate camera operation using the present technology software application that is operably associated with the camera. Upon operation of the present technology software application, a user interface is provided to the user for controlling the functions of the present technology software application and/or the camera. The user can initiate a recording function of the camera using the interface, at which time the present technology software application would receive any raw video data from the camera or remote video feed, which can be associated with a microphone or peripheral microphone(s). During this operation, the raw video data from the camera and/or microphone is diverted to the present technology software application instead of a memory unit, which would normally receive the raw data from the camera. The interface provides a simple input from the user to control the recording speed rate of the raw video data received from the camera. 
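The touch-down and touch-up behavior for switching between the native and modified speed rates can be modeled with a small event handler. The Python sketch below is a minimal illustration; the class and method names and the 0.5x example rate are assumptions, not the described implementation.

# Minimal sketch of touch-down -> modified speed, touch-up -> native speed.
# Class and method names are illustrative assumptions, not from the source.

class SpeedToggle:
    def __init__(self, native_rate: float = 1.0, modified_rate: float = 0.5):
        self.native_rate = native_rate      # e.g., 1.0x normal speed
        self.modified_rate = modified_rate  # e.g., 0.5x slow motion
        self._touching = False

    def on_touch_down(self) -> None:
        self._touching = True

    def on_touch_up(self) -> None:
        self._touching = False

    def current_rate(self) -> float:
        """Rate applied to the frame currently being processed in real time."""
        return self.modified_rate if self._touching else self.native_rate


toggle = SpeedToggle()
print(toggle.current_rate())  # 1.0 (native)
toggle.on_touch_down()
print(toggle.current_rate())  # 0.5 (modified while the finger is down)
toggle.on_touch_up()
print(toggle.current_rate())  # 1.0 again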
For exemplary purposes, this input by the user on the interface can be movement across a portion of a touchscreen or pressure applied to a portion of the touchscreen. It can be appreciated that this input can come in a variety of forms such as, but not limited to, movement of a cursor, voice commands, activation of icons, operation of switches or buttons, on-screen gestures, infrasonic devices, and the like. If the user does not provide input to change the speed rate, then the raw video data from the camera is displayed and is written to memory. Alternatively, if the user does provide input to change the speed rate, then the raw video data is processed using the present technology software application and its associated algorithms in real time. The raw video data includes one or more frames, and these frames are processed to create a final video data stream that corresponds to the speed rate inputted by the user. This is accomplished utilizing the present technology software application to create a modified video data stream. This modified video data stream can be created by dropping specifically identified frames from the raw video data, by adding frames to the raw video data by copying specifically identified frames and placing these copied frames adjacent to their original frames, or by "frame blending", which interpolates one or more frames in between two reference frames. The number of dropped frames or added frames can be determined and repeated by the present technology software application until the desired speed rate is achieved. The present technology software application can then write the raw video data or the modified video data stream to memory, thereby providing a video to be displayed at a normal speed rate, a fast motion speed rate or a slow motion speed rate. It can be appreciated that the speed rate of the video is not modified after writing to memory, thereby recording the video in real time with or without special effects and omitting the need for postproduction editing to change the video speed rate. The present technology can be configured or configurable so that the algorithm creates a smoother time modification of the video data stream. For example, the algorithm could fill in video gaps when the user jumps from one speed to another. The algorithm can interpolate data between two or more data points, thus creating even more smoothness, for example, when going from −3× slow to 4× fast. Without such correction, playback can be very abrupt during these transitions. This can be algorithmically corrected to smooth out the video to enhance the viewer's experience with perceived higher resolution during the transition into the beginning of each special effect, during each special effect, and during the transition from the special effect to normal time, occurring while the user is moving around and panning the camera as a user would need while capturing special moments (peak moments) in an active sporting event. An example of a "peak moment" is, when a person being videoed jumps, the instant at which there is no more upward momentum but the person has not yet begun to fall. Artificial intelligence (AI) can be utilized to calculate the "peak moment" of the action in a scene being recorded, and take a predetermined desired action, such as using slow motion slightly before and slightly after the "peak moment". 
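The frame dropping, frame copying and frame blending operations described above can be sketched as a simple resampler. The Python below is a simplified illustration under stated assumptions (frames are NumPy arrays and the resampling ratio equals the requested speed factor); it shows the general technique rather than the exact patented process.

# Simplified sketch of creating a modified video stream from raw frames.
# speed > 1 effectively drops frames (fast motion); speed < 1 inserts blended
# frames (slow motion). Frames are NumPy arrays; all names are illustrative.

import numpy as np

def modify_speed(frames: list[np.ndarray], speed: float) -> list[np.ndarray]:
    """Resample a list of frames so playback at the original frame rate
    appears speed-x faster (speed > 1) or slower (speed < 1)."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    out: list[np.ndarray] = []
    n_out = max(1, round(len(frames) / speed))   # target number of frames
    for i in range(n_out):
        pos = i * speed                           # position in the raw stream
        lo = int(pos)
        hi = min(lo + 1, len(frames) - 1)
        t = pos - lo                              # blend weight between frames
        # "Frame blending": interpolate between the two reference frames.
        blended = (1.0 - t) * frames[lo].astype(np.float32) \
                  + t * frames[hi].astype(np.float32)
        out.append(blended.astype(frames[lo].dtype))
    return out


# Example: 8 dummy grayscale frames, slowed to half speed and sped up 2x.
raw = [np.full((2, 2), k, dtype=np.uint8) for k in range(8)]
slow = modify_speed(raw, speed=0.5)
fast = modify_speed(raw, speed=2.0)
print(len(raw), len(slow), len(fast))  # 8 16 4

Blending, rather than plain duplication, is what keeps the inserted slow-motion frames from looking like a stutter; duplication is the cheaper degenerate case where the blend weight is ignored.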
The present technology can be embedded in any camera device, such as action cameras like GoPro®, DSLRs, mirrorless cameras, pro-level video gear, gimbals, tripods, on-camera and remotely triggered flash lighting, eyeglass cameras, drones, and webcams. The present technology can be embedded into remote controls and connected through Bluetooth® or other protocols to existing electronic gear that does not have the present technology embedded. The user interface of the present technology can be represented in 3-D or 2-D. The user can slide a finger or stylus side to side on the touchscreen of the electronic device in one plane of motion. With a 3-D user interface, the electronic device can sense the changes in depth of the user's controllers and the amount of pressure the user is applying, and adjust the special effects appropriately. Joysticks can be employed and utilized with the present technology. The user interface could be pressure sensitive so that the user could press harder or softer on the device and the device would interpret these as controls to modify the playback speed with the fast forward and slow motion special effects. The present technology can allow for recording at a sufficiently high number of frames per second so that the resulting "raw" unedited video (recorded with no special effects applied) can be edited post recording, and the slow motions will remain smooth because the high recording frame rate supports them relative to a slower playback fps. It can be appreciated that brainwave sensing devices, whether implanted, surface attached, or wirelessly and remotely sensing, can be utilized with the present technology to directly control the time speed special effects with a thought. Compression technology can be utilized with the present technology to improve recording at even higher frame rates to record finer details in the scenery and reduce file size. Device performance can improve and users can therefore record at even higher frame rates to record finer details in the scenery while reducing the file size. Audio processing algorithms can be utilized with the present technology to give the clearest and most understandable audio to the videos during segments where the scene speeds up or slows down. Third-party APIs from companies such as Dolby Labs, DTS, Inc., Fraunhofer Institut, Philips, Technicolor, IMAX, Sony, and others can be utilized to perform the audio processing. Data encryption algorithms can be utilized with the present technology to provide secure transmission and storage of the videos. Cryptography and blockchain technology algorithms can be utilized with the present technology to create a distributed ledger to record the original content creator of the videos produced with the present technology. The videos can be accessed by requiring cryptographic tokens to be "redeemed" for access permission. It should be understood that the particular order in which the operations in the figures have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods and/or processes described herein are also applicable in an analogous manner to the method described above with respect to the figures. 
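The earlier point in this passage about high recording frame rates supporting smooth slow motion reduces to simple arithmetic; the helper below is an illustrative sketch that computes the largest slow-motion factor for which every displayed frame is still a unique captured frame.

# Sketch: how much slow motion a recording frame rate supports before
# frames must be duplicated or blended. Purely illustrative arithmetic.

def max_smooth_slowdown(capture_fps: float, playback_fps: float) -> float:
    """A clip captured at capture_fps and played at playback_fps can be
    slowed by up to capture_fps / playback_fps while every displayed frame
    is still a unique captured frame."""
    return capture_fps / playback_fps


print(max_smooth_slowdown(240, 30))  # 8.0 -> up to 8x slow motion stays smooth
print(max_smooth_slowdown(60, 30))   # 2.0 -> beyond 2x requires blending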
For situations in which the systems, interfaces and/or methods discussed above collect information about users, the users may be provided with an opportunity to opt in/out of programs or features that may collect personal information (e.g., information about a user's preferences or usage of a smart device, biometric data, and environmental data such as location). In addition, in some or all implementations, certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be made anonymous so that the personally identifiable information cannot be determined for or associated with the user, and so that user preferences or user interactions are generalized (for example, generalized based on user demographics) rather than associated with a particular user. Data encryption, as well as "tokenized" access using blockchain technology, can also be utilized to further obfuscate the user's identity. Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, middleware, software, APIs or any combination thereof. While embodiments of the real time video special effects system and method have been described in detail, it should be apparent that modifications and variations thereto are possible, all of which fall within the true spirit and scope of the present technology. With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the present technology, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present technology. For example, any suitable sturdy material may be used instead of those described above. And although the creation of special effects in video recordings while recording is in progress has been described, it should be appreciated that the real time video special effects system and method herein described is also suitable for changing frame attributes, changing the record frame rate, changing the playback frame rate, and performing time compression and expansion and other real-time special effects associated with any data stream in real time. Therefore, the foregoing is considered as illustrative only of the principles of the present technology. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the present technology to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the present technology.
182,473
11943559
DESCRIPTION OF EMBODIMENTS The following description sets forth exemplary methods, parameters, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. There is a need for electronic devices that provide efficient methods and interfaces for providing live video. For example, computer systems, such as those described herein provide a manner in which users can insert live objects into presentations such that live video is provided both while editing and presenting the presentation. Such techniques can reduce the cognitive burden on a user who provides live video, thereby enhancing productivity. Further, such techniques can reduce processor and battery power otherwise wasted on redundant user inputs. Below,FIGS.1A-1B,2,3,4A-4B, and5A-5Bprovide a description of exemplary devices for performing the techniques for providing live video.FIGS.6A-6Killustrate exemplary user interfaces for providing live video.FIG.7is a flow diagram illustrating methods of providing live video in accordance with some embodiments. The user interfaces inFIGS.6A-6Kare used to illustrate the processes described below, including the processes inFIG.7. The processes described below enhance the operability of the devices and make the user-device interfaces more efficient (e.g., by helping the user to provide proper inputs and reducing user mistakes when operating/interacting with the device) through various techniques, including by providing improved visual feedback to the user, reducing the number of inputs needed to perform an operation, providing additional control options without cluttering the user interface with additional displayed controls, performing an operation when a set of conditions has been met without requiring further user input, and/or additional techniques. These techniques also reduce power usage and improve battery life of the device by enabling the user to use the device more quickly and efficiently. In addition, in methods described herein where one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated in multiple repetitions so that over the course of the repetitions all of the conditions upon which steps in the method are contingent have been met in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, then a person of ordinary skill would appreciate that the claimed steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of system or computer readable medium claims where the system or computer readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating steps of a method until all of the conditions upon which steps in the method are contingent have been met. 
A person having ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the contingent steps have been performed. Although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. The first touch and the second touch are both touches, but they are not the same touch. The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with a display generation component. The display generation component is configured to provide visual output, such as display via a CRT display, display via an LED display, or display via image projection. In some embodiments, the display generation component is integrated with the computer system. 
In some embodiments, the display generation component is separate from the computer system. As used herein, “displaying” content includes causing to display the content (e.g., video data rendered or decoded by display controller156) by transmitting, via a wired or wireless connection, data (e.g., image data or video data) to an integrated or external display generation component to visually produce the content. In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick. The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application. The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user. Attention is now directed toward embodiments of portable devices with touch-sensitive displays.FIG.1Ais a block diagram illustrating portable multifunction device100with touch-sensitive display system112in accordance with some embodiments. Touch-sensitive display112is sometimes called a “touch screen” for convenience and is sometimes known as or called a “touch-sensitive display system.” Device100includes memory102(which optionally includes one or more computer-readable storage mediums), memory controller122, one or more processing units (CPUs)120, peripherals interface118, RF circuitry108, audio circuitry110, speaker111, microphone113, input/output (I/O) subsystem106, other input control devices116, and external port124. Device100optionally includes one or more optical sensors164. Device100optionally includes one or more contact intensity sensors165for detecting intensity of contacts on device100(e.g., a touch-sensitive surface such as touch-sensitive display system112of device100). Device100optionally includes one or more tactile output generators167for generating tactile outputs on device100(e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system112of device100or touchpad355of device300). These components optionally communicate over one or more communication buses or signal lines103. 
As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button). As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. 
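The weighted combination of force-sensor readings into an estimated contact intensity, and the comparison of that estimate against an intensity threshold, can be sketched as follows; the weights and threshold values are illustrative assumptions rather than values from the source.

# Sketch: estimate contact intensity as a weighted average of several
# force-sensor readings and compare it against an intensity threshold.
# Weights and threshold are illustrative assumptions.

def estimated_intensity(readings: list[float], weights: list[float]) -> float:
    """Weighted average of force readings from sensors near the contact."""
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(readings, weights)) / total_weight

def exceeds_threshold(intensity: float, threshold: float) -> bool:
    """True when the estimated intensity crosses the configured threshold."""
    return intensity >= threshold


readings = [0.8, 1.2, 0.5]          # arbitrary sensor units
weights = [0.5, 0.3, 0.2]           # sensors closer to the contact weighted more heavily
intensity = estimated_intensity(readings, weights)
print(round(intensity, 2), exceeds_threshold(intensity, threshold=0.75))  # 0.86 True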
For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. It should be appreciated that device100is only one example of a portable multifunction device, and that device100optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown inFIG.1Aare implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits. Memory102optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller122optionally controls access to memory102by other components of device100. Peripherals interface118can be used to couple input and output peripherals of the device to CPU120and memory102. The one or more processors120run or execute various software programs (such as computer programs (e.g., including instructions)) and/or sets of instructions stored in memory102to perform various functions for device100and to process data. In some embodiments, peripherals interface118, CPU120, and memory controller122are, optionally, implemented on a single chip, such as chip104. In some other embodiments, they are, optionally, implemented on separate chips. RF (radio frequency) circuitry108receives and sends RF signals, also called electromagnetic signals. RF circuitry108converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry108optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. 
RF circuitry108optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry108optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. Audio circuitry110, speaker111, and microphone113provide an audio interface between a user and device100. Audio circuitry110receives audio data from peripherals interface118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker111. Speaker111converts the electrical signal to human-audible sound waves. Audio circuitry110also receives electrical signals converted by microphone113from sound waves. Audio circuitry110converts the electrical signal to audio data and transmits the audio data to peripherals interface118for processing. Audio data is, optionally, retrieved from and/or transmitted to memory102and/or RF circuitry108by peripherals interface118. In some embodiments, audio circuitry110also includes a headset jack (e.g.,212,FIG.2). The headset jack provides an interface between audio circuitry110and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone). I/O subsystem106couples input/output peripherals on device100, such as touch screen112and other input control devices116, to peripherals interface118. I/O subsystem106optionally includes display controller156, optical sensor controller158, depth camera controller169, intensity sensor controller159, haptic feedback controller161, and one or more input controllers160for other input or control devices. The one or more input controllers160receive/send electrical signals from/to other input control devices116. The other input control devices116optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. 
In some embodiments, input controller(s)160are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g.,208,FIG.2) optionally include an up/down button for volume control of speaker111and/or microphone113. The one or more buttons optionally include a push button (e.g.,206,FIG.2). In some embodiments, the electronic device is a computer system that is in communication (e.g., via wireless communication, via wired communication) with one or more input devices. In some embodiments, the one or more input devices include a touch-sensitive surface (e.g., a trackpad, as part of a touch-sensitive display). In some embodiments, the one or more input devices include one or more camera sensors (e.g., one or more optical sensors164and/or one or more depth camera sensors175), such as for tracking a user's gestures (e.g., hand gestures) as input. In some embodiments, the one or more input devices are integrated with the computer system. In some embodiments, the one or more input devices are separate from the computer system. A quick press of the push button optionally disengages a lock of touch screen112or optionally begins a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g.,206) optionally turns power to device100on or off. The functionality of one or more of the buttons are, optionally, user-customizable. Touch screen112is used to implement virtual or soft buttons and one or more soft keyboards. Touch-sensitive display112provides an input interface and an output interface between the device and a user. Display controller156receives and/or sends electrical signals from/to touch screen112. Touch screen112displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output optionally corresponds to user-interface objects. Touch screen112has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen112and display controller156(along with any associated modules and/or sets of instructions in memory102) detect contact (and any movement or breaking of the contact) on touch screen112and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen112. In an exemplary embodiment, a point of contact between touch screen112and the user corresponds to a finger of the user. Touch screen112optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. 
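The quick-press versus long-press distinction described earlier in this passage is essentially a comparison of press duration against a threshold; the sketch below models it with an assumed 0.8 second threshold and illustrative action names.

# Sketch: route a hardware button press by its duration.
# The 0.8 s threshold and the action names are illustrative assumptions.

def handle_push_button(press_duration_s: float,
                       long_press_threshold_s: float = 0.8) -> str:
    """Quick press: begin the unlock gesture flow; long press: toggle power."""
    if press_duration_s >= long_press_threshold_s:
        return "toggle_power"
    return "begin_unlock_gesture"


print(handle_push_button(0.2))  # begin_unlock_gesture
print(handle_push_button(1.5))  # toggle_power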
Touch screen112and display controller156optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod Touch® from Apple Inc. of Cupertino, California. A touch-sensitive display in some embodiments of touch screen112is, optionally, analogous to the multi-touch sensitive touchpads described in the following U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen112displays visual output from device100, whereas touch-sensitive touchpads do not provide visual output. A touch-sensitive display in some embodiments of touch screen112is described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application Ser. No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A Computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety. Touch screen112optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user optionally makes contact with touch screen112using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user. In some embodiments, in addition to the touch screen, device100optionally includes a touchpad for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch screen112or an extension of the touch-sensitive surface formed by the touch screen. 
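The translation of a rough finger contact into a precise pointer position mentioned above can be approximated by a weighted centroid over the contact area; the sketch below is a hypothetical illustration, and the sample format and signal weighting are assumptions rather than the device's actual method.

# Sketch: reduce a rough finger contact patch to a single pointer position
# by taking the signal-weighted centroid of the touched sensor cells.
# Sample format and weighting are illustrative assumptions.

def pointer_position(samples: list[tuple[float, float, float]]) -> tuple[float, float]:
    """samples: (x, y, signal) tuples for sensor cells under the contact.
    Returns the signal-weighted centroid as the precise cursor position."""
    total = sum(s for _, _, s in samples)
    x = sum(xi * s for xi, _, s in samples) / total
    y = sum(yi * s for _, yi, s in samples) / total
    return (x, y)


# Example: three cells under a fingertip, with the middle cell strongest.
print(pointer_position([(10.0, 20.0, 0.2), (11.0, 20.0, 0.6), (12.0, 21.0, 0.2)]))  # (11.0, 20.2)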
Device100also includes power system162for powering the various components. Power system162optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices. Device100optionally also includes one or more optical sensors164.FIG.1Ashows an optical sensor coupled to optical sensor controller158in I/O subsystem106. Optical sensor164optionally includes charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor164receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module143(also called a camera module), optical sensor164optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device100, opposite touch screen display112on the front of the device so that the touch screen display is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user's image is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display. In some embodiments, the position of optical sensor164can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor164is used along with the touch screen display for both video conferencing and still and/or video image acquisition. Device100optionally also includes one or more depth camera sensors175.FIG.1Ashows a depth camera sensor coupled to depth camera controller169in I/O subsystem106. Depth camera sensor175receives data from the environment to create a three dimensional model of an object (e.g., a face) within a scene from a viewpoint (e.g., a depth camera sensor). In some embodiments, in conjunction with imaging module143(also called a camera module), depth camera sensor175is optionally used to determine a depth map of different portions of an image captured by the imaging module143. In some embodiments, a depth camera sensor is located on the front of device100so that the user's image with depth information is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display and to capture selfies with depth map data. In some embodiments, the depth camera sensor175is located on the back of device, or on the back and the front of the device100. In some embodiments, the position of depth camera sensor175can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a depth camera sensor175is used along with the touch screen display for both video conferencing and still and/or video image acquisition. Device100optionally also includes one or more contact intensity sensors165.FIG.1Ashows a contact intensity sensor coupled to intensity sensor controller159in I/O subsystem106. 
Contact intensity sensor165optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor165receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system112). In some embodiments, at least one contact intensity sensor is located on the back of device100, opposite touch screen display112, which is located on the front of device100. Device100optionally also includes one or more proximity sensors166.FIG.1Ashows proximity sensor166coupled to peripherals interface118. Alternately, proximity sensor166is, optionally, coupled to input controller160in I/O subsystem106. Proximity sensor166optionally performs as described in U.S. patent application Ser. No. 11/241,839, “Proximity Detector In Handheld Device”; Ser. No. 11/240,788, “Proximity Detector In Handheld Device”; Ser. No. 11/620,702, “Using Ambient Light Sensor To Augment Proximity Sensor Output”; Ser. No. 11/586,862, “Automated Response To And Sensing Of User Activity In Portable Devices”; and Ser. No. 11/638,251, “Methods And Systems For Automatic Configuration Of Peripherals,” which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen112when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call). Device100optionally also includes one or more tactile output generators167.FIG.1Ashows a tactile output generator coupled to haptic feedback controller161in I/O subsystem106. Tactile output generator167optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Contact intensity sensor165receives tactile feedback generation instructions from haptic feedback module133and generates tactile outputs on device100that are capable of being sensed by a user of device100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device100) or laterally (e.g., back and forth in the same plane as a surface of device100). In some embodiments, at least one tactile output generator sensor is located on the back of device100, opposite touch screen display112, which is located on the front of device100. Device100optionally also includes one or more accelerometers168.FIG.1Ashows accelerometer168coupled to peripherals interface118. Alternately, accelerometer168is, optionally, coupled to an input controller160in I/O subsystem106. Accelerometer168optionally performs as described in U.S. Patent Publication No. 
20050190059, “Acceleration-based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692, “Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device100optionally includes, in addition to accelerometer(s)168, a magnetometer and a GPS (or GLONASS or other global navigation system) receiver for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device100. In some embodiments, the software components stored in memory102include operating system126, communication module (or set of instructions)128, contact/motion module (or set of instructions)130, graphics module (or set of instructions)132, text input module (or set of instructions)134, Global Positioning System (GPS) module (or set of instructions)135, and applications (or sets of instructions)136. Furthermore, in some embodiments, memory102(FIG.1A) or370(FIG.3) stores device/global internal state157, as shown inFIGS.1A and3. Device/global internal state157includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display112; sensor state, including information obtained from the device's various sensors and input control devices116; and location information concerning the device's location and/or attitude. Operating system126(e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. Communication module128facilitates communication with other devices over one or more external ports124and also includes various software components for handling data received by RF circuitry108and/or external port124. External port124(e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod® (trademark of Apple Inc.) devices. Contact/motion module130optionally detects contact with touch screen112(in conjunction with display controller156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module130includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). 
Contact/motion module130receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module130and display controller156detect contact on a touchpad. In some embodiments, contact/motion module130uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device100). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). Contact/motion module130optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event. Graphics module132includes various known software components for rendering and displaying graphics on touch screen112or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like. In some embodiments, graphics module132stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module132receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller156. 
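The tap and swipe detection described earlier in this passage, a finger-down event followed either by a finger-up at substantially the same position or by one or more finger-dragging events, can be sketched as a small classifier; the event representation and the movement threshold below are illustrative assumptions.

# Sketch: classify a sequence of touch events as a tap or a swipe.
# Event format and the movement threshold are illustrative assumptions.

from math import hypot

def classify_gesture(events: list[tuple[str, float, float]],
                     move_threshold: float = 10.0) -> str:
    """events: ordered (kind, x, y) tuples with kind in {"down", "move", "up"}.
    A tap is a down followed by an up at substantially the same position;
    otherwise, movement past the threshold makes it a swipe."""
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return "unknown"
    _, x0, y0 = events[0]
    _, x1, y1 = events[-1]
    if hypot(x1 - x0, y1 - y0) <= move_threshold:
        return "tap"
    return "swipe"


print(classify_gesture([("down", 100, 100), ("up", 102, 101)]))                       # tap
print(classify_gesture([("down", 100, 100), ("move", 160, 100), ("up", 220, 100)]))   # swipe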
Haptic feedback module133includes various software components for generating instructions used by tactile output generator(s)167to produce tactile outputs at one or more locations on device100in response to user interactions with device100. Text input module134, which is, optionally, a component of graphics module132, provides soft keyboards for entering text in various applications (e.g., contacts137, e-mail140, IM141, browser147, and any other application that needs text input). GPS module135determines the location of the device and provides this information for use in various applications (e.g., to telephone module138for use in location-based dialing; to camera module143as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets). Applications136optionally include the following modules (or sets of instructions), or a subset or superset thereof:
Contacts module137(sometimes called an address book or contact list);
Telephone module138;
Video conference module139;
E-mail client module140;
Instant messaging (IM) module141;
Workout support module142;
Camera module143for still and/or video images;
Image management module144;
Video player module;
Music player module;
Browser module147;
Calendar module148;
Widget modules149, which optionally include one or more of: weather widget149-1, stocks widget149-2, calculator widget149-3, alarm clock widget149-4, dictionary widget149-5, and other widgets obtained by the user, as well as user-created widgets149-6;
Widget creator module150for making user-created widgets149-6;
Search module151;
Video and music player module152, which merges video player module and music player module;
Notes module153;
Map module154; and/or
Online video module155.
Examples of other applications136that are, optionally, stored in memory102include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication. In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, contacts module137is, optionally, used to manage an address book or contact list (e.g., stored in application internal state192of contacts module137in memory102or memory370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone module138, video conference module139, e-mail140, or IM141; and so forth. In conjunction with RF circuitry108, audio circuitry110, speaker111, microphone113, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, telephone module138is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies. 
In conjunction with RF circuitry108, audio circuitry110, speaker111, microphone113, touch screen112, display controller156, optical sensor164, optical sensor controller158, contact/motion module130, graphics module132, text input module134, contacts module137, and telephone module138, video conference module139includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions. In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, e-mail client module140includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module144, e-mail client module140makes it very easy to create and send e-mails with still or video images taken with camera module143. In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, and text input module134, the instant messaging module141includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for Internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS). In conjunction with RF circuitry108, touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, GPS module135, map module154, and music player module, workout support module142includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data. In conjunction with touch screen112, display controller156, optical sensor(s)164, optical sensor controller158, contact/motion module130, graphics module132, and image management module144, camera module143includes executable instructions to capture still images or video (including a video stream) and store them into memory102, modify characteristics of a still image or video, or delete a still image or video from memory102. In conjunction with touch screen112, display controller156, contact/motion module130, graphics module132, text input module134, and camera module143, image management module144includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images. 
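The choice between telephony-based and Internet-based transports described above for instant messaging module 141 can be pictured with the following sketch. The OutgoingMessage fields and the chooseTransport function are hypothetical simplifications; real protocol negotiation depends on carrier and account capabilities.

```swift
import Foundation

// Hypothetical transport choices for an outgoing instant message (illustrative only).
enum MessageTransport { case sms, mms, xmpp, simple, imps }

struct OutgoingMessage {
    var text: String
    var hasAttachment: Bool       // e.g., graphics, photos, audio, or video files
    var isTelephonyBased: Bool    // true for SMS/MMS-style messages, false for Internet-based ones
    var preferredInternetProtocol: MessageTransport = .xmpp
}

// Picks a transport along the lines described above: SMS or MMS for telephony-based
// messages (MMS when attachments are present), and XMPP/SIMPLE/IMPS for Internet-based ones.
func chooseTransport(for message: OutgoingMessage) -> MessageTransport {
    if message.isTelephonyBased {
        return message.hasAttachment ? .mms : .sms
    }
    switch message.preferredInternetProtocol {
    case .xmpp, .simple, .imps: return message.preferredInternetProtocol
    default: return .xmpp
    }
}
```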
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages. In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions. In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets). In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget). In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions. In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.). In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.
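As a simplified illustration of the term-matching behavior described above for search module 151, the sketch below filters in-memory items against one or more user-specified search terms. The StoredItem type is a hypothetical placeholder for the files actually stored in memory 102.

```swift
import Foundation

// Hypothetical descriptor for an item stored in memory (illustrative only).
struct StoredItem {
    var title: String
    var kind: String   // e.g., "text", "music", "image", "video"
}

// Returns the items whose titles match every user-specified search term,
// in the spirit of the search behavior described above (case-insensitive).
func search(_ items: [StoredItem], matching terms: [String]) -> [StoredItem] {
    let lowered = terms.map { $0.lowercased() }
    return items.filter { item in
        let title = item.title.lowercased()
        return lowered.allSatisfy { title.contains($0) }
    }
}

// Example: search(library, matching: ["holiday", "2019"]) keeps only items
// whose titles contain both terms.
```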
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions. In conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, "Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos," filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety. Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. For example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152, FIG. 1A). In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above. In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced. The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a "menu button" is implemented using a touchpad.
In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad. FIG.1Bis a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory102(FIG.1A) or370(FIG.3) includes event sorter170(e.g., in operating system126) and a respective application136-1(e.g., any of the aforementioned applications137-151,155,380-390). Event sorter170receives event information and determines the application136-1and application view191of application136-1to which to deliver the event information. Event sorter170includes event monitor171and event dispatcher module174. In some embodiments, application136-1includes application internal state192, which indicates the current application view(s) displayed on touch-sensitive display112when the application is active or executing. In some embodiments, device/global internal state157is used by event sorter170to determine which application(s) is (are) currently active, and application internal state192is used by event sorter170to determine application views191to which to deliver event information. In some embodiments, application internal state192includes additional information, such as one or more of: resume information to be used when application136-1resumes execution, user interface state information that indicates information being displayed or that is ready for display by application136-1, a state queue for enabling the user to go back to a prior state or view of application136-1, and a redo/undo queue of previous actions taken by the user. Event monitor171receives event information from peripherals interface118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display112, as part of a multi-touch gesture). Peripherals interface118transmits information it receives from I/O subsystem106or a sensor, such as proximity sensor166, accelerometer(s)168, and/or microphone113(through audio circuitry110). Information that peripherals interface118receives from I/O subsystem106includes information from touch-sensitive display112or a touch-sensitive surface. In some embodiments, event monitor171sends requests to the peripherals interface118at predetermined intervals. In response, peripherals interface118transmits event information. In other embodiments, peripherals interface118transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). In some embodiments, event sorter170also includes a hit view determination module172and/or an active event recognizer determination module173. Hit view determination module172provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display112displays more than one view. Views are made up of controls and other elements that a user can see on the display. Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. 
For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. Hit view determination module172receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module172identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view. Active event recognizer determination module173determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module173determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module173determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views. Event dispatcher module174dispatches the event information to an event recognizer (e.g., event recognizer180). In embodiments including active event recognizer determination module173, event dispatcher module174delivers the event information to an event recognizer determined by active event recognizer determination module173. In some embodiments, event dispatcher module174stores in an event queue the event information, which is retrieved by a respective event receiver182. In some embodiments, operating system126includes event sorter170. Alternatively, application136-1includes event sorter170. In yet other embodiments, event sorter170is a stand-alone module, or a part of another module stored in memory102, such as contact/motion module130. In some embodiments, application136-1includes a plurality of event handlers190and one or more application views191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view191of the application136-1includes one or more event recognizers180. Typically, a respective application view191includes a plurality of event recognizers180. In other embodiments, one or more of event recognizers180are part of a separate module, such as a user interface kit or a higher level object from which application136-1inherits methods and other properties. In some embodiments, a respective event handler190includes one or more of: data updater176, object updater177, GUI updater178, and/or event data179received from event sorter170. Event handler190optionally utilizes or calls data updater176, object updater177, or GUI updater178to update the application internal state192. 
Alternatively, one or more of the application views191include one or more respective event handlers190. Also, in some embodiments, one or more of data updater176, object updater177, and GUI updater178are included in a respective application view191. A respective event recognizer180receives event information (e.g., event data179) from event sorter170and identifies an event from the event information. Event recognizer180includes event receiver182and event comparator184. In some embodiments, event recognizer180also includes at least a subset of: metadata183, and event delivery instructions188(which optionally include sub-event delivery instructions). Event receiver182receives event information from event sorter170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device. Event comparator184compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator184includes event definitions186. Event definitions186contain definitions of events (e.g., predefined sequences of sub-events), for example, event1(187-1), event2(187-2), and others. In some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event1(187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event2(187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display112, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers190. In some embodiments, event definition187includes a definition of an event for a respective user-interface object. In some embodiments, event comparator184performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display112, when a touch is detected on touch-sensitive display112, event comparator184performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler190, the event comparator uses the result of the hit test to determine which event handler190should be activated. 
For example, event comparator184selects an event handler associated with the sub-event and the object triggering the hit test. In some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type. When a respective event recognizer180determines that the series of sub-events do not match any of the events in event definitions186, the respective event recognizer180enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture. In some embodiments, a respective event recognizer180includes metadata183with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata183includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy. In some embodiments, a respective event recognizer180activates event handler190associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer180delivers event information associated with the event to event handler190. Activating an event handler190is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer180throws a flag associated with the recognized event, and event handler190associated with the flag catches the flag and performs a predefined process. In some embodiments, event delivery instructions188include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process. In some embodiments, data updater176creates and updates data used in application136-1. For example, data updater176updates the telephone number used in contacts module137, or stores a video file used in video player module. In some embodiments, object updater177creates and updates objects used in application136-1. For example, object updater177creates a new user-interface object or updates the position of a user-interface object. GUI updater178updates the GUI. For example, GUI updater178prepares display information and sends it to graphics module132for display on a touch-sensitive display. In some embodiments, event handler(s)190includes or has access to data updater176, object updater177, and GUI updater178. In some embodiments, data updater176, object updater177, and GUI updater178are included in a single module of a respective application136-1or application view191. In other embodiments, they are included in two or more software modules. 
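The hit-view determination and event-definition matching described in the foregoing paragraphs can be pictured with the following simplified sketch. It is illustrative only: the View and SubEvent types, the hitView function, and the hard-coded double-tap and drag sequences are hypothetical stand-ins, and actual event recognizers 180 additionally track touch phases, timing, and recognizer state.

```swift
import Foundation

// Hypothetical sub-events of a touch-based gesture (illustrative only).
enum SubEvent: Equatable { case touchBegin, touchEnd, touchMove, touchCancel }

// A minimal view node: views form a hierarchy, and each view knows whether a point lies inside it.
final class View {
    var frame: (x: Double, y: Double, w: Double, h: Double)
    var subviews: [View] = []
    init(frame: (x: Double, y: Double, w: Double, h: Double)) { self.frame = frame }
    func contains(_ p: (x: Double, y: Double)) -> Bool {
        p.x >= frame.x && p.x < frame.x + frame.w && p.y >= frame.y && p.y < frame.y + frame.h
    }
}

// Hit-view determination: walk the hierarchy and return the lowest (deepest) view
// containing the point where the initiating sub-event occurred.
func hitView(in root: View, at point: (x: Double, y: Double)) -> View? {
    guard root.contains(point) else { return nil }
    for child in root.subviews.reversed() {
        if let deeper = hitView(in: child, at: point) { return deeper }
    }
    return root
}

// Event-definition matching: a double tap is touch begin/end repeated twice;
// a drag is touch begin, one or more moves, then touch end.
func matchesDoubleTap(_ sequence: [SubEvent]) -> Bool {
    sequence == [.touchBegin, .touchEnd, .touchBegin, .touchEnd]
}

func matchesDrag(_ sequence: [SubEvent]) -> Bool {
    guard sequence.first == .touchBegin, sequence.last == .touchEnd, sequence.count > 2 else { return false }
    return sequence.dropFirst().dropLast().allSatisfy { $0 == .touchMove }
}
```

In this simplified form, the recursion prefers deeper subviews, mirroring the rule that the hit view is the lowest view in the hierarchy containing the initiating sub-event; timing constraints such as the predetermined phases of a double tap are deliberately omitted.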
It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. FIG. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. Device 100 optionally also includes one or more physical buttons, such as "home" or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 112. In some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100. FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable.
In some embodiments, device300is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device300typically includes one or more processing units (CPUs)310, one or more network or other communications interfaces360, memory370, and one or more communication buses320for interconnecting these components. Communication buses320optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device300includes input/output (I/O) interface330comprising display340, which is typically a touch screen display. I/O interface330also optionally includes a keyboard and/or mouse (or other pointing device)350and touchpad355, tactile output generator357for generating tactile outputs on device300(e.g., similar to tactile output generator(s)167described above with reference toFIG.1A), sensors359(e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s)165described above with reference toFIG.1A). Memory370includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory370optionally includes one or more storage devices remotely located from CPU(s)310. In some embodiments, memory370stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory102of portable multifunction device100(FIG.1A), or a subset thereof. Furthermore, memory370optionally stores additional programs, modules, and data structures not present in memory102of portable multifunction device100. For example, memory370of device300optionally stores drawing module380, presentation module382, word processing module384, website creation module386, disk authoring module388, and/or spreadsheet module390, while memory102of portable multifunction device100(FIG.1A) optionally does not store these modules. Each of the above-identified elements inFIG.3is, optionally, stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or computer programs (e.g., sets of instructions or including instructions) need not be implemented as separate software programs (such as computer programs (e.g., including instructions)), procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. In some embodiments, memory370optionally stores a subset of the modules and data structures identified above. Furthermore, memory370optionally stores additional modules and data structures not described above. Attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device100. FIG.4Aillustrates an exemplary user interface for a menu of applications on portable multifunction device100in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device300. 
In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:
Signal strength indicator(s) 402 for wireless communication(s), such as cellular and Wi-Fi signals;
Time 404;
Bluetooth indicator 405;
Battery status indicator 406;
Tray 408 with icons for frequently used applications, such as:
Icon 416 for telephone module 138, labeled "Phone," which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
Icon 418 for e-mail client module 140, labeled "Mail," which optionally includes an indicator 410 of the number of unread e-mails;
Icon 420 for browser module 147, labeled "Browser;" and
Icon 422 for video and music player module 152, also referred to as iPod (trademark of Apple Inc.) module 152, labeled "iPod;" and
Icons for other applications, such as:
Icon 424 for IM module 141, labeled "Messages;"
Icon 426 for calendar module 148, labeled "Calendar;"
Icon 428 for image management module 144, labeled "Photos;"
Icon 430 for camera module 143, labeled "Camera;"
Icon 432 for online video module 155, labeled "Online Video;"
Icon 434 for stocks widget 149-2, labeled "Stocks;"
Icon 436 for map module 154, labeled "Maps;"
Icon 438 for weather widget 149-1, labeled "Weather;"
Icon 440 for alarm clock widget 149-4, labeled "Clock;"
Icon 442 for workout support module 142, labeled "Workout Support;"
Icon 444 for notes module 153, labeled "Notes;" and
Icon 446 for a settings application or module, labeled "Settings," which provides access to settings for device 100 and its various applications 136.
It should be noted that the icon labels illustrated in FIG. 4A are merely exemplary. For example, icon 422 for video and music player module 152 is labeled "Music" or "Music Player." Other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon. FIG. 4B illustrates an exemplary user interface on a device (e.g., device 300, FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, FIG. 3) that is separate from the display 450 (e.g., touch screen display 112). Device 300 also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300. Although some of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in FIG. 4B) has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in FIG. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4B, 460 corresponds to 468 and 462 corresponds to 470).
In this way, user inputs (e.g., contacts460and462, and movements thereof) detected by the device on the touch-sensitive surface (e.g.,451inFIG.4B) are used by the device to manipulate the user interface on the display (e.g.,450inFIG.4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein. Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously. FIG.5Aillustrates exemplary personal electronic device500. Device500includes body502. In some embodiments, device500can include some or all of the features described with respect to devices100and300(e.g.,FIGS.1A-4B). In some embodiments, device500has touch-sensitive display screen504, hereafter touch screen504. Alternatively, or in addition to touch screen504, device500has a display and a touch-sensitive surface. As with devices100and300, in some embodiments, touch screen504(or the touch-sensitive surface) optionally includes one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied. The one or more intensity sensors of touch screen504(or the touch-sensitive surface) can provide output data that represents the intensity of touches. The user interface of device500can respond to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device500. Exemplary techniques for detecting and processing touch intensity are found, for example, in related applications: International Patent Application Ser. No. PCT/US2013/040061, titled “Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application,” filed May 8, 2013, published as WIPO Publication No. WO/2013/169849, and International Patent Application Ser. No. PCT/US2013/069483, titled “Device, Method, and Graphical User Interface for Transitioning Between Touch Input to Display Output Relationships,” filed Nov. 11, 2013, published as WIPO Publication No. WO/2014/105276, each of which is hereby incorporated by reference in their entirety. In some embodiments, device500has one or more input mechanisms506and508. Input mechanisms506and508, if included, can be physical. Examples of physical input mechanisms include push buttons and rotatable mechanisms. In some embodiments, device500has one or more attachment mechanisms. 
Such attachment mechanisms, if included, can permit attachment of device500with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. These attachment mechanisms permit device500to be worn by a user. FIG.5Bdepicts exemplary personal electronic device500. In some embodiments, device500can include some or all of the components described with respect toFIGS.1A,1B, and3. Device500has bus512that operatively couples I/O section514with one or more computer processors516and memory518. I/O section514can be connected to display504, which can have touch-sensitive component522and, optionally, intensity sensor524(e.g., contact intensity sensor). In addition, I/O section514can be connected with communication unit530for receiving application and operating system data, using Wi-Fi, Bluetooth, near field communication (NFC), cellular, and/or other wireless communication techniques. Device500can include input mechanisms506and/or508. Input mechanism506is, optionally, a rotatable input device or a depressible and rotatable input device, for example. Input mechanism508is, optionally, a button, in some examples. Input mechanism508is, optionally, a microphone, in some examples. Personal electronic device500optionally includes various sensors, such as GPS sensor532, accelerometer534, directional sensor540(e.g., compass), gyroscope536, motion sensor538, and/or a combination thereof, all of which can be operatively connected to I/O section514. Memory518of personal electronic device500can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors516, for example, can cause the computer processors to perform the techniques described below, including process700(FIG.7). A computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like. Personal electronic device500is not limited to the components and configuration ofFIG.5B, but can include other or additional components in multiple configurations. As used here, the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices100,300, and/or500(FIGS.1A,3, and5A-5B). For example, an image (e.g., icon), a button, and text (e.g., hyperlink) each optionally constitute an affordance. As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. 
In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad355inFIG.3or touch-sensitive surface451inFIG.4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system112inFIG.1Aor touch screen112inFIG.4A) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a “focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device). As used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). 
A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation. Attention is now directed towards embodiments of user interfaces ("UI") and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500. FIGS. 6A-6K illustrate exemplary user interfaces for providing live video, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIG. 7. Generally, implementation of various techniques for providing live video described below allows a user to stream (e.g., display) live video when editing and/or presenting a presentation. Live video streams provided in this manner can, optionally, include a live video stream captured by a camera and/or a live video stream provided by a non-camera electronic device (e.g., a mobile device, a smart phone, and/or a tablet) that mirrors at least a portion of a display of the electronic device. FIGS. 6A-6I illustrate examples in which an electronic device (e.g., a laptop computer) operates in an editing mode for a presentation. The editing mode may be initiated, for instance, using a presentation program. With reference to FIG. 6A, electronic device 600 is a multifunction device and has one or more components described above in relation to one or more of devices 100, 300, and 500. In some embodiments, electronic device 600 includes a display 602 and a camera 603. Camera 603 is, optionally, a front-facing camera integrated in the housing of device 600. In some embodiments, electronic device 600 is in communication with one or more external devices (e.g., cameras, mobile devices), including but not limited to camera 605 (e.g., a camera not integrated into the housing of device 600) and electronic device 606 (e.g., a smart phone).
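Returning briefly to the characteristic-intensity comparison described above, a minimal sketch is given below before the FIG. 6A walkthrough continues. It assumes a mean-based characteristic intensity, and the threshold values and operation names are hypothetical placeholders rather than the actual implementation.

```swift
import Foundation

enum Operation { case first, second, third }

// One possible characteristic intensity: the mean of the intensity samples collected
// during the predetermined period (a maximum, top-10-percentile value, etc. would also fit).
func characteristicIntensity(of samples: [Double]) -> Double {
    guard !samples.isEmpty else { return 0 }
    return samples.reduce(0, +) / Double(samples.count)
}

// Compares the characteristic intensity against a first and a second threshold and
// selects the first, second, or third operation accordingly, as described above.
func operation(forSamples samples: [Double],
               firstThreshold: Double,
               secondThreshold: Double) -> Operation {
    let intensity = characteristicIntensity(of: samples)
    if intensity <= firstThreshold { return .first }
    if intensity <= secondThreshold { return .second }
    return .third
}

// Example (hypothetical numbers): samples collected over 0.1 s with thresholds 0.3 and 0.6.
// operation(forSamples: [0.2, 0.45, 0.5], firstThreshold: 0.3, secondThreshold: 0.6) == .second
```

Other characteristic-intensity definitions named above (maximum, top-10-percentile value, and so on) would simply replace the mean in characteristicIntensity.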
FIG. 6A depicts an example in which a user operates device 600 in an editing mode of a presentation program to edit a first slide 604a of presentation 604. While operating in the editing mode, device 600 displays, on display 602, editing interface 610. Editing interface 610 includes menu affordances, such as menu affordance 612. The menu affordances may be used to access various menus and/or features of the presentation program. While displaying editing interface 610, device 600 detects selection of menu affordance 612. The selection is a user input 650a (e.g., mouse click) on menu affordance 612 and causes device 600 to display menu 614, as shown in FIG. 6A. While displaying menu 614, device 600 detects selection of live object affordance 616. The selection is a user input 651a (e.g., mouse click) on the live object affordance 616. As shown in FIG. 6B, in response to user input 651a, device 600 displays live object configuration interface 620, which is optionally overlaid on editing interface 610 in some embodiments. Generally, live object configuration interface 620 is used to perform a process for editing various characteristics (e.g., properties) of a live object (e.g., live object 634 of FIG. 6F) and/or inserting the live object onto a slide (e.g., slide 604a) of a presentation. In some embodiments, the live object includes a live video stream from (e.g., provided by) a live video source, such as a camera (e.g., camera 603, camera 605) or a non-camera electronic device (e.g., electronic device 606). In some embodiments, characteristics of a live object may be edited before and/or after insertion of the live object onto a slide. For example, live object configuration interface 620 includes visual features 622, add device affordance 624, source selector 626, add object affordance 628, and manage affordance 630. Visual features 622 include various features which may be used to specify the manner in which a live object is to be displayed (e.g., once inserted onto a slide). Visual features 622 may, for instance, be used to specify a size, shape, orientation, and/or location of a live object. In some examples, editing a live object includes selecting a live video source for the live object. In some instances, however, an accessory device may not yet be configured for use as a live video source. Accordingly, add device affordance 624 can be used to add an accessory device as a live video source. For example, with reference to FIG. 6B, while displaying live object configuration interface 620, device 600 detects selection of add device affordance 624. The selection is a user input 650b (e.g., mouse click) on the add device affordance 624. In response, device 600 displays add device menu 660 including a plurality of candidate devices 662 (e.g., candidate devices 662a and 662b) and name field 664. While device menu 660 is displayed, an accessory device to be used as a live video source is selected from candidate devices 662 and a name for the accessory device is received in name field 664. In response to selection of completion affordance 668, the selected candidate device 662 is added to a list of accessory devices for which a live video stream is available for live objects. A live video source is selected for a live object using source selector 626. When selected, source selector 626 causes device 600 to display a list of one or more accessory devices from which a live video stream is available for the live object (recall that this list can include any number of devices added using add device affordance 624, as described). In some embodiments, source selector 626 is a dropdown list.
Accordingly, as shown in FIG. 6C, in response to input 650c, device 600 expands source selector 626 to display accessory devices from which a live video stream is available. In the illustrated example, such accessory devices include an "Integrated Camera" (i.e., camera 603), a "Side Camera" (i.e., camera 605), and a "Smart Phone" (i.e., device 606). Once accessory devices have been displayed in this manner, an accessory device may be selected as a live video source for a live object. As shown in FIG. 6C, "Integrated Camera" (i.e., camera 603) is selected as the live video source for a live object via input 651c. In some embodiments, source selector 626 can be used to modify a live video source for a live object. For example, an additional selection of source selector 626 may cause device 600 to display, for an additional time, the list of accessory devices from which a live video stream is available. Thereafter, selection of an accessory device of the list of accessory devices will cause the selected accessory device to be assigned as the live video source for a live object. In some embodiments, source selector 626 can be used to modify a live video source of a live object prior to inserting the live object onto a slide. In some embodiments, source selector 626 can be used to modify a live video source of a live object after inserting the live object onto a slide. Once a source (e.g., an accessory device) for a live object has been selected, the live object can, optionally, be inserted onto slide 604a. With reference to FIG. 6D, for example, while displaying live object configuration interface 620, device 600 detects selection of add object affordance 628. The selection is a user input 650d (e.g., mouse click) on add object affordance 628 and causes device 600 to insert live object 634 onto slide 604a, as shown in FIG. 6F. Once inserted, live object 634 includes a live video stream provided by the live video source associated with live object 634, "Integrated Camera" (i.e., camera 603). In the illustrated example, "Integrated Camera" (i.e., camera 603) is the live video source selected for live object 634, and live object 634 includes a live video stream of user 636 positioned in front of camera 603 (recall that camera 603 may be a front-facing integrated camera). In some embodiments, live video sources for a live object may be selected in other manners. Returning to FIG. 6D, for example, while displaying editing interface 610, device 600 detects selection of manage affordance 630. The selection is a user input 651d (e.g., mouse click) on manage affordance 630 and causes device 600 to display device management menu 670, as illustrated in FIG. 6E. Device management menu 670 includes live video previews 671 (e.g., live video previews 671a-671c) for each accessory device that may be selected as a live video source. In some embodiments, each live video preview 671 includes a live video stream from a corresponding accessory device. By way of example, live video preview 671a includes a live video stream from "Integrated Camera" (i.e., camera 603), live video preview 671b includes a live video stream from "Side Camera" (i.e., device 605), and live video preview 671c includes a live video stream from "Smart Phone" (i.e., device 606). Displaying device management menu 670 in this manner allows a user (e.g., user 636) to simultaneously view live video streams for multiple devices and, optionally, select a live video source for a live object.
As illustrated in FIG. 6E, user 636 is sitting in front of device 600, and live video preview 671a is providing a live front view of user 636, live video preview 671b is providing a live side view of user 636, and live video preview 671c is providing a live video stream that mirrors the display of device 606. In some embodiments, in response to selection of live video preview 671a via input 650e, "Integrated Camera" (i.e., camera 603) is selected as a live video source for a live object. In some embodiments, selection of a live video preview 671 additionally or alternatively causes a live video object (e.g., live video object 634) to be inserted onto slide 604a, as shown in FIG. 6F. As described, characteristics (e.g., visual characteristics) of live objects may be edited (e.g., modified) in some embodiments. As an example, sizes and/or positions of live objects are visual characteristics that may be edited. As illustrated in FIGS. 6F-6G, in response to one or more inputs (e.g., a drag input, click and drag input 650f), a size of live object 634 has been reduced and a position of live object 634 has been adjusted such that live object 634 is positioned closer to the leftmost and uppermost edges of slide 604a. In some embodiments, a visual characteristic of a live object may be edited such that only a portion of a live video stream is displayed. For example, as shown in FIG. 6H, a portion of the live video stream included in live object 634 is not displayed. Rather, the live video stream includes only a portion (e.g., upper-right portion, non-centered portion) of the field of view of "Integrated Camera" (i.e., camera 603). In some embodiments, which portion(s) of a live video stream are included and/or not included in a live object can be selected by a user. In some embodiments, electronic device 600 receives user input to add one or more other visual elements, such as text and/or static images, onto the slide. For example, as shown in FIG. 6H, as a result of user input, live object 634 is concurrently displayed on slide 604a with other visual elements (e.g., static objects, live objects) in the presentation editing mode. As shown in FIG. 6H, for another example, frame 638 has been inserted onto slide 604a and at least partially overlaid on live object 634 (e.g., frame 638 has a shallower depth than live object 634 on slide 604a) such that the portion of live object 634 intersecting with frame 638 is not displayed in slide 604a. Additionally, text box 640 has been inserted onto slide 604a and at least partially overlaid on live object 634 such that the portion of live object 634 intersecting with the text of text box 640 is not displayed in slide 604a. The portions of live object 634 not intersecting with frame 638 and text box 640 continue to display a live video stream, as described. In the example of FIG. 6H, text box 640 is in a layer above that of live object 634 and text box 640 is partially overlaid on live object 634. In some examples, a slide of presentation 604 may include multiple live video streams (e.g., by virtue of multiple live objects being inserted onto the slide). For example, while displaying slide 604a, device 600 detects one or more user inputs (e.g., click inputs) requesting a transition from slide 604a of presentation 604 to slide 604b of presentation 604. In response, device 600 replaces display of slide 604a in editing interface 610 with slide 604b, as shown in FIG. 6I. As shown, slide 604b has been updated to include live objects 642, 644 and text box 646.
In the example illustrated inFIG.6I, the live video source of live object642is "Side Camera", which is camera605having a field of view oriented toward a side of user636. Accordingly, live object642includes a live video stream of user636from a side perspective. The live video source of live object644is "Smart Phone" (i.e., device606). Accordingly, live object644includes a live video stream that is a mirror of a display of device606. As the content of the display of device606is updated, the same updates are displayed as part of live object644. Because live object642and live object644have different video sources, display of slide604bwill include display of multiple live video streams, each of which, optionally, is displayed according to user-specified display characteristics. FIGS.6J-6Killustrate examples in which user636operates electronic device600in a presentation mode for presenting presentation604. The presentation mode may be initiated, for instance, using the presentation program. InFIG.6J, device600displays slide604a. As shown, slide604aincludes live object634, frame638, and text box640, and is displayed in a manner corresponding to slide604a, as shown inFIG.6H. While slide604ais displayed, live object634includes a live video stream provided by camera602(recall that camera602was selected as the live video source for object634while device600operated in the editing mode). It will be appreciated that although slide604amay be unchanged when transitioning from the editing mode to the presentation mode, because live object634provides a live video stream during each of the modes, content of the live video stream may vary. For example, as shown inFIGS.6E-6H, while displaying slide604ain the editing mode, user636is shown in the live video stream of live object634as wearing casual attire (e.g., t-shirt) and glasses. In contrast, as shown inFIG.6J, while displaying slide604ain the presentation mode, user636, now wearing formal business attire for the presentation, is shown in the live video stream of live object634as wearing the formal business attire (e.g., tie and coat) and not wearing glasses. While displaying slide604a, device600detects one or more user inputs (e.g., click inputs) requesting a transition from slide604aof presentation604to slide604bof presentation604. In response, device600replaces display of slide604awith slide604b, as shown inFIG.6K. As shown, slide604bincludes live object642, live object644, and text box646, and is displayed in a manner corresponding to slide604bas shown inFIG.6I. While displaying slide604b, live object642includes a live video stream provided by camera605(recall that camera605was selected as the live video source for object642while device600operated in the editing mode). Concurrently, live object644includes a live video stream provided by device606(recall that device606was selected as the live video source for object644while device600operated in the editing mode). By concurrently displaying live objects644and642, device600allows viewers of the presentation to view the presenter (user636) and the contents of the display of device606live. Thus, user636can address the audience and demonstrate techniques on device606live, using the presentation. It will be appreciated that although slide604bmay be unchanged when transitioning from the editing mode to the presentation mode, because live objects642,644provide live video streams during each of the modes, content of the live video streams may vary.
For example, as shown inFIG.6I, while displaying slide604bin the editing mode, user636is shown in the live video stream of live object642as wearing casual attire (e.g., t-shirt) and glasses. In contrast, as shown inFIG.6K, while displaying slide604bin the presentation mode, user636has changed attire into formal business attire and, therefore, user636is shown in the live video stream of live object642as wearing formal business attire (e.g., tie and coat) and not wearing glasses. As another example, as shown inFIG.6I, while displaying slide604bin the editing mode, the display of device606is shown in the live video stream of live object644as displaying a home screen interface. In contrast, as shown inFIG.6K, while displaying slide604bin the presentation mode, the display of device606is shown in the live video stream of live object644as displaying an inbox of an email application. In some embodiments, live video streams included in one or more live objects may be recorded (e.g., in response to user input) during operation in the editing mode and/or the presentation mode. Thereafter, recordings of live video streams may be displayed in live objects during operation in the presentation mode (or subsequent presentations in the presentation mode). In some embodiments, displaying recordings in this manner replaces display of live video streams during operation of the presentation mode. FIG.7is a flow diagram illustrating a method for providing live video using a computer system in accordance with some embodiments. Method700is performed at a computer system (e.g.,100,300,500) that is, optionally, in communication with a display generation component and one or more input devices. In some embodiments, the computer system is in communication with a first camera and an external non-camera device, such as a smart phone or tablet. Some operations in method700are, optionally, combined, the orders of some operations are, optionally, changed, and some operations are, optionally, omitted. As described below, method700provides an intuitive way for providing live video. The method reduces the cognitive burden on a user and reduces computational requirements for providing live video, thereby creating a more efficient human-machine interface and system. For battery-operated computing devices, enabling a user to provide live video faster and more efficiently conserves power and increases the time between battery charges. While in an editing mode for a presentation (e.g.,604) (e.g., a slide presentation), the computer system receives (705), via one or more input devices (e.g., a computer pointer, a keyboard, a touch-sensitive surface), a request (e.g.,650d,650e) to insert a live object (e.g.,634,642,644) (e.g., an object that corresponds to a device (e.g., a camera, a phone, and/or a tablet) configured to provide a live video stream) onto a first slide (e.g.,604b) (e.g., at a first location and/or at a first size on the slide) of the presentation (e.g., onto slide2of6total slides). In some embodiments, the request includes associating (e.g., via a user selection) the live object with a source (e.g.,603,605,606) (e.g., a first camera that is in (wired or wireless) communication with the computer system, a second camera that is in (wired or wireless) communication with the computer system, or a non-camera external device (such as a smart phone or tablet) that is in (wired or wireless) communication with the computer system) for live video. 
In some embodiments, in response to receiving the request to insert the live object onto the first slide of the presentation, the computer system inserts the live object onto the first slide of the presentation. While in a presentation mode for the presentation (e.g.,604) (e.g., in full screen and/or not including editing options for the presentation), the computer system displays (710), via a display generation component, the first slide (e.g.,604a,604b) including concurrent display of the live object (e.g.,634,642,644) (e.g., at the first location and/or at the first size on the slide) and one or more visual elements (e.g.,638,640) (e.g., that are not live video streams, that are static objects, a slide title, a static image, and/or a pre-recorded video). In some embodiments, in accordance with (715) a determination that the live object is associated with a first camera (e.g., that is integrated into a housing of the computer system, that is not integrated into a housing of the computer system) (and, optionally, not associated with the external non-camera device), the live object (e.g.,634) includes a live video stream from the first camera (e.g.,603,605) (e.g., without including the live video stream mirroring the display of the non-camera external device). In some embodiments, in accordance with (720) a determination that the live object is associated with a non-camera external device (e.g.,606) (e.g., a smart phone or tablet) (and, optionally, not associated with the first camera), the live object (e.g.,634) includes a live video stream that mirrors at least a portion of (e.g., all of, less than all of) a display of the non-camera external device (e.g., without including the live video stream from the first camera). Displaying a live video stream of a video source that is from a camera or non-camera external device enables the presentation to include live video embedded into the presentation, thereby providing viewers with improved visual feedback of the field of view of the camera or the contents of the display of the non-camera external device. In some embodiments, displaying, via a display generation component, the first slide including concurrent display of the live object and one or more visual elements includes, in accordance with a determination that the live object is associated with a second camera (e.g.,605) (e.g., an external camera, different from the first camera, that is not integrated into the housing of the computer system and/or is connected wirelessly or by wire to the computer system) (and, optionally, not associated with the first camera or the non-camera external device) different from the first camera, the live object includes a live video stream (e.g., of at least a portion of the field of view of) from the second camera (e.g., without including the live video stream from the first camera or the live video stream mirroring the display of the non-camera external device). Displaying a live video stream of a video source that is from a second camera enables the presentation to include the live video embedded into the presentation, thereby providing viewers with improved visual feedback of the field of view of the second camera. 
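The determinations above (first camera, second camera, or non-camera external device) reduce to a branch on the kind of source associated with the live object. The following is a hedged sketch of that branch only; the Source type, its fields, and the returned descriptions are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        is_camera: bool   # False for a non-camera external device

    def live_stream_for(source: Source) -> str:
        # Camera source: the live object shows the camera's field of view.
        # Non-camera external device: the live object instead mirrors
        # (at least a portion of) that device's display.
        if source.is_camera:
            return f"live video stream from camera '{source.name}'"
        return f"live video stream mirroring the display of '{source.name}'"

    print(live_stream_for(Source("Integrated Camera", is_camera=True)))
    print(live_stream_for(Source("Smart Phone", is_camera=False)))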
In some embodiments, the computer system displays, via a display generation component, a second live object (e.g., at the first location and/or at the first size on the slide) that is associated with a second camera (e.g.,605) (e.g., an external camera, different from the first camera, that is not integrated into the housing of the computer system and/or is connected wirelessly or by wire to the computer system) (and, optionally, not associated with the first camera or the non-camera external device) different from the first camera (e.g.,603). In some embodiments, the second live object includes a live video stream (e.g., of at least a portion of the field of view of) from the second camera (e.g., without including the live video stream from the first camera or the live video stream mirroring the display of the non-camera external device). In some embodiments, the second live object (e.g., an object placed at a second location and/or second size in a slide of the presentation) is displayed, via the display generation component and concurrently with the live object and the one or more visual elements (e.g.,638,640,646), as part of the first slide of the presentation. Displaying multiple live video streams on a single slide provides viewers with concurrent visual feedback of the fields of view of the cameras, thereby providing improved visual feedback. Additionally, displaying multiple live video streams as objects included in a single slide allows the computer system to display both streams without requiring user input to switch between different slides, each with their own stream, thereby reducing the number of required user inputs. In some embodiments, the second live object (e.g.,642,644) (e.g., an object placed at a second location and/or second size in a slide of the presentation) is displayed, via the display generation component, as part of a second slide (e.g.,604b) of the presentation (e.g.,604) that is different from the first slide (e.g.,604a). Displaying different live video streams corresponding to different object on different slides provides viewers with timely visual feedback of the fields of view of the cameras, thereby providing improved visual feedback. For example, the video streams are displayed concurrently with other relevant text and/or images on the respective slide. In some embodiments, while in the editing mode for the presentation, the computer system receives, via one or more input devices (e.g., a computer pointer, a keyboard, a touch-sensitive surface), a second request to insert a third live object (e.g., an object that corresponds to a device (e.g., a camera, a phone, and/or a tablet) configured to provide a live video stream) onto a second slide (e.g.,604b) (e.g., at a second location and/or at a second size on the slide) of the presentation (e.g., onto slide3of6total slides). In some embodiments, the second request includes associating (e.g., via a user selection) the live object with a source (e.g.,605,606) (e.g., a first camera that is in (wired or wireless) communication with the computer system, a second camera that is in (wired or wireless) communication with the computer system, or a non-camera external device (such as a smart phone or tablet) that is in (wired or wireless) communication with the computer system) for live video. 
In some embodiments, while in the presentation mode for the presentation (e.g., in full screen and/or not including editing options for the presentation), the computer system displays, via the display generation component, the second slide (e.g.,604b) including the third live object (e.g.,642,644). In some embodiments, in accordance with a determination that the third live object is associated with the first camera (e.g.,603) (e.g., that is integrated into a housing of the computer system, that is not integrated into a housing of the computer system) (and, optionally, not associated with the external non-camera device), the third live object includes a live video stream from the first camera (e.g., without including the live video stream mirroring the display of the non-camera external device). In some embodiments, in accordance with a determination that the third live object is associated with a second camera (e.g., an external camera, different from the first camera, that is not integrated into the housing of the computer system and/or is connected wirelessly or by wire to the computer system) (and, optionally, not associated with the first camera or the non-camera external device) different from the first camera, the third live object includes a live video stream (e.g., of at least a portion of the field of view of) from the second camera (e.g., without including the live video stream from the first camera or the live video stream mirroring the display of the non-camera external device). In some embodiments, in accordance with a determination that the third live object is associated with a non-camera external device (e.g., a smart phone or tablet) (and, optionally, not associated with the first camera), the third live object (e.g.,644) includes a live video stream that mirrors at least a portion of (e.g., all of, less than all of) a display of the non-camera external device (e.g.,606) (e.g., without including the live video stream from the first camera or the second camera). In some embodiments, while displaying the first slide (e.g.,604b) during the presentation mode, the computer system receives an input from the user requesting to transition to the second slide and, in response, ceases to display the first slide (and, accordingly, the objects of the first slide) and displays the second slide (e.g.,604b) (and, accordingly, the objects of the second slide). Displaying a live video stream of a video source that is from a camera or non-camera external device enables the second slide of the presentation to include live video embedded into the presentation, thereby providing viewers with improved visual feedback of the field of view of the camera or the contents of the display of the non-camera external device. In some embodiments, receiving the request to insert the live object onto the first slide of the presentation includes receiving a selection of a device (e.g.,651c,650e) (e.g., a first camera, a second camera, a smartphone) from among a plurality of displayed device identifiers (e.g., a list and/or a drop-down list) to associate the live object with a source for live video from the device corresponding to the selected device identifier. In some embodiments, prior to receiving the request to insert the live object onto the first slide of the presentation, the computer system receives user input (e.g.,650b) to add a device identifier (e.g., a first camera, a second camera, a smartphone) to the plurality of device identifiers for selection for associating with a live object. 
In some embodiments, the computer system receives one or more user inputs that selects a device configured to provide a live video stream. The computer system optionally also receives a user-specified name (e.g., name inserted in name field664) (e.g., “side camera”) for the selected device. Based on receiving the selection of the device and the user-specified name for the device, the computer system adds the selected device to the list of devices available to be selected as a source for live video for live objects. In some embodiments, while in an editing mode for the presentation (e.g., the slide presentation), the computer system receives, via one or more input devices (e.g., a computer pointer, a keyboard, a touch-sensitive surface), a request (e.g.,650f) to specify one or more visual characteristics (e.g., location on the slide, animation (such as movement, rotation) on the slide, zoom size, filter, cropping, color, border (such as a frame), gradient (such as one side of the object being (e.g., partially, fully) transparent and the opposite side of the object not being transparent), and/or shading) of the live object. In some embodiments, displaying the live object as part of the first slide includes displaying the live object with the one or more visual characteristics. In some embodiments, the computer system receives selection of the one or more visual characteristics before adding the live object to the first slide and, as a result, the live object is added to the first slide (e.g.,604a) with the selected visual characteristics. In some embodiments, the live object is already part of the first slide when the computer system receives the request to specify the one or more visual characteristics and, in response, updates (changes) the specified one or more visual characteristics of the object as requested. Enabling the computer system to receive user inputs to specify visual characteristics of the live object enables the computer system to display the live object with the specified visual characteristics, thereby providing viewers with improved visual feedback of the field of view of the camera or the contents of the display of the non-camera external device. In some embodiments, a first visual characteristic of the one or more visual characteristics specifies a portion of (e.g., less than all of) a field of view of (e.g., not a center portion of) the source to include for the live object (and/or specifies a portion of the field of view of the source to not include for the live object) (e.g., portion of live object634shown inFIG.6H). Reducing the portion of the field of view of a source being displayed and/or processed reduces the processing power required, thereby saving battery power, and also helps to maintain the required portion of the video at a larger size, thereby providing the user with improved visual feedback. In some embodiments, while in the presentation mode for the presentation (e.g., in full screen and/or not including editing options for the presentation), the live object is displayed with the one or more visual characteristics. In some embodiments, the computer system receives input from the user specifying the portion of the field of view of the camera to use for the live feed during the presentation editing mode and re-uses that same portion of the field of view of the camera for the live feed during the presentation mode of the presentation. 
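One way to picture the visual characteristics discussed above, including the characteristic that selects only a portion of the source's field of view, is as a small record attached to the live object and applied identically in the editing mode and the presentation mode. The sketch below is illustrative only; the field names and the crop convention are assumptions, not part of the disclosure.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class VisualCharacteristics:
        position: Tuple[float, float] = (0.0, 0.0)       # location on the slide
        size: Tuple[float, float] = (1.0, 1.0)           # displayed width and height
        crop: Tuple[float, float, float, float] = (0.0, 0.0, 1.0, 1.0)
        # crop = (x, y, width, height) as fractions of the source's field of view;
        # anything outside this rectangle is neither displayed nor processed, e.g.
        # (0.5, 0.0, 0.5, 0.5) keeps only an upper-right portion.

    def visible_region(chars: VisualCharacteristics) -> Tuple[float, float, float, float]:
        # The same crop rectangle is reused in the presentation mode, so the portion
        # shown while editing matches the portion shown while presenting.
        return chars.crop

    editing_time = VisualCharacteristics(position=(0.05, 0.05), size=(0.4, 0.3),
                                         crop=(0.5, 0.0, 0.5, 0.5))
    assert visible_region(editing_time) == (0.5, 0.0, 0.5, 0.5)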
Reducing the portion of the field of view of a source being displayed and/or processed reduces the processing power required, thereby saving battery power, and also helps to maintain the required portion of the video at a larger size, thereby providing the user with improved visual feedback. In some embodiments, subsequent to receiving the request to insert the live object onto the first slide of the presentation with the source (e.g., a live camera stream) associated with the live object for the live video, the computer system receives a request to associate a second source (e.g.,605,606) (e.g., a live video stream that mirrors a non-camera external device, different from the current source) with the live object for the live video. In some embodiments, in response to receiving the request to associate the second source (e.g., a live video stream that mirrors a non-camera external device) with the live object for the live video, the computer system ceases to display a live video from the first source as part of the live object (e.g., in the editing mode) and displays a live video from the second source as part of the live object. In some embodiments, prior to receiving the request to associate the second source with the live object for the live video, the live object is displayed with a plurality of user-specified visual characteristics (e.g., location on the slide, animation (such as movement, rotation) on the slide, zoom size, filter, cropping, color, border (such as a frame), gradient (such as one side of the object being (e.g., partially, fully) transparent and the opposite side of the object not being transparent), and/or shading). In some embodiments, the live video from the second source is displayed as part of the live object with the plurality of user-specified visual characteristics (e.g., location on the slide, animation (such as movement, rotation) on the slide, zoom size, filter, cropping, color, border (such as a frame), gradient (such as one side of the object being (e.g., partially, fully) transparent and the opposite side of the object not being transparent), and/or shading). In some embodiments, the computer system inserts a live object into a presentation and receives user inputs to specify the visual characteristics (e.g., location on the slide, animation (such as movement, rotation) on the slide, zoom size, filter, cropping, color, border (such as a frame), gradient (such as one side of the object being (e.g., partially, fully) transparent and the opposite side of the object not being transparent), and/or shading) of the object and the source (e.g., a first camera, a second camera, a display of a non-camera device) of the live video for the object. When the computer system subsequently receives user input to change the source of the live video for the object, the computer system updates the object to display the updated source for the live video but continues to display the live object with the visual characteristics previously specified by the user. Maintaining the visual characteristics of the live object that were previously specified by the user when changing the source of the live video for the live object enables the user to update the object to include a desired video source without requiring the user to provide multiple inputs to again reproduce the visual characteristics previously provided, thereby reducing the number of user inputs required.
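The behavior just described, changing the video source of a live object while keeping its user-specified visual characteristics, can be sketched by keeping the source and the characteristics as independent attributes so that replacing one leaves the other untouched. The types below are minimal stand-ins invented for illustration and are not taken from the disclosure.

    from dataclasses import dataclass, field
    from typing import Tuple

    @dataclass
    class VisualCharacteristics:
        position: Tuple[float, float] = (0.0, 0.0)
        size: Tuple[float, float] = (1.0, 1.0)
        crop: Tuple[float, float, float, float] = (0.0, 0.0, 1.0, 1.0)

    @dataclass
    class LiveObject:
        source: str
        characteristics: VisualCharacteristics = field(default_factory=VisualCharacteristics)

        def change_source(self, new_source: str) -> None:
            # Only the source is replaced; position, size, crop and any other
            # user-specified characteristics are left untouched, so the user
            # does not have to specify them again.
            self.source = new_source

    obj = LiveObject("Integrated Camera",
                     VisualCharacteristics(position=(0.05, 0.05), size=(0.4, 0.3)))
    obj.change_source("Smart Phone")
    assert obj.characteristics.position == (0.05, 0.05)   # layout preserved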
In some embodiments, (e.g., while in the presentation mode for the presentation), while displaying, via the display generation component, the first slide including concurrent display of the live object and one or more visual elements, the computer system receives user input requesting to transition to a second slide. In some embodiments, in response to receiving the request to transition to the second slide, the computer system ceases to display the first slide and displays the second slide. In some embodiments, the second slide includes one or more live objects and/or one or more visual elements (e.g., slide #, date, title of slide, and/or content). In some embodiments, the computer system receives user input requesting to record presenting of the presentation. In some embodiments, in response to receiving user input requesting to record presenting of the presentation, the computer system records a first video feed associated with a first live object while the first live object is displayed as part of a first slide of the presentation (e.g., while in presentation mode, and/or without recording a video of other visual elements of the presentation, and/or without recording video feeds of live objects that are not displayed). In some embodiments, in response to receiving user input requesting to record presenting of the presentation, the computer system records a second video feed associated with a second live object while the second live object is displayed as part of a second slide of the presentation (e.g., while in presentation mode, and/or without recording a video of other visual elements of the presentation, and/or without recording video feeds of live objects that are not displayed). In some embodiments, the computer system records the video from sources associated with live objects that are displayed as part of the presentation. For example, if a first slide has a first object associated with a first camera, the computer system records the video from the first camera while the first object continues to be displayed (e.g., while the first slide with the first object continues to be displayed). When the computer system replaces the first slide with a second slide, the computer system stops recording video from the first camera if the second slide does not include a live object associated with the first camera and, instead, records videos for sources from live objects on the second slide. In this way, the recorded videos can be inserted into the presentation for subsequent playback. Recording the live video feeds when live objects associated with the live video feeds are displayed allows the computer system to record a presenter presenting slides and/or a device's display during the presentation for playback at a future live presentation (e.g., the streams are recorded, but the presenter can still manually navigate between the slides during the future live presentation), thereby reducing the number of times the presenter should provide audio input to the computer system for presenting the slides. In some embodiments, while in the presentation mode, the computer system displays the first slide and using the recorded first video as the source for the first live object (and, optionally, not transitioning to the second slide (even after the recorded first video ends) until user input is received requesting to transition to the second slide).
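The recording behavior described above, where the feed of a live object is recorded only while the slide containing that object is displayed, can be sketched as starting and stopping recorders on slide transitions. The class below is a hypothetical illustration; actual capture is reduced to list bookkeeping.

    from typing import Dict, List

    class SlideRecorder:
        """Tracks which live-object feeds are being recorded as slides change."""

        def __init__(self, live_objects_per_slide: Dict[str, List[str]]):
            self.live_objects_per_slide = live_objects_per_slide
            self.recording: List[str] = []

        def show_slide(self, slide_id: str) -> None:
            wanted = self.live_objects_per_slide.get(slide_id, [])
            # Stop recording feeds whose live objects are no longer displayed...
            for feed in list(self.recording):
                if feed not in wanted:
                    self.recording.remove(feed)      # placeholder for stopping capture
            # ...and start recording feeds of live objects on the new slide.
            for feed in wanted:
                if feed not in self.recording:
                    self.recording.append(feed)      # placeholder for starting capture

    recorder = SlideRecorder({"slide 1": ["camera feed"],
                              "slide 2": ["side camera feed", "device mirror feed"]})
    recorder.show_slide("slide 1")   # records only the first camera feed
    recorder.show_slide("slide 2")   # stops it and records the two slide-2 feeds
    print(recorder.recording)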
In some embodiments, while in the presentation mode, the computer system displays the second slide and using the recorded second video as the source for the second live object. Enabling the computer system to transition between slides based on user input when the slides include previously recording video streams allows the computer system to keep pace with the presenter, such as by enabling a slide to remain displayed for an extended duration (after the playback of videos on the slide end) so that the presenter can provide additional description during the future presentation before the presenter provides input to transition to a subsequent slide. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated. Although the disclosure and examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims. As described above, one aspect of the present technology is the gathering and use of data available from various sources (e.g., sources for live video) to provide live video, for instance, in a presentation. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, twitter IDs, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information. The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to identify sources from which live video is provided and/or display a live video feed of a user during a presentation. The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. 
Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country. Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of live video, personal identification information may be selectively omitted from live video provided by one or more sources as the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection or display of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app. Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods. Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
11943560
DESCRIPTION OF THE EMBODIMENTS
In the following, the present embodiments are explained with reference to the drawings. The following embodiments are not necessarily intended to limit the present invention. Further, all combinations of features explained in the present embodiments are not necessarily indispensable to the solution of the present invention.
First Embodiment
In the present embodiment, an image processing system that generates a virtual viewpoint image representing a virtual advertisement, which is one kind of additional information that does not exist in an addition-target virtual viewpoint image, is explained by using diagrams.FIG.1AandFIG.1Bare each a diagram for explaining the image processing system according to the present embodiment.FIG.1Ashows a configuration example of the image processing system andFIG.1Bshows a function block of the image processing system. The image processing system of the present embodiment has an information processing apparatus110, an image generation apparatus120that is connected to the information processing apparatus110, and a user terminal140that is connected to the image generation apparatus120. The connection between each apparatus may be wired or wireless. The information processing apparatus110is an apparatus that provides a parameter for generating a virtual viewpoint image in which additional information is displayed. Specifically, the information processing apparatus110is an apparatus that performs instructions to register virtual advertisement data, setting of information relating to a set display condition of virtual advertisement data, a time code, and virtual viewpoint information to a parameter, and transmission of the information to the image generation apparatus120. The information processing apparatus110has an operation reception unit111, a virtual advertisement setting unit112, a registration instruction unit113, a parameter setting unit114, and a parameter transmission unit118. The operation reception unit111receives an input of a user operation (also referred to as user input). Information that is generated in accordance with the received input operation is sent to each function unit in accordance with the operation contents. The virtual advertisement setting unit112sets virtual advertisement data that is combined with a virtual viewpoint video (that is added onto a virtual viewpoint image) in accordance with the information that is sent from the operation reception unit111. The virtual advertisement data is, for example, image data of an advertisement signboard that does not exist in an addition-target virtual viewpoint image (basic 3D data). One piece or a plurality of pieces of virtual advertisement data may be set, and one kind or a plurality of kinds of virtual advertisement data may be set. As the virtual advertisement data that is set, for example, there is data of an image (seeFIG.3B) that is generated based on a 3D model of a virtual advertisement, whose details will be described later. The virtual advertisement data that is set in the virtual advertisement setting unit112is sent to the registration instruction unit113and a display condition setting unit115of the parameter setting unit114. The registration instruction unit113gives instructions to register the virtual advertisement data that is set by the virtual advertisement setting unit112in the image generation apparatus120.
Specifically, the registration instruction unit113sends, together with the set virtual advertisement data, instructions information on the instructions to register the virtual advertisement data to a registration reception unit121of the image generation apparatus120and instructs the registration reception unit121to register the virtual advertisement data that is sent in the image generation apparatus120. The parameter setting unit114has the display condition setting unit115, a time code setting unit116, and a virtual viewpoint setting unit117. The parameter setting unit114sets various kinds of information to a virtual viewpoint image generation parameter (in the following, also referred to as parameter) in accordance with the input operation received by the operation reception unit111for each frame of the virtual viewpoint image that is generated by the image generation apparatus120. That is, the parameter is information for each frame of the virtual viewpoint image that is generated by the image generation apparatus120and information to which various kinds of information are set. The parameter to which various kinds of information are set is sent to the parameter transmission unit118. The display condition setting unit115selects setting-target virtual advertisement data from among the virtual advertisement data that is set by the virtual advertisement setting unit112in accordance with the information received from the operation reception unit111. Then, the display condition setting unit115sets identification information uniquely indicating the display condition of the selected virtual advertisement data and the selected virtual advertisement to the parameter. The display condition includes the display start time, the display period of time and the like of the selected virtual advertisement. The time code setting unit116sets a time code to generate a virtual viewpoint image to the parameter in accordance with the information received from the operation reception unit111. The time code that is set corresponds to the time code associated with basic 3D data corresponding to the virtual viewpoint image, whose details will be described later. The virtual viewpoint setting unit117sets virtual viewpoint information including the position of a virtual viewpoint, a view direction from the virtual viewpoint, and an angle of view of the virtual viewpoint as a parameter in accordance with the information received from the operation reception unit111. In the following explanation, for convenience, explanation is given by replacing the virtual viewpoint with a virtual camera. That is, the position of the virtual viewpoint, the view direction from the virtual viewpoint, and the angle of view of the virtual viewpoint respectively correspond to the position of the virtual camera, the orientation of the virtual camera, and the angle of view of the virtual camera. The virtual viewpoint information may not include all of the position of the virtual viewpoint, the view direction from the virtual viewpoint, and the angle of view of the virtual viewpoint and the virtual viewpoint information may be configured to include at least one of them. The parameter transmission unit118outputs the parameter to which the information on the condition and the like is set in each of the display condition setting unit115, the time code setting unit116, and the virtual viewpoint setting unit117to the outside of the information processing apparatus110. 
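As a rough illustration of the per-frame virtual viewpoint image generation parameter assembled above (display condition and identification information of the selected virtual advertisement, a time code, and virtual viewpoint information), consider the following sketch. The field names and example values are assumptions made for illustration, not part of the disclosure.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class DisplayCondition:
        advertisement_id: str        # identification information of the registered ad data
        display_start_time: float    # e.g. seconds from the start of the sequence
        display_duration: float

    @dataclass
    class VirtualViewpoint:
        position: Tuple[float, float, float]
        view_direction: Tuple[float, float, float]
        angle_of_view: float

    @dataclass
    class GenerationParameter:
        time_code: str                                          # matches the basic 3D data
        viewpoint: VirtualViewpoint
        display_condition: Optional[DisplayCondition] = None    # None: no ad for this frame

    parameter = GenerationParameter(
        time_code="12:34:56:07",
        viewpoint=VirtualViewpoint((0.0, -40.0, 12.0), (0.0, 1.0, -0.2), 45.0),
        display_condition=DisplayCondition("halfway_line_board", 0.0, 10.0),
    )

A time-ordered set of such parameters corresponds to the virtual camera path mentioned below.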
Specifically, the parameter transmission unit118transmits the parameter to which various kinds of information are set to a parameter reception unit123of the image generation apparatus120. It may also be possible for the parameter transmission unit118to transmit the parameters one by one or a plurality of the parameters at a time. A set of a plurality of parameters continuous from a certain time to another time is also referred to as a virtual camera path because the locus of the virtual viewpoint forms a path. The image generation apparatus120is an apparatus that generates a virtual viewpoint image in which a virtual advertisement is displayed for each frame in accordance with the parameter received from the information processing apparatus110. That is, it can be said that the image generation apparatus120is an apparatus that generates a virtual viewpoint image in which a desired virtual advertisement is added to a desired position. The image generation apparatus120has the registration reception unit121, a virtual advertisement storage unit122, the parameter reception unit123, a parameter extraction unit124, a virtual advertisement reading unit128, a basic data storage unit129, a basic data reading unit130, a combination unit131, and a rendering unit132. The registration reception unit121receives the instructions information and the virtual advertisement data for which registration instructions are given from the registration instruction unit113of the information processing apparatus110. The registration reception unit121sends the received virtual advertisement data to the virtual advertisement storage unit122in order to register the virtual advertisement data in accordance with the instructions information. The virtual advertisement storage unit122stores the virtual advertisement data that is sent from the registration reception unit121. The parameter reception unit123acquires the parameter to which the display condition of additional information that is displayed in the virtual viewpoint image, the identification information uniquely indicating the additional information, the virtual viewpoint information on the virtual viewpoint image that is generated, and the time code are set. Specifically, the parameter reception unit123receives the parameter that is transmitted from the parameter transmission unit118of the information processing apparatus110. The parameter extraction unit124has a display condition extraction unit125, a time code extraction unit126, and a virtual viewpoint extraction unit127. The parameter extraction unit124extracts various kinds of information that are set in each function unit from the parameter received by the parameter reception unit123. The display condition extraction unit125extracts, from the parameter received by the parameter reception unit123, the identification information on the virtual advertisement data that is set in the display condition setting unit115and the information relating to the display condition of the virtual advertisement data, which are included in the parameter. The identification information on the virtual advertisement data and the information relating to the display condition of the virtual advertisement data are sent to the virtual advertisement reading unit128. The virtual advertisement reading unit128reads the virtual advertisement data corresponding to the identification information extracted from the parameter by the display condition extraction unit125from the virtual advertisement storage unit122. 
The read virtual advertisement data is sent to the combination unit131. The basic data storage unit129stores 3D data of a basic model (in the following, referred to as basic 3D data). The basic 3D data is data obtained by three-dimensionally modeling an object that exists actually. A virtual viewpoint image is generated by using the basic 3D data, and therefore, it can also be said that the basic 3D data is shape data indicating the shape of an object included in the virtual viewpoint image that is generated. The time code extraction unit126extracts the time code that is included in the parameter from the parameter received by the parameter reception unit123. The extracted time code is data that is set by the time code setting unit116and matches with the time code of the basic 3D data. The extracted time code is sent to the basic data reading unit130. The parameter from which the time code is extracted is the same as the parameter that is the extraction target of the display condition extraction unit125. The basic data reading unit130reads the basic 3D data relevant (corresponding) to the time code extracted by the time code extraction unit126from the basic data storage unit129. The read basic 3D data is sent to the combination unit131. The combination unit131combines the basic 3D data read by the basic data reading unit130and the virtual advertisement data read by the virtual advertisement reading unit128and generates one piece of combined 3D data. The generated one piece of combined 3D data is sent to the rendering unit132. The virtual viewpoint extraction unit127extracts, from the parameter received by the parameter reception unit123, the virtual viewpoint information (virtual camera position information) indicating the position of the virtual camera, the orientation of the virtual camera, and the angle of view corresponding to the virtual camera, which are included in the parameter. The extracted virtual viewpoint information is sent to the rendering unit132. The parameter from which the virtual viewpoint information is extracted is the same as the parameter that is the extraction target of the display condition extraction unit125. The rendering unit132generates a virtual viewpoint image in which a virtual advertisement is displayed (virtual advertisement-added virtual viewpoint image) by performing rendering processing for the combined 3D data that is generated in the combination unit131in accordance with the virtual viewpoint information extracted by the virtual viewpoint extraction unit127. The virtual viewpoint information includes the position and orientation of the virtual camera and the angle of view corresponding to the virtual camera. The generated virtual advertisement-added virtual viewpoint image is sent to the user terminal140and the like outside the image generation apparatus120. The user terminal140is an apparatus that displays the virtual advertisement-added virtual viewpoint image that is sent from the image generation apparatus120. The user terminal140may be an information processing apparatus, such as a personal computer including a liquid crystal display, a tablet terminal including a touch panel, and a smartphone. Although the image processing system having the information processing apparatus110, the image generation apparatus120, and the user terminal140is explained as an example, the image processing system is not limited to this. 
It may also be possible for the information processing apparatus110to include the function of the image generation apparatus120, the function of the user terminal140, or the functions of both the image generation apparatus120and the user terminal140. Further, the image generation apparatus120may include the function of the user terminal140. <Hardware Configuration of Each Apparatus> The hardware configuration of the information processing apparatus110, the image generation apparatus120, and the user terminal140is explained by using diagrams.FIG.2is a diagram showing a hardware configuration example of the information processing apparatus110, the image generation apparatus120, and the user terminal140. The information processing apparatus110, the image generation apparatus120, and the user terminal140have a common hardware configuration and have a CPU211, a ROM212, a RAM213, an auxiliary storage device214, a display unit215, an operation unit216, a communication unit217, and a bus218. The CPU211implements each function of the information processing apparatus110and the image generation apparatus120shown inFIG.1Bby controlling the entire image processing system using computer programs and data stored in the ROM212and the RAM213. It may also be possible for the information processing apparatus110and the image generation apparatus120to have one piece or a plurality of pieces of dedicated hardware different from the CPU211and for the dedicated hardware to perform at least part of the processing of the CPU211. As examples of the dedicated hardware, there are an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and the like. The ROM212stores programs and the like that do not need to be changed. The RAM213temporarily stores programs and data supplied from the auxiliary storage device214, data supplied from the outside via the communication unit217, and the like. The auxiliary storage device214includes, for example, a hard disk drive and the like and stores various kinds of data, such as image data and voice data. The display unit215includes, for example, a liquid crystal display, an LED and the like and displays a GUI (Graphical User Interface) for a user to operate the information processing apparatus110, and the like. The operation unit216includes, for example, a keyboard, a mouse, a joystick, a touch panel and the like and receives the operation by a user and inputs various instructions to the CPU211. The CPU211operates as a display control unit configured to control the display unit215and an operation control unit configured to control the operation unit216. The communication unit217is used for communication with external devices of the information processing apparatus110, the image generation apparatus120, and the user terminal140. For example, in a case where the information processing apparatus110, the image generation apparatus120, and the user terminal140are connected with an external device by a wire, a communication cable is connected to the communication unit217. In a case where the information processing apparatus110, the image generation apparatus120, and the user terminal140have a function to wirelessly communicate with an external device, the communication unit217comprises an antenna. The bus218connects each unit of the information processing apparatus110, the image generation apparatus120, and the user terminal140and transmits information. 
In the present embodiment, although the display unit215and the operation unit216exist inside the information processing apparatus110, at least one of the display unit215and the operation unit216may exist as another device outside the information processing apparatus110. The display unit215and the operation unit216may be integrated into one unit. <Generation of Virtual Advertisement-Added Virtual Viewpoint Image> The generation of a virtual advertisement-added virtual viewpoint image in accordance with a virtual viewpoint based on 3D data (three-dimensional data) is explained by using diagrams.FIG.3AtoFIG.3Care each a diagram explaining the generation of a virtual advertisement-added virtual viewpoint image in accordance with a virtual viewpoint based on 3D data corresponding to an image (video) of a game of soccer.FIG.3Ashows a virtual viewpoint image example generated in accordance with a virtual viewpoint based on basic 3D data.FIG.3Bshows an image example of virtual advertisement data in accordance with the virtual viewpoint of the virtual viewpoint image inFIG.3A.FIG.3Cshows a virtual advertisement-added virtual viewpoint image example obtained by combining the virtual viewpoint image inFIG.3Aand the virtual advertisement data inFIG.3B. A virtual viewpoint image310shown inFIG.3Ais an image in a case where an object that exists actually is viewed from an arbitrary viewpoint (virtual viewpoint) and includes a field311, players312existing on the field, and advertisement signboards313installed on the side of the field311. The virtual viewpoint image310is an image that is generated in accordance with a virtual viewpoint based on basic 3D data stored in advance in the basic data storage unit129. The basic 3D data is, for example, shape data indicating the shape of an object, which is generated by a publicly known technique based on a plurality of captured images obtained by capturing a target area from different viewpoints with a plurality of imaging apparatuses. The publicly known technique is, for example, the shape-from-silhouette method and the like. A virtual advertisement data image320shown inFIG.3Bis a virtual advertisement image that can be displayed in an area that does not overlap the object in the virtual viewpoint image310, which is the target of interest of a viewer, such as the players312and the advertisement signboards313existing in the virtual viewpoint image310. That is, the virtual advertisement data image320includes two signboards321and322arranged side by side along the halfway line of the field311in the virtual viewpoint image310. The signboard321is a virtual advertisement image in which a texture “Abcde” is described centered. The signboard322is a virtual advertisement image in which a texture “ABCDE” is described centered. A virtual advertisement-added virtual viewpoint image330shown inFIG.3Cis an image generated by the rendering unit132in accordance with the virtual viewpoint based on the combined 3D data obtained by the combination unit131combining the basic 3D data and the virtual advertisement data. Consequently, by preparing in advance virtual advertisement data desired to be displayed in a virtual viewpoint image, it is made possible to generate a virtual advertisement-added virtual viewpoint image in which an advertisement signboard seems to exist in the vicinity of the halfway line of the field at which the advertisement signboard cannot exist actually. 
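The reading, combining and rendering steps illustrated byFIG.3AtoFIG.3Camount to a small per-frame pipeline: look up the basic 3D data for the time code, look up the registered virtual advertisement data for the extracted identification information, merge the two, and render from the virtual viewpoint. The sketch below is a hedged illustration in which the 3D data is represented only as lists of named scene elements and rendering is left as a stub; none of the names are taken from the disclosure.

    from typing import Dict, List, Optional

    Scene = List[str]   # stand-in for 3D shape data (e.g. meshes with textures)

    def render(scene: Scene, viewpoint: Dict[str, float]) -> str:
        # Placeholder for rendering the (combined) 3D data from the virtual
        # viewpoint; a real implementation would rasterize or ray-cast here.
        return f"image of {len(scene)} elements from viewpoint {viewpoint}"

    def generate_frame(time_code: str,
                       ad_id: Optional[str],
                       viewpoint: Dict[str, float],
                       basic_data_storage: Dict[str, Scene],
                       ad_storage: Dict[str, Scene]) -> str:
        basic = basic_data_storage[time_code]                 # basic data reading unit
        combined = list(basic)
        if ad_id is not None:
            combined += ad_storage[ad_id]                     # combination unit
        return render(combined, viewpoint)                    # rendering unit

    basic_storage = {"00:00:01:00": ["field", "players", "real signboards"]}
    ad_storage = {"halfway_line_board": ["virtual signboard Abcde", "virtual signboard ABCDE"]}
    print(generate_frame("00:00:01:00", "halfway_line_board",
                         {"x": 0.0, "y": -40.0, "z": 12.0}, basic_storage, ad_storage))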
<Operation of Image Processing System> Following the above, a flow of processing performed by the image processing system is explained by using diagrams.FIG.4AtoFIG.4Care each a flowchart showing a flow of processing performed by the image processing system.FIG.4Ashows a flow of processing performed by the image processing system,FIG.4Bshows a detailed flow of processing to set a virtual viewpoint image generation parameter, andFIG.4Cshows a detailed flow of processing to generate a virtual advertisement-added virtual viewpoint image. The processing shown inFIG.4AtoFIG.4Cis performed by the CPU211reading computer programs stored in the ROM212or the auxiliary storage device214and executing the programs. In the following, in explanation ofFIG.4AtoFIG.4C, a processing step is simply described as S. As shown inFIG.4A, at S401, the information processing apparatus110of the image processing system receives a user operation and sets various kinds of information to a parameter in accordance with the received operation contents. A detailed flow of setting processing will be described later by using diagrams. The parameter to which various kinds of information are set is transmitted to the image generation apparatus120of the image processing system. At S402, the image generation apparatus120of the image processing system generates a virtual advertisement-added virtual viewpoint image in accordance with the various kinds of information set to the parameter. That is, the image generation apparatus120generates a virtual advertisement-added virtual viewpoint image in accordance with the virtual viewpoint information set to the parameter based on the combined 3D data obtained by combining the basic 3D data corresponding to the time code set to the parameter and the virtual advertisement data indicated by the identification information. A detailed flow of generation processing will be described later by using diagrams. The generated virtual advertisement-added virtual viewpoint image is sent to the display that is the display unit215of the image generation apparatus120, the user terminal140connected to the image generation apparatus120, and the like. Following the above, a detailed flow of setting processing of a virtual viewpoint image generation parameter is explained by using diagrams. As shown inFIG.4B, at S411, the virtual advertisement setting unit112reads the virtual advertisement data that is created in advance. The virtual advertisement data may be read from a virtual advertisement database, not shown schematically, connected to the information processing apparatus110, or a USB memory connected to the information processing apparatus110. The read virtual advertisement data is sent to the registration instruction unit113and the display condition setting unit115. At S412, the registration instruction unit113transmits the target virtual advertisement data to the image generation apparatus120along with the information on instructions to register the virtual advertisement data read by the virtual advertisement setting unit112in the virtual advertisement storage unit122of the image generation apparatus120. At S413, the operation reception unit111receives a user operation for inputting a virtual viewpoint image generation parameter. At S414, the operation reception unit111determines whether or not there are instructions to display a virtual advertisement in the information that is generated in accordance with the input operation received by the operation reception unit111at S413. 
In a case where determination results that there are instructions to display a virtual advertisement in the information that is generated in accordance with the input operation are obtained (YES at S414), the processing moves to S415. On the other hand, in a case where determination results that there are not instructions to display a virtual advertisement in the information that is generated in accordance with the input operation are obtained (NO at S414), the processing skips S415and moves to S416. At S415, the display condition setting unit115sets a display condition of selection-target virtual advertisement data and identification information on the selection-target virtual advertisement to the parameter that is transmitted to the image generation apparatus120based on the information that is generated in accordance with the input operation received by the operation reception unit111at S413. At S416, the time code setting unit116sets a time code to the parameter that is transmitted to the image generation apparatus120based on the information that is generated in accordance with the input operation received by the operation reception unit111at S413. At S417, the virtual viewpoint setting unit117sets virtual viewpoint information to the parameter that is transmitted to the image generation apparatus120based on the information that is generated in accordance with the input operation received by the operation reception unit111at S413. The virtual viewpoint information includes the position of the virtual camera, the orientation of the virtual camera, and the angle of view corresponding to the virtual camera. At S418, the parameter transmission unit118transmits the parameter to which various kinds of information generated in the processing at S415to S417are set to the parameter reception unit123of the image generation apparatus120. At S419, the information processing apparatus110determines whether there is an input of a new parameter. In a case where determination results that there is an input of a new parameter are obtained (YES at S419), the processing returns to S413and the series of processing at S413to S418is performed for the input new parameter. On the other hand, in a case where determination results that there is not an input of a new parameter are obtained (NO at S419), the flow shown inFIG.4Bis terminated. Following the above, a detailed flow of generation processing of a virtual advertisement-added virtual viewpoint image is explained by using diagrams. As shown inFIG.4C, at S421, the registration reception unit121receives the target virtual advertisement data along with instructions information transmitted from the registration instruction unit113of the information processing apparatus110. At S422, the virtual advertisement storage unit122stores the virtual advertisement data received by the registration reception unit121at S421. Due to this, the virtual advertisement data is registered in the image generation apparatus120. At S423, the parameter reception unit123receives the parameter transmitted by the parameter transmission unit118at S418. At S424, the basic data reading unit130reads the basic 3D data relevant (corresponding) to the time code extracted from the parameter by the time code extraction unit126of the parameter extraction unit124from the basic data storage unit129. The parameter from which the time code is extracted is the parameter received by the parameter reception unit123at S423. 
At S425, the display condition extraction unit125of the parameter extraction unit124determines whether or not the information relating to the display condition of virtual advertisement data is included in the parameter received by the parameter reception unit123at S423. In a case where determination results that the information relating to the display condition of virtual advertisement data is included in the parameter are obtained (YES at S425), the processing moves to S426. On the other hand, in a case where determination results that the information relating to the display condition of virtual advertisement data is not included in the parameter are obtained (NO at S425), the processing skips S426and S427and moves to S428. At S426, the virtual advertisement reading unit128reads the virtual advertisement data corresponding to the identification information extracted from the parameter by the display condition extraction unit125from the virtual advertisement storage unit122. The parameter from which the identification information is extracted is the parameter received by the parameter reception unit123at S423. At S427, the combination unit131generates combined 3D data by combining the basic 3D data read by the basic data reading unit130at S424and the virtual advertisement data read by the virtual advertisement reading unit128at S426. At S428, the rendering unit132performs rendering processing for the combined 3D data generated at S427or the basic 3D data read at S424in accordance with the virtual viewpoint information extracted from the parameter by the virtual viewpoint extraction unit127. By this processing, a virtual advertisement-added virtual viewpoint image or a virtual viewpoint image with no virtual advertisement is generated. The virtual advertisement-added virtual viewpoint image or the virtual viewpoint image with no virtual advertisement that is generated is sent to the user terminal140and the like outside the image generation apparatus120as an image signal. The parameter from which the virtual viewpoint information is extracted is the parameter received by the parameter reception unit123at S423. At S429, the image generation apparatus120determines whether or not there is reception of a new parameter. In a case where determination results that there is reception of a new parameter are obtained (YES at S429), the processing returns to S423and the series of processing at S423to S428is performed for the received new parameter. On the other hand, in a case where determination results that there is not reception of a new parameter are obtained (NO at S429), the flow shown inFIG.4Cis terminated. In the flows inFIG.4AandFIG.4B, although the parameter setting processing including the reading and registration of virtual advertisement data is explained, the parameter setting processing is not limited to this. That is, it may also be possible to perform the parameter setting processing including the setting of a virtual advertisement display condition by performing in advance the reading and registration of virtual advertisement data. Further, in the flows inFIG.4AandFIG.4C, although the virtual viewpoint image generation processing including the reception and storage of instructions to register virtual advertisement data is explained, the generation processing is not limited to this. 
That is, it may also be possible to perform the generation processing of a virtual advertisement-added virtual viewpoint image by performing in advance the reception and storage of instructions to register virtual advertisement data. <Flow of Processing Between Apparatuses of Image Processing System> A flow of processing between the apparatuses of the image processing system is explained by using diagrams.FIG.5is a sequence diagram showing a flow of processing between the information processing apparatus110and the image generation apparatus120in the image processing system. In the following, in the explanation ofFIG.5andFIG.10, to be described later, the processing sequence is simply described as S. At S501, the information processing apparatus110(registration instruction unit113) instructs the image generation apparatus120to register virtual advertisement data “A” set by the virtual advertisement setting unit112. To S501, the processing at S412in the flowchart inFIG.4Band the processing at S421in the flowchart inFIG.4Ccorrespond. Here, a virtual advertisement data example is explained by using diagrams.FIG.6AtoFIG.6Care each a diagram showing a virtual advertisement data registration example and a virtual viewpoint image generation parameter example andFIG.6Ashows a 3D model format example of virtual advertisement data that is transmitted at the time of registration. As shown inFIG.6A, in the 3D model format of virtual advertisement data, a file name and a file size of each of an obj file, an mtl file, and a png file are designated. In the obj file, information on the vertex, the normal and the like of the 3D model (three-dimensional model) of a virtual advertisement is stored. In the mtl (material) file, color information and texture information on the 3D model of a virtual advertisement are stored. In the png file, parameters of the 3D model of a virtual advertisement are stored. Here, the obj file is explained by using diagrams.FIG.7is a diagram showing an example of the obj file that is designated in the 3D model format of virtual advertisement data. In the obj file, the vertex coordinate values are designated by a keyword “v”, the texture coordinate values are designated by a keyword “vt”, and a normal vector is designated by a keyword “vn”, respectively. Further, in the present embodiment, although the obj file is explained as an example of the file format representing information on the position, size, and orientation of the 3D model of virtual advertisement data, the file format is not limited to this. The file format may be another format as long as the position, size, and orientation of the 3D model of virtual advertisement data can be represented. Explanation is returned toFIG.5. At S502, the information processing apparatus110(parameter transmission unit118) transmits a virtual viewpoint image generation parameter (without virtual advertisement) without virtual advertisement display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Here, a virtual viewpoint image generation parameter (without virtual advertisement) data example is explained by using diagrams.FIG.6Bis a diagram showing a format example of virtual viewpoint image generation parameter (without virtual advertisement) data (in the following, also referred to as parameter (without virtual advertisement) data). 
In the format of the parameter (without virtual advertisement) data, the orientation of the virtual camera is designated by “rotation quaternion” and the position of the virtual camera is designated by “translation vector”, respectively. In the format of the parameter (without virtual advertisement) data, further, information representing the angle of view corresponding to the virtual camera is designated by “horizontal angle” and the time code is designated by “time code”, respectively. That is, in the format of the parameter (without virtual advertisement) data shown inFIG.6B, although the virtual viewpoint information is designated, information relating to a virtual advertisement is not designated. At S503, as at S502, the information processing apparatus110transmits the parameter (without virtual advertisement) without virtual advertisement display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Here, an example of a virtual viewpoint image is explained by using diagrams, which is generated in accordance with virtual viewpoint information by the image generation apparatus120extracting the virtual viewpoint information from the parameter (without virtual advertisement) at S502and S503. In a case where the rendering unit132performs rendering processing for the basic 3D data in accordance with the virtual viewpoint information extracted from the parameter (without virtual advertisement), the virtual viewpoint image to which no virtual advertisement is added as shown inFIG.3Ais generated. Note that the time code that is extracted from the parameter (without virtual advertisement) received at S503is different from the time code that is extracted from the parameter (without virtual advertisement) received at S502. That is, the target frame of the parameter received at S503is different from that of the parameter received at S502. At S504, the information processing apparatus110(parameter transmission unit118) transmits a virtual viewpoint image generation parameter (with virtual advertisement “A”) with virtual advertisement display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Here, a virtual viewpoint image generation parameter (with virtual advertisement “A”) data example is explained by using diagrams.FIG.6Cis a diagram showing a format example of the virtual viewpoint image generation parameter (with virtual advertisement “A”) data (in the following, also referred to as parameter (with virtual advertisement “A”) data). In the format of the parameter (with virtual advertisement “A”) data, the information representing the position of the virtual camera, the orientation of the virtual camera, and the angle of view corresponding to the virtual camera, and the time code are designated, which are also designated in the parameter (without virtual advertisement) shown inFIG.6B. In the format of the parameter (with virtual advertisement “A”) data, information relating to virtual advertisement (inFIG.6C, virtual advertisement “A”) display instructions is further designated. This parameter (with virtual advertisement “A”) includes instructions to display the virtual advertisement “A” ([“advertise ID”: “A” ] inFIG.6A) that is instructed to be registered in the processing at S501in the virtual viewpoint image.
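Although the exact notation of FIG.6B and FIG.6C is not reproduced here, a parameter of this kind could be pictured, for example, as the following sketch in Python; the key names are assumptions chosen to mirror the terms used above, not the actual format of the parameter data.

```python
# Hypothetical illustration of the virtual viewpoint image generation parameter;
# the key names mirror the terms above and are assumptions, not the format in FIG.6B/6C.
parameter_without_advertisement = {
    "rotation quaternion": [0.0, 0.0, 0.0, 1.0],   # orientation of the virtual camera
    "translation vector": [10.0, 2.5, -30.0],      # position of the virtual camera
    "horizontal angle": 45.0,                      # angle of view of the virtual camera
    "time code": "00:12:34:05",                    # target frame
}

# The parameter with virtual advertisement display instructions additionally carries
# the identification information of the registered advertisement.
parameter_with_advertisement_A = {**parameter_without_advertisement, "advertise ID": "A"}
```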
Upon receipt of the parameter (with virtual advertisement “A”) data such as this, the image generation apparatus120generates a virtual viewpoint image in which the virtual advertisement is displayed. Note that, the time code that is extracted from the parameter received at S504is different from the time code that is extracted from the parameter received at S502to S503. That is, the target frames of the parameters received at S502to S504are different from one another. At S505, as at S504, the information processing apparatus110transmits the parameter (with virtual advertisement “A”) with virtual advertisement display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Here, an example of a virtual advertisement-added virtual viewpoint image is explained by using diagrams, which is generated in accordance with virtual viewpoint information by the image generation apparatus120extracting the virtual viewpoint information from the parameter (with virtual advertisement “A”) at S504and S505. In accordance with the virtual viewpoint information that is extracted from the parameter (with virtual advertisement “A”), the rendering unit132performs rendering processing for the combined 3D data obtained by combining the basic 3D data and the virtual advertisement data. Due to this processing, the virtual viewpoint image to which the virtual advertisement is added as shown inFIG.3Cis generated. Note that the time code that is extracted from the parameter (with virtual advertisement “A”) received at S505is different from the time code that is extracted from the parameters received at S502to S504. That is, the target frame of the parameter (with virtual advertisement “A”) received at S505is different from the target frame of the parameters received at S502to S504. At S506, as at S502and S503, the information processing apparatus110transmits again the parameter (without virtual advertisement) without virtual advertisement display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. In the image generation apparatus120, in a case where the rendering unit132performs rendering processing for the basic 3D data in accordance with the virtual viewpoint information that is extracted from the parameter (without virtual advertisement), a virtual viewpoint image to which no virtual advertisement is added is generated again. Note that the time code that is extracted from the parameter (without virtual advertisement) received at S506is different from the time code that is extracted from the parameters received at S502to S505. That is, the target frame of the parameter (without virtual advertisement) received at S506is different from that of the parameters received at S502to S505. As above, as explained usingFIG.5, it is possible to perform setting so that a desired virtual advertisement is added to (displayed at) a desired position in a desired virtual viewpoint image or setting so that no virtual advertisement is added (displayed) for each frame. <Virtual Advertisement Display Condition Setting UI> An example of a user interface (UI) for setting a condition of displaying a virtual advertisement in a virtual viewpoint image is explained by using diagrams.FIG.8AandFIG.8Bare each a diagram showing a virtual advertisement display condition setting UI example andFIG.8Ashows a case where nondisplay of a virtual advertisement is set andFIG.8Bshows a case where display of a virtual advertisement is set. 
In a case where a virtual advertisement display condition is set by a user operation on the virtual advertisement display condition setting UI, information on the virtual advertisement display condition is set to a parameter by the display condition setting unit115. A virtual advertisement display condition setting UI801includes two radio buttons802and803. The radio button802is a button for setting nondisplay of a virtual advertisement, that is, no virtual advertisement is displayed in a virtual viewpoint image. The radio button803is a button for designating a display-target virtual advertisement, which is an additional display of a virtual advertisement, for a virtual viewpoint image. The radio button803is arranged within a list box displaying a plurality of registered virtual advertisements in a list. In a case where it is desired not to display a virtual advertisement in a virtual viewpoint image, as shown inFIG.8A, the radio button802corresponding to “None” for setting nondisplay of a virtual advertisement by the user operation is selected. To the selection of the radio button802such as this, the processing sequences at S502, S503, and S506in the sequence diagram inFIG.5correspond. On the other hand, in a case where it is desired to display a virtual advertisement in a virtual viewpoint image, as shown inFIG.8B, the radio button803corresponding to the virtual advertisement (“virtual advertisement A”) desired to be displayed is selected in order to set display of the virtual advertisement by the user operation at timing at which the virtual advertisement is desired to be displayed. To the selection of the radio button803such as this, the processing sequences at S504and S505in the sequence diagram inFIG.5correspond. <User Operation Screen> Here, a user operation screen for setting a virtual viewpoint corresponding to a virtual viewpoint image and a virtual advertisement display condition is explained by using diagrams.FIG.9AandFIG.9Bare each a diagram showing a user operation screen example andFIG.9Ashows a case where nondisplay of a virtual advertisement is set andFIG.9Bshows a case where display of a virtual advertisement is set.FIG.9AandFIG.9Beach show a case where the virtual viewpoint image corresponding to the set display condition is displayed on the display on the left side and the virtual advertisement display condition setting UI is displayed on the display on the right side. The display aspect of the virtual viewpoint image and the virtual advertisement display condition setting UI is not limited to this. It may also be possible to display the virtual viewpoint image and the virtual advertisement display condition setting UI on one display. As shown inFIG.9A, a display901on the left side displays a virtual viewpoint image911generated in accordance with the virtual viewpoint based on the basic 3D data. The virtual viewpoint is set initially and before the reception of the user operation relating to the virtual viewpoint, the virtual viewpoint that is set initially is used as the virtual viewpoint. A display902on the right side displays a virtual advertisement display designation UI912in the state where the radio button for setting nondisplay of a virtual advertisement to the image displayed on the display901on the left side is selected. Nondisplay of a virtual advertisement is selected, and therefore, the display on the left side displays the virtual viewpoint image to which no virtual advertisement is added.
As shown inFIG.9B, the display901on the left side displays a virtual viewpoint image921generated in accordance with the virtual viewpoint based on the basic 3D data. The display902on the right side displays a virtual advertisement display designation UI922in the state where the radio button for setting display of the virtual advertisement A to the image displayed on the display901on the left side is selected by the user operation. Display of a virtual advertisement is selected, and therefore, the display on the left side displays the virtual viewpoint image to which the virtual advertisement is added. As in the virtual viewpoint image911displayed on the display901, in the state where the game is played across the field and the players exist across the field, the radio button corresponding to “None” (nondisplay of virtual advertisement) is selected on the UI912. Because of this, to the virtual viewpoint image911displayed on the display901on the left side, no virtual advertisement is added. As in the virtual viewpoint image921displayed on the display901, at the timing at which the state is brought about where the players gather on one side of the field and no player exists on the other side, the radio button corresponding to the virtual advertisement A is selected on the UI922. Because of this, to the virtual viewpoint image921displayed on the display901on the left side, the virtual advertisement is added. By registering the virtual advertisement data in advance and selecting the virtual advertisement at the timing at which the virtual advertisement is desired to be displayed on the UI displayed on the display902on the right side while watching the virtual viewpoint image that is displayed on the display901on the left side, it is possible to immediately display the virtual advertisement in the virtual viewpoint image. In the present embodiment, in a case where the radio button corresponding to the virtual advertisement A is selected by the user operation, the parameter setting unit114sets the display instructions (display condition) of the virtual advertisement A to the parameter and the parameter transmission unit118transmits the parameter to the image generation apparatus120. Note that the operation method is not limited to this and it may also be possible to press down a key assigned to the keyboard or the operation may be one by a touch panel. Further, it may also be possible to enable adjustment of the virtual viewpoint by the user operation by using the virtual viewpoint image911displayed on the display901on the left side. In a case where the virtual viewpoint is adjusted (set) by the user operation, the parameter setting unit114sets the virtual viewpoint information to the parameter and the parameter transmission unit118transmits the parameter to the image generation apparatus120. Due to this, for example, in a scene where a penalty kick is played in a game of soccer, the setting is performed in accordance with the camera angle so that the virtual advertisement is displayed in the area in which no player exists on the field and the generation of a virtual advertisement-added virtual viewpoint image in accordance with the setting is enabled. 
That is, it is made possible to display a virtual advertisement in the area that does not overlap a player existing on the field and in which a signboard cannot be installed within the field. As explained above, according to the present embodiment, by registering in advance virtual advertisement data (3D model and texture of a still image) and setting the identification information on the virtual advertisement data, the virtual advertisement display condition, the virtual viewpoint information, and the time code to the parameter, the following effects are obtained. It is possible to generate a virtual advertisement-added virtual viewpoint image or a virtual viewpoint image with no virtual advertisement in accordance with the information extracted from the parameter. Because of this, it is possible to display an effective virtual advertisement suitable to a scene in a virtual viewpoint image by switching between display and nondisplay of a virtual advertisement in real time for each frame. That is, it is possible to generate a virtual viewpoint image in which a virtual advertisement is displayed at a desired position at desired timing. Second Embodiment In the present embodiment, an aspect in which virtual advertisement data is displayed by switching a plurality of kinds of registered virtual advertisement data is explained by using diagrams.FIG.10is a sequence diagram showing a flow of processing between the information processing apparatus110and the image generation apparatus120in a case where a plurality of kinds of virtual advertisement data is registered and the display condition of each of the plurality of kinds of registered virtual advertisement data is set. In the present embodiment, the processing to register a virtual advertisement and set the display condition of the registered virtual advertisement is the same as that of the image processing system of the first embodiment described above, and therefore, its explanation is omitted. At S1001, the information processing apparatus110(registration instruction unit113) instructs the image generation apparatus120to register virtual advertisement data “AA” set by the virtual advertisement setting unit112. To S1001, the processing at S412in the flowchart inFIG.4Band the processing at S421in the flowchart inFIG.4Ccorrespond. Similarly, at S1002, the information processing apparatus110instructs the image generation apparatus120to register virtual advertisement data “BB” set by the virtual advertisement setting unit112. At S1003, the information processing apparatus110instructs the image generation apparatus120to register virtual advertisement data “CC” set by the virtual advertisement setting unit112. Here, examples of the virtual advertisement data “AA”, “BB”, and “CC” are explained by using diagrams.FIG.11AtoFIG.11Care each a diagram showing an image example of the virtual advertisement data displayed in the virtual viewpoint image andFIG.11Ashows an image example of the virtual advertisement data “AA”,FIG.11Bshows an image example of the virtual advertisement data “BB”, andFIG.11Cshows an image example of the virtual advertisement data “CC”. As shown inFIG.11AtoFIG.11C, although the texture of virtual advertisements1101,1102, and1103is the same, the display positions of the virtual advertisements are different. In the above, although the case is explained where the three kinds of data whose display positions of the virtual advertisements are different are registered, the case is not limited to this.
That is, it may also be possible to create in advance a plurality of kinds of virtual advertisement data each corresponding to the position at and the size with which the virtual advertisement data is desired to be displayed and having the texture desired to be displayed and register all of them in advance. At S1004, the information processing apparatus110(parameter transmission unit118) transmits the parameter (without virtual advertisement) without virtual advertisement display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. At S1005, the information processing apparatus110(parameter transmission unit118) transmits the parameter (with virtual advertisement “AA”) with virtual advertisement “AA” display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Similarly, at S1006, the information processing apparatus110transmits the parameter (with virtual advertisement “BB”) with virtual advertisement “BB” display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Further, at S1007, the information processing apparatus110transmits the parameter (with virtual advertisement “CC”) with virtual advertisement “CC” display instructions, which is set by the parameter setting unit114, to the image generation apparatus120. Note that, each of the time codes extracted from the parameters received at S1004to S1007is different from the time code extracted from the parameter received in another processing sequence other than the processing sequence of its own. That is, the target frames of the parameters received at S1004to S1007are different from one another. <Virtual Advertisement Display Setting UI> A user interface (UI) example for setting a virtual advertisement display condition for a virtual viewpoint image is explained by using diagrams.FIG.12AtoFIG.12Care each a diagram showing a virtual advertisement display condition setting UI example.FIG.12Ashows a virtual advertisement registration UI example,FIG.12Bshows a virtual advertisement set creation UI example, andFIG.12Cshows a virtual advertisement schedule setting UI example. In a case where a virtual advertisement is registered by the user operation on the virtual advertisement registration UI, the virtual advertisement data is registered in the image generation apparatus120by the virtual advertisement setting unit112. In a case where a virtual advertisement display condition (virtual advertisement set) is created by the user operation on the virtual advertisement set creation UI, the information relating to the virtual advertisement display condition is set to the parameter by the display condition setting unit115. In a case where a virtual advertisement display condition (virtual advertisement schedule) is set by the user operation on the virtual advertisement schedule setting UI, the information relating to the virtual advertisement display condition is set to the parameter by the display condition setting unit115. As shown inFIG.12A, a Virtual Advertisement Registration UI1210is a user interface for registering a virtual advertisement. The Virtual Advertisement Registration UI1210has a list box1211that displays registration-target virtual advertisements in such a manner that they can be selected and a “Register” button1212for registering a selected virtual advertisement. 
In the list box1211, a plurality of sets of an image representing a virtual advertisement and the display position thereof, the name of the virtual advertisement, and a radio button for selecting the virtual advertisement is displayed. In the example inFIG.12A, in the list box1211, the virtual advertisement AA, the virtual advertisement BB, and the virtual advertisement CC are displayed. The virtual advertisement AA is data indicating that a virtual advertisement in which the texture “Abcde” is centered is displayed at the position at the center in the vertical direction and on the left side slightly shifted from the center in the horizontal direction in the virtual viewpoint image. The virtual advertisement BB is data indicating that a virtual advertisement in which the texture “Abcde” is centered is displayed at the position at the center in the vertical direction and at the center in the horizontal direction in the virtual viewpoint image. The virtual advertisement CC is data indicating that a virtual advertisement in which the texture “Abcde” is centered is displayed at the position at the center in the vertical direction and on the right side slightly shifted from the center in the horizontal direction in the virtual viewpoint image. By pressing down the “Register” button1212in the state where the radio button corresponding to the virtual advertisement desired to be registered is selected, the selected virtual advertisement is registered in the image generation apparatus120. As shown inFIG.12B, a Virtual Advertisement Set Creation UI1220has a list box1221, a creation area1222, and a confirmation area1223. In the list box1221, registered virtual advertisements are displayed in a list. In the creation area1222, virtual advertisements that can be created as a virtual advertisement set are displayed in a list. In the confirmation area1223, a created virtual advertisement set is displayed in a list. The example inFIG.12Bshows a state where a virtual advertisement set1237that displays in order the virtual advertisement AA, the virtual advertisement BB, and the virtual advertisement CC is registered in the confirmation area1223. Here, a creation method of a virtual advertisement set is explained by using diagrams.FIG.13AtoFIG.13Care each a diagram explaining a creation method of a virtual advertisement set.FIG.13Ashows a state where registered virtual advertisements are displayed in a list,FIG.13Bshows a state where a virtual advertisement that is the target of a virtual advertisement set is designated (specified), andFIG.13Cshows a state where a virtual advertisement set is registered. In a case where a virtual advertisement is registered by the user operation, the virtual advertisement setting unit112displays registration-target virtual advertisements in a list in the list box1221on the Virtual Advertisement Set Creation UI1220as shown inFIG.13Ain accordance with the operation contents of the user operation. In the example inFIG.13A, a virtual advertisement AA1231, a virtual advertisement BB1232, a virtual advertisement CC1233and the like are displayed in a list. Then, in the user operation, by the mouse operation of drag and drop being performed for the virtual advertisement desired to be a virtual advertisement set, the virtual advertisement desired to be a virtual advertisement set is moved from within the list box1221into the creation area1222.
In the example inFIG.13B, as a virtual advertisement set, a virtual advertisement AA1234, a virtual advertisement BB1235, and a virtual advertisement CC1236are selected. Then, by a Create button1225being pressed down by the user operation, as shown inFIG.13C, in the confirmation area1223, the created virtual advertisement set1237is displayed. In the example inFIG.13C, the virtual advertisement set1237that displays in order the virtual advertisement AA1234, the virtual advertisement BB1235, and the virtual advertisement CC1236is confirmed. Explanation is returned toFIG.12C. As shown inFIG.12C, a virtual advertisement schedule setting UI1240is a UI for designating a schedule of displaying a virtual advertisement for a virtual viewpoint image. The virtual advertisement schedule setting UI1240has a virtual advertisement set designation area1241and a virtual advertisement schedule designation area1242. In the virtual advertisement set designation area1241, virtual advertisement sets that can be selected are displayed in a list. In the example inFIG.12C, in the designation area1241, a state is shown where one virtual advertisement set is selected from two or more kinds of virtual advertisement set. In the virtual advertisement schedule designation area1242, the order of displaying virtual advertisements included in the virtual advertisement set, a box for setting the unit of updating by the number of frames, and a pulldown menu to set whether or not to perform loop reproduction are displayed. In the example inFIG.12C, in the designation area1242, on the upper side in the left field, a virtual advertisement set that displays the virtual advertisement AA, the virtual advertisement BB, and the virtual advertisement CC in this order is shown and on the middle side in the left field, a virtual advertisement set that displays the virtual advertisement AA and the virtual advertisement BB in this order is shown. Further, the right field shows that one frame is set as the unit of updating and the loop reproduction is set to be performed. In a case where a virtual advertisement (virtual advertisement set) display condition is set by the user operation on the virtual advertisement schedule setting UI1240, the display condition setting unit115sets the virtual advertisement display condition to the parameter. By performing the user operation for the virtual advertisement schedule setting UI1240in this manner, it is possible to perform the setting processing of a virtual advertisement display schedule by the display condition setting unit115. That is, it is possible for the display condition setting unit115to select a virtual advertisement that is displayed in a virtual viewpoint image from the virtual advertisement data registered by the virtual advertisement setting unit112and schedule the display time of the selected virtual advertisement, the display interval, the display order, the number of times of display and the like. The identification number of the virtual advertisement data set to the schedule by the display condition setting unit115is set to the parameter to which the information relating to virtual advertisement display instructions are set along with the virtual viewpoint information and the time code along the schedule. The virtual viewpoint information includes the position of the virtual camera (virtual viewpoint), the orientation (direction) of the virtual camera (virtual viewpoint), and the angle of view corresponding to the virtual camera. 
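As a rough illustration of such a schedule (a sketch under assumed names, not the data structure actually used by the display condition setting unit 115), the display order within a virtual advertisement set, the unit of updating in frames, and the loop reproduction setting could be modeled as follows.

```python
# A sketch of the virtual advertisement schedule described above; the structure and the
# helper function are assumptions, not the format used by the display condition setting unit 115.
from typing import Optional

advertisement_set = ["AA", "BB", "CC"]   # display order within the virtual advertisement set
update_unit_frames = 1                   # switch to the next advertisement every N frames
loop_reproduction = True                 # whether to repeat the set from the beginning


def advertisement_for_frame(frame_index: int) -> Optional[str]:
    """Return the advertisement identification number to set to the parameter for a frame."""
    step = frame_index // update_unit_frames
    if not loop_reproduction and step >= len(advertisement_set):
        return None                      # schedule exhausted: a parameter without advertisement is set
    return advertisement_set[step % len(advertisement_set)]
```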
Then, the parameter transmission unit118transmits the parameter to which various kinds of information are set by the parameter setting unit114to the image generation apparatus120. As explained above, by registering in advance a plurality of kinds of virtual advertisement data and setting identification information on the plurality of kinds of virtual advertisement data and information relating to the display condition to the parameter, it is possible to display a desired kind of virtual advertisement in a virtual viewpoint image at desired timing. In the virtual advertisement designation example shown inFIG.12B, it is made possible to produce a display as a virtual advertisement, in which the position of the advertisement signboard moves from the position on the left side to the position on the right side in the image. Further, by registering in advance a plurality of kinds of virtual advertisement data, it is also made possible to display a virtual advertisement by switching a plurality of kinds of virtual advertisement. Although the example is explained in which the plurality of virtual advertisements whose texture is the same and whose display positions are different is displayed in order in the virtual viewpoint image, the example is not limited to this. For example, it may also be possible to display virtual advertisements with a plurality of kinds of texture, whose display position is the same, or display in order a plurality of virtual advertisements with a plurality of kinds of texture, whose display positions are different, in a virtual viewpoint image. Third Embodiment In the present embodiment, another example of an image of a virtual advertisement is explained by using diagrams.FIG.14AtoFIG.14Care each a diagram showing an image example of virtual advertisement data, which is displayed in a virtual viewpoint image.FIG.14Ashows a case where the texture of the virtual advertisement is right-justified,FIG.14Bshows a case where the texture of the virtual advertisement is centered, andFIG.14Cshows a case where the texture of the virtual advertisement is left-justified. A virtual advertisement1401shown inFIG.14Ais obtained by creating image data of an advertisement signboard that does not exist in an addition-target virtual viewpoint image (basic 3D data) as computer graphics. In the virtual advertisement1401, the texture “Abcde” is described (displayed) right-justified on the advertisement signboard. A virtual advertisement1402shown inFIG.14Bis obtained by creating image data of an advertisement signboard that does not exist in an addition-target virtual viewpoint image (basic 3D data) as computer graphics. In the virtual advertisement1402, the texture “Abcde” is described (displayed) centered on the advertisement signboard. A virtual advertisement1403shown inFIG.14Cis obtained by creating image data of an advertisement signboard that does not exist in an addition-target virtual viewpoint image (basic 3D data) as computer graphics. In the virtual advertisement1403, the texture “Abcde” is described (displayed) left-justified on the advertisement signboard. That is, the above-described virtual advertisements1401,1402, and1403show examples in which although the 3D model representing the signboard of the virtual advertisement is the same, only the description positions (display positions) of the texture are different. In a case of the present embodiment as well, it is possible to obtain the effects described below by the same sequence as the sequence shown inFIG.10.
That is, by registering the data of the virtual advertisements1401,1402, and1403in the image generation apparatus120and setting, to the parameter, the display condition of the virtual advertisement desired to be displayed (including the case where no virtual advertisement is displayed), it is possible to switch the virtual advertisement to be displayed from one to another. Further, in the present embodiment, the texture of the virtual advertisements1401to1403exists in the vicinity of the right side, the center, and the left side, and therefore, it is made possible to produce a display in which the contents of the advertisement move by switching the displays in order of the virtual advertisement1401, the virtual advertisement1402, and the virtual advertisement1403. As explained above, according to the present embodiment, by setting the virtual advertisement display condition to the parameter by the user operation for the information processing apparatus110, it is made possible to switch between display and nondisplay of a virtual advertisement and produce a display by switching the kinds of virtual advertisement. Fourth Embodiment In the present embodiment, an aspect is explained by using diagrams, in which by setting in advance the time to display a virtual advertisement as a schedule, the immediate control by a user is not necessary and control to switch the kinds of virtual advertisement to be displayed in a virtual viewpoint image is performed in accordance with the schedule. The function units of the information processing apparatus of the image processing system of the present embodiment, which are different from those of the first embodiment, are explained by usingFIG.1B. The display condition setting unit of the present embodiment receives the time code indicating the virtual advertisement display time, which is set by the time code setting unit, in addition to receiving the virtual advertisement that is set by the virtual advertisement setting unit112and the information that is generated in accordance with the input operation received by the operation reception unit. Next, the display condition setting unit of the present embodiment associates the parameter to which a predetermined virtual advertisement display condition is set and the time code indicating the display time (display start time and display end time) of the predetermined virtual advertisement with each other and sends them to the parameter transmission unit of the present embodiment. In a case where the display start time of the predetermined virtual advertisement, which is indicated by the time code, is reached, the parameter transmission unit of the present embodiment transmits the corresponding parameter (with predetermined virtual advertisement) to the parameter reception unit123of the image generation apparatus120. Further, in a case where the display end time of the predetermined virtual advertisement, which is indicated by the time code, is reached, the parameter transmission unit of the present embodiment transmits the corresponding parameter (without predetermined virtual advertisement) to the parameter reception unit123of the image generation apparatus120. Further, the flow of processing performed by the information processing apparatus in a case where a schedule is set as the above-described virtual advertisement display condition is explained for the processing different from that of the first embodiment by usingFIG.4B.
At S413, the operation reception unit111receives the user operation for inputting the parameter, that is, the user operation for inputting (registering) the schedule relating to the display-target virtual advertisement, to which the display start time and the display end time of the virtual advertisement are set. At S414, the operation reception unit111determines whether or not the current time is included in the range from the display start time to the display end time of the virtual advertisement, which are set to the schedule received at S413. In a case where determination results that the current time is included in the display time range of the virtual advertisement, which is set to the schedule, are obtained (YES at S414), the processing moves to S415. On the other hand, in a case where determination results that the current time is not included in the display time range of the virtual advertisement, which is set to the schedule, are obtained (NO at S414), the processing skips S415and moves to S416. As described above, by setting in advance the virtual advertisement display time as a schedule, it is possible for the information processing apparatus110to transmit the parameter (with virtual advertisement) or the parameter (without virtual advertisement), which is set in accordance with the schedule, to the image generation apparatus120. <Virtual Advertisement Display Condition Setting UI> A user interface (UI) example for setting a virtual advertisement display condition (display time) for a virtual viewpoint image is explained by using diagrams.FIG.15is a diagram showing a virtual advertisement display condition setting UI example. A virtual advertisement display condition setting UI1501has a Schedule Setting field1510for setting the time to display a virtual advertisement. The Schedule Setting field1510has a start time setting field1511, an end time setting field1512, and a display target setting field1513. The start time setting field1511is an area in which the time to start a display of a target virtual advertisement is set by the user operation. The end time setting field1512is an area in which the time to end a display of a target virtual advertisement is set by the user operation. The display target setting field1513is an area in which the kind of virtual advertisement to be displayed in a virtual viewpoint image is set by the user operation. In a case where a virtual advertisement display schedule is set by the user operation on the display condition setting UI, information relating to the virtual advertisement display condition is set to the parameter by the display condition setting unit115. For example, a case is explained where the broadcast time of a game of soccer is from 18:00 to 20:00 and the setting is performed so that from 18:00 to 19:00, a virtual advertisement AAA is displayed in a virtual viewpoint image and from 19:00 to 20:00, a virtual advertisement BBB is displayed in the virtual viewpoint image. As shown inFIG.15, by the user operation, in the start time setting field1511, “18:00” is set, in the end time setting field1512, “19:00” is set, and in the display target setting field1513, “virtual advertisement AAA” is set. Further, in the start time setting field1511, “19:00” is set, in the end time setting field1512, “20:00” is set, and in the display target setting field1513, “virtual advertisement BBB” is set.
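A minimal sketch of this time-based check (assuming wall-clock times and hypothetical names; the actual determination at S414 is performed by the operation reception unit 111) might look as follows.

```python
# A sketch of the schedule check described for S414; the schedule structure and the
# function name are assumptions, not the actual implementation.
from datetime import time
from typing import Optional

schedule = [
    {"start": time(18, 0), "end": time(19, 0), "advertisement": "virtual advertisement AAA"},
    {"start": time(19, 0), "end": time(20, 0), "advertisement": "virtual advertisement BBB"},
]


def scheduled_advertisement(current: time) -> Optional[str]:
    """Return the advertisement to display at the current time, or None outside the schedule."""
    for entry in schedule:
        if entry["start"] <= current < entry["end"]:
            return entry["advertisement"]
    return None   # the parameter (without virtual advertisement) is transmitted instead
```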
The information relating to the set virtual advertisement display condition is set to the parameter in accordance with reaching of the designated time by the display condition setting unit of the parameter setting unit and the parameter is transmitted to the image generation apparatus120by the parameter transmission unit. Consequently, by setting the virtual advertisement display schedule as described above, the control is performed so that the virtual advertisement AAA is displayed in the virtual viewpoint image from 18:00 to 19:00 and the virtual advertisement BBB is displayed in the virtual viewpoint image from 19:00 to 20:00. As explained above, according to the present embodiment, by setting in advance the time to display a virtual advertisement as a schedule, the immediate operation by a user is not necessary and it is made possible to perform the control to switch the kinds of virtual advertisement to be displayed in a virtual viewpoint image in accordance with the schedule. In the above-described embodiment, although explanation is given by taking the timer setting by time as an example of the display condition, it is also possible to set the time elapsed from instructions to display a virtual advertisement as the display condition, not limited only to the time. Further, it is also possible to use the display condition that combines the virtual advertisement display instructions setting by a timer and the virtual advertisement display instructions setting at arbitrary timing by an operator as the display condition. For example, in the state where the schedule is set so that the virtual advertisement AAA is displayed in the virtual viewpoint image from 18:00 to 19:00 and the virtual advertisement BBB is displayed in the virtual viewpoint image from 19:00 to 20:00, it is also possible to combine the display instructions setting of a virtual advertisement that is displayed next. That is, it is also possible to switch the virtual advertisement that is displayed by the operation of an operator to the virtual advertisement A only during the period of a penalty kick. Further, it is also possible to apply the above-described embodiment to a system that bills the advertiser of a virtual advertisement for the charge in accordance with the display time of the virtual advertisement by a timer. Fifth Embodiment In the present embodiment, an aspect is explained by using diagrams, in which a 3D model and moving image data of a virtual advertisement are registered in advance and a virtual advertisement whose texture moves is displayed in a virtual viewpoint image by designating the moving image data or the frame number of the moving image data, which are already registered along with the parameter. A virtual advertisement data registration example and a virtual viewpoint image generation parameter example are explained by using diagrams.FIG.16AandFIG.16Bare each a diagram showing a virtual advertisement data registration example and a virtual viewpoint image generation parameter example andFIG.16Ashows a 3D model format example of virtual advertisement data that is transmitted at the time of registration andFIG.16Bshows a format example of parameter (with virtual advertisement) data. As shown inFIG.16A, in the 3D model format of virtual advertisement data, the file name and the file size of each of the obj file, the mtl file, and an mp4 file are designated.
The obj file and the mtl file are the same as the obj file and the mtl file explained by usingFIG.6AtoFIG.6C, and therefore, their explanation is omitted. In the mp4 file, moving image data (parameter) of a 3D model of a virtual advertisement is stored. Here, a parameter (with virtual advertisement) data example is explained by using diagrams. In the format of the parameter (with virtual advertisement) data shown inFIG.16B, as virtual advertisement display instructions, “AAA: FWD” representing “advertisement ID: moving image reproduction method” is designated. Here, FWD represents the uniform reproduction in the forward direction. By designating the reproduction method of a moving image at the time of setting the various kinds of information to the parameter as described above, it is made possible to display a moving virtual advertisement. As explained above, according to the present embodiment, by registering in advance a moving image as virtual advertisement data and setting identification information on the virtual advertisement data and information relating to the display condition to the parameter, the following effects are obtained. It is possible to generate a virtual advertisement (moving image data)-added virtual viewpoint image or a virtual viewpoint image without virtual advertisement in accordance with the information that is set to the parameter. Because of this, it is possible to display an effective virtual advertisement suitable to a scene in a virtual viewpoint image by switching between display and nondisplay of a virtual advertisement in real time for each frame. That is, it is possible to generate a virtual viewpoint image in which a virtual advertisement (moving image data) is displayed at a desired position at desired timing. Further, in the first to fifth embodiments, although explanation is given by taking advertisement information (virtual advertisement) as an example of additional information that is added to basic 3D data (moving image file), additional information is not limited to advertisement information. As another example of additional information that is added to basic 3D data, it may also be possible to add information on a player (in the following, referred to as player information) captured in a virtual viewpoint image. In a case where player information is added to a moving image file, in the virtual advertisement storage unit122, the data type, the number of pieces of data, and information relating to the address of a player information database are stored. The data type is information indicating that the additional information is information relating to a player. The number of pieces of data is the number of pieces of player information. The information relating to the address of a player information database is address information for connecting to a database in which the player information is accumulated. In a case where the address information such as this is not stored in the virtual advertisement storage unit122, in the virtual advertisement storage unit122, the player information ID, the player information model data of player information, the player information material data, the information on a pasting area of player information on a virtual viewpoint video and priority, and the like are stored. As described above, it is possible to apply the present embodiments also to information other than advertisement information.
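Returning to the moving-image display instructions of the present embodiment, such instructions might be carried in the parameter roughly as follows; this is only a sketch, and the key names are assumptions rather than the format shown in FIG.16B.

```python
# Hypothetical illustration of a parameter carrying moving-image display instructions;
# the key names are assumptions and do not reproduce the format of FIG.16B.
parameter_with_moving_advertisement = {
    "rotation quaternion": [0.0, 0.0, 0.0, 1.0],    # orientation of the virtual camera
    "translation vector": [10.0, 2.5, -30.0],       # position of the virtual camera
    "horizontal angle": 45.0,                       # angle of view
    "time code": "00:45:10:12",                     # target frame
    "advertisement ID": "AAA",
    "moving image reproduction method": "FWD",      # uniform reproduction in the forward direction
}
```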
Other Embodiments In the above, although the information processing apparatus110is explained, which performs the setting of advertisement data and the display condition thereof, the setting of information relating to the display condition to the parameter, and the transmission of the data to the image generation apparatus120, the information processing apparatus is not limited to this. The information processing apparatus may have a configuration in which the reading of virtual advertisement data that is added to a virtual viewpoint image from the image generation apparatus120, the setting of the display condition thereof, the setting of information relating to the display condition to the parameter, and transmission to the image generation apparatus120are performed. Further, the object of the present invention is also achieved by supplying a storage medium storing computer program codes that implement the functions described previously to a system and the system reading and executing the computer program codes. In this case, the computer program codes themselves read from the storage medium implement the functions of the embodiments described previously and the storage medium storing the computer program codes configures the present disclosure. Further, a case is also included where an operating system (OS) or the like running on a computer performs part or all of the actual processing based on instructions of the program codes and by the processing, the functions described previously are implemented. Furthermore, it may also be possible to implement the image processing system of the present embodiments by the following aspect. That is, computer program codes read from the storage medium are written to a function extension card inserted into the computer or a memory provided in a function extension unit connected to the computer. Then, a case is also included where the CPU or the like provided in the function extension card or the function extension unit performs part or all of the actual processing based on instructions of the computer program codes, and thereby, the functions described previously are implemented. In a case where the present disclosure is applied to the above-described storage medium, in the storage medium, the computer program code corresponding to the flowchart explained previously is stored. Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like. According to the present embodiments, it is possible to generate a virtual viewpoint image in which desired information is added to a desired position. This application claims the benefit of Japanese Patent Application No. 2020-203296, filed Dec. 8, 2020, which is hereby incorporated by reference herein in its entirety.
The figures depict examples of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative examples of the structures and methods illustrated may be employed without departing from the principles, or benefits touted, of this disclosure. In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. DETAILED DESCRIPTION In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain inventive examples. However, it will be apparent that various examples may be practiced without these specific details. The figures and description are not intended to be restrictive. A typical image sensor includes an array of pixel cells. Each pixel cell includes a photodiode to measure the intensity incident light by converting photons into charge (e.g., electrons or holes). The charge generated by the photodiode can be converted to a voltage by a charge sensing unit, which can include a floating drain node. The voltage can be quantized by an analog-to-digital converter (ADC) into a digital value. The digital value can represent an intensity of light received by the pixel cell and can form a pixel, which can correspond to light received from a spot of a scene. An image comprising an array of pixels can be derived from the digital outputs of the array of pixel cells. An image sensor can be used to perform different modes of imaging, such as 2D and 3D sensing. The 2D and 3D sensing can be performed based on light of different wavelength ranges. For example, visible light can be used for 2D sensing, whereas invisible light (e.g., infra-red light) can be used for 3D sensing. An image sensor may include an optical filter array to allow visible light of different optical wavelength ranges and colors (e.g., red, green, blue, monochrome, etc.) to a first set of pixel cells assigned for 2D sensing, and invisible light to a second set of pixel cells assigned for 3D sensing. To perform 2D sensing, a photodiode at a pixel cell can generate charge at a rate that is proportional to an intensity of visible light component (e.g., red, green, blue, monochrome, etc.) incident upon the pixel cell, and the quantity of charge accumulated in an exposure period can be used to represent the intensity of visible light (or a certain color component of the visible light). The charge can be stored temporarily at the photodiode and then transferred to a capacitor (e.g., a floating diffusion) to develop a voltage. The voltage can be sampled and quantized by an analog-to-digital converter (ADC) to generate an output corresponding to the intensity of visible light. An image pixel value can be generated based on the outputs from multiple pixel cells configured to sense different color components of the visible light (e.g., red, green, and blue colors). Moreover, to perform 3D sensing, light of a different wavelength range (e.g., infra-red light) can be projected onto an object, and the reflected light can be detected by the pixel cells. 
The light can include structured light, light pulses, etc. The pixel cell outputs can be used to perform depth sensing operations based on, for example, detecting patterns of the reflected structured light, measuring a time-of-flight of the light pulse, etc. To detect patterns of the reflected structured light, a distribution of quantities of charge generated by the pixel cells during the exposure period can be determined, and pixel values can be generated based on the voltages corresponding to the quantities of charge. For time-of-flight measurement, the timing of generation of the charge at the photodiodes of the pixel cells can be determined to represent the times when the reflected light pulses are received at the pixel cells. Time differences between when the light pulses are projected to the object and when the reflected light pulses are received at the pixel cells can be used to provide the time-of-flight measurement. A pixel cell array can be used to generate information of a scene. In some examples, each pixel cell (or at least some of the pixel cells) of the pixel cell array can be used to perform collocated 2D and 3D sensing at the same time. For example, a pixel cell may include multiple photodiodes each configured to convert a different spectral component of light to charge. For 2D sensing, a photodiode can be configured to convert visible light (e.g., monochrome, or for a color of a particular frequency range) to charge, whereas another photodiode can be configured to convert infra-red light to charge for 3D sensing. Having the same set of pixel cells to perform sensing of different spectral components of light can facilitate the correspondence between 2D and 3D images of different spectral components of light generated by the pixel cells. Moreover, given that every pixel cell of a pixel cell array can be used to generate the image, the full spatial resolution of the pixel cell array can be utilized for the imaging. The 2D and 3D imaging data can be fused for various applications that provide virtual-reality (VR), augmented-reality (AR) and/or mixed reality (MR) experiences. For example, a wearable VR/AR/MR system may perform a scene reconstruction of an environment in which the user of the system is located. Based on the reconstructed scene, the VR/AR/MR system can generate display effects to provide an interactive experience. To reconstruct a scene, the 3D image data can be used to determine the distances between physical objects in the scene and the user. Moreover, 2D image data can capture visual attributes including textures, colors, and reflectivity of these physical objects. The 2D and 3D image data of the scene can then be merged to create, for example, a 3D model of the scene including the visual attributes of the objects. As another example, a wearable VR/AR/MR system can also perform a head tracking operation based on a fusion of 2D and 3D image data. For example, based on the 2D image data, the VR/AR/MR system can extract certain image features to identify an object. Based on the 3D image data, the VR/AR/MR system can track a location of the identified object relative to the wearable device worn by the user. The VR/AR/MR system can track the head movement based on, for example, tracking the change in the location of the identified object relative to the wearable device as the user's head moves.
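The time-of-flight relation mentioned above can be made concrete with a short Python sketch: the depth of a reflecting surface follows from the round-trip delay between projecting a light pulse and receiving its reflection at the pixel cell. The function name and the example delay value are illustrative assumptions.

# Sketch of the time-of-flight relation: depth is half of the round-trip
# optical path travelled during the measured delay.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def depth_from_time_of_flight(round_trip_delay_s: float) -> float:
    # The light pulse travels to the object and back, so the one-way
    # distance is half of the round-trip distance.
    return SPEED_OF_LIGHT_M_PER_S * round_trip_delay_s / 2.0

# A round-trip delay of 10 nanoseconds corresponds to roughly 1.5 m.
print(depth_from_time_of_flight(10e-9))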
To improve the correlation of 2D and 3D image data, an array of pixel cells can be configured to provide collocated imaging of different components of incident light from a spot of a scene. Specifically, each pixel cell can include a plurality of photodiodes, and a plurality of corresponding charge sensing units. Each photodiode of the plurality of photodiodes is configured to convert a different light component of incident light to charge. To enable the photodiodes to receive different light components of the incident light, the photodiodes can be formed in a stack which provides different absorption distances for the incident light for different photodiodes, or can be formed on a plane under an array of optical filters. Each charge sensing unit includes one or more capacitors to sense the charge of the corresponding photodiode by converting the charge to a voltage, which can be quantized by an ADC to generate a digital representation of an intensity of an incident light component converted by each photodiode. The ADC includes a comparator. As part of a quantization operation, the comparator can compare the voltage with a reference to output a decision. The output of the comparator can control when a memory stores a value from a free-running counter. The value can provide a result of quantizing the voltage. One major challenge of including multiple photodiodes in a pixel cell is how to reduce the size and power consumption of the pixel cell, which can impact a number of pixel cells that can be fit into a pixel cell array. The number of pixel cells in a pixel cell array can dominate the available resolution of the imaging. Specifically, in addition to the photodiodes, a pixel cell may include processing circuits to support measurement of the charge generated by each photodiode and to support the generation of a pixel value based on the measurements. Moreover, each pixel cell may also include memory devices (e.g., static random-access memory (SRAM)) to store the measurement results while waiting to fetch the measurement results to the VR/AR/AR application for processing. The processing circuits and memory devices typically have considerable footprints and consume considerable amounts of power. For example, a pixel cell may include a charge sensing unit, which includes one or more charge storage devices (e.g., a floating drain node, a capacitor, etc.) to store the charge generated by a photodiode and to convert the charge to a voltage, and a buffer to buffer the voltage. Moreover, the processing circuits may include a quantizer to quantize the voltage to a digital value. The quantizer typically includes a comparator which includes analog circuits (e.g., differential pair, output stage, current source, etc.), which have large footprints and consume lots of power. Further, the memory devices typically include multiple memory banks (e.g., SRAM cells) to store the bits of the measurement result. The memory devices have significant footprints and can consume lots of power, especially if the memory devices are constructed using high bandwidth transistor devices to improve the operation speed. To reduce the footprint and power consumption of the image sensor, and to include photodiodes in the image sensor to improve resolution, the processing circuits and memory devices can be shared among groups of the photodiodes. 
Each photodiode within the group can take turn in accessing the processing circuits to measure the charge generated by the photodiode, and accessing the memory to store the measurement result. Besides footprint and power, there are other performance metrics of an image sensor, such as dynamic range, power, frame rate, etc. The dynamic range can refer to a range of light intensities measurable by the image sensor. For dynamic range, the upper limit can be defined based on the linearity of the light intensity measurement operation provided by the image sensor, whereas the lower limit can be defined based on the noise signals (e.g., dark charge, thermal noise, etc.) that affect the light intensity measurement operation. On the other hand, various factors can affect the frame rate, which can refer to the amount of time it takes for the image sensor to generate an image frame. The factors may include, for example, the time of completion of the quantization operation, various delays introduced to the quantization operation, etc. To increase the dynamic range of the light intensity measurement operation, the ADC can quantize the voltages based on different quantization operations associated with different intensity ranges. Specifically, each photodiode can generate a quantity of charge within an exposure period, with the quantity of charge representing the incident light intensity. Each photodiode also has a quantum well to store at least some of the charge as residual charge. The quantum well capacity can be set based on a bias voltage on the switch between the photodiode and the charge sensing unit. For a low light intensity range, the photodiode can store the entirety of the charge as residual charge in the quantum well. In a PD ADC quantization operation, the ADC can quantize a first voltage generated by the charge sensing unit from sensing a quantity of the residual charge to provide a digital representation of the low light intensity. As the residual charge is typically much less susceptible to dark current in the photodiode, the noise floor of the low light intensity measurement can be lowered, which can further extend the lower limit of the dynamic range. Moreover, for a medium light intensity range, the quantum well can be saturated by the residual charge, and the photodiode can transfer the remaining charge as overflow charge to the charge sensing unit, which can generate a second voltage from sensing a quantity of the overflow charge. In a FD ADC quantization operation, the ADC can quantize the second voltage to provide a digital representation of the medium light intensity. For both low and medium light intensities, the one or more capacitors in the charge sensing unit are not yet saturated, and the magnitudes of the first voltage and second voltage correlate with the light intensity. Accordingly for both low and medium light intensities, the comparator of the ADC can compare the first voltage or second voltage against a ramping voltage to generate a decision. The decision can control the memory to store a counter value which can represent a quantity of residual charge or overflow charge. For a high light intensity range, the overflow charge can saturate the one or more capacitors in the charge sensing unit. As a result, the magnitudes of the second voltage no longer tracks the light intensity, and non-linearity can be introduced to the light intensity measurement. 
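The PD ADC and FD ADC operations described above can be summarized with a brief behavioral sketch of a ramp-based quantizer: the comparator compares the sensed voltage against a ramping reference, and its decision latches the value of a free-running counter into the memory. The parameters below (ramp step, number of steps) are assumed example values rather than the disclosed circuit; the handling of the saturated, high light intensity case is described next.

# Behavioral sketch (assumed parameters) of ramp-based quantization: the
# counter value stored in memory when the ramping reference crosses the
# input voltage is the digital representation of the sensed charge.
def ramp_adc_quantize(input_voltage: float,
                      ramp_step: float = 0.001,
                      num_steps: int = 1024) -> int:
    stored_value = num_steps - 1            # default if the ramp never crosses
    for counter in range(num_steps):        # free-running counter
        ramp = counter * ramp_step          # ramping reference voltage
        if ramp >= input_voltage:           # comparator flips its decision
            stored_value = counter          # memory latches the counter value
            break
    return stored_value

print(ramp_adc_quantize(0.512))  # ~512 for a 1 mV-per-step ramp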
To reduce the non-linearity caused by the saturation of the capacitors, the ADC can perform a time-to-saturation (TTS) measurement operation within the exposure period by comparing the second voltage, which can keep rising or falling as additional charge is accumulated at the charge sensing unit, with a static threshold to generate a decision. When the second voltage reaches the static threshold, a decision can be generated to control the memory to store a counter value. The counter value can represent a time when the second voltage reaches a saturation threshold. Compared with FD ADC and PD ADC in which the counter value can be linearly related to the incident light intensity, in TTS measurement the counter value can be non-linearly related to the incident light intensity, where the second voltage reaches the static threshold within a shorter time when the incident light intensity increases and vice versa. Moreover, the duration of the TTS measurement operation, as well as the duration of the exposure period, are typically controlled by a controller based on a clock signal supplied to the controller. In some examples, the controller can completely align the TTS measurement operation with the exposure period, such that they start and end at the same time to have the same duration, and the duration can be set based on the cycle period of the clock signal. The cycle period of the clock signal can be set based on a target operation speed of the controller, which can be adjusted based on, for example, a frame rate, a power target, etc., of the image sensor. Although the TTS measurement operation can reduce the non-linearity caused by the saturation of the capacitors and increase the upper limit of the dynamic range, various issues can arise if the TTS measurement period aligns completely, or at least scales up linearly, with the exposure period. One potential issue is power consumption. Specifically, during the TTS measurement operation both the voltage buffer of the charge sensing unit and the comparator of the ADC are powered on to compare the second voltage with the static threshold to generate the decision. Both the voltage buffer and the comparator are analog devices and can consume huge static power when powered on. If the exposure period has a relatively long duration, and the TTS measurement operation is performed within the entirety of the exposure period, both the voltage buffer and the comparator can consume huge amount of power for a long period of time, leading to huge power consumption at the image sensor. The exposure period for the photodiode can be extended due to various reasons. For example, as explained above, the exposure period can be extended due to a lower operation speed of the controller for a lower frame rate, a reduced power target, etc., of the image sensor. Moreover, in a case where the image sensor operates in a low ambient light environment, the exposure period can be extended to allow the photodiode to generate more charge within the exposure period for measurement, which can reduce the signal-to-noise ratio. In addition, performing the TTS measurement within the entirety of the exposure period may allow only one photodiode, within a group of photodiodes that shares a quantizer, to perform the TTS measurement operation. This can create differences in dynamic ranges among the photodiodes. 
Specifically, to support a global shutter operation, it is desirable to have each photodiode of an image sensor perform measurement of light within the same exposure period, or within exposure periods that are substantially aligned in time. But if a group of photodiodes shares a quantizer, and one photodiode uses the quantizer to perform the TTS measurement within the entirety of the exposure period, other photodiodes may be unable to perform the TTS measurement within that exposure period. As a result, only one photodiode can use the TTS measurement operation to extend the upper limit of dynamic range, while other photodiodes cannot, which can lead to different dynamic ranges among the photodiodes. In a case where different photodiodes within a group measure different frequency components of light, such arrangements can lead to uneven performances of the image sensor in measuring the different frequency components of incident light. The present disclosure relates to an image sensor that can address at least some of the issues above. In one example, the image sensor includes a pixel cell, which can include a photodiode, a charge sensing unit, a quantizer, and a controller. The photodiode is configured to generate a charge in response to light within an exposure period having a first duration. The photodiode can accumulate at least a part of the charge as residual charge, and then output the remaining charge as overflow charge after the photodiode becomes saturated by the residual charge. The charge sensing unit is configured to accumulate the overflow charge output by the photodiode within the exposure period. The controller is configured to determine, using the quantizer and within a TTS (time-to-saturation) measurement period having a second duration, whether a first quantity of the overflow charge accumulated at the charge sensing unit exceeds a threshold, and a TTS, which is the time it takes for the first quantity to exceed the threshold. The controller is further configured to, based on whether the first quantity exceeds the threshold, output a first value representing the TTS measurement, or a second value representing a second quantity of the charge (the residual charge, the overflow charge, etc.) generated by the photodiode within the exposure period, to represent an intensity of the light. The second value can be generated based on, for example, the aforementioned FD ADC (to measure a quantity of the overflow charge) and PD ADC (to measure a quantity of the residual charge) operations. The first value and the second value can be generated by a counter based on a timing of the decision of the quantizer. In the present disclosure, the second duration of the TTS measurement period can be programmed separately from the first duration of the exposure period, such that when the first duration of the exposure period is increased (e.g., due to a lower operation speed of the controller, to enable the photodiode to generate more charge in a low ambient light environment, etc.), the second duration of the TTS measurement period can remain fixed, or at least does not increase by the same amount or by the same proportion. Various techniques are proposed to allow the duration of the TTS measurement operation to be set separately from the duration of the exposure period. Specifically, the TTS duration setting and the exposure period setting can be supplied from separate registers, which allows the two settings to be individually programmable.
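The following is a hypothetical register-level sketch, in Python, of the individual programmability just described: the TTS duration and the exposure period come from separate settings, so extending the exposure period leaves the TTS duration unchanged. The register names and units are assumptions, and the sketch also anticipates the constraint, discussed below, that the exposure period should not end during the TTS measurement period.

# Hypothetical sketch: the TTS measurement duration and the exposure
# period are held in separate, individually programmable settings.
class TimingRegisters:
    def __init__(self, tts_duration_cycles: int, exposure_period_cycles: int):
        if exposure_period_cycles < tts_duration_cycles:
            raise ValueError("exposure period must not end during the TTS "
                             "measurement period")
        self.tts_duration_cycles = tts_duration_cycles        # one register
        self.exposure_period_cycles = exposure_period_cycles  # another register

# Extending the exposure period (e.g. for a low ambient light environment)
# does not change the programmed TTS duration.
short_exposure = TimingRegisters(tts_duration_cycles=1024, exposure_period_cycles=2048)
long_exposure = TimingRegisters(tts_duration_cycles=1024, exposure_period_cycles=8192)
print(long_exposure.tts_duration_cycles == short_exposure.tts_duration_cycles)  # True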
In addition, the controller and the counter can operate on clock signals of different frequencies, such that when the clock frequency of the controller increases to increase the exposure period, the duration of the TTS operation can remain fixed or at least do not increase by the same proportion. In addition, the second duration of the TTS measurement period can set a lower limit of the first duration of the exposure period. This can ensure that the exposure period does not end during the TTS measurement period. The second duration of the TTS measurement period can be set based on, for example, a frequency of the counter clock, the bit resolution of the TTS operation, etc. Specifically, the second duration can be set to allow the counter to sweep through the entire range of counter values representing the range of TTS measurement results, which in turn represents the number of bits used to represent the TTS measurement. On the other hand, the first duration of the exposure period can be increased without the corresponding increase in the second duration of the TTS measurement period, as long as the first duration exceeds the second duration. Various techniques are proposed to improve the performance of the image sensor based on the difference in durations between the TTS measurement period and the exposure period. For example, various components of the processing circuit, such as the voltage buffer of the charge sensing unit, the comparator of the ADC, etc., can be disabled between the end of the TTS measurement period and the end of the exposure period. With such arrangement, the exposure period of the photodiode can be extended without corresponding increase in the power consumption of the image sensor. Moreover, in a case where the charge in the charge sensing unit exceeds the saturation threshold and the TTS is measured, the image sensor can also provide the TTS measurement before the exposure period ends. This can reduce the latency in providing the light intensity measurement results and allow the application that consumes the light intensity measurement results to operate at a higher speed. As another example, the threshold for saturation detection (and TTS measurement) can be scaled from a reference threshold. The reference threshold can correspond to a case where the exposure period and the TTS measurement period have the same duration. The scaling can be based on a ratio between the first duration and the second duration. The reduced threshold can account for the fact that the total quantity of charge generated by the photodiode within the TTS measurement period is less than within the exposure period. As the subsequent FD ADC and PD ADC operations measure the total quantity of charge generated by the photodiode within the exposure period, while the TTS measurement is based on a reduced quantity of charge generated within the shortened TTS measurement period, scaling the threshold can reduce the dead zone in the range of light intensity to be measured, such that the intensity range of detection is not (or less) affected by the shortened TTS measurement period. In another example, an image sensor can include a first photodiode and a second photodiode. The first photodiode is configured to generate a first charge in response to a first component of light within a first exposure period having a first duration. 
The first photodiode can accumulate at least a part of the first charge as first residual charge, and then output the remaining first charge as first overflow charge after the first photodiode becomes saturated by the first residual charge. The second photodiode is configured to generate a second charge in response to a second component of the light within a second exposure period having a second duration. The second photodiode can accumulate at least a part of the second charge as second residual charge, and then output the remaining second charge as second overflow charge after the second photodiode becomes saturated by the second residual charge. In some examples, the two photodiodes can be part of a pixel cell to detect different frequency components of the incident light (e.g., different color components, visible component versus infra-red components, etc.) for collocated 2D/3D sensing, in which case the first component and the second component can have different frequency ranges. In some examples, the two photodiodes can be of different pixel cells and configured to detect the same frequency component of the incident light, in which case the first component and the second component can have the same frequency range. The image sensor further includes a first charge sensing unit, a second charge sensing unit, a quantizer, and a controller. The first charge sensing unit is configured to accumulate the first overflow charge within the first exposure period, whereas the second charge sensing unit is configured to accumulate the second overflow charge within the second exposure period. The controller can determine, using the quantizer and within a first TTS measurement period having a third duration, whether a first quantity of the at least a part of the first charge accumulated at the first charge sensing unit exceeds a first threshold, and a first TTS it takes for the first quantity to exceed the first threshold. Based on whether the first quantity exceeds the first threshold, the controller can output a first value representing the first TTS or a second value representing a second quantity of the first charge generated by the first photodiode within the first exposure period to represent an intensity of the first component of the light. Moreover, the controller can determine, using the quantizer and within a second TTS measurement period having a fourth duration, whether a third quantity of the at least a part of the second charge accumulated at the second charge sensing unit exceeds a second threshold, and a second TTS it takes for the second quantity to exceed the second threshold. Based on whether the third quantity exceeds the second threshold, the controller can output a third value representing the second TTS or a fourth value representing a fourth quantity of the second charge generated by the second photodiode within the second exposure period to represent an intensity of the second component of the light. As in the previous examples of the present disclosure, the second duration of the first TTS measurement period can be individually programmable from the first duration of the first exposure period, such that when the first duration of the first exposure period is increased (e.g., to accommodate a lower operation speed of the controller, to enable the photodiode to generate more charge in a low ambient light environment, etc.), the second duration of the first TTS measurement period can remain fixed, or at least does not increase by the same amount or by the same proportion. 
Likewise, the third duration of the second TTS measurement period can be individually programmable from the fourth duration of the second exposure period, such that when the third duration of the second exposure period is increased, the fourth duration of the second TTS measurement period can also remain fixed, or at least does not increase by the same amount or by the same proportion. As in the previous examples, the first threshold and the second threshold for saturation detection can be scaled according to, respectively, the ratio between the first duration and the third duration (for the first photodiode) and the ratio between the second duration and the fourth duration (for the second photodiode). In some examples, the first exposure period of the first photodiode can be extended to overlap with both first and second TTS measurement periods, such that a TTS measurement operation can be performed for each photodiode within the first exposure period. This allows the upper limit of dynamic range to be extended for both photodiodes, which can provide a more uniform detection performance among photodiodes that share a quantizer. Moreover, the second exposure period for the second photodiode (which includes the second TTS measurement periods) can overlap substantially with the first exposure period for the first photodiode, which can improve the global shutter operation of the image sensor. Following the first TTS measurement period and the second TTS measurement period, the controller can use the quantizer to perform FD ADC and/or PD ADC operations to generate the second value for the first photodiode, and output the second value if the first overflow charge (if any) does not exceed the first threshold. Moreover, the controller can also use the quantizer to perform FD ADC and/or PD ADC operations to generate the fourth value for the second photodiode, and output the fourth value if the second overflow charge (if any) does not exceed the second threshold. In some examples, the image sensor can include a first memory and a second memory to store the intensity measurement results of, respectively, the first photodiode and the second photodiode. Based on determining that the first overflow charge exceeds the first threshold, the controller can store the first value from the TTS operation for the first photodiode in the first memory. Moreover, based on determining that the second overflow charge exceeds the second threshold, the controller can store the third value from the TTS operation for the second photodiode in the second memory. The controller can also include a first output logic circuit for the first memory and a second output logic circuit for the second memory. The first output logic circuit can store a first indication that the first overflow charge exceeds the first threshold, whereas the second output logic circuit can store a second indication that the second overflow charge exceeds the second threshold. Based on the first indication, the controller can either stop the subsequent FD ADC and PD ADC operations for the first photodiode, or at least not to overwrite the first value in the first memory with the second value from the FD ADC/PD ADC operations. Moreover, based on the second indication, the controller can either stop the subsequent FD ADC and PD ADC operations for the second photodiode, or at least not to overwrite the third value in the second memory with the fourth value from the FD ADC/PD ADC operations. 
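The per-photodiode selection and overwrite-protection behavior described above can be sketched as follows. The sketch combines the duration-based threshold scaling with an output logic flag (the indication) that keeps a stored TTS result from being overwritten by a later FD ADC or PD ADC result; the class name, method names, and numeric values are illustrative assumptions rather than the disclosed circuit.

# Illustrative sketch of the output logic for one photodiode. The
# saturation threshold is scaled by the ratio of the TTS measurement
# period to the exposure period, and the indication flag records that a
# TTS result was stored so it is not overwritten later.
class OutputLogic:
    def __init__(self, reference_threshold: float,
                 tts_duration: float, exposure_duration: float):
        # Scale the reference threshold (defined for the case where the TTS
        # period equals the exposure period) down by the duration ratio.
        self.threshold = reference_threshold * (tts_duration / exposure_duration)
        self.tts_stored = False        # the "indication"
        self.memory = None

    def end_of_tts(self, overflow_charge: float, tts_counter: int) -> None:
        if overflow_charge >= self.threshold:
            self.memory = ("TTS", tts_counter)
            self.tts_stored = True     # later FD/PD results must not overwrite

    def end_of_fd_pd(self, fd_or_pd_value: int, label: str) -> None:
        if not self.tts_stored:
            self.memory = (label, fd_or_pd_value)

logic = OutputLogic(reference_threshold=8000.0, tts_duration=1.0, exposure_duration=8.0)
logic.end_of_tts(overflow_charge=1500.0, tts_counter=87)   # exceeds scaled threshold 1000
logic.end_of_fd_pd(fd_or_pd_value=731, label="FD ADC")     # ignored: TTS already stored
print(logic.memory)                                        # ('TTS', 87)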
In some examples, as described above, based on the first and second indications, the controller can perform a read out of the first value and the third value from, respectively, the first memory and the second memory, before the first and second exposure periods end. This can reduce the latency in providing the light intensity measurement results and allow the application that consumes the light intensity measurement results to operate at a higher speed. All these can improve the performances of the image sensor and the system that uses the outputs of the image sensor. The disclosed techniques may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some examples, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, e.g., create content in an artificial reality and/or are otherwise used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers. FIG.1Ais a diagram of an example of a near-eye display100. Near-eye display100presents media to a user. Examples of media presented by near-eye display100include one or more images, video, and/or audio. In some examples, audio is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the near-eye display100, a console, or both, and presents audio data based on the audio information. Near-eye display100is generally configured to operate as a virtual reality (VR) display. In some examples, near-eye display100is modified to operate as an augmented reality (AR) display and/or a mixed reality (MR) display. Near-eye display100includes a frame105and a display110. Frame105is coupled to one or more optical elements. Display110is configured for the user to see content presented by near-eye display100. In some examples, display110comprises a waveguide display assembly for directing light from one or more images to an eye of the user. Near-eye display100further includes image sensors120a,120b,120c, and120d. Each of image sensors120a,120b,120c, and120dmay include a pixel array configured to generate image data representing different fields of views along different directions. 
For example, sensors120aand120bmay be configured to provide image data representing two fields of view towards a direction A along the Z axis, whereas sensor120cmay be configured to provide image data representing a field of view towards a direction B along the X axis, and sensor120dmay be configured to provide image data representing a field of view towards a direction C along the X axis. In some examples, sensors120a-120dcan be configured as input devices to control or influence the display content of the near-eye display100, to provide an interactive VR/AR/MR experience to a user who wears near-eye display100. For example, sensors120a-120dcan generate physical image data of a physical environment in which the user is located. The physical image data can be provided to a location tracking system to track a location and/or a path of movement of the user in the physical environment. A system can then update the image data provided to display110based on, for example, the location and orientation of the user, to provide the interactive experience. In some examples, the location tracking system may operate a SLAM algorithm to track a set of objects in the physical environment and within a field of view of the user as the user moves within the physical environment. The location tracking system can construct and update a map of the physical environment based on the set of objects, and track the location of the user within the map. By providing image data corresponding to multiple fields of view, sensors120a-120dcan provide the location tracking system a more holistic view of the physical environment, which can lead to more objects to be included in the construction and updating of the map. With such an arrangement, the accuracy and robustness of tracking a location of the user within the physical environment can be improved. In some examples, near-eye display100may further include one or more active illuminators130to project light into the physical environment. The light projected can be associated with different frequency spectrums (e.g., visible light, infra-red light, ultra-violet light, etc.), and can serve various purposes. For example, illuminator130may project light in a dark environment (or in an environment with low intensity of infra-red light, ultra-violet light, etc.) to assist sensors120a-120din capturing images of different objects within the dark environment to, for example, enable location tracking of the user. Illuminator130may project certain markers onto the objects within the environment, to assist the location tracking system in identifying the objects for map construction/updating. In some examples, illuminator130may also enable stereoscopic imaging. For example, one or more of sensors120aor120bcan include both a first pixel array for visible light sensing and a second pixel array for infra-red (IR) light sensing. The first pixel array can be overlaid with a color filter (e.g., a Bayer filter), with each pixel of the first pixel array being configured to measure intensity of light associated with a particular color (e.g., one of red, green or blue colors). The second pixel array (for IR light sensing) can also be overlaid with a filter that allows only IR light through, with each pixel of the second pixel array being configured to measure intensity of IR light. The pixel arrays can generate an RGB image and an IR image of an object, with each pixel of the IR image being mapped to each pixel of the RGB image.
Illuminator130may project a set of IR markers on the object, the images of which can be captured by the IR pixel array. Based on a distribution of the IR markers of the object as shown in the image, the system can estimate a distance of different parts of the object from the IR pixel array, and generate a stereoscopic image of the object based on the distances. Based on the stereoscopic image of the object, the system can determine, for example, a relative position of the object with respect to the user, and can update the image data provided to display100based on the relative position information to provide the interactive experience. As discussed above, near-eye display100may be operated in environments associated with a very wide range of light intensity. For example, near-eye display100may be operated in an indoor environment or in an outdoor environment, and/or at different times of the day. Near-eye display100may also operate with or without active illuminator130being turned on. As a result, image sensors120a-120dmay need to have a wide dynamic range to be able to operate properly (e.g., to generate an output that correlates with the intensity of incident light) across a very wide range of light intensity associated with different operating environments for near-eye display100. FIG.1Bis a diagram of another example of near-eye display100.FIG.1Billustrates a side of near-eye display100that faces the eyeball(s)135of the user who wears near-eye display100. As shown inFIG.1B, near-eye display100may further include a plurality of illuminators140a,140b,140c,140d,140e, and140f. Near-eye display100further includes a plurality of image sensors150aand150b. Illuminators140a,140b, and140cmay emit lights of certain frequency range (e.g., NIR) towards direction D (which is opposite to direction A ofFIG.1A). The emitted light may be associated with a certain pattern, and can be reflected by the left eyeball of the user. Sensor150amay include a pixel array to receive the reflected light and generate an image of the reflected pattern. Similarly, illuminators140d,140e, and140fmay emit NIR lights carrying the pattern. The NIR lights can be reflected by the right eyeball of the user, and may be received by sensor150b. Sensor150bmay also include a pixel array to generate an image of the reflected pattern. Based on the images of the reflected pattern from sensors150aand150b, the system can determine a gaze point of the user, and update the image data provided to display100based on the determined gaze point to provide an interactive experience to the user. As discussed above, to avoid damaging the eyeballs of the user, illuminators140a,140b,140c,140d,140e, and140fare typically configured to output lights of very low intensities. In a case where image sensors150aand150bcomprise the same sensor devices as image sensors120a-120dofFIG.1A, the image sensors120a-120dmay need to be able to generate an output that correlates with the intensity of incident light when the intensity of the incident light is very low, which may further increase the dynamic range requirement of the image sensors. Moreover, the image sensors120a-120dmay need to be able to generate an output at a high speed to track the movements of the eyeballs. For example, a user's eyeball can perform a very rapid movement (e.g., a saccade movement) in which there can be a quick jump from one eyeball position to another. To track the rapid movement of the user's eyeball, image sensors120a-120dneed to generate images of the eyeball at high speed. 
For example, the rate at which the image sensors generate an image frame (the frame rate) needs to at least match the speed of movement of the eyeball. The high frame rate requires short total exposure period for all of the pixel cells involved in generating the image frame, as well as high speed for converting the sensor outputs into digital values for image generation. Moreover, as discussed above, the image sensors also need to be able to operate at an environment with low light intensity. FIG.2is an example of a cross section200of near-eye display100illustrated inFIG.1. Display110includes at least one waveguide display assembly210. An exit pupil230is a location where a single eyeball220of the user is positioned in an eyebox region when the user wears the near-eye display100. For purposes of illustration,FIG.2shows the cross section200associated eyeball220and a single waveguide display assembly210, but a second waveguide display is used for a second eye of a user. Waveguide display assembly210is configured to direct image light to an eyebox located at exit pupil230and to eyeball220. Waveguide display assembly210may be composed of one or more materials (e.g., plastic, glass, etc.) with one or more refractive indices. In some examples, near-eye display100includes one or more optical elements between waveguide display assembly210and eyeball220. In some examples, waveguide display assembly210includes a stack of one or more waveguide displays including, but not restricted to, a stacked waveguide display, a varifocal waveguide display, etc. The stacked waveguide display is a polychromatic display (e.g., a red-green-blue (RGB) display) created by stacking waveguide displays whose respective monochromatic sources are of different colors. The stacked waveguide display is also a polychromatic display that can be projected on multiple planes (e.g., multi-planar colored display). In some configurations, the stacked waveguide display is a monochromatic display that can be projected on multiple planes (e.g., multi-planar monochromatic display). The varifocal waveguide display is a display that can adjust a focal position of image light emitted from the waveguide display. In alternate examples, waveguide display assembly210may include the stacked waveguide display and the varifocal waveguide display. FIG.3illustrates an isometric view of an example of a waveguide display300. In some examples, waveguide display300is a component (e.g., waveguide display assembly210) of near-eye display100. In some examples, waveguide display300is part of some other near-eye display or other system that directs image light to a particular location. Waveguide display300includes a source assembly310, an output waveguide320, and a controller330. For purposes of illustration,FIG.3shows the waveguide display300associated with a single eyeball220, but in some examples, another waveguide display separate, or partially separate, from the waveguide display300provides image light to another eye of the user. Source assembly310generates image light355. Source assembly310generates and outputs image light355to a coupling element350located on a first side370-1of output waveguide320. Output waveguide320is an optical waveguide that outputs expanded image light340to an eyeball220of a user. Output waveguide320receives image light355at one or more coupling elements350located on the first side370-1and guides received input image light355to a directing element360. 
In some examples, coupling element350couples the image light355from source assembly310into output waveguide320. Coupling element350may be, e.g., a diffraction grating, a holographic grating, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors. Directing element360redirects the received input image light355to decoupling element365such that the received input image light355is decoupled out of output waveguide320via decoupling element365. Directing element360is part of, or affixed to, first side370-1of output waveguide320. Decoupling element365is part of, or affixed to, second side370-2of output waveguide320, such that directing element360is opposed to the decoupling element365. Directing element360and/or decoupling element365may be, e.g., a diffraction grating, a holographic grating, one or more cascaded reflectors, one or more prismatic surface elements, and/or an array of holographic reflectors. Second side370-2represents a plane along an x-dimension and a y-dimension. Output waveguide320may be composed of one or more materials that facilitate total internal reflection of image light355. Output waveguide320may be composed of e.g., silicon, plastic, glass, and/or polymers. Output waveguide320has a relatively small form factor. For example, output waveguide320may be approximately 50 mm wide along x-dimension, 30 mm long along y-dimension and 0.5-1 mm thick along a z-dimension. Controller330controls scanning operations of source assembly310. The controller330determines scanning instructions for the source assembly310. In some examples, the output waveguide320outputs expanded image light340to the user's eyeball220with a large field of view (FOV). For example, the expanded image light340is provided to the user's eyeball220with a diagonal FOV (in x and y) of 60 degrees and/or greater and/or 150 degrees and/or less. The output waveguide320is configured to provide an eyebox with a length of 20 mm or greater and/or equal to or less than 50 mm; and/or a width of 10 mm or greater and/or equal to or less than 50 mm. Moreover, controller330also controls image light355generated by source assembly310, based on image data provided by image sensor370. Image sensor370may be located on first side370-1and may include, for example, image sensors120a-120dofFIG.1Ato generate image data of a physical environment in front of the user (e.g., for location determination). Image sensor370may also be located on second side370-2and may include image sensors150aand150bofFIG.1Bto generate image data of eyeball220(e.g., for gaze point determination) of the user. Image sensor370may interface with a remote console that is not located within waveguide display300. Image sensor370may provide image data to the remote console, which may determine, for example, a location of the user, a gaze point of the user, etc., and determine the content of the images to be displayed to the user. The remote console can transmit instructions to controller330related to the determined content. Based on the instructions, controller330can control the generation and outputting of image light355by source assembly310. FIG.4illustrates an example of a cross section400of the waveguide display300. The cross section400includes source assembly310, output waveguide320, and image sensor370. In the example ofFIG.4, image sensor370may include a set of pixel cells402located on first side370-1to generate an image of the physical environment in front of the user. 
In some examples, there can be a mechanical shutter404interposed between the set of pixel cells402and the physical environment to control the exposure of the set of pixel cells402. In some examples, the mechanical shutter404can be replaced by an electronic shutter gate, as to be discussed below. Each of pixel cells402may correspond to one pixel of the image. Although not shown inFIG.4, it is understood that each of pixel cells402may also be overlaid with a filter to control the frequency range of the light to be sensed by the pixel cells. After receiving instructions from the remote console, mechanical shutter404can open and expose the set of pixel cells402in an exposure period. During the exposure period, image sensor370can obtain samples of lights incident on the set of pixel cells402, and generate image data based on an intensity distribution of the incident light samples detected by the set of pixel cells402. Image sensor370can then provide the image data to the remote console, which determines the display content, and provide the display content information to controller330. Controller330can then determine image light355based on the display content information. Source assembly310generates image light355in accordance with instructions from the controller330. Source assembly310includes a source410and an optics system415. Source410is a light source that generates coherent or partially coherent light. Source410may be, e.g., a laser diode, a vertical cavity surface emitting laser, and/or a light emitting diode. Optics system415includes one or more optical components that condition the light from source410. Conditioning light from source410may include, e.g., expanding, collimating, and/or adjusting orientation in accordance with instructions from controller330. The one or more optical components may include one or more lenses, liquid lenses, mirrors, apertures, and/or gratings. In some examples, optics system415includes a liquid lens with a plurality of electrodes that allows scanning of a beam of light with a threshold value of scanning angle to shift the beam of light to a region outside the liquid lens. Light emitted from the optics system415(and also source assembly310) is referred to as image light355. Output waveguide320receives image light355. Coupling element350couples image light355from source assembly310into output waveguide320. In examples where coupling element350is diffraction grating, a pitch of the diffraction grating is chosen such that total internal reflection occurs in output waveguide320, and image light355propagates internally in output waveguide320(e.g., by total internal reflection), toward decoupling element365. Directing element360redirects image light355toward decoupling element365for decoupling from output waveguide320. In examples where directing element360is a diffraction grating, the pitch of the diffraction grating is chosen to cause incident image light355to exit output waveguide320at angle(s) of inclination relative to a surface of decoupling element365. In some examples, directing element360and/or decoupling element365are structurally similar. Expanded image light340exiting output waveguide320is expanded along one or more dimensions (e.g., may be elongated along x-dimension). In some examples, waveguide display300includes a plurality of source assemblies310and a plurality of output waveguides320. Each of source assemblies310emits a monochromatic image light of a specific band of wavelength corresponding to a primary color (e.g., red, green, or blue). 
Each of output waveguides320may be stacked together with a distance of separation to output an expanded image light340that is multi-colored. FIG.5is a block diagram of an example of a system500including the near-eye display100. The system500comprises near-eye display100, an imaging device535, an input/output interface540, and image sensors120a-120dand150a-150bthat are each coupled to control circuitries510. System500can be configured as a head-mounted device, a wearable device, etc. Near-eye display100is a display that presents media to a user. Examples of media presented by the near-eye display100include one or more images, video, and/or audio. In some examples, audio is presented via an external device (e.g., speakers and/or headphones) that receives audio information from near-eye display100and/or control circuitries510and presents audio data based on the audio information to a user. In some examples, near-eye display100may also act as an AR eyewear glass. In some examples, near-eye display100augments views of a physical, real-world environment, with computer-generated elements (e.g., images, video, sound, etc.). Near-eye display100includes waveguide display assembly210, one or more position sensors525, and/or an inertial measurement unit (IMU)530. Waveguide display assembly210includes source assembly310, output waveguide320, and controller330. IMU530is an electronic device that generates fast calibration data indicating an estimated position of near-eye display100relative to an initial position of near-eye display100based on measurement signals received from one or more of position sensors525. Imaging device535may generate image data for various applications. For example, imaging device535may generate image data to provide slow calibration data in accordance with calibration parameters received from control circuitries510. Imaging device535may include, for example, image sensors120a-120dofFIG.1Afor generating image data of a physical environment in which the user is located, for performing location tracking of the user. Imaging device535may further include, for example, image sensors150a-150bofFIG.1Bfor generating image data for determining a gaze point of the user, to identify an object of interest of the user. The input/output interface540is a device that allows a user to send action requests to the control circuitries510. An action request is a request to perform a particular action. For example, an action request may be to start or end an application or to perform a particular action within the application. Control circuitries510provide media to near-eye display100for presentation to the user in accordance with information received from one or more of: imaging device535, near-eye display100, and input/output interface540. In some examples, control circuitries510can be housed within system500configured as a head-mounted device. In some examples, control circuitries510can be a standalone console device communicatively coupled with other components of system500. In the example shown inFIG.5, control circuitries510include an application store545, a tracking module550, and an engine555. The application store545stores one or more applications for execution by the control circuitries510. An application is a group of instructions, that, when executed by a processor, generates content for presentation to the user. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications. 
Tracking module550calibrates system500using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the near-eye display100. Tracking module550tracks movements of near-eye display100using slow calibration information from the imaging device535. Tracking module550also determines positions of a reference point of near-eye display100using position information from the fast calibration information. Engine555executes applications within system500and receives position information, acceleration information, velocity information, and/or predicted future positions of near-eye display100from tracking module550. In some examples, information received by engine555may be used for producing a signal (e.g., display instructions) to waveguide display assembly210that determines a type of content presented to the user. For example, to provide an interactive experience, engine555may determine the content to be presented to the user based on a location of the user (e.g., provided by tracking module550), or a gaze point of the user (e.g., based on image data provided by imaging device535), a distance between an object and user (e.g., based on image data provided by imaging device535). FIG.6illustrates an example of an image sensor600. Image sensor600can be part of near-eye display100, and can provide 2D and 3D image data to control circuitries510ofFIG.5to control the display content of near-eye display100. As shown inFIG.6, image sensor600may include an array of pixel cells602including pixel cell602a. Pixel cell602acan include a plurality of photodiodes612including, for example, photodiodes612a,612b,612c, and612d, one or more charge sensing units614, and one or more analog-to-digital converters616. The plurality of photodiodes612can convert different components of incident light to charge. For example, photodiode612a-612ccan correspond to different visible light channels, in which photodiode612acan convert a visible blue component (e.g., a wavelength range of 450-490 nanometers (nm)) to charge. Photodiode612bcan convert a visible green component (e.g., a wavelength range of 520-560 nm) to charge. Photodiode612ccan convert a visible red component (e.g., a wavelength range of 635-700 nm) to charge. Moreover, photodiode612dcan convert an infra-red component (e.g., 700-1000 nm) to charge. Each of the one or more charge sensing units614can include a charge storage device and a buffer to convert the charge generated by photodiodes612a-612dto voltages, which can be quantized by one or more ADCs616into digital values. The digital values generated from photodiodes612a-612ccan represent the different visible light components of a pixel, and each can be used for 2D sensing in a particular visible light channel. Moreover, the digital value generated from photodiode612dcan represent the infra-red light component of the same pixel and can be used for 3D sensing. AlthoughFIG.6shows that pixel cell602aincludes four photodiodes, it is understood that the pixel cell can include a different number of photodiodes (e.g., two, three, etc.). In some examples, image sensor600may also include an illuminator622, an optical filter624, an imaging module628, and a sensing controller630. Illuminator622may be an infra-red illuminator, such as a laser, a light emitting diode (LED), etc., that can project infra-red light for 3D sensing. The projected light may include, for example, structured light, light pulses, etc. 
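The per-photodiode channel assignment described above for pixel cell602acan be captured in a small lookup, shown below. The wavelength ranges are those given in the example above; the data structure and function are only an illustration, not part of the disclosed image sensor.

# Channel assignment for the example pixel cell described above: each
# photodiode converts one spectral component of the incident light.
PHOTODIODE_CHANNELS = {
    "612a": {"channel": "blue",      "wavelength_nm": (450, 490),  "use": "2D"},
    "612b": {"channel": "green",     "wavelength_nm": (520, 560),  "use": "2D"},
    "612c": {"channel": "red",       "wavelength_nm": (635, 700),  "use": "2D"},
    "612d": {"channel": "infra-red", "wavelength_nm": (700, 1000), "use": "3D"},
}

def channels_for_use(use: str) -> list:
    # e.g. the three visible channels used for 2D sensing of a pixel
    return [info["channel"] for info in PHOTODIODE_CHANNELS.values() if info["use"] == use]

print(channels_for_use("2D"))  # ['blue', 'green', 'red']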
Optical filter624may include an array of filter elements overlaid on the plurality of photodiodes612a-612dof each pixel cell including pixel cell602a. Each filter element can set a wavelength range of incident light received by each photodiode of pixel cell602a. For example, a filter element over photodiode612amay transmit the visible blue light component while blocking other components, a filter element over photodiode612bmay transmit the visible green light component, a filter element over photodiode612cmay transmit the visible red light component, whereas a filter element over photodiode612dmay transmit the infra-red light component. Image sensor600further includes an imaging module628. Imaging module628may further include a 2D imaging module632to perform 2D imaging operations and a 3D imaging module634to perform 3D imaging operations. The operations can be based on digital values provided by ADCs616. For example, based on the digital values from each of photodiodes612a-612c, 2D imaging module632can generate an array of pixel values representing an intensity of an incident light component for each visible color channel, and generate an image frame for each visible color channel. Moreover, 3D imaging module634can generate a 3D image based on the digital values from photodiode612d. In some examples, based on the digital values, 3D imaging module634can detect a pattern of structured light reflected by a surface of an object, and compare the detected pattern with the pattern of structured light projected by illuminator622to determine the depths of different points of the surface with respect to the pixel cells array. For detection of the pattern of reflected light, 3D imaging module634can generate pixel values based on intensities of infra-red light received at the pixel cells. As another example, 3D imaging module634can generate pixel values based on time-of-flight of the infra-red light transmitted by illuminator622and reflected by the object. Image sensor600further includes a sensing controller640to control different components of image sensor600to perform 2D and 3D imaging of an object. Reference is now made toFIG.7A-FIG.7C, which illustrate examples of operations of image sensor600for 2D and 3D imaging.FIG.7Aillustrates an example of operations for 2D imaging. For 2D imaging, pixel cells array602can detect visible light in the environment including visible light reflected off an object. For example, referring toFIG.7A, visible light source700(e.g., a light bulb, the sun, or other sources of ambient visible light) can project visible light702onto an object704. Visible light706can be reflected off a spot708of object704. Visible light706can also include the ambient infra-red light component. Visible light706can be filtered by optical filter array624to pass different components of visible light706of wavelength ranges w0, w1, w2, and w3to, respectively, photodiodes612a,612b,612c, and612dof pixel cell602a. Wavelength ranges w0, w1, w2, and w3can correspond to, respectively, blue, green, red, and infra-red. As shown inFIG.7A, as the infra-red illuminator622is not turned on, the intensity of the infra-red component (w3) is contributed by the ambient infra-red light and can be very low. Moreover, different visible components of visible light706can also have different intensities. Charge sensing units614can convert the charge generated by the photodiodes to voltages, which can be quantized by ADCs616into digital values representing the red, blue, and green components of a pixel representing spot708. 
Referring toFIG.7C, after the digital values are generated, sensing controller640can control 2D imaging module632to generate, based on the digital values, sets of images including a set of images710, which includes a red image frame710a, a blue image frame710b, and a green image frame710ceach representing one of the red, blue, or green color images of a scene captured with the same exposure period714. Each pixel from the red image (e.g., pixel712a), from the blue image (e.g., pixel712b), and from the green image (e.g., pixel712c) can represent visible components of light from the same spot (e.g., spot708) of a scene. A different set of images720can be generated by 2D imaging module632in a subsequent exposure period724. Each of red image710a, blue image710b, and green image710ccan represent the scene in a specific color channel and can be provided to an application to, for example, extract image features from the specific color channel. As each image represents the same scene and each corresponding pixel of the images represents light from the same spot of the scene, the correspondence of images between different color channels can be improved. Furthermore, image sensor600can also perform 3D imaging of object704. Referring toFIG.7B, sensing controller640can control illuminator622to project infra-red light732, which can include a light pulse, structured light, etc., onto object704. Infra-red light732can have a wavelength range of 700 nanometers (nm) to 1 millimeter (mm). Infra-red light734can reflect off spot708of object704and can propagate towards pixel cells array602and pass through optical filter624, which can provide the infra-red component (of wavelength range w3) to photodiode612dto convert to charge. Charge sensing units614can convert the charge to a voltage, which can be quantized by ADCs616into digital values. Referring toFIG.7C, after the digital values are generated, sensing controller640can control 3D imaging module634to generate, based on the digital values, an infra-red image710dof the scene as part of images710captured within exposure period714. As infra-red image710dcan represent the same scene in the infra-red channel and a pixel of infra-red image710d(e.g., pixel712d) represents light from the same spot of the scene as other corresponding pixels (pixels712a-712c) in other images within images710, the correspondence between 2D and 3D imaging can be improved as well. 
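The improved correspondence described above can be illustrated with a short sketch: because every channel frame is captured over the same exposure period by photodiodes of the same pixel cells, the frames can be stacked so that index (y, x) refers to the same spot of the scene in every channel. The shapes and names below are illustrative assumptions, not an interface of the described system.

```python
# Illustrative sketch only: packing per-channel frames from the same exposure
# period into one array so that (y, x) refers to the same scene spot in every
# channel. Frame shapes and names are assumptions.
import numpy as np

def stack_channels(red, blue, green, infra_red):
    """Stack co-registered 2D frames into an (H, W, 4) array."""
    frames = [np.asarray(f) for f in (red, blue, green, infra_red)]
    assert all(f.shape == frames[0].shape for f in frames), "frames must be co-registered"
    return np.stack(frames, axis=-1)

h, w = 4, 6
rgb_ir = stack_channels(np.zeros((h, w)), np.zeros((h, w)), np.zeros((h, w)), np.zeros((h, w)))
print(rgb_ir.shape)  # (4, 6, 4): one red, blue, green, and infra-red value per spot
```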
Light822includes the remaining components of light812(e.g., red and infra-red) and can propagate to photodiode612c, which can absorb and convert the red component. The remaining infra-red component832can propagate to photodiode612dto be converted to charge. Each of the photodiodes612a,612b,612c, and612dcan be in a separate semiconductor substrate, which can be stacked to form image sensor600. For example, photodiode612acan be in a semiconductor substrate840, photodiode612bcan be in a semiconductor substrate842, photodiode612ccan be in a semiconductor substrate844, whereas photodiode612dcan be in a semiconductor substrate846. Each of substrates840-846can include a charge sensing unit, such as charge sensing units614. Substrates840-846can form a sensor layer. Each semiconductor substrate can include other photodiodes of other pixel cells, such as pixel cell602b, to receive light from spot804b. Image sensor600can include another semiconductor substrate848which can include pixel cell processing circuits849which can include, for example, ADCs616, imaging module628, sensing controller640, etc. In some examples, charge sensing units614can be in semiconductor substrate848. Semiconductor substrate848can form an application specific integrated circuit (ASIC) layer. Each semiconductor substrate can be connected to a metal interconnect, such as metal interconnects850,852,854, and856to transfer the charge generated at each photodiode to processing circuit849. FIG.8B-FIG.8Dillustrate other example arrangements of photodiodes612within a pixel cell. As shown inFIG.8B-FIG.8D, the plurality of photodiodes612can be arranged laterally parallel with light receiving surface800. The top graph ofFIG.8Billustrates a side view of an example of pixel cell602a, whereas the bottom graph ofFIG.8Billustrates a top view of pixel array602including pixel cell602a. As shown inFIG.8B, with light receiving surface800being parallel with the x and y axes, photodiodes612a,612b,612c, and612dcan be arranged adjacent to each other also along the x and y axes in semiconductor substrate840. Pixel cell602afurther includes an optical filter array860overlaid on the photodiodes. Optical filter array860can be part of optical filter624. Optical filter array860can include a filter element overlaid on each of photodiodes612a,612b,612c, and612dto set a wavelength range of the incident light component received by the respective photodiode. For example, filter element860ais overlaid on photodiode612aand can allow only visible blue light to enter photodiode612a. Moreover, filter element860bis overlaid on photodiode612band can allow only visible green light to enter photodiode612b. Further, filter element860cis overlaid on photodiode612cand can allow only visible red light to enter photodiode612c. Filter element860dis overlaid on photodiode612dand can allow only infra-red light to enter photodiode612d. Pixel cell602afurther includes one or more microlenses862which can project light864from a spot of a scene (e.g., spot804a) via optical filter array860to different lateral locations of light receiving surface800, which allows each photodiode to become a sub-pixel of pixel cell602aand to receive components of light from the same spot corresponding to a pixel. Pixel cell602acan also include semiconductor substrate848which can include circuit849(e.g., charge sensing units614, ADCs616, etc.) to generate digital values from the charge generated by the photodiodes. Semiconductor substrates840and848can form a stack and can be connected with interconnect856. 
InFIG.8B, semiconductor substrate840can form a sensor layer, whereas semiconductor substrate848can form an ASIC layer. The arrangements ofFIG.8B, in which the photodiodes are arranged laterally and an optical filter array is used to control the light components received by the photodiodes, can offer numerous advantages. For example, the number of stacks and the number of semiconductor substrates can be reduced, which not only reduces the vertical height but also reduces the number of interconnects among the semiconductor substrates. Moreover, relying on filter elements rather than the propagation distance of light to set the wavelength ranges of the components absorbed by each photodiode can offer flexibility in selecting the wavelength ranges. As shown in the top graph ofFIG.8C, pixel cells array602can include different optical filter arrays860for different pixel cells. For example, each pixel cell of pixel cells array602can have an optical filter array that provides a monochrome channel of a wavelength range of 380-740 nm (labelled with “M”) for photodiodes612aand612b, and an infra-red channel of a wavelength range of 700-1000 nm (labelled with “NIR”) for photodiode612d. But the optical filter arrays may also provide a different visible color channel for the different pixel cells. For example, the optical filter arrays860for pixel cells602a,602b,602c, and602dmay provide, respectively, a visible green channel (labelled with “G”), a visible red channel (labelled with “R”), a visible blue channel (labelled with “B”), and a visible green channel for photodiode612cof the pixel cells. As another example, as shown in the bottom graph ofFIG.8C, each optical filter array860can provide a monochrome and infra-red channel (labelled “M+NIR”) which spans a wavelength range of 380-1000 nm for photodiode612bof each pixel cell. FIG.8Dillustrates examples of optical filter array860to provide the example channels shown inFIG.8C. As shown inFIG.8D, optical filter array860can include a stack of optical filters to select a wavelength range of light received by each photodiode within a pixel cell array. For example, referring to the top graph ofFIG.8D, optical filter860acan include an all-pass element870(e.g., a transparent glass that passes both visible light and infra-red light) and an infra-red blocking element872forming a stack to provide a monochrome channel for photodiode612a. Optical filter860bcan also include an all-pass element874and an infra-red blocking element876to also provide a monochrome channel for photodiode612b. Further, optical filter860ccan include a green-pass element876which passes green visible light (but rejects other visible light components), and an infra-red blocking element878, to provide a green channel for photodiode612c. Lastly, optical filter860dcan include an all-pass element880and a visible light blocking filter882(which can block out visible light but allows infra-red light to go through) to provide an infra-red channel for photodiode612d. In another example, as shown in the bottom graph ofFIG.8D, optical filter860bcan include only all-pass element872to provide a monochrome and infra-red channel for photodiode612b. FIG.8Eillustrates another example optical configuration of photodiodes612. As shown inFIG.8E, instead of overlaying a microlens862over a plurality of photodiodes, as shown inFIG.8B, a plurality of microlenses892can be overlaid over the plurality of photodiodes612a-612d, which are arranged in a 2×2 format. 
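The channels produced by a filter stack can be modeled as the overlap of the passbands of the stacked elements. The sketch below assumes that simple model (each element passes one contiguous wavelength range and the stack passes only their intersection); the range values in nanometers are taken from the example channels above and are otherwise illustrative.

```python
# Sketch (assumed model): a stack of filter elements passes only the wavelengths
# inside every element's passband, so the stack's channel is the intersection
# of the individual passbands. Ranges are in nanometers and are illustrative.
ALL_PASS = (380, 1000)        # passes visible and infra-red light
IR_BLOCK = (380, 700)         # blocks infra-red light
GREEN_PASS = (520, 560)       # passes only visible green light
VISIBLE_BLOCK = (700, 1000)   # blocks visible light, passes infra-red

def stack_passband(*elements):
    """Intersect the passbands of stacked filter elements (None if empty)."""
    low = max(e[0] for e in elements)
    high = min(e[1] for e in elements)
    return (low, high) if low < high else None

print(stack_passband(ALL_PASS, IR_BLOCK))       # monochrome channel: (380, 700)
print(stack_passband(GREEN_PASS, IR_BLOCK))     # green channel: (520, 560)
print(stack_passband(ALL_PASS, VISIBLE_BLOCK))  # infra-red channel: (700, 1000)
```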
For example, microlens892acan be overlaid over photodiode612a, microlens892bcan be overlaid over photodiode612b, microlens892ccan be overlaid over photodiode612c, whereas microlens892dcan be overlaid over photodiode612d. With such arrangements, each photodiode can correspond to a pixel, which can shrink the required footprint of the pixel cell array to achieve a target resolution. Different patterns of filter arrays can be inserted between the plurality of microlenses892and the plurality of photodiodes612. For example, as shown inFIG.8E, a 2×2 color filter pattern comprising red (R), green (G), and blue (B) filters can be inserted between the microlenses and the photodiodes. Moreover, an all-pass filter pattern can also be inserted between the microlenses and the photodiodes so that each photodiode detects a monochrome channel. Also, an infra-red filter pattern can also be inserted between the microlenses and the photodiodes so that each photodiode detects infra-red light. Reference is now made toFIG.9, which illustrates additional components of pixel cell602aincluding an example of charge sensing unit614and ADC616. As shown inFIG.9, pixel cell602acan include a photodiode PD (e.g., photodiode612a), a charge draining transistor M0, a charge transfer transistor M1, a charge sensing unit614comprising a charge storage device902and a switchable buffer904, and an ADC616comprising a CC capacitor, a comparator906, and output logic circuits908. The output of comparator906is coupled, via output logic circuits908, with a memory bank912and a counter914which can be internal to or external to pixel cell602a. Pixel cell602afurther includes a controller920to control the transistors, charge sensing unit614, as well as ADC616. As to be described below, controller920can set an exposure period to accumulate charge based on incident light, and can control charge sensing unit614and ADC616to perform multiple quantization operations associated with different light intensity ranges to generate a digital representation of the intensity of the incident light. Controller920can be internal to pixel cell602aor part of sensing controller640. Each transistor in pixel cell602acan include, for example, a metal-oxide-semiconductor field-effect transistor (MOSFET), a bipolar junction transistor (BJT), etc. Photodiode PD, charge draining transistor M0, charge transfer transistor M1, and charge sensing unit614can be in a sensor layer (e.g., substrates840-846ofFIG.8A, substrate840ofFIG.8B), whereas ADC616, memory bank912, and counter914can be in an ASIC layer (e.g., substrate848ofFIG.8AandFIG.8B), with the two substrates forming a stack. Specifically, charge transfer transistor M1can be controlled by a TG signal provided by controller920to transfer some of the charge to charge storage device902. In one quantization operation, charge transfer transistor M1can be biased at a partially-on state to set a quantum well capacity of photodiode PD, which also sets a quantity of residual charge stored at photodiode PD. After photodiode PD is saturated by the residual charge, overflow charge can flow through charge transfer transistor M1to charge storage device902. In another quantization operation, charge transfer transistor M1can be fully turned on to transfer the residual charge from photodiode PD to charge storage device902for measurement. Moreover, charge draining transistor M0is coupled between photodiode PD and a charge sink. 
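The split between residual charge and overflow charge described above can be summarized with a minimal sketch, under the simplifying assumption that any charge generated beyond the photodiode's quantum well capacity spills over through the partially-on charge transfer transistor. The function name and units are illustrative.

```python
# Minimal sketch of the charge split described above: charge up to the quantum
# well (full well) capacity stays in the photodiode as residual charge, and any
# excess flows to the charge storage device as overflow charge. Units arbitrary.
def split_charge(generated_charge: float, full_well_capacity: float):
    """Return (residual_charge, overflow_charge) for a photodiode."""
    residual = min(generated_charge, full_well_capacity)
    overflow = max(generated_charge - full_well_capacity, 0.0)
    return residual, overflow

print(split_charge(80.0, 100.0))   # dim light: (80.0, 0.0) -- no overflow
print(split_charge(250.0, 100.0))  # bright light: (100.0, 150.0)
```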
Charge draining transistor M0can be controlled by an anti-blooming (AB) signal provided by controller920to start an exposure period, in which photodiode PD can generate and accumulate charge in response to incident light. Charge draining transistor M0can also be controlled to provide an anti-blooming function to drain away additional charge generated by photodiode PD to the charge sink after charge storage device902saturates, to prevent the additional charge from leaking into neighboring pixel cells. Charge storage device902has a configurable capacity and can convert the charge transferred from transistor M1to a voltage at the OF node. Charge storage device902includes a CFDcapacitor (e.g., a floating drain) and a CEXTcapacitor (e.g., a MOS capacitor, a metal capacitor, etc.) connected by an M6transistor. The M6transistor can be enabled by an LG signal to expand the capacity of charge storage device902by connecting CFDand CEXTcapacitors in parallel, or to reduce the capacity by disconnecting the capacitors from each other. The capacity of charge storage device902can be reduced for measurement of residual charge to increase the charge-to-voltage gain and to reduce the quantization error. Moreover, the capacity of charge storage device902can also be increased for measurement of overflow charge to reduce the likelihood of saturation and to reduce non-linearity. As to be described below, the capacity of charge storage device902can be adjusted for measurement of different light intensity ranges. Charge storage device902is also coupled with a reset transistor M2which can be controlled by a reset signal RST, provided by controller920, to reset CFDand CEXTcapacitors between different quantization operations. In some examples, with transistor M1fully enabled, reset signal RST can also be used to control the start and end of the exposure period in which PD generates and accumulates charge in response to light. In such examples, charge draining transistor M0can be omitted. Switchable buffer904can include a transistor M3configured as a source follower to buffer the voltage at the OF node to improve its driving strength. The buffered voltage can be at the input node PIXEL_OUT of ADC616. The M4transistor provides a current source for switchable buffer904and can be biased by a VB signal. Switchable buffer904also includes a transistor M5which can be enabled or disabled by a SEL signal. When transistor M5is disabled, source follower M3can be disconnected from the PIXEL_OUT node. As to be described below, pixel cell602amay include multiple charge sensing units614each including a switchable buffer904, and one of the charge sensing units can be coupled with PIXEL_OUT (and ADC616) at one time based on the SEL signal. As described above, charge generated by photodiode PD within an exposure period can be temporarily stored in charge storage device902and converted to a voltage. The voltage can be quantized to represent an intensity of the incident light based on a pre-determined relationship between the charge and the incident light intensity. Reference is now made toFIG.10, which illustrates a quantity of charge accumulated with respect to time for different light intensity ranges. The total quantity of charge accumulated at a particular time point can reflect the intensity of light incident upon photodiode PD ofFIG.6within an exposure period. The quantity can be measured when the exposure period ends. 
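Before turning to the charge-versus-time behavior ofFIG.10, the configurable conversion gain described above can be illustrated numerically: the voltage developed at the OF node is the stored charge divided by the selected capacitance, so disconnecting CEXT raises the voltage per unit of charge while connecting CFD and CEXT in parallel raises the capacity. The capacitance values below are assumptions chosen only for illustration.

```python
# Sketch of the configurable conversion gain: V = Q / C. Disconnecting CEXT
# (high-gain mode) raises the voltage developed per unit of charge; connecting
# CFD and CEXT in parallel (low-gain mode) raises the charge capacity.
C_FD = 5e-15    # farads (assumed)
C_EXT = 20e-15  # farads (assumed)

def of_node_voltage(charge_coulombs: float, lg_enabled: bool) -> float:
    """Voltage swing at the OF node for a given stored charge."""
    capacitance = C_FD + C_EXT if lg_enabled else C_FD
    return charge_coulombs / capacitance

q = 1e-15  # 1 fC of transferred charge
print(of_node_voltage(q, lg_enabled=True))   # low gain: 0.04 V
print(of_node_voltage(q, lg_enabled=False))  # high gain: 0.2 V
```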
A threshold1002and a threshold1004can be defined based on threshold quantities of charge to define a low light intensity range1006, a medium light intensity range1008, and a high light intensity range1010for the intensity of the incident light. For example, if the total accumulated charge is below threshold1002(e.g., Q1), the incident light intensity is within low light intensity range1006. If the total accumulated charge is between threshold1004and threshold1002(e.g., Q2), the incident light intensity is within medium light intensity range1008. If the total accumulated charge is above threshold1004, the incident light intensity is within high light intensity range1010. The quantity of the accumulated charge, for low and medium light intensity ranges, can correlate with the intensity of the incident light, if the photodiode does not saturate within the entire low light intensity range1006and the measurement capacitor does not saturate within the entire medium light intensity range1008. The definitions of low light intensity range1006and medium light intensity range1008, as well as thresholds1002and1004, can be based on the full well capacity of photodiode PD and the capacity of charge storage device902. For example, low light intensity range1006can be defined such that the total quantity of residual charge stored in photodiode PD, at the end of the exposure period, is below or equal to the storage capacity of the photodiode, and threshold1002can be based on the full well capacity of photodiode PD. Moreover, medium light intensity range1008can be defined such that the total quantity of charge stored in charge storage device902, at the end of the exposure period, is below or equal to the storage capacity of the measurement capacitor, and threshold1004can be based on the storage capacity of charge storage device902. Typically, threshold1004can be based on a scaled storage capacity of charge storage device902to ensure that when the quantity of charge stored in charge storage device902is measured for intensity determination, the measurement capacitor does not saturate, and the measured quantity also relates to the incident light intensity. As to be described below, thresholds1002and1004can be used to detect whether photodiode PD and charge storage device902saturate, which can determine the intensity range of the incident light. In addition, in a case where the incident light intensity is within high light intensity range1010, the total overflow charge accumulated at charge storage device902may exceed threshold1004before the exposure period ends. As additional charge is accumulated, charge storage device902may reach full capacity before the end of the exposure period, and charge leakage may occur. To avoid measurement error caused by charge storage device902reaching full capacity, a time-to-saturation measurement can be performed to measure the time duration it takes for the total overflow charge accumulated at charge storage device902to reach threshold1004. A rate of charge accumulation at charge storage device902can be determined based on a ratio between threshold1004and the time-to-saturation, and a hypothetical quantity of charge (Q3) that could have been accumulated at charge storage device902at the end of the exposure period (if the capacitor had limitless capacity) can be determined by extrapolation according to the rate of charge accumulation. 
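The three-range selection and the time-to-saturation extrapolation just described can be summarized in a short sketch. This is an illustrative model only, with assumed names and arbitrary units; it is not the circuit-level behavior described below.

```python
# Sketch of the three-range measurement policy: classify the intensity range
# from the thresholds, and for the high range extrapolate the hypothetical
# charge Q3 from the time-to-saturation. Names and units are assumptions.
def measure_intensity(residual_q, overflow_q, t_saturation, t_exposure,
                      threshold_1002, threshold_1004):
    total = residual_q + overflow_q
    if t_saturation is not None and t_saturation < t_exposure:
        # High range: the charge storage device saturated before the exposure
        # ended; extrapolate the charge that would have accumulated (Q3).
        rate = threshold_1004 / t_saturation
        return "high", rate * t_exposure
    if total > threshold_1002:
        # Medium range: the photodiode saturated; the total accumulated charge
        # (measured via the overflow charge) still correlates with intensity.
        return "medium", total
    # Low range: measure the residual charge left in the photodiode.
    return "low", residual_q

print(measure_intensity(100, 0, None, 1.0, 100, 1000))    # ('low', 100)
print(measure_intensity(100, 400, None, 1.0, 100, 1000))  # ('medium', 500)
print(measure_intensity(100, 900, 0.5, 1.0, 100, 1000))   # ('high', 2000.0)
```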
The hypothetical quantity of charge (Q3) can provide a reasonably accurate representation of the incident light intensity within high light intensity range1010. Referring back toFIG.9, to measure high light intensity range1010and medium light intensity range1008, charge transfer transistor M1can be biased by TG signal in a partially turned-on state. For example, the gate voltage of charge transfer transistor M1(TG) can be set based on a target voltage developed at photodiode PD corresponding to the full well capacity of the photodiode. With such arrangements, only overflow charge (e.g., charge generated by the photodiode after the photodiode saturates) will transfer through charge transfer transistor M1to reach charge storage device902, to measure time-to-saturation (for high light intensity range1010) and/or the quantity of charge stored in charge storage device902(for medium light intensity range1008). For measurement of medium and high light intensity ranges, the capacitance of charge storage device902(by connecting CEXTand CFD) can also be maximized to increase threshold1004. Moreover, to measure low light intensity range1006, charge transfer transistor M1can be controlled in a fully turned-on state to transfer the residual charge stored in photodiode PD to charge storage device902. The transfer can occur after the quantization operation of the overflow charge stored at charge storage device902completes and after charge storage device902is reset. Moreover, the capacitance of charge storage device902can be reduced. As described above, the reduction in the capacitance of charge storage device902can increase the charge-to-voltage conversion ratio at charge storage device902, such that a higher voltage can be developed for a certain quantity of stored charge. The higher charge-to-voltage conversion ratio can reduce the effect of measurement errors (e.g., quantization error, comparator offset, etc.) introduced by subsequent quantization operation on the accuracy of low light intensity determination. The measurement error can set a limit on a minimum voltage difference that can be detected and/or differentiated by the quantization operation. By increasing the charge-to-voltage conversion ratio, the quantity of charge corresponding to the minimum voltage difference can be reduced, which in turn reduces the lower limit of a measurable light intensity by pixel cell602aand extends the dynamic range. The charge (residual charge and/or overflow charge) accumulated at charge storage device902can develop an analog voltage at the OF node, which can be buffered by switchable buffer904at PIXEL_OUT and quantized by ADC616. As shown inFIG.9, ADC616includes a comparator906which can be reset by a transistor M8, and output logic circuits908. ADC616is also coupled with memory bank912and counter914. Counter914can generate a set of count values based on a free-running clock signal, whereas memory bank912can be controlled, by comparator906via output logic circuits908, to store a count value (e.g., the latest count value) generated by counter914. In some examples, memory bank912can include an array of latch devices to store multiple bits as a pixel value. The stored count value can be output via pixel output buses816. Comparator906can compare an analog voltage COMP_IN, which is derived from PIXEL_OUT by the CC capacitor, against a threshold VREF, and generate a decision VOUT based on the comparison result. 
The CC capacitor can be used in a noise/offset compensation scheme to store the reset noise and comparator offset information in a VCC voltage, which can be added to the PIXEL_OUT voltage to generate the COMP_IN voltage, to cancel the reset noise component in the PIXEL_OUT voltage. The offset component remains in the COMP_IN voltage and can be cancelled out by the offset of comparator906when comparator906compares the COMP_IN voltage against threshold VREF to generate the decision VOUT. Comparator906can generate a logical one for VOUT if the COMP_IN voltage equals or exceeds VREF. Comparator906can also generate a logical zero for VOUT if the COMP_IN voltage falls below VREF. VOUT can control a latch signal which controls memory bank912to store a count value from counter914. FIG.11Aillustrates an example of time-to-saturation measurement by ADC616. To perform the time-to-saturation measurement, a threshold generator (which can be external to pixel cell602a) can generate a fixed VREF. The fixed VREF can be set at a voltage corresponding to a charge quantity threshold for saturation of charge storage device902(e.g., threshold1004ofFIG.10). Counter914can start counting right after the exposure period starts (e.g., right after charge draining transistor M0is disabled). As the COMP_IN voltage ramps down (or up depending on the implementation) due to accumulation of overflow charge at charge storage device902, the clock signal keeps toggling to update the count value at counter914. The COMP_IN voltage may reach the fixed VREF threshold at a certain time point, which causes VOUT to flip from low to high. The change of VOUT may stop the counting of counter914, and the count value at counter914may represent the time-to-saturation. FIG.11Billustrates an example of measurement of a quantity of charge stored at charge storage device902. After measurement starts, the threshold generator can generate a ramping VREF, which can either ramp up (in the example ofFIG.11B) or ramp down depending on implementation. The rate of ramping can be based on the frequency of the clock signal supplied to counter914. In a case where overflow charge is measured, the voltage range of the ramping VREF can be between threshold1004(charge quantity threshold for saturation of charge storage device902) and threshold1002(charge quantity threshold for saturation of photodiode PD), which can define the medium light intensity range. In a case where residual charge is measured, the voltage range of the ramping VREF can be based on threshold1002and scaled by the reduced capacity of charge storage device902for residual charge measurement. In the example ofFIG.11B, the quantization process can be performed with uniform quantization steps, with VREF increasing (or decreasing) by the same amount for each clock cycle. The amount of increase (or decrease) of VREF corresponds to a quantization step. When VREF reaches within one quantization step of the COMP_IN voltage, VOUT of comparator906flips, which can stop the counting of counter914, and the count value can correspond to a total number of quantization steps accumulated to match, within one quantization step, the COMP_IN voltage. The count value can become a digital representation of the quantity of charge stored at charge storage device902, as well as the digital representation of the incident light intensity. 
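The ramp-based quantization just described can be sketched as follows: the counter advances once per clock cycle while VREF moves by one quantization step, and the count is latched when the ramp reaches the input voltage. This is a behavioral sketch only, with assumed names and a ramp that increases (the direction can be reversed in an implementation).

```python
# Behavioral sketch of uniform single-slope quantization: the latched count is
# the number of quantization steps needed for the ramping VREF to reach COMP_IN.
def ramp_quantize(comp_in: float, vref_start: float, step: float, max_count: int) -> int:
    """Return the count value latched when the ramping VREF reaches COMP_IN."""
    for count in range(max_count):
        vref = vref_start + count * step  # VREF after `count` clock cycles
        if vref >= comp_in:               # comparator output VOUT flips here
            return count                  # the memory bank stores this count
    return max_count                      # ramp ended without crossing

# A 1.0 V input quantized with 10 mV steps starting from 0 V yields code 100.
print(ramp_quantize(1.0, 0.0, 0.01, 255))
```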
As discussed above, ADC616can introduce quantization errors when there is a mismatch between a quantity of charge represented by the quantity level output by ADC616(e.g., represented by the total number of quantization steps) and the actual input quantity of charge that is mapped to the quantity level by ADC616. The quantization error can be reduced by using a smaller quantization step size. In the example ofFIG.11B, the quantization error can be reduced by reducing the amount of increase (or decrease) in VREF per clock cycle. Although quantization error can be reduced by using smaller quantization step sizes, area and performance speed may limit how far the quantization step can be reduced. With a smaller quantization step size, the total number of quantization steps needed to represent a particular range of charge quantities (and light intensity) may increase. A larger number of data bits may be needed to represent the increased number of quantization steps (e.g., 8 bits to represent 255 steps, 7 bits to represent 127 steps, etc.). The larger number of data bits may require additional buses to be added to pixel output buses816, which may not be feasible if pixel cell602ais used on a head-mounted device or other wearable devices with very limited spaces. Moreover, with a larger number of quantization steps, ADC616may need to cycle through more quantization steps before finding the quantity level that matches (within one quantization step), which leads to increased processing power consumption and time, and a reduced rate of generating image data. The reduced rate may not be acceptable for some applications that require a high frame rate (e.g., an application that tracks the movement of the eyeball). One way to reduce quantization error is by employing a non-uniform quantization scheme, in which the quantization steps are not uniform across the input range.FIG.11Cillustrates an example of a mapping between the ADC codes (the output of the quantization process) and the input charge quantity level for a non-uniform quantization process and a uniform quantization process. The dotted line illustrates the mapping for the non-uniform quantization process, whereas the solid line illustrates the mapping for the uniform quantization process. For the uniform quantization process, the quantization step size (denoted by Δ1) is identical for the entire range of input charge quantity. In contrast, for the non-uniform quantization process, the quantization step sizes are different depending on the input charge quantity. For example, the quantization step size for a low input charge quantity (denoted by ΔS) is smaller than the quantization step size for a large input charge quantity (denoted by ΔL). Moreover, for the same low input charge quantity, the quantization step size for the non-uniform quantization process (ΔS) can be made smaller than the quantization step size for the uniform quantization process (Δ1). One advantage of employing a non-uniform quantization scheme is that the quantization steps for quantizing low input charge quantities can be reduced, which in turn reduces the quantization errors for quantizing the low input charge quantities, and the minimum input charge quantities that can be differentiated by ADC616can be reduced. Therefore, the reduced quantization errors can push down the lower limit of the measurable light intensity of the image sensor, and the dynamic range can be increased. 
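A non-uniform ramp of this kind can be sketched by letting the VREF increment grow with the count value, so that low charge quantities (small ΔS) are resolved finely and high charge quantities (large ΔL) coarsely. The geometric growth profile below is purely an assumption chosen to illustrate the idea; the document does not prescribe any particular step-size profile.

```python
# Sketch of a non-uniform quantization ramp: the VREF increment per clock cycle
# grows with the count, giving fine steps for low charge quantities and coarse
# steps for high charge quantities. The geometric profile is an assumption.
def nonuniform_ramp(vref_start, first_step, growth, num_steps):
    """Return the VREF level at each count value."""
    levels, vref, step = [], vref_start, first_step
    for _ in range(num_steps):
        levels.append(vref)
        vref += step
        step *= growth   # each later step is larger than the previous one
    return levels

levels = nonuniform_ramp(vref_start=0.0, first_step=0.001, growth=1.02, num_steps=256)
print(levels[1] - levels[0])      # fine step near the low end of the input range
print(levels[255] - levels[254])  # much coarser step near the high end
```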
Moreover, although the quantization errors are increased for the high input charge quantities, the quantization errors may remain small compared with the high input charge quantities. Therefore, the overall quantization errors introduced to the measurement of the charge can be reduced. On the other hand, the total number of quantization steps covering the entire range of input charge quantities may remain the same (or even be reduced), and the aforementioned potential problems associated with increasing the number of quantization steps (e.g., increase in area, reduction in processing speed, etc.) can be avoided. FIG.11Dillustrates an example of quantizing an analog voltage by ADC616using a non-uniform quantization process. Compared withFIG.11B(which employs a uniform quantization process), VREF increases in a non-linear fashion with each clock cycle, with a shallower slope initially and a steeper slope at a later time. The differences in the slopes are attributed to the uneven quantization step sizes. For lower counter count values (which correspond to a lower input quantity range), the quantization steps are made smaller, hence VREF increases at a slower rate. For higher counter count values (which correspond to a higher input quantity range), the quantization steps are made larger, hence VREF increases at a higher rate. The non-uniform VREF slope can be generated based on, for example, changing the frequency of counting of counter914, changing the relationship between the VREF voltage and the count values of counter914, etc. In some examples, the non-uniform quantization process ofFIG.11Dcan be employed for light intensity determination for low light intensity range1006and medium light intensity range1008. Referring back toFIG.9, controller920can perform a TTS quantization operation, a quantization operation to measure a quantity of overflow charge (hereinafter, “FD ADC” operation), and a quantization operation to measure a quantity of residual charge (hereinafter “PD ADC” operation). Controller920can also skip one or more of the quantization operations. Output logic circuits908can select which of the quantization operations provides the count value to be stored at memory bank912. Output logic circuits908can make the selection based on determining, based on the output of comparator906in each quantization operation, whether a quantity of the residual charge in photodiode PD exceeds a saturation threshold of the photodiode (e.g., corresponding to threshold1002ofFIG.10), and whether a quantity of the overflow charge in charge storage device902exceeds a saturation threshold of the charge storage device (e.g., corresponding to threshold1004ofFIG.10). If output logic circuits908detect that the quantity of the overflow charge exceeds threshold1004during the TTS operation, output logic circuits908can store the TTS output in memory bank912. If output logic circuits908detect that the quantity of the overflow charge does not exceed threshold1004but that the quantity of the residual charge exceeds threshold1002, output logic circuits908can store the FD ADC output in memory bank912. Lastly, if output logic circuits908detect that the quantity of the residual charge does not exceed threshold1002, output logic circuits908can store the PD ADC output in memory bank912. 
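The selection rules just listed can be expressed compactly as a priority decision over the two saturation flags. The sketch below mirrors those rules; the function and flag names are illustrative assumptions rather than names used by the described circuits.

```python
# Sketch of the output selection described above: the saturation flags recorded
# from each quantization phase decide which count value is kept in the memory
# bank. Names are illustrative.
def select_output(storage_saturated: bool, photodiode_saturated: bool,
                  tts_code: int, fd_adc_code: int, pd_adc_code: int) -> int:
    """Pick the digital output for the pixel based on the saturation flags."""
    if storage_saturated:        # overflow charge exceeded threshold 1004
        return tts_code          # keep the time-to-saturation measurement
    if photodiode_saturated:     # residual charge exceeded threshold 1002
        return fd_adc_code       # keep the overflow-charge measurement
    return pd_adc_code           # keep the residual-charge measurement

print(select_output(True, True, 10, 20, 30))    # 10: TTS result is kept
print(select_output(False, True, 10, 20, 30))   # 20: FD ADC result is kept
print(select_output(False, False, 10, 20, 30))  # 30: PD ADC result is kept
```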
In some examples, output logic circuits908can include registers to store one or more indications of whether saturation of charge storage device902is detected and whether the saturation of photodiode PD is detected, which output logic circuits908can use to perform the selection. Reference is now made toFIG.12AandFIG.12B, which illustrate example sequences of the control signals of pixel cell602agenerated by controller920. BothFIG.12AandFIG.12Billustrate the change of AB, RST, COMP_RST, TG, LG, and VREF with respect to time. Referring toFIG.12A, the period between times T0and T1can correspond to a first reset phase, in which charge storage device902and comparator906can be put in a reset state by controller920by asserting the RST and COMP_RST signals, while the AB signal can be asserted to prevent charge generated by photodiode PD from reaching charge storage device902. Both RST and LG signals are asserted to reset CFDand CEXTcapacitors to set PIXEL_OUT at the reset level. With COMP_RST signal asserted and the positive terminal of comparator906connected to Vref_high, COMP_IN can be set to a sum of Vref_high and comparator offset Vcomp_offset. Moreover, with RST signal asserted, PIXEL_OUT can be set to the reset voltage Vpixel_out_rst and can include reset noise VσKTC. A first sampling operation can be performed by the CC capacitor to store a VCC voltage including the components of the comparator offset, the reset noise, and the PIXEL_OUT voltage at the reset level, as follows: Vcc(T1)=(Vref_high+Vcomp_offset)−(Vpixel_out_rst+VσKTC)  (Equation 1) At time T1, the RST signal, the AB signal, and the COMP_RST signal are released, which starts an exposure period (labelled Texposure) in which photodiode PD can accumulate and transfer charge. Exposure period Texposure can end at time T4when the AB signal is asserted. Between times T1and T3, TG signal can set charge transfer transistor M1in a partially turned-on state to allow PD to accumulate residual charge before photodiode PD saturates. If the light intensity is in the medium or high intensity ranges ofFIG.10, photodiode PD can saturate and transfer overflow charge via charge transfer transistor M1. LG signal can remain asserted to operate in low gain mode, in which both CFDcapacitor and CEXTcapacitor are connected in parallel to form charge storage device902to store the overflow charge. The overflow charge develops a new PIXEL_OUT voltage, Vpixel_out_sig1. The CC capacitor can AC couple the new PIXEL_OUT voltage Vpixel_out_sig1 into the COMP_IN voltage by adding the VCC voltage, which includes the reset noise and comparator offset component. The new PIXEL_OUT voltage also includes reset noise, which can be cancelled by the reset noise component of the VCC voltage. The COMP_IN voltage at time Tx between times T1and T3can be as follows: Vcomp_in(Tx)=Vpixel_out_sig1−Vpixel_out_rst+Vref_high+Vcomp_offset  (Equation 2) In Equation 2, the difference (Vpixel_out_sig1−Vpixel_out_rst) represents the quantity of overflow charge stored in charge storage device902. The comparator offset in the COMP_IN voltage can also cancel out the comparator offset introduced by comparator906when performing the comparison. Between times T1and T3, two phases of measurement of the COMP_IN voltage can be performed, including a time-to-saturation (TTS) measurement phase for high light intensity range1010and an FD ADC phase for measurement of overflow charge for medium light intensity range1008. 
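The cancellation in Equations 1 and 2 can be checked numerically: the VCC sample taken during reset carries the reset noise and the comparator offset, and because the same reset noise rides on the signal-level PIXEL_OUT, the noise term drops out of COMP_IN. The voltage values below are assumptions chosen only to verify the algebra.

```python
# Numerical sketch of Equations 1 and 2 with assumed voltages: adding the VCC
# sample to the signal-level PIXEL_OUT cancels the reset noise, leaving only
# the signal swing, Vref_high, and the comparator offset in COMP_IN.
V_REF_HIGH = 1.2
V_COMP_OFFSET = 0.03
V_PIXEL_OUT_RST = 1.0
V_RESET_NOISE = 0.005          # VsigmaKTC, sampled at reset
V_PIXEL_OUT_SIG1 = 0.7         # PIXEL_OUT after overflow charge accumulates

# Equation 1: first sampling operation across the CC capacitor.
v_cc = (V_REF_HIGH + V_COMP_OFFSET) - (V_PIXEL_OUT_RST + V_RESET_NOISE)

# Equation 2: the same reset noise rides on PIXEL_OUT, so it cancels when VCC
# is added through the AC coupling.
v_comp_in = (V_PIXEL_OUT_SIG1 + V_RESET_NOISE) + v_cc

expected = V_PIXEL_OUT_SIG1 - V_PIXEL_OUT_RST + V_REF_HIGH + V_COMP_OFFSET
print(round(v_comp_in, 9) == round(expected, 9))  # True: reset noise cancelled
```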
Between times T1and T2, the TTS measurement can be performed by comparator906comparing the COMP_IN voltage with a static Vref_low representing a saturation level of charge storage device902. When the COMP_IN voltage reaches the static VREF, the output of comparator906(VOUT) can trip, and a count value from counter914at the time when VOUT trips can be stored into memory bank912. At time T2, controller920can perform a check1202of the state of comparator906. If the output of comparator906trips, controller920can store the state in a register of output logic circuits908indicating that the overflow charge in charge storage device902exceeds threshold1004. The storage of the state can also prevent subsequent measurement phases (FD ADC and PD ADC) from overwriting the count value stored in memory bank912. The count value from the TTS operation can then be provided to represent the intensity of light received by photodiode PD during the exposure period. Between times T2and T3, the FD ADC operation can be performed by comparing the COMP_IN voltage with a ramping VREF voltage that ramps from Vref_low to Vref_high, which represents the saturation level of photodiode PD (e.g., threshold1002), as described inFIG.11B. If VOUT of comparator906trips during FD ADC, the count value of counter914at the time when VOUT trips can be stored in memory bank912, if the state flag in output logic circuits908is not asserted, which indicates that charge storage device902did not saturate in the TTS operation. Although the exposure period ends at time T2, between times T2and T3photodiode PD remains capable of accumulating residual charge (if not saturated) or transferring overflow charge to charge storage device902. At time T3, the controller can perform a check1204of the state of comparator906. If the output of comparator906trips, and the state flag in output logic circuits908is not asserted from the TTS operation, controller920can assert the state flag in output logic circuits908to indicate that the overflow charge in charge storage device902exceeds threshold1004. The assertion of the state flag can also prevent the subsequent PD ADC phase from overwriting the count value stored in memory bank912. The count value from FD ADC can then be provided to represent the intensity of light received by photodiode PD during the exposure period. The period between times T3and T4can be the second reset phase, in which both RST and COMP_RST signals are asserted to reset charge storage device902(comprising the parallel combination of CFDcapacitor and CEXTcapacitor) and comparator906to prepare for the subsequent PD ADC operation. The VCC voltage can be set according to Equation 1. After RST and COMP_RST are released, LG is turned off to disconnect CEXTfrom CFDto increase the charge-to-voltage conversion rate for the PD ADC operation. TG is set at a level to fully turn on the M1charge transfer transistor to transfer the residual charge stored in photodiode PD to CFD. The residual charge develops a new PIXEL_OUT voltage, Vpixel_out_sig2. The CC capacitor can AC couple the new PIXEL_OUT voltage Vpixel_out_sig2 into the COMP_IN voltage by adding the VCC voltage. Between times T3and T4, photodiode PD remains capable of generating additional charge in addition to the charge generated between times T1and T3, and transferring the additional charge to charge storage device902. The Vpixel_out_sig2 also represents the additional charge transferred between times T3and T4. 
At time T4, the COMP_IN voltage can be as follows: Vcomp_in(T4)=Vpixel_out_sig2−Vpixel_out_rst+Vref_high+Vcomp_offset  (Equation 3) In Equation 3, the difference (Vpixel_out_sig2−Vpixel_out_rst) represents the quantity of charge transferred by the photodiode to charge storage device902between times T3and T4. The comparator offset in the COMP_IN voltage can also cancel out the comparator offset introduced by comparator906when performing the comparison. At time T4, the AB signal is asserted to prevent photodiode PD from accumulating and transferring additional charge. Moreover, VREF can be set to a static level Vref_low_margin. Comparator906can compare the COMP_IN voltage with Vref_low_margin to determine whether photodiode PD saturates. Vref_low_margin is slightly higher than Vref_low, which represents the saturation level of photodiode PD (e.g., threshold1002), to prevent false tripping of comparator906when the quantity of residual charge is close to but does not exceed the saturation level. Between times T4and T5, controller920can perform the PD ADC operation by comparing the COMP_IN voltage with a VREF ramp that ramps from Vref_low_margin to Vref_high. In the PD ADC phase, Vref_high can represent the minimum detectable quantity of residual charge stored in photodiode PD, whereas Vref_low_margin can represent the saturation threshold of photodiode PD with margin to account for dark current, as described above. If the state flag in output logic circuits908remains not asserted at this point, and if the output of comparator906trips, the count value of counter914when comparator906trips can be stored into memory bank912, and the count value from PD ADC can be provided to represent the intensity of light. Reference is now made toFIG.12B, which illustrates another example sequence of the control signals of pixel cell602agenerated by controller920. InFIG.12B, the PD ADC operation can be performed between the TTS and FD ADC operations, which can reduce the accumulation of additional charge in charge storage device902or in photodiode PD after the TTS operation and improve shutter efficiency. As shown inFIG.12B, between times T0and T1is a first reset phase as inFIG.12A, in which both charge storage device902and comparator906can be put in a reset state by controller920by asserting the RST and COMP_RST signals. Moreover, LG signal is asserted, which allows CFDand CEXTcapacitors to be reset by the RST signal and the PIXEL_OUT signal to be set at the reset level. With COMP_RST signal asserted and the positive terminal of comparator1102connected to Vref_high, COMP_IN can be set to a sum of Vref_high and comparator offset Vcomp_offset. Moreover, with RST signal asserted, PIXEL_OUT can be set to the reset voltage Vpixel_out_rst and can include reset noise VσKTC. A first sampling operation can be performed by the CC capacitor to store a VCC voltage including the components of the comparator offset, the reset noise, and the PIXEL_OUT voltage at the reset level, as described in Equation 1 above: Vcc(T1)=(Vref_high+Vcomp_offset)−(Vpixel_out_rst+VσKTC)  (Equation 1) Moreover, AB signal can be asserted to prevent charge generated by photodiode PD from reaching charge storage device902. At time T1, the AB, COMP_RST, and the RST signals are released, which starts the exposure period in which photodiode PD can accumulate and transfer charge. TG signal can set transfer transistor M1in a partially turned-on state to allow PD to transfer overflow charge to charge storage device902. 
LG signal can remain asserted to operate in low gain mode, in which both CFDcapacitor and CEXTcapacitor are connected in parallel to form charge storage device902to store the overflow charge. The overflow charge develops a new PIXEL_OUT voltage, Vpixel_out_sig1. The CC capacitor can AC couple the PIXEL_OUT voltage to become the COMP_IN voltage. The COMP_IN voltage between times T1and T2can be set based on Equation 1 above. Between times T1and T2, a time-to-saturation (TTS) measurement can be performed by comparator906comparing COMP_IN voltage with a static Vref_low to generate VOUT. At time T2, controller920can perform a check1212of the state of comparator906. If the output of comparator906trips, controller920can store the state in a register of output logic circuits908indicating that the overflow charge in charge storage device902exceeds threshold1004as inFIG.12A. Following the TTS measurement, between times T2and T5, the PD ADC operation can be performed to measure the residual charge stored in photodiode PD. The LG signal is de-asserted to disconnect CEXTfrom CFDto increase charge-to-voltage conversion ratio, as described above. The overflow charge (if any) is divided between CFDand CEXTbased on a ratio of capacitances between CFDand CEXTsuch that CFDstores a first portion of the overflow charge and CEXTstores a second portion of the overflow charge. Vpixel_out_sig1 can correspond to the first portion of the overflow charge stored in CFD. To prepare for the PD ADC operation, between times T2and T3, COMP_RST signal is asserted again to reset comparator1102. The resetting of comparator1102can set a new VCC voltage across the CC capacitor based on a difference between Vpixel_out_sig1 and the output of comparator1102in the reset state, as follows: Vcc(T2)=(Vref_high+Vcomp_offset)−(Vpixel_out_sig1(T3)+VσKTC)  (Equation 4) Optionally, the RST signal can be asserted between times T2and T3to reset CFDand to remove the first portion of the overflow charge, prior to the transfer of the residual charge. This allows the subsequent PD ADC operation to quantize only the residual charge rather than a mixture of the residual charge and the first portion of the overflow charge. Such arrangements can improve the accuracy of measurement of low light intensity as there is no need to remove the overflow charge component (based on the result of the subsequent FD ADC operation) from the PD ADC operation output which could otherwise introduce additional errors. On the other hand, not asserting the RST signal between times T2and T3can be advantageous, as such arrangements can introduce redundancy in the PD ADC and FD ADC operations and increase the signal-to-noise ratio, as both operations measure a mixture of residual and overflow charge. Between times T3and T4, COMP_RST signal is released so that comparator1102exits the reset state. Moreover, the TG signal can set transfer transistor M1in a fully turned-on state to transfer the residual charge to CFD. The residual charge can be transferred to CFD, which changes the PIXEL_OUT voltage to Vpixel_out_sig2. The new PIXEL_OUT voltage can be AC coupled into a new COMP_IN voltage at time T4, as follows: Vcomp_in(T4)=Vpixel_out_sig2−Vpixel_out_sig1+Vref_high+Vcomp_offset  (Equation 5) In Equation 5, the difference (Vpixel_out_sig2−Vpixel_out_sig1) represents the quantity of residual charge transferred by the photodiode to charge storage device902between times T3and T4. 
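The division of the overflow charge when LG is de-asserted can be sketched as a capacitive charge split: the charge already stored on the parallel combination divides between CFD and CEXT in proportion to their capacitances. The capacitance values below are assumptions for illustration only.

```python
# Sketch of the charge division described above: when LG opens, the overflow
# charge splits between CFD and CEXT in proportion to their capacitances.
C_FD = 5e-15    # farads (assumed)
C_EXT = 20e-15  # farads (assumed)

def split_overflow(overflow_charge: float):
    """Return (charge left on CFD, charge left on CEXT) after LG opens."""
    total_c = C_FD + C_EXT
    on_cfd = overflow_charge * (C_FD / total_c)    # first portion of the overflow charge
    on_cext = overflow_charge * (C_EXT / total_c)  # second portion, recombined later for FD ADC
    return on_cfd, on_cext

q_overflow = 10e-15  # 10 fC of overflow charge
print(split_overflow(q_overflow))  # (2e-15, 8e-15): one fifth on CFD, four fifths on CEXT
```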
After TG is fully turned on between times T3and T4, TG is de-asserted to disconnect photodiode PD from CFDand CEXT. As a result, no additional charge is transferred to CFDand CEXTafter time T4until the start of the next exposure period. Compared with the arrangements ofFIG.12Awhere additional charge can be accumulated in photodiode PD during the FD ADC operation which typically takes a long time, inFIG.12Bthe additional charge is accumulated only during the reset period T2-T3and the transfer period T3-T4, both of which are typically much shorter than an FD ADC operation. Moreover, after T4, no additional overflow charge is accumulated at charge storage device902. As a result, both FD ADC and PD ADC can process charge accumulated in almost the same exposure period as the TTS operation, which can improve the shutter efficiency of the image sensor. Between times T4and T5, controller920can perform the PD ADC operation by comparing the COMP_IN voltage with a VREF ramp that ramps from Vref_high to Vref_low_margin. In the PD ADC phase, Vref_high can represent the minimum detectable quantity of residual charge stored in photodiode PD, whereas Vref_low_margin can represent the saturation threshold of photodiode PD with margin to account for dark current, as described above. If photodiode PD does not saturate, COMP_IN can go above the VREF ramp. An inverted VOUT (VOUTb) can become a logical one and cause a count value to be stored in memory bank912for PD ADC. At time T5, the controller can perform a check1214of the state of comparator906. If the output of comparator906trips, and the state flag in output logic circuits908is not asserted from the TTS operation, controller920can assert the state flag in output logic circuits908to indicate that the residual charge exceeds threshold1002. The assertion of the state flag can also prevent the subsequent FD ADC phase from overwriting the count value stored in memory bank912. The count value from PD ADC can then be provided to represent the intensity of light received by photodiode PD during the exposure period. Between times T5and T8, an FD ADC operation can be performed to measure the overflow charge transferred by photodiode PD within the exposure period. As photodiode PD remains disconnected from CFDand CEXT, no additional charge is transferred to CFDand CEXT, and the total charge stored in CFDand CEXTis mostly generated in the exposure period Texposure, together with additional charge generated by the photodiode between times T3and T4. At time T5, the LG signal is asserted to connect CFDwith CEXT, which allows the second portion of the overflow charge stored in CEXTto combine with the residual charge stored in CFD(and the first portion of the overflow charge if RST is not asserted between times T2and T3), and a new PIXEL_OUT voltage Vpixel_out_sig3 can develop at the parallel combination of CFDand CEXTand is to be quantized. Between times T5and T7, a noise sampling operation can be performed to mitigate the effect of reset noise and comparator offset on the FD ADC operation. Between times T5and T6, comparator1102can be reset as part of the first sampling operation. The positive terminal of comparator1102is connected to the lower end of VREF, Vref_low. The VCC voltage can include components of reset noise and comparator offset as described above. 
The VCC voltage can be as follows: Vcc(T5)=(Vref_low+Vcomp_offset)−(Vpixel_out_sig3+VσKTC1)  (Equation 6) Between times T6and T7, both CFDand CEXTcan be reset, while comparator1102remains in the reset state, as part of a second sampling operation. As a result of resetting, PIXEL_OUT can be reset to a reset voltage Vpixel_out_rst. Moreover, a second reset noise charge is also introduced into charge storage device902, which can be represented by VσKTC2. The second reset noise charge typically tracks the first reset noise charge. At time T6, as the result of the second sampling operation, Vpixel_out can be as follows: Vpixel_out(T6)=Vpixel_out_rst+VσKTC2  (Equation 7) At time T7, COMP_RST is released, and comparator1102exits the reset state. Via AC-coupling, the COMP_IN voltage can track Vpixel_out(T6) in addition to Vcc(T5) as follows: Vcomp_in(T7)=(Vref_low+Vcomp_offset)+(Vpixel_out_rst−Vpixel_out_sig3)+(VσKTC2−VσKTC1)  (Equation 8) Following the second sampling operation, the COMP_IN voltage can be quantized by comparing it against a VREF ramp between times T7and T8. When VREF goes above COMP_IN, VOUT can become a logical one. If the state flag in output logic circuits908remains not asserted at this point, the count value of counter914when comparator906trips can be stored into memory bank912, and the count value from FD ADC can be provided to represent the intensity of light. After time T8, the digital value stored in memory bank912can be read out to represent the intensity of light received by photodiode PD within the integration period, at time T9. In a case where one image frame is generated in a single frame period, the frame period can span from time T0to T8. AlthoughFIG.12AandFIG.12Bshow that TTS, FD ADC and PD ADC operations are performed, it is understood that ADC616(and pixel cell602a) need not perform all of these operations, and can skip some of them. As to be described below, the quantization operations may vary for different photodiodes within pixel cell602a. In bothFIG.12AandFIG.12B, the duration of the TTS measurement operation can track linearly with the duration of the exposure period. For example, inFIG.12A, the duration of the TTS measurement operation can be set based on the duration of the exposure period (Texposure) minus the duration for the FD ADC operation. As another example, inFIG.12B, the duration of the TTS measurement operation can be set to be equal to the duration of the exposure period (Texposure). In some examples, the duration of the TTS measurement operation and the duration of the exposure period can be set by controller920ofFIG.9based on a state machine that counts a number of cycles of a clock signal supplied to the controller. For example, inFIG.12A, controller920can store a first target count value and a second target count value representing, respectively, the end time of the TTS measurement operation and the end time of the exposure period. Controller920can start both the TTS measurement operation and the exposure period when the count is zero, end the TTS measurement operation (e.g., based on resetting counter914) when the count reaches the first target count value, and end the exposure period (e.g., enabling the AB switch inFIG.12A, resetting charge sensing unit614and comparator906inFIG.12B, etc.) when the count reaches the second target count value. 
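The count-based control just described, for the FIG.12A style of operation, can be sketched as a simple comparison of the running cycle count against the stored target counts. The class, signal names, and target values below are illustrative assumptions.

```python
# Sketch of the count-based duration control described above: the controller
# compares a free-running clock-cycle count against stored target counts to
# decide when to end the TTS operation and the exposure period.
class ExposureController:
    def __init__(self, tts_end_count: int, exposure_end_count: int):
        self.tts_end_count = tts_end_count            # first target count value
        self.exposure_end_count = exposure_end_count  # second target count value

    def actions_for(self, count: int):
        """Return the control actions to take at a given clock count."""
        actions = []
        if count == 0:
            actions.append("start TTS and exposure period")
        if count == self.tts_end_count:
            actions.append("end TTS (e.g., reset counter)")
        if count == self.exposure_end_count:
            actions.append("end exposure period (e.g., assert AB)")
        return actions

ctrl = ExposureController(tts_end_count=255, exposure_end_count=300)
for c in (0, 255, 300):
    print(c, ctrl.actions_for(c))
```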
Moreover, inFIG.12B, controller920can start both the TTS measurement operation and the exposure period when the count is zero, and end both the TTS measurement operation and the exposure period when the count is of a first value, so that the TTS measurement operation and the exposure period start and end at the same time. While controlling the durations of the TTS measurement operation and the exposure period based on the same clock signal can reduce the complexity of controller920, such arrangements can lead to the duration of the TTS measurement operation scaling linearly with the duration of the exposure period.FIG.13illustrates examples of exposure periods for different frequencies/periods of a clock signal1302supplied to controller920. The clock signal is also supplied to counter914to control the rate at which counter914updates the count value, and the TTS measurement operation ends when the count value reaches a maximum (e.g., 255). As shown in plots1304,1306, and1308ofFIG.13, the exposure period and the TTS measurement operation have the same duration and both are controlled by the period of the clock signal. As the period of the clock signal reduces, the duration of both the exposure period and the TTS measurement operation also reduces together by the same proportion with respect to the clock signal period. For example, as the period of the clock signal reduces approximately by half across plots1304,1306, and1308, the durations of exposure periods A, B, and C also reduce approximately by half across the plots. There are various scenarios in which the period of clock signal1302supplied to controller920can be increased. For example, controller920, as well as pixel cell602aand image sensor600, can operate at a lower frame rate, which allows controller920to operate at a lower operation speed and a lower clock rate. Lowering the clock rate of controller920can also reduce the power dissipation at the digital circuits of image sensor600that operate based on or otherwise synchronized with clock signal1302, such as memory bank912, counter914, controller920, etc. Moreover, the period of clock1302can also be increased to extend the exposure period in a case where the image sensor operates in a low ambient light environment. The extended exposure period allow the photodiode to generate more charge within the exposure period for measurement, which can reduce the signal-to-noise ratio. Although the TTS measurement operations inFIG.12AandFIG.12Bcan reduce the non-linearity caused by the saturation of the capacitors and increase the upper limit of the dynamic range, various issues can arise if the TTS measurement is performed within the entirety of the exposure period of the photodiode, or at least the duration of the TTS measurement operation scales up linearly with the duration of the exposure period. One potential issue is power consumption. Specifically, referring back toFIG.9, during the TTS measurement operation both voltage buffer904of charge sensing unit614and comparator906of the ADC616are powered on to compare the buffered voltage (COMP_IN) with the static threshold to generate the decision (VOUT). Both voltage buffer904and comparator906are analog devices and can consume huge static power when powered on. 
If the exposure period has a relatively long duration, and the TTS measurement operation is performed within the entirety of the exposure period, both voltage buffer904and comparator906can consume a huge amount of power for a long period of time, leading to a huge power consumption at the image sensor. In addition, performing the TTS measurement within the entirety of the exposure period may allow only one photodiode, within a group of photodiodes that shares ADC616, to perform the TTS measurement operation.FIG.14Aillustrates an example image sensor600including multiple photodiodes that share ADC616. The part of image sensor600illustrated inFIG.14Acan be of pixel cell602aor can be of different pixel cells. As shown inFIG.14A, ADC616and memory bank912can be shared among multiple photodiodes. For example, photodiode PD1, which can correspond to photodiodes612aofFIG.6, can be connected to charge sensing unit616a, whereas photodiode PD2, which can correspond to photodiode612bofFIG.6, can be connected to a charge sensing unit616b. The voltage buffer of each charge sensing unit can share a current source Mbias. The exposure period for the each photodiode can be controlled by the optional M0aand M0btransistors (based on assertion/de-assertion of the AB1and AB2signals), or based on resetting via transfer switches M1aand M1b. Photodiodes PD1and PD2can be configured to detect different frequency components of light. Charge sensing unit616aand616bcan take turn, based on selection signals SELa and SELb, in accessing ADC616and memory bank912. Compared with a case where a separate set of charge sensing unit614, ADC616, and memory bank912is provided for each photodiode, such arrangements can reduce the footprint and power consumption of the processing circuits and memory, which allows image sensor600to include more photodiodes to improve resolution. The control signals (e.g., RSTa, RSTb, LGa, LGb, SELa, SELb, COMP_RST, etc.) can be provided by controller920(not shown inFIG.14A). FIG.14Billustrates example quantization operations of PD1and PD2of image sensor600ofFIG.14A. The quantization operations can be performed by ADC616and counter914, and based on control signals provided by controller920. InFIG.14B, between times T0and T1charge sensing units616aand616band comparator906are reset. Exposure periods for PD1and PD2starts at time T1, and the exposure period for PD1ends at time T2. As inFIG.12B, the TTS operation for PD1can extend through the entirety of the exposure period of PD1between times T1and T2. After the TTS operation for PD1completes, FD ADC and/or PD ADC operations can be performed for PD1, based on the schemes described inFIG.12AandFIG.12B, between times T2and T3. On the other hand, TTS operation is not performed for PD2. Instead, after the TTS operation completes for PD1, comparator906can be reset between times T3and T4. The exposure period for PD2also ends at time T4. FD ADC and/or PD ADC operations can then be performed for PD2between times T4and T5. Such arrangements can be due to PD1being granted access to ADC616and memory bank912for the TTS operation within the entirety of the exposure period of PD1, between times T1and T2. As a result, PD2has no access to ADC616during that time. Moreover, to improve global shutter operation, the exposure periods for PD1and PD2should overlap as much as possible. 
To maximize the overlap, the exposure periods for PD1 and PD2 can start at the same time (time T1), while the exposure period for PD2 is extended only to time T4 (while the exposure period for PD1 ends at time T2) to accommodate the FD ADC and/or PD ADC operation of PD1. Because the exposure periods for PD1 and PD2 overlap substantially, and the TTS operation for PD1 is performed within the entirety of the exposure period of PD1 (and a large part of the exposure period of PD2), there is no time slot within the exposure period of PD2 for the TTS operation for PD2. While the resource-sharing arrangements in FIG.14A and FIG.14B can reduce the total power and footprint of ADC616 and memory bank912 in image sensor600, such arrangements can lead to different dynamic ranges among the photodiodes. Specifically, while the upper limit of the dynamic range of PD1 can be improved by the TTS measurement operation, the other photodiodes do not receive such improvement because TTS measurement operations are not performed for those photodiodes. As a result, different photodiodes within image sensor600 can have different dynamic ranges. In a case where the photodiodes detect different frequency components of light, the arrangements in FIG.14B can lead to uneven performance of the image sensor in measuring the different frequency components of incident light. FIG.15A illustrates an example image sensor600 which allows the duration of the TTS measurement operation to be set separately from the duration of the exposure period, which can address at least some of the issues above. The components shown in FIG.15A can be part of a pixel cell602a or external to pixel cell602a. As shown in FIG.15A, controller920 can receive, as inputs, TTS duration setting1502, exposure period setting1504, and clock signal1506. Controller920 can operate based on clock signal1506, which can define the speed of operation of controller920. In addition, counter914 can receive and operate based on a clock signal1508, which can define the frequency at which the count values are updated. In some examples, clock signal1508 can be generated by controller920 using a programmable divider from clock signal1506. TTS duration setting1502 can define a duration of the TTS measurement operation for photodiode PD, whereas exposure period setting1504 can define a duration of the exposure period for photodiode PD. Based on the durations defined in the settings, as well as the frequency of clock signal1506, controller920 can determine a first target count value representing the end time of the TTS measurement operation as well as a second target count value representing the end time of the exposure period. Controller920 can control the start and end of the exposure period and of the TTS measurement operation based on counting the clock cycles of clock signal1506, and comparing the counts with the target count values derived from TTS duration setting1502 and exposure period setting1504. Controller920 can start both the exposure period and the TTS measurement operation when the counter value is zero, end the TTS measurement operation (e.g., by resetting counter914) when the count value equals the first target count value, and end the exposure period (e.g., enabling the AB switch in FIG.12A, resetting charge sensing unit614 and comparator906 in FIG.12B, etc.) when the count value equals the second target count value.
Pixel cell602acan implement various techniques to allow the duration of the TTS measurement operation to be set separately from the duration of the exposure period. Specifically, TTS duration setting1502and exposure period setting1504can be supplied from separate registers, which allow the two settings to be individually programmable. In addition, clock signal1508, which is supplied to counter914and sets the frequency at which counter914updates the count value, can have a different frequency from clock signal1506supplied to controller920, whereas the frequency of clock signal1508can also be set based on TTS duration setting1502. For example, given the maximum count of counter914(e.g., 255) and the duration of TTS measurement operation, controller920can determine the frequency of clock signal1508such that counter914can sweep through the entire range of count values within the TTS measurement operation. In addition, controller920can also use TTS duration setting1502to set a lower limit of the exposure period, to ensure that the exposure period does not end before the TTS measurement operation completes. In a case where exposure period setting1504sets a shorter exposure period than the duration of the TTS measurement operation, controller920can override exposure period setting1504and set the exposure period to be at least equal to the duration of the TTS measurement operation set according to TTS duration setting1502. FIG.15Billustrates example relationships between the duration of exposure period and the duration of the TTS measurement operation provided by pixel cell602aofFIG.15A. For example, referring toFIG.15B, the frequency of clock signal1506(supplied to controller920) can be adjusted to operate controller920at a different speeds to, for example, support different frame rates, to operate in environments having different ambient light conditions, etc., and the duration of the exposure period can vary as a result. InFIG.15B, exposure period A can correspond to clock signal1506having a relatively low frequency, exposure period B can correspond to clock signal1506having a medium frequency, whereas exposure period C can correspond to clock signal1506having a relatively high frequency. But duration1510of TTS measurement operation, within which counter914counts from zero to the maximum count of 255, can remain constant among each of exposure periods A, B, and C. Moreover, inFIG.15B, exposure period C can represent the minimum exposure period for the photodiode set by TTS duration1510. Various techniques are proposed to improve the performance of pixel cell602abased on the difference in durations between the TTS measurement operation and the exposure period. Example operations of pixel cell602aofFIG.15Aare illustrated inFIG.15CandFIG.15D. The sequence of operations inFIG.15Cis based on the example sequence described inFIG.12A, whereas the sequence of operations inFIG.15Dis based on the example sequence described inFIG.12B. As shown inFIG.15CandFIG.15D, instead of extending the TTS operation between times T1and T2. which spans most of the exposure period Texposureas inFIG.12Aor the entirety of the exposure period Texposureas inFIG.12B, the duration of TTS operation can be shortened such that the TTS operation can stop at time T2′, which is before time T2. The frequency of clock signal1506can be configured such that counter914reaches the maximum count at time T2′. Between times T2′ and T2, switchable buffer904and comparator906can be turned off to reduce power consumption during that time. 
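The clock and duration relationships described above can be summarized in a brief numerical sketch. The helper names and example values are assumptions for illustration; the relationships themselves come from the text: counter914 should sweep its full range (e.g., a maximum count of 255) within the TTS duration set by TTS duration setting1502, clock signal1508 can be derived from clock signal1506 with a programmable divider, and the exposure period is clamped to be no shorter than the TTS duration.

```python
MAX_COUNT = 255  # example maximum count of counter 914

def counter_clock_hz(tts_duration_s):
    # Counter 914 should reach MAX_COUNT exactly at the end of the TTS measurement.
    return MAX_COUNT / tts_duration_s

def divider_ratio(controller_clock_hz, tts_duration_s):
    # Programmable divider from clock signal 1506 (controller) to clock signal 1508 (counter).
    return round(controller_clock_hz / counter_clock_hz(tts_duration_s))

def effective_exposure_s(exposure_setting_s, tts_duration_s):
    # TTS duration setting 1502 sets a lower limit on the exposure period (setting 1504).
    return max(exposure_setting_s, tts_duration_s)

tts = 1e-3                                     # assumed 1 ms TTS window
print(counter_clock_hz(tts))                   # 255000.0 -> 255 kHz counter clock
print(divider_ratio(10.2e6, tts))              # 40, for an assumed 10.2 MHz controller clock
print(effective_exposure_s(0.5e-3, tts))       # 0.001: a 0.5 ms request is overridden to 1 ms
```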
Switchable buffer904and comparator906can be turned back on at time T2to continue with the subsequent FD ADC and PD ADC operations. In addition, in a case where the overflow charge in charge sensing unit614exceeds the saturation threshold and a TTS measurement is stored in memory bank912, controller920can also perform a read out of the TTS measurement from memory bank912at time T2′, or at any time before the exposure period Texposureends, and provide the TTS measurement to the application that consumes the light intensity data from image sensor600. This can reduce the latency in providing the light intensity measurement results and allow the application to operate at a higher speed. In addition, in bothFIG.15CandFIG.15Dwhere the TTS duration is shorter than the exposure period, the threshold for saturation detection (and TTS measurement) can be scaled from a reference threshold. The reference threshold can correspond to a case where the exposure period and the TTS measurement period have the same duration. The reduced threshold can account for the fact that the total quantity of charge generated by the photodiode within the TTS measurement period is less than within the exposure period. As the subsequent FD ADC and PD ADC operations measure the total quantity of charge generated by the photodiode within the exposure period, while the TTS measurement is based on a reduced quantity of charge generated within the shortened TTS measurement period, if the reference threshold is used for saturation detection, a dead zone in the range of measurable light intensities can be introduced. FIG.15Eillustrates an example of scaling of the threshold for saturation detection to reduce or eliminate the dead zone. InFIG.15E, voltage graphs1520and1522illustrate the change of COMP_IN voltage (representing the output of charge sensing unit614) within the exposure period as charge sensing unit614continues to accumulate overflow charge from the photodiode PD. Voltage graph1520can correspond to a first intensity of light, whereas voltage graph1522can correspond to a second intensity of light. The second intensity is lower than the first intensity, as a result graph1522changes at a lower rate (and have a smaller slope) than graph1520. As shown inFIG.15E, assuming exposure time starts at time 0, if controller920detects saturation using the reference threshold (labelled “ref_threshold” inFIG.15E), controller920can detect that voltage graph1520intersects with the reference threshold at a time X near the end of TTS duration1510, and outputs the time as the TTS measurement result to represent the first intensity. But voltage graph1522does not intersect with the reference threshold within TTS duration1510. Therefore, no TTS measurement result will be generated to represent the second intensity. On the other hand, the total overflow charge accumulated at charge sensing unit614at time Y, around the end of the exposure period, reaches the saturation limit. As a result, neither FD ADC operation nor PD ADC operation generates an output for the second intensity, if FD ADC operation requires the overflow charge to be below the saturation limit and PD ADC operation requires no overflow charge accumulated at charge sensing unit614. Because of this, pixel cell602amay be unable to generate an output for the second intensity. 
The range of light intensities between the first intensity (represented by voltage graph1520) and the second intensity (represented by voltage graph1522) cannot be measured by any of the TTS operation, the FD ADC operation, or the PD ADC operation, which introduces a dead zone in the range of measurable light intensities. To reduce or eliminate the dead zone in the range of measurable light intensities, controller920 can use a scaled version of the reference threshold (labelled as “scaled_threshold” in FIG.15E) to perform saturation detection and TTS measurement. As shown in FIG.15E, voltage graph1520 intersects with the scaled threshold at time X′, whereas voltage graph1522 intersects with the scaled threshold at time X. With such arrangements, the range of light intensities between the first intensity (represented by voltage graph1520) and the second intensity (represented by voltage graph1522) can be measured by the TTS operation, and the aforementioned dead zone in the range of measurable light intensities can be eliminated as a result. The scaled threshold can be derived from the reference threshold based on a ratio between the duration of the TTS operation and the duration of the exposure period, based on the following equation: Scaled_threshold=Ref_threshold×(TTS duration/exposure period duration) (Equation 9) Referring to the example of FIG.15E, if the reference threshold (the original threshold, which corresponds to the TTS duration being equal to the exposure period duration) equals M, the TTS duration equals X, and the exposure period equals Y, the scaled threshold equals M×(X/Y). The technique of separately programmable TTS duration can be used in a scenario where multiple photodiodes share ADC616 to perform quantization, to enable more than one photodiode to perform TTS operations. FIG.16A illustrates an example image sensor600 including multiple photodiodes that share ADC616. The part of image sensor600 illustrated in FIG.16A can be of pixel cell602a or can be of different pixel cells. As shown in FIG.16A, in addition to photodiodes PD1 and PD2, charge sensing units614a and614b, and comparator906, image sensor600 of FIG.16A includes an output logic circuit908a, an output logic circuit908b, a memory bank912a, and a memory bank912b. Each memory bank includes multiple bits to store a count value from counter914 from the TTS, FD ADC, or PD ADC operations. As in FIG.14A, charge sensing units616a and616b can take turns, based on selection signals SELa and SELb, in accessing ADC616 to perform quantization operations for photodiodes PD1 and PD2. But unlike in FIG.14A, where the TTS operation is performed for only one photodiode (e.g., PD1), in FIG.16A the TTS operation, FD ADC operation, and PD ADC operation can be performed based on the outputs of charge sensing units614a and614b for photodiodes PD1 and PD2. From the decision outputs of comparator906 in processing the outputs of charge sensing unit616a, output logic circuits908a can store a first indication of whether there is overflow charge accumulated in charge sensing unit614a, and a second indication of whether the overflow charge accumulated in charge sensing unit614a exceeds the saturation threshold. Based on the first indication and the second indication stored in output logic circuits908a, output logic circuits908a can control memory912a to store an output from one of the TTS, FD ADC, or PD ADC operations for PD1.
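Equation 9 can be restated directly as code. The function below is a minimal sketch with illustrative names; the worked example mirrors the M, X, and Y example given above.

```python
def scaled_threshold(ref_threshold, tts_duration, exposure_duration):
    # Equation 9: the saturation threshold used for TTS is scaled by the ratio of the
    # TTS duration to the exposure period duration.
    return ref_threshold * (tts_duration / exposure_duration)

# Worked example matching the text: reference threshold M, TTS duration X, exposure period Y.
M, X, Y = 1.0, 1e-3, 4e-3          # assumed values
print(scaled_threshold(M, X, Y))   # 0.25, i.e., M x (X / Y)
```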
Moreover, from the decision outputs of comparator906 in processing the outputs of charge sensing unit616b, output logic circuits908b can store a first indication of whether there is overflow charge accumulated in charge sensing unit614b, and a second indication of whether the overflow charge accumulated in charge sensing unit614b exceeds the saturation threshold. Based on the first indication and the second indication stored in output logic circuits908b, output logic circuits908b can control memory912b to store an output from one of the TTS, FD ADC, or PD ADC operations for PD2. In the example of FIG.16A, the number of output logic circuits and memories can be based on the number of photodiodes for which the TTS operation is to be performed. For example, if the TTS operation is to be performed for m photodiodes, image sensor600 can include m output logic circuits908 and m memory banks912 to store a set of m digital outputs, one for each photodiode. In addition, controller920 can also receive TTS duration setting1602, exposure period setting1604, and clock signal1606, to control the TTS duration and exposure period for PD1 and PD2, whereas counter914 receives a separate clock signal1608 which, in some examples, can be provided by controller920 by dividing clock signal1606 by a ratio set based on the target TTS duration and the maximum count of counter914, as explained above. As in FIG.15A, with controller920 and counter914 operating based on different clocks, and TTS duration setting1602 and exposure period setting1604 being supplied from separate registers, the TTS durations of PD1 and PD2 can be programmed separately from the exposure periods for PD1 and PD2. This allows the TTS durations of PD1 and PD2 to be reduced while the exposure periods for PD1 and PD2 are extended. As to be described below, such arrangements allow TTS operations to be performed for PD1 and PD2 while maintaining the global shutter operation. The threshold for saturation detection (and TTS measurement) can be scaled based on Equation 9 to account for the reduced TTS duration, as explained above. FIG.16B, FIG.16C, FIG.16D, and FIG.16E illustrate example quantization operations of PD1 and PD2 of image sensor600 of FIG.16A. The quantization operations can be performed by ADC616 and counter914, based on control signals provided by controller920. As shown in FIG.16B, between times T0 and T1 charge sensing units616a and616b and comparator906 are reset. The exposure periods for PD1 and PD2 start at time T1. The TTS operation for PD1 also starts at time T1 and ends at time T2, followed by a reset operation of comparator906 between times T2 and T3. The TTS operation for PD2 can be performed between times T3 and T4, followed by a reset operation of comparator906 and charge sensing units616a and616b between times T4 and T5. At time T5, the exposure period for PD1 can end. After the end of the exposure period for PD1, FD ADC and/or PD ADC operations can be performed for PD1, based on the schemes described in FIG.12A and FIG.12B, between times T5 and T6. The output from one of the TTS, PD ADC, or FD ADC operations for PD1 can then be stored in memory bank912a. Controller920 may read out the TTS measurement result (as output0 in FIG.16A) from memory bank912a at time T5 when the exposure period for PD1 ends, based on the indication that the overflow charge saturates charge sensing unit614a, and skip the subsequent FD ADC and PD ADC operations for PD1. In addition, a reset operation of comparator906 can be performed between times T6 and T7. The exposure period for PD2 also ends at time T7.
FD ADC and/or PD ADC operations can then be performed for PD2, based on the schemes described in FIG.12A and FIG.12B, between times T7 and T8. The output from one of the TTS, PD ADC, or FD ADC operations for PD2 can then be stored in memory bank912b. In some examples, controller920 may read out the TTS measurement result from memory bank912b (as output1 in FIG.16A) at time T7 when the exposure period for PD2 ends, based on the indication that the overflow charge saturates charge sensing unit614b, and skip the subsequent FD ADC and PD ADC operations for PD2. In FIG.16B, the exposure periods for PD1 and PD2 can overlap substantially, between times T1 and T5, to support a global shutter operation for the photodiodes. In addition, the TTS durations for PD1 and PD2 are reduced to fit into the overlapping period (between times T1 and T4) between the exposure periods of PD1 and PD2. This allows TTS operations to be performed for both PD1 and PD2 within the overlapping period. Such arrangements allow TTS operations to be performed for PD1 and PD2 to improve their dynamic ranges, while maintaining the global shutter operation. Although the arrangements in FIG.16B allow TTS operations to be performed for both PD1 and PD2, PD2 may have a reduced dynamic range (e.g., having a reduced upper limit) compared with PD1. This is because the TTS operation of PD1 has the same start time as the exposure period of PD1, which enables the TTS operation to measure the saturation time for high intensity light received by PD1 at the beginning of the exposure period of PD1. On the other hand, the TTS operation of PD2 starts after the TTS operation of PD1 completes. Due to the delay in the start time of the TTS operation of PD2, when the TTS operation starts for PD2 the quantity of overflow charge accumulated at charge sensing unit614b may be close to, or may already have exceeded, the saturation limit. As a result, the saturation time measured by the TTS operation may become artificially short and may not accurately reflect the light intensity, and the dynamic range of PD2 can become degraded. Various techniques are proposed to improve the dynamic range of PD2. In some examples, as shown in FIG.16C, the duration of the TTS operation for PD1 can be reduced to a duration labelled TTSreduced, which can reduce the delay between the start time of the TTS operation for PD2 and the start time of the exposure period for PD2. It becomes less likely that the quantity of overflow charge accumulated at charge sensing unit614b is close to or has exceeded the saturation limit when the TTS operation for PD2 starts. The duration of the TTS operation for PD1 can be set at a minimum duration (labelled as TTSmin in FIG.16D) for a particular signal-to-noise ratio (SNR) target for PD1. Specifically, as the duration of the TTS operation for PD1 decreases, the threshold for saturation detection can be scaled further down based on the ratio between the TTS duration and the exposure period duration of PD1, which makes the saturation detection more susceptible to noise and reduces the SNR. An SNR target can therefore set the minimum duration TTSmin for the TTS operation of PD1, and the delay to the start of the TTS operation for PD2 can be reduced by setting the duration of the TTS operation for PD1 to the minimum duration TTSmin. In another example, as shown in FIG.16E, the start time of the exposure period for PD2 can be delayed to time T3, to allow the TTS operation for PD2 to start at the same time as the exposure period. Such arrangements can mitigate the aforementioned dynamic range reduction issue with PD2.
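The scheduling options of FIG.16C-FIG.16E can be illustrated with a small schedule builder. The sketch below uses assumed names and example durations and is not the controller920 implementation; it shows the two levers discussed above: shortening the TTS operation of PD1 toward a minimum duration TTSmin set by an SNR target, and optionally delaying the start of the exposure period of PD2 so that it aligns with the start of the TTS operation of PD2.

```python
# Illustrative schedule builder (assumed names/durations) for two photodiodes sharing
# ADC 616: PD1's TTS is shortened toward a minimum duration set by an SNR target, and
# PD2's exposure start can be delayed so that it coincides with PD2's TTS operation.

def build_schedule(tts1, tts2, reset_gap, exposure, delay_pd2_start=True):
    """All arguments in seconds; returns (start, end) windows keyed by operation."""
    sched = {}
    sched["PD1_exposure"] = (0.0, exposure)
    sched["PD1_TTS"] = (0.0, tts1)
    pd2_tts_start = tts1 + reset_gap                  # comparator reset between the TTS ops
    sched["PD2_TTS"] = (pd2_tts_start, pd2_tts_start + tts2)
    pd2_start = pd2_tts_start if delay_pd2_start else 0.0
    sched["PD2_exposure"] = (pd2_start, pd2_start + exposure)
    return sched

for op, window in build_schedule(tts1=0.2e-3, tts2=0.5e-3,
                                 reset_gap=0.05e-3, exposure=4e-3).items():
    print(op, window)
```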
FIG.17Aillustrates another example of image sensor600including multiple photodiodes that share a comparator. As shown inFIG.17A, image sensor600includes photodiodes PD1a, PD2a, PD3a, and PD4aof pixel cell602a, as well as photodiodes PD1b, PD2b, PD3b, and PD4bof pixel cell602b. The photodiodes of each pixel cell can be configured, such as being arranged in a stack structure (e.g., as shown inFIG.8A) or on a planar surface under a filter array (e.g., as shown inFIG.8B), to measure different frequency components of light. For example, PD1aand PD1bcan be configured to detect both visible and infra-red light (e.g., being associated with a monochrome channel), PD2aand PD2bcan be configured to detect visible red light, PD3aand PD3bcan be configured to detect visible green light, whereas PD4aand PD4bcan be configured to detect blue light. To reduce footprint and power consumption, the photodiodes of each pixel cell can share access to a charge sensing unit, whereas the charge sensing units share access to a comparator. Two memory banks, each controlled by corresponding output logic circuits, are provided to store the digital outputs for each pixel cell. For example, the photodiodes PD1a-PD4aof pixel cell602acan share access to charge sensing unit614a, whereas the photodiodes PD1b-PD4bof pixel cell602bcan share access to charge sensing unit614b. Charge sensing units614aand614bcan share access to comparator906. Via charge sensing unit614a, each photodiode of pixel cell602acan take turn in performing quantization operations to store a digital output at memory bank912a. Moreover, via charge sensing unit614b, each photodiode of pixel cell602bcan take turn in performing quantization operations to store a digital output at memory bank912b. InFIG.17A, controller920can perform, using ADC616and counter914, TTS, FD ADC, and PD ADC operations for one photodiode of each pixel cell, followed by PD ADC operations for the rest of photodiodes. For example, TTS operations can be performed for photodiodes PD1aand PD1bof pixel cells602aand602b, since these photodiodes detect light of a wider frequency range (monochrome channel) and are likely to receive components of light of a high intensity range. On the other hand, PD ADC operations can be performed for photodiodes PD1a, PD2a, PD3a, PD1b, PD2b, and PD3bof pixel cells602aand602b, since these photodiodes detect a relatively narrow frequency range of light and may receive components of light of a low intensity range, and these photodiodes do not have access to charge sensing unit to store overflow charge during their exposure periods. In addition, controller920can also receive TTS duration setting1702, exposure period setting1704, and clock signal1706, to control the TTS durations and exposure periods of photodiodes PD1aand PD1b, as well as the exposure periods of photodiodes PD1a, PD2a, PD3a, PD1b, PD2b, and PD3b. Moreover, counter914receives a separate clock signal1708which, in some examples, can be provided by controller920by dividing clock signal1706by a ratio set based on the target TTS duration and the maximum count of counter914, as explained above. As inFIG.14AandFIG.15A, with controller920and counter914operating based on different clocks, and TTS duration setting1702and exposure period setting1704being supplied from separate registers, the TTS durations of PD1aand PD1bcan be programmed separately from the exposure periods for PD1aand PD1b. This allows the TTS durations of PD1aand PD1bto be reduced while the exposure periods for PD1aand PD1bare extended. 
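The per-photodiode assignment of quantization operations described for FIG.17A can be captured in a compact configuration table. The structure below is hypothetical (the patent describes hardware control signals rather than a software table); it only records which photodiodes receive the full TTS, FD ADC, and PD ADC sequence, which receive PD ADC only, and which memory bank holds each pixel cell's result.

```python
# Compact configuration sketch (hypothetical structure) for the sharing arrangement of
# FIG. 17A: PD1a/PD1b (monochrome) get the full TTS + FD ADC + PD ADC sequence, the
# color photodiodes are quantized with PD ADC only, and each pixel cell's results land
# in its own memory bank.

QUANT_PLAN = {
    # pixel cell 602a -> memory bank 912a
    "PD1a": {"ops": ("TTS", "FD_ADC", "PD_ADC"), "bank": "912a"},
    "PD2a": {"ops": ("PD_ADC",),                 "bank": "912a"},
    "PD3a": {"ops": ("PD_ADC",),                 "bank": "912a"},
    "PD4a": {"ops": ("PD_ADC",),                 "bank": "912a"},
    # pixel cell 602b -> memory bank 912b
    "PD1b": {"ops": ("TTS", "FD_ADC", "PD_ADC"), "bank": "912b"},
    "PD2b": {"ops": ("PD_ADC",),                 "bank": "912b"},
    "PD3b": {"ops": ("PD_ADC",),                 "bank": "912b"},
    "PD4b": {"ops": ("PD_ADC",),                 "bank": "912b"},
}

def photodiodes_with_tts(plan):
    return [pd for pd, cfg in plan.items() if "TTS" in cfg["ops"]]

print(photodiodes_with_tts(QUANT_PLAN))   # ['PD1a', 'PD1b']
```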
As explained above, such arrangements allow TTS operations to be performed for PD1aand PD1bwhile maintaining the global shutter operation. The threshold for saturation detection (and TTS measurement) can be scaled based on Equation 9 to account for the reduced TTS duration, as explained above. FIG.17Billustrates example quantization operations of pixel cells602aand602bof image sensor600ofFIG.17A. InFIG.17B, the time periods for reset and memory read out are omitted for brevity. The quantization operations can be performed by controller920using comparator906and counter914. As shown inFIG.17B, the exposure periods for all photodiodes starts at time T0. The TTS operation for PD1aalso starts at time T0and ends at time T1, followed by the TTS operation for PD1bbetween times T1and T2. Both TTS operations can span across the exposure period for PD1abetween times T0and T2, which also overlaps with the exposure period for PD1bwhich spans between times T0and T3. As described above, as the TTS durations for photodiodes PD1aand PD1bare reduced with respect to the exposure periods, two (or more) TTS operations of multiple photodiodes can be performed within the overlapping portion of the exposure periods among the photodiodes, which allow the dynamic ranges of these photodiodes to be improved while maintaining the global shutter operation. In some examples, the duration of TTS operation for PD1acan be reduced, and/or the exposure period for photodiode PD1bcan be delayed, to improve the dynamic range of PD1bas described above inFIG.16CandFIG.16D. Following TTS operation for PD1b, FD ADC and/or PD ADC operations for PD1acan be performed between times T2and T3, after the exposure period for PD1aends at time T2. Output logic circuits908acan store the output of one of the TTS, FD ADC, or PD ADC operations at memory bank912abased on whether there is overflow charge accumulated at charge sensing unit614a, and if there is, whether the overflow charge from PD1areaches the saturation limit. Moreover, FD ADC and/or PD ADC operations for PD1bcan be performed between times T3and T4, after the exposure period for PD1bends at time T3. Output logic circuits908bcan store the output of one of the TTS, FD ADC, or PD ADC operations at memory bank912bbased on whether there is overflow charge accumulated at charge sensing unit614b, and if there is, whether the overflow charge from PD1breaches the saturation limit. Following the quantization operations for PD1aand PD1b, controller920can control ADC616to perform PD ADC operations for the remaining photodiodes. Specifically, PD ADC operation for PD2acan be performed between times T4and T5to store the output of PD2aat memory bank912a, after the exposure period for PD2aends at time T4. Moreover, PD ADC operation for PD3acan be performed between times T5and T6to store the output of PD3aat memory bank912a, after the exposure period for PD3aends at time T6. This is followed by PD ADC operation for PD4a, which can be performed between times T6and T7to store the output of PD4aat memory bank912a, after the exposure period for PD4aends at time T6. Moreover, PD ADC operation for PD2bcan be performed between times T7and T8to store the output of PD2bat memory bank912b, after the exposure period for PD2bends at time T7. Moreover, PD ADC operation for PD3bcan be performed between times T8and T9to store the output of PD3bat memory bank912b, after the exposure period for PD3bends at time T8. 
This is followed by PD ADC operation for PD4b, which can be performed between times T9and T10to store the output of PD4bat memory bank912b, after the exposure period for PD4bends at time T9. FIG.17Cillustrates another example quantization operations of pixel cells602aand602bof image sensor600ofFIG.17A. InFIG.17C, the time periods for reset and memory read out are omitted for brevity. The quantization operations can be performed by controller920using comparator906and counter914. As shown inFIG.17C, the start times of the exposure periods for all photodiodes can stagger in a similar manner as inFIG.16E, to reduce the difference in durations among the exposure periods of the photodiodes. Such arrangements can reduce the likelihood of the charge sensing units (for TTS/FD ADC operations) and the photodiodes (for PD ADC operations) being saturated during the respective exposure period, which can improve dynamic range. The exposure period for PD1astarts at time T0. The TTS operation for PD1aalso starts at time T0and ends at time T1. The exposure period for PD1bstarts at time T1right before the TTS operation for PD1b, to reduce the likelihood of charge sensing unit614bbeing saturated by the overflow charge from PD1bby the time the TTS operation for PD1bstarts. Such arrangements can improve the dynamic range of PD1bas explained above inFIG.16E. The exposure period for PD1bends at time T3as inFIG.17B. In addition, the exposure periods for photodiodes PD2a, PD3a, and PD4acan start at, respectively, times Ta, Tb, and Tc. Moreover, the exposure periods for photodiodes PD2b, PD3b, and PD4bcan start at, respectively, times Td, Te, and Tf. By delaying the start times of the exposure periods, the durations of the exposure periods can have identical or similar durations. It becomes less likely that the photodiodes can be saturated by the residual charge within their respective exposure periods, which can improve the dynamic ranges of these photodiodes. FIG.18illustrates a method1800for performing a light intensity measurement by an image sensor having at least a photodiode, such as pixel cell602aofFIG.15A. Method1800can be performed by controller920in conjunction with other components of image sensor600. Method1800starts with step1802, in which controller920sets an exposure period to have a first duration. Within the exposure period, controller920enables the photodiode PD to generate and output charge in response to incident light. Controller920can set the exposure period based on exposure period setting1504, which can define a duration of the exposure period for photodiode PD. Based on the duration specified in exposure period setting1504, as well as the frequency of clock signal1506which can define the speed of operation of controller920, controller920can determine a first target count value representing the end time of the exposure period. In step1804, controller920sets a measurement period to have a second duration. The measurement period can include a time-to-saturation (TTS) measurement period. Within the TTS measurement period, controller920can perform a TTS measurement operation based on comparing the voltage output by charge sensing unit614with a flat saturation threshold to determine whether the charge accumulated in the charge sensing unit exceeds the saturation threshold, and if it does, a time-to-saturation measurement for the time it takes for the charge to exceed the saturation threshold. 
Controller920 can set the second duration based on TTS duration setting1502 as well as the frequency of clock signal1506. For example, controller920 can determine a second target count value representing the end time of the TTS measurement operation. In addition, controller920 can also determine the frequency of clock signal1508 based on TTS duration setting1502. For example, given the maximum count of counter914 (e.g., 255) and the duration of the TTS measurement operation, controller920 can determine the frequency of clock signal1508 such that counter914 can sweep through the entire range of count values within the TTS measurement operation. In step1804, the duration of the TTS measurement period is set separately from the duration of the exposure period. Specifically, TTS duration setting1502 and exposure period setting1504 can be supplied from separate registers, which allow the two settings to be individually programmable. In addition, clock signal1508, which is supplied to counter914 and sets the frequency at which counter914 updates the count value, can have a different frequency from clock signal1506 supplied to controller920, and the frequency of clock signal1508 can also be set based on TTS duration setting1502. This allows the TTS measurement period to be set using a fast clock (e.g., clock signal1508), whereas the exposure period can be set using a slow clock (e.g., clock signal1506). In addition, controller920 can also use TTS duration setting1502 to set a lower limit of the exposure period, to ensure that the exposure period does not end before the TTS measurement operation completes. In a case where exposure period setting1504 sets a shorter exposure period than the duration of the TTS measurement operation, controller920 can override exposure period setting1504 and set the exposure period to be at least equal to the duration of the TTS measurement operation set according to TTS duration setting1502. In step1806, controller920 enables the photodiode PD to generate a charge in response to light within the exposure period having the first duration. Controller920 can control the start and end of the exposure period using an internal counter that operates on clock signal1506, based on comparing the count values with the first target count value representing the end time of the exposure period. For example, controller920 can disable the AB gate and/or release the photodiode from the reset state when the count value is zero, and enable the AB gate and/or reset the photodiode when the count value reaches the target count value. The photodiode PD can accumulate at least a first part of the charge as residual charge, while a second part of the charge can be accumulated as overflow charge at charge sensing unit614. In step1808, controller920 determines, using a quantizer (e.g., comparator906 and counter914), and within the TTS measurement period having the second duration, whether a first quantity of the overflow charge exceeds a threshold, and a time-to-saturation (TTS) measurement representing the time it takes for the first quantity to exceed the threshold, as part of the TTS measurement operation. Controller920 can start counter914 when the count value of the internal counter is zero and allow counter914 to run, and reset counter914 when the count value reaches the second target count value representing the end time of the TTS measurement operation.
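The TTS determination in step1808 can be modeled at a high level as follows. This is a simplified digital-domain sketch with assumed names; in the actual device the comparison is performed by the analog comparator906 while counter914 runs, and the memory write is gated by output logic circuits908.

```python
# Simplified model (not the analog circuit) of the TTS measurement in step 1808: while
# counter 914 runs, the comparator output is checked against a flat saturation threshold;
# the count value at the moment the comparator trips is the time-to-saturation measurement.

def tts_measurement(sample_comp_in, threshold, max_count=255):
    """sample_comp_in(count) returns the buffered charge-sensing-unit voltage at that tick.
    Returns (saturated, count_at_trip)."""
    for count in range(max_count + 1):
        if sample_comp_in(count) >= threshold:   # comparator 906 trips
            return True, count                   # latch count into memory bank 912
    return False, None                           # no saturation: fall through to FD/PD ADC

# Example with an assumed linear ramp of accumulated overflow charge:
saturated, tts_count = tts_measurement(lambda c: 0.004 * c, threshold=0.5)
print(saturated, tts_count)                      # True 125
```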
As described above, controller920 can use comparator906 to compare the voltage output by charge sensing unit614 with a flat saturation threshold to determine whether the overflow charge accumulated in the charge sensing unit exceeds the saturation threshold, to generate a decision. If the decision indicates that the overflow charge exceeds the saturation threshold, the decision can cause memory bank912 to store a count value from counter914 to represent the TTS measurement result. As described above, the threshold is scaled based on a ratio between the second duration (of the TTS measurement period) and the first duration (of the exposure period) to reduce or eliminate the dead zone in the measurable range of light intensities. In step1810, based on whether the first quantity exceeds the threshold, controller920 outputs a first value representing the TTS measurement or a second value representing a second quantity of the charge generated by the photodiode within the exposure period to represent the intensity of light. The second value can be generated based on, for example, a FD ADC operation to measure the quantity of overflow charge, a PD ADC operation to measure the quantity of residual charge, etc. Output logic circuits908 can store a first indication of whether overflow charge is accumulated at charge sensing unit614, and a second indication of whether the overflow charge exceeds the saturation threshold, and store one of the TTS measurement result, the FD ADC result, or the PD ADC result at memory bank912. FIG.19A and FIG.19B illustrate a method1900 for performing a light intensity measurement by an image sensor having multiple photodiodes that share a quantizer (e.g., comparator906), such as image sensor600 of FIG.16A and FIG.17A. Method1900 can be performed by controller920 in conjunction with other components of image sensor600. Method1900 starts with step1902, in which controller920 sets a first exposure period of a first photodiode (e.g., photodiode PD1 of FIG.16A, photodiode PD1a of FIG.17A, etc.) to have a first duration. Controller920 can set the first exposure period based on exposure period setting1604/1704. In step1904, controller920 sets a first time-to-saturation (TTS) measurement period to have a second duration. The duration of the first TTS measurement period is set separately from the duration of the first exposure period, but the first TTS measurement period sets a lower limit of the first exposure period, as explained above in FIG.16A, FIG.17A, and FIG.18. In step1906, controller920 sets a second exposure period of a second photodiode (e.g., photodiode PD2 of FIG.16A, photodiode PD1b of FIG.17A, etc.) to have a third duration. Controller920 can set the second exposure period based on exposure period setting1604/1704. In step1908, controller920 sets a second time-to-saturation (TTS) measurement period to have a fourth duration. The duration of the second TTS measurement period is set separately from the duration of the second exposure period, but the second TTS measurement period sets a lower limit of the second exposure period, as explained above in FIG.16A, FIG.17A, and FIG.18. In some examples, as shown in FIG.16B-FIG.16D and FIG.17B, controller920 can set the start time of the second exposure period of the second photodiode to be the same as the start time of the first exposure period of the first photodiode.
Moreover, the first and second TTS measurement periods can fit into the overlapping period between the first and second exposure periods, to allow TTS operations to be performed for light received within the overlapping period for both photodiodes. Such arrangements can improve the global shutter operation. On the other hand, in some examples, to improve the dynamic range of the second photodiode, the duration of the first TTS operation can be reduced based on a target SNR of the first photodiode, as described inFIG.16D. Moreover, the start time of the second exposure period can be delayed till after the first TTS operation completes to align with the start time of the second TTS operation for the second photodiode, as described inFIG.16E. In step1910, controller920enables the first photodiode to generate a first charge in response to a first component of light within the first exposure period having the first duration. Controller920can track the start and end of the first exposure period using an internal counter that operates on clock signal1606/1706and based on comparing the count values with the target count value representing the end time of the first exposure period. The first photodiode can accumulate at least a first part of the first charge as a first residual charge, while a second part of the first charge can be accumulated as a first overflow charge at charge sensing unit614a. In step1912, controller920enables the second photodiode to generate a second charge in response to a second component of the light within the second exposure period having the second duration. Controller920can track the start and end of the second exposure period using the internal counter that operates on clock signal1606/1706and based on comparing the count values with the target count value representing the end time of the second exposure period. The second photodiode can accumulate at least a first part of the second charge as a second residual charge, while a second part of the second charge can be accumulated as a second overflow charge at charge sensing unit614b. Referring toFIG.19B, in step1914, controller920determines, using a quantizer (e.g., comparator906and counter914), and within the first TTS measurement period having the second duration, whether a first quantity of the first overflow charge exceeds a first threshold, and a first time-to-saturation (TTS) measurement representing a first time it takes for the first quantity to exceed the first threshold, as part of the first TTS measurement operation. The first threshold is determined based on scaling a reference threshold with a ratio between the second duration of the first TTS measurement period and the first duration of the first exposure period. In step1916, based on whether the first quantity of the first overflow charge exceeds the first threshold, controller920generates a first value representing the first TTS measurement or a second value representing a second quantity of the first charge generated by the first photodiode within the first exposure period to represent the intensity of the first component of light. The second value can be generated based on, for example, a FD ADC operation to measure the quantity of first overflow charge, a PD ADC operation to measure the quantity of first residual charge, etc. One of the first value or the second value can be stored in a memory (e.g., memory912) and output to represent the intensity of the first component of light. 
In step1918, controller920 determines, using the quantizer, and within the second TTS measurement period having the fourth duration, whether a third quantity of the second overflow charge exceeds a second threshold, and a second time-to-saturation (TTS) measurement representing a second time it takes for the third quantity to exceed the second threshold, as part of the second TTS measurement operation. The second threshold is determined based on scaling the reference threshold with a ratio between the fourth duration of the second TTS measurement period and the third duration of the second exposure period. In step1920, based on whether the third quantity of the second overflow charge exceeds the second threshold, controller920 generates a third value representing the second TTS measurement or a fourth value representing a second quantity of the second charge generated by the second photodiode within the second exposure period to represent the intensity of the second component of light. The fourth value can be generated based on, for example, a FD ADC operation to measure the quantity of the second overflow charge, a PD ADC operation to measure the quantity of the second residual charge, etc. One of the third value or the fourth value can be stored in a memory (e.g., memory912) and output to represent the intensity of the second component of light. Some portions of this description describe the examples of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, and/or hardware. Steps, operations, or processes described may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In some examples, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Examples of the disclosure may also relate to an apparatus for performing the operations described. The apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Examples of the disclosure may also relate to a product that is produced by a computing process described herein.
Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any example of a computer program product or other data combination described herein. The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the examples is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
169,793
11943562
DETAILED DESCRIPTION Examples of the present disclosure improve the functionality of electronic software and systems by enhancing users' experience of utilizing a camera of a client device. Examples of the present disclosure further improve the functionality of electronic software and systems by reducing the amount of storage space and processing resources associated with generating a video file based on a sequence of frames captured in the process of video recording. In some examples, the reduction in the amount of storage space and processing resources required for creating a video file results from discarding some of the recorded video frames before creating and storing the video file. In order to start and stop video recording, a user may activate the capture button provided in the user interface (UI) of the associated camera application. The camera of the client device, such as a smartphone, for example, captures the output of the digital image sensor of the camera, and, upon the ending of the recording session, the system generates a video file (also referred to as simply a video) using frames captured during the video recording process. The resulting video can then be saved and stored for future viewing. However, there are times when a user may wish to view the already-recorded frames while the recording is still in progress. Furthermore, as mentioned above, depending on the circumstances surrounding a recording session, a recorded video may include a portion of frames at the beginning of the video that are of little or no value to the user. The technical problem of generating a video that has a starting point later in time than the starting time of the associated recording session is addressed by providing a real time video editing functionality. In some examples, a real time video editing functionality is in the form of a real time video editor provided by a messaging system for exchanging data over a network, which is described further below, with reference to FIG.1-5. The use of a real time video editor can be described as follows. A user starts the video recording process by activating the capture button provided in the camera view user interface (UI) of the associated camera application and determines, at a later time, but while the video recording is still in progress, that the first portion of the video is not of interest to them. The user can then perform a predetermined gesture directed to the camera view UI, such as a left-to-right swiping gesture, which causes the camera view UI to display the captured frames in reverse order, thus imitating or visualizing a process of rewinding the video. When the gesture stops, the visualization of the rewinding process also stops, and the user is presented with one or more frames corresponding to the place in the sequence of frames up to which the video was rewound. The user may then be presented with a pop-up message requesting the user to either cancel or confirm that the video file that would be created, once the video recording process is stopped, should start not with the first frame in the original sequence of frames (the first frame recorded at the time the recording process was commenced), but with the one or more frames currently displayed in the camera view UI, up to which the video was rewound. An example of operation of a real time video editor is described further below, with reference to FIG.6.
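A minimal sketch of the trimming behavior described above is given below, assuming a simple in-memory list of frames; the class and method names are hypothetical and do not represent the actual implementation of the real time video editor. It captures the essential idea: frames continue to be captured during the rewind gesture, the confirmed rewind position becomes the new starting frame, and only the frames from that point onward are used when the video file is created, which is what saves storage space and processing resources.

```python
# Hypothetical sketch of the real time video editor's trimming behavior: frames keep
# being captured while the user rewinds; if the user confirms the rewound position,
# the frames before it are discarded when the video file is assembled.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RecordingSession:
    frames: List[bytes] = field(default_factory=list)
    start_index: int = 0            # first frame to include in the generated video

    def capture(self, frame: bytes):
        self.frames.append(frame)   # recording continues during rewind/preview

    def rewind_to(self, index: int) -> bytes:
        # called as the swipe gesture stops; the camera view UI previews frames[index]
        return self.frames[index]

    def confirm_new_start(self, index: int):
        self.start_index = index    # confirmed via the pop-up message

    def finalize(self) -> List[bytes]:
        # only frames from the confirmed starting point are used for the video file
        return self.frames[self.start_index:]
```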
As mentioned above, in some examples a real time video editor is provided by a messaging system for exchanging data over a network, which is described below. Networked Computing Environment FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications, including a messaging client104. Each messaging client104is communicatively coupled to other instances of the messaging client104and a messaging server system108via a network106(e.g., the Internet). A messaging client104is able to communicate and exchange data with another messaging client104and with the messaging server system108via the network106. The data exchanged between messaging client104, and between a messaging client104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The messaging server system108provides server-side functionality via the network106to a particular messaging client104. While certain functions of the messaging system100are described herein as being performed by either a messaging client104or by the messaging server system108, the location of certain functionality either within the messaging client104or the messaging server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108but to later migrate this technology and functionality to the messaging client104where a client device102has sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client104. This data may include, as examples, message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, live event information, as well as images and video captured with a front facing camera of an associated client device using a viewfinder ring flash. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client104. For example, the messaging client104can present a camera view UI that displays the output of a digital image sensor of a camera provided with the client device102, a camera view UI that displays output of a digital sensor of the camera and a shutter user selectable element activatable to start a video recording process. Some examples of a camera view UI are described further below, with reference toFIG.7-9. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, application servers112. The application servers112are communicatively coupled to a database server118, which facilitates access to a database120that stores data associated with messages processed by the application servers112. Similarly, a web server124is coupled to the application servers112, and provides web-based interfaces to the application servers112. 
To this end, the web server124processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The Application Program Interface (API) server110receives and transmits message data (e.g., commands and message payloads) between the client device102and the application servers112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client104in order to invoke functionality of the application servers112. The Application Program Interface (API) server110exposes various functions supported by the application servers112, including account registration, login functionality, the sending of messages, via the application servers112, from a particular messaging client104to another messaging client104, the sending of media files (e.g., images or video) from a messaging client104to a messaging server114, and for possible access by another messaging client104, the settings of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph), the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client104). The application servers112host a number of server applications and subsystems, including for example a messaging server114, an image processing server116, and a social network server122. The messaging server114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available to the messaging client104. In some examples, a collection may include an a video generated using the real time video editor. Other processor and memory intensive processing of data may also be performed server-side by the messaging server114, in view of the hardware requirements for such processing. The application servers112also include an image processing server116that is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the messaging server114. Some of the various image processing operations may be performed by various AR components, which can be hosted or supported by the image processing server116. In some examples, an image processing server116is configured to provide the functionality of the real time video editor described herein. The social network server122supports various social networking functions and services and makes these functions and services available to the messaging server114. To this end, the social network server122maintains and accesses an entity graph306(as shown inFIG.3) within the database120. 
Examples of functions and services supported by the social network server122include the identification of other users of the messaging system100with which a particular user has a “friend” relationship or is “following,” and also the identification of other entities and interests of a particular user. System Architecture FIG.2is a block diagram illustrating further details regarding the messaging system100, according to some examples. Specifically, the messaging system100is shown to comprise the messaging client104and the application servers112. The messaging system100embodies a number of subsystems, which are supported on the client-side by the messaging client104, and on the sever-side by the application servers112. These subsystems include, for example, an ephemeral timer system202, a collection management system204, an augmentation system206, and a real time video editor208. The real time video editor208is configured to facilitate changing the starting point of a video recording while the recording process is in progress, as described in further detail below, with reference toFIG.6-9. The ephemeral timer system202is responsible for enforcing the temporary or time-limited access to content by the messaging client104and the messaging server114. The ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively enable access (e.g., for presentation and display) to messages and associated content via the messaging client104. Further details regarding the operation of the ephemeral timer system202are provided below. The collection management system204is responsible for managing sets or collections of media (e.g., collections of text, image, video, and audio data). A collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. In a further example, a collection may include content, which was generated using one or more AR components. In some examples, a media content item in a collection is generated using the real time video editor. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client104. The collection management system204furthermore includes a curation interface212that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface212enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain examples, compensation may be paid to a user for the inclusion of user-generated content into a collection. In such cases, the collection management system204operates to automatically make payments to such users for the use of their content. 
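As a rough sketch of how the subsystems enumerated above might be composed, the following Python outline groups them under a single messaging system object. The class names are assumptions made for illustration; the reference numerals fromFIG.2appear only in comments, and the actual split of each subsystem between client-side and server-side support remains a design choice.

class EphemeralTimerSystem:          # 202: enforces temporary or time-limited access to content
    pass

class CollectionManagementSystem:    # 204: manages collections (stories, galleries) and curation
    pass

class AugmentationSystem:            # 206: media overlays and AR components
    pass

class RealTimeVideoEditor:           # 208: changes the starting point of an in-progress recording
    pass

class MessagingSystem:               # 100: illustrative composition of the subsystems
    def __init__(self) -> None:
        self.ephemeral_timer = EphemeralTimerSystem()
        self.collections = CollectionManagementSystem()
        self.augmentation = AugmentationSystem()
        self.video_editor = RealTimeVideoEditor()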
The augmentation system206provides various functions that enable a user to augment (e.g., annotate or otherwise modify or edit) media content, which may be associated with a message. For example, the augmentation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The media overlays may be stored in the database120and accessed through the database server118. The augmentation system206operatively supplies a media overlay or augmentation (e.g., an image filter) to the messaging client104based on a geolocation of the client device102. In another example, the augmentation system206operatively supplies a media overlay to the messaging client104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the augmentation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. In some examples, the augmentation system206is configured to provide access to AR components that can be implemented using a programming language suitable for app development, such as, e.g., JavaScript or Java, and that are identified in the messaging server system by respective AR component identifiers. An AR component may include or reference various image processing operations corresponding to an image modification, filter, media overlay, transformation, and the like. These image processing operations can provide an interactive experience of a real-world environment, where objects, surfaces, backgrounds, lighting, etc., captured by a digital image sensor or a camera, are enhanced by computer-generated perceptual information. In this context, an AR component comprises the collection of data, parameters, and other assets needed to apply a selected augmented reality experience to an image or a video feed. In some embodiments, an AR component includes modules configured to modify or transform image data presented within a graphical user interface (GUI) of a client device in some way. For example, complex additions or transformations to the content images may be performed using AR component data, such as adding rabbit ears to the head of a person in a video clip, adding floating hearts with background coloring to a video clip, altering the proportions of a person's features within a video clip, or numerous other such transformations.
This includes both real-time modifications that modify an image as it is captured using a camera associated with a client device and then displayed on a screen of the client device with the AR component modifications, as well as modifications to stored content, such as video clips in a gallery that may be modified using AR components. Various augmented reality functionality that may be provided by an AR component include detection of objects (e.g. faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various embodiments, different methods for achieving such transformations may be used. For example, some embodiments may involve generating a 3D mesh model of the object or objects, and using transformations and animated textures of the model within the video to achieve the transformation. In other embodiments, tracking of points on an object may be used to place an image or texture, which may be two dimensional or three dimensional, at the tracked position. In still further embodiments, neural network analysis of video frames may be used to place images, models, or textures in content (e.g. images or frames of video). AR component data thus refers to both to the images, models, and textures used to create transformations in content, as well as to additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement. Data Architecture FIG.3is a schematic diagram illustrating data structures300, which may be stored in the database120of the messaging server system108, according to certain examples. While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table302. This message data includes, for any particular one message, at least message sender data, message recipient (or receiver) data, and a payload. The payload of a message may include content generated using a viewfinder ring flash. Further details regarding information that may be included in a message, and included within the message data stored in the message table302is described below with reference toFIG.4. An entity table304stores entity data, and is linked (e.g., referentially) to an entity graph306and profile data308. Entities for which records are maintained within the entity table304may include individuals, corporate entities, organizations, objects, places, events, and so forth. Regardless of entity type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph306stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization) interested-based or activity-based, merely for example. 
With reference to the functionality provided by the AR component, the entity graph306stores information that can be used, in cases where the AR component is configured to permit using a portrait image of a user other than that of the user controlling the associated client device for modifying the target media content object, to determine a further profile that is connected to the profile representing the user controlling the associated client device. As mentioned above, the portrait image of a user may be stored in a user profile representing the user in the messaging system. The profile data308stores multiple types of profile data about a particular entity. The profile data308may be selectively used and presented to other users of the messaging system100, based on privacy settings specified by a particular entity. Where the entity is an individual, the profile data308includes, for example, a user name, telephone number, address, settings (e.g., notification and privacy settings), as well as a user-selected avatar representation (or collection of such avatar representations). A particular user may then selectively include one or more of these avatar representations within the content of messages communicated via the messaging system100, and on map interfaces displayed by messaging clients104to other users. The collection of avatar representations may include “status avatars,” which present a graphical representation of a status or activity that the user may select to communicate at a particular time. The database120also stores augmentation data in an augmentation table310. The augmentation data is associated with and applied to videos (for which data is stored in a video table314) and images (for which data is stored in an image table316). In some examples, the augmentation data is used by various AR components, including the AR component. An example of augmentation data is augmented reality (AR) tools that can be used in AR components to effectuate image transformations. Image transformations include real-time modifications, which modify an image (e.g., a video frame) as it is captured using a digital image sensor of a client device102. The modified image is displayed on a screen of the client device102with the modifications. A story table312stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table304). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. In some examples, the story table312stores one or more images or videos that were created using a viewfinder ring flash. As mentioned above, the video table314stores video data that, in one example, is associated with messages for which records are maintained within the message table302. In some examples, the video table314stores one or more videos created using a real time video editor. Similarly, the image table316stores image data, which may be associated with messages for which message data is stored in the entity table304. 
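For illustration only, the tables described with reference toFIG.3can be sketched as the following Python data structures. The field names are assumptions introduced for this example; the disclosure defines the tables and their links, not a concrete schema.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProfileData:            # profile data 308
    user_name: str
    avatar_ids: List[str] = field(default_factory=list)

@dataclass
class EntityRecord:           # entity table 304, linked to the entity graph 306 and profile data 308
    entity_id: str
    entity_type: str
    relationship_ids: List[str] = field(default_factory=list)   # edges of the entity graph 306
    profile: Optional[ProfileData] = None

@dataclass
class MessageRecord:          # message table 302
    message_id: str
    sender_id: str
    receiver_id: str
    payload: bytes

@dataclass
class AugmentationRecord:     # augmentation table 310, applied to videos (314) and images (316)
    augmentation_id: str
    video_ids: List[str] = field(default_factory=list)
    image_ids: List[str] = field(default_factory=list)

@dataclass
class StoryRecord:            # story table 312: a collection of messages and associated media
    story_id: str
    owner_entity_id: str
    message_ids: List[str] = field(default_factory=list)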
The entity table304may associate various augmentations from the augmentation table310with various images and videos stored in the image table316and the video table314. Data Communications Architecture FIG.4is a schematic diagram illustrating a structure of a message400, according to some examples, generated by a messaging client104for communication to a further messaging client104or the messaging server114. The content of a particular message400is used to populate the message table302stored within the database120, accessible by the messaging server114. Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application servers112. The content of a message400, in some examples, includes an image or a video that was created using the AR component. A message400is shown to include the following example components:message identifier402: a unique identifier that identifies the message400.message text payload404: text, to be generated by a user via a user interface of the client device102, and that is included in the message400.message image payload406: image data, captured by a camera component of a client device102or retrieved from a memory component of a client device102, and that is included in the message400. Image data for a sent or received message400may be stored in the image table316.message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102, and that is included in the message400. Video data for a sent or received message400may be stored in the video table314. The video data may include content generated using a real time video editor.message audio payload410: audio data, captured by a microphone or retrieved from a memory component of the client device102, and that is included in the message400.message augmentation data412: augmentation data (e.g., filters, stickers, or other annotations or enhancements) that represents augmentations to be applied to message image payload406, message video payload408, message audio payload410of the message400. Augmentation data for a sent or received message400may be stored in the augmentation table310.message duration parameter414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client104.message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respect to content items included in the content (e.g., a specific image into within the message image payload406, or a specific video in the message video payload408).message story identifier418: identifier values identifying one or more content collections (e.g., “stories” identified in the story table312) with which a particular content item in the message image payload406of the message400is associated. For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. 
For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the Client device102on which the message400was generated and from which the message400was sent.message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed. The contents (e.g., values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table316. Similarly, values within the message video payload408may point to data stored within a video table314, values stored within the message augmentations412may point to data stored in an augmentation table310, values stored within the message story identifier418may point to data stored in a story table312, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table304. Time-Based Access Limitation Architecture FIG.5is a schematic diagram illustrating an access-limiting process500, in terms of which access to content (e.g., an ephemeral message502, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message group504) may be time-limited (e.g., made ephemeral). The content of an ephemeral message502, in some examples, includes an image or a video that was created using a viewfinder ring flash. An ephemeral message502is shown to be associated with a message duration parameter506, the value of which determines an amount of time that the ephemeral message502will be displayed to a receiving user of the ephemeral message502by the messaging client104. In one example, an ephemeral message502is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter506. In some examples, the ephemeral message502may include a video created using a real time video editor. The message duration parameter506and the message receiver identifier424are shown to be inputs to a message timer512, which is responsible for determining the amount of time that the ephemeral message502is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message502will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter506. The message timer512is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message502) to a receiving user. The ephemeral message502is shown inFIG.5to be included within an ephemeral message group504(e.g., a collection of messages in a personal story, or an event story). 
The ephemeral message group504has an associated group duration parameter508, a value of which determines a time duration for which the ephemeral message group504is presented and accessible to users of the messaging system100. The group duration parameter508, for example, may be the duration of a music concert, where the ephemeral message group504is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the group duration parameter508when performing the setup and creation of the ephemeral message group504. Additionally, each ephemeral message502within the ephemeral message group504has an associated group participation parameter510, a value of which determines the duration of time for which the ephemeral message502will be accessible within the context of the ephemeral message group504. Accordingly, a particular ephemeral message group504may “expire” and become inaccessible within the context of the ephemeral message group504, prior to the ephemeral message group504itself expiring in terms of the group duration parameter508. The group duration parameter508, group participation parameter510, and message receiver identifier424each provide input to a group timer514, which operationally determines, firstly, whether a particular ephemeral message502of the ephemeral message group504will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message group504is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the group timer514operationally controls the overall lifespan of an associated ephemeral message group504, as well as an individual ephemeral message502included in the ephemeral message group504. In one example, each and every ephemeral message502within the ephemeral message group504remains viewable and accessible for a time period specified by the group duration parameter508. In a further example, a certain ephemeral message502may expire, within the context of ephemeral message group504, based on a group participation parameter510. Note that a message duration parameter506may still determine the duration of time for which a particular ephemeral message502is displayed to a receiving user, even within the context of the ephemeral message group504. Accordingly, the message duration parameter506determines the duration of time that a particular ephemeral message502is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message502inside or outside the context of an ephemeral message group504. The ephemeral timer system202may furthermore operationally remove a particular ephemeral message502from the ephemeral message group504based on a determination that it has exceeded an associated group participation parameter510. For example, when a sending user has established a group participation parameter510of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message502from the ephemeral message group504after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message group504when either the group participation parameter510for each and every ephemeral message502within the ephemeral message group504has expired, or when the ephemeral message group504itself has expired in terms of the group duration parameter508. 
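A minimal Python sketch of the expiration rules just described follows. It assumes all parameters are expressed in seconds measured from a common clock; the function and field names are illustrative and are simplified relative to the message timer512and group timer514.

from dataclasses import dataclass

@dataclass
class EphemeralMessage:
    posted_at: float            # time the message was posted
    message_duration: float     # message duration parameter 506: display time per receiving user
    group_participation: float  # group participation parameter 510: lifetime inside the group

@dataclass
class EphemeralMessageGroup:
    created_at: float
    group_duration: float       # group duration parameter 508

def message_accessible_in_group(msg: EphemeralMessage, group: EphemeralMessageGroup, now: float) -> bool:
    # A message remains accessible within the group only while neither the group
    # duration nor the message's own group participation window has expired.
    group_alive = now <= group.created_at + group.group_duration
    participation_alive = now <= msg.posted_at + msg.group_participation
    return group_alive and participation_alive

def display_time_remaining(msg: EphemeralMessage, viewing_started_at: float, now: float) -> float:
    # The message duration parameter bounds how long the message is shown to a
    # particular receiving user, inside or outside the group context.
    return max(0.0, msg.message_duration - (now - viewing_started_at))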
In certain use cases, a creator of a particular ephemeral message group504may specify an indefinite group duration parameter508. In this case, the expiration of the group participation parameter510for the last remaining ephemeral message502within the ephemeral message group504will determine when the ephemeral message group504itself expires. In this case, a new ephemeral message502, added to the ephemeral message group504, with a new group participation parameter510, effectively extends the life of an ephemeral message group504to equal the value of the group participation parameter510. Responsive to the ephemeral timer system202determining that an ephemeral message group504has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message group504to no longer be displayed within a user interface of the messaging client104. Similarly, when the ephemeral timer system202determines that the message duration parameter506for a particular ephemeral message502has expired, the ephemeral timer system202causes the messaging client104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message502. Process Flow and User Interfaces FIG.6is a flowchart illustrating a method600for real time video editing in accordance with some examples. While certain operations of the process600may be described as being performed by certain devices, in different examples, different devices or a combination of devices may perform these operations. For example, operations described below may be performed by the client device102or in combination with a server-side computing device (e.g., the message messaging server system108). The method600starts at operation610, with commencing a video recording process by a camera of a client device. The video recording process, while in progress, produces a sequence of frames, each frame from the sequence of frames associated with a time stamp. The resulting video is not finalized until the video recording process is ended, at which time the resulting video is finalized (encoded into a desired format, for example, and saved for future access). The commencing of the video recording process occurs, in some examples, in response to activation of a shutter user selectable element in a camera view user interface (UI) displayed at a client device. An example of a camera view UI is shown inFIG.7.FIG.7is a diagrammatic representation700of a camera view UI displaying the output of the digital image sensor of a camera. The output of the digital sensor of a camera is represented in area710, showing the sky with clouds in this example. The camera view UI shown inFIG.7also includes a shutter user selectable element720. In some examples, the camera view UI is provided by the messaging system for exchanging data over a network described above, with reference toFIG.1-5. At operation620, while the video recording process is in progress, the real time video editor detects a gesture directed at the camera view UI. The gesture can be, for example, a left to right swiping gesture, as illustrated inFIG.8.FIG.8is a diagrammatic representation800of a left to right swiping gesture directed at the camera view UI displaying the output of the digital image sensor of the camera in area820. 
InFIG.8, the curved arrow pointing right and the stylized picture of a hand with a pointing finger, identified by reference numeral810, are not part of the camera view UI, but rather a visualization of a left to right swiping gesture. In response to the detecting of the gesture, the real time video editor causes the captured frames to be displayed in reverse order (in a descending order based on respective time stamps of the frames), in a manner imitating rewinding of the video. The displaying of the captured frames in sequential reverse order continues until the gesture stops, at which point the currently displayed frame is considered to be potentially a new starting point of a video that would result from the video recording process. The frame displayed in the camera view UI at the time the swiping gesture stops is referred to as a new first frame, for the purposes of this description. At operation630, in response to the detecting of the gesture, the real time video editor causes displaying of the new first frame, where the new first frame is selected based on the duration of the gesture. For example, if the gesture is brief, the sequence of frames being captured is rewound just a few frames back. If the gesture is longer, the sequence of frames is rewound further back. In some examples, the real time video editor may use, in addition to or instead of the duration, other characteristics of the gesture, such as speed, acceleration, and so on. The time stamp of the new first frame indicates a point in time prior to a time when the gesture was detected. FIG.9is a diagrammatic representation900of a rewind action imitation in a camera view user interface, in accordance with some examples. InFIG.9, the curved arrow pointing right and the stylized picture of a hand with a pointing finger, identified by reference numeral910, are not part of the camera view UI, but rather a visualization of a left to right swiping gesture. Compared toFIG.8, the stylized picture of a hand inFIG.9has a finger pointing to the right, which is a visualization of the ending of the gesture duration. Furthermore, inFIG.7, which is a visualization of one of the earlier frames with respect to commencement of the video recording, the output of the digital sensor of the camera in area710shows an empty sky with clouds, while inFIG.8the corresponding area820shows two planes, which is a visualization of an event that a user may have been expecting and wishing to capture in a video. InFIG.9, frame920shows one plane, while frames930and940show just clouds, which is a visualization of the event of interest (the arrival of the first plane in the sky) that occurred contemporaneously with the capturing of frame940. The real time video editing methodology described herein permits a user to “rewind” the video in real time, while the recording session is still in progress, and set the new starting point for the video, e.g., starting with frame940. In some examples, subsequent to the displaying of the new first frame from the sequence of frames in the camera view UI, and while the video recording process is still in progress, the real time video editor obtains from a user a selection or confirmation to identify the new first frame as a new starting point of the video recording process. The obtaining of the selection may be in the form of a presentation of a user selectable element overlaid over the new first frame presented in the camera view UI.
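One way the mapping from the gesture to the new first frame could be expressed is sketched below in Python. The function name and the rewind_rate and frames_per_second parameters are assumptions introduced for illustration; as noted above, the speed or acceleration of the gesture could be folded into the same calculation.

from typing import Sequence

def select_new_first_frame(frames: Sequence, gesture_duration: float,
                           frames_per_second: float = 30.0,
                           rewind_rate: float = 2.0):
    # Maps the duration of the swiping gesture to a position in the captured
    # sequence: a brief gesture rewinds only a few frames back, while a longer
    # gesture rewinds further back. rewind_rate is an assumed tuning parameter
    # giving seconds of footage rewound per second of gesture.
    frames_to_rewind = int(gesture_duration * rewind_rate * frames_per_second)
    new_index = max(0, len(frames) - 1 - frames_to_rewind)
    return frames[new_index]   # candidate new first frame, pending user confirmation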
In order to make the new first frame a new starting point of the video that would result from the video recording process, the real time video editor may be configured to discard frames with time stamps indicating a time prior to the time stamp of the new first frame (in other words, to generate the video starting with the new first frame). At operation640, in response to the ending or stopping of the video recording process, the real time video editor generates a video file using frames captured during the video recording process, starting with the new first frame and excluding frames with time stamps indicating an earlier time than the time stamp of the new first frame. The ending of the video recording process may be in response to a further activation of a shutter user selectable element in a camera view UI. Machine Architecture FIG.10is a diagrammatic representation of the machine1000within which instructions1008(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1000to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions1008may cause the machine1000to execute any one or more of the methods described herein. The instructions1008transform the general, non-programmed machine1000into a particular machine1000programmed to carry out the described and illustrated functions in the manner described. The machine1000may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1000may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1000may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1008, sequentially or otherwise, that specify actions to be taken by the machine1000. Further, while only a single machine1000is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1008to perform any one or more of the methodologies discussed herein. The machine1000, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108. In some examples, the machine1000may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine1000may include processors1002, memory1004, and input/output (I/O) components1038, which may be configured to communicate with each other via a bus1040.
In an example, the processors1002(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1006and a processor1010that execute the instructions1008. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.10shows multiple processors1002, the machine1000may include a single processor with a single-core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiples cores, or any combination thereof. The memory1004includes a main memory1012, a static memory1014, and a storage unit1016, both accessible to the processors1002via the bus1040. The main memory1004, the static memory1014, and storage unit1016store the instructions1008embodying any one or more of the methodologies or functions described herein. The instructions1008may also reside, completely or partially, within the main memory1012, within the static memory1014, within machine-readable medium1018within the storage unit1016, within at least one of the processors1002(e.g., within the Processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1000. The I/O components1038may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1038that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1038may include many other components that are not shown inFIG.10. In various examples, the I/O components1038may include user output components1024and user input components1026. The user output components1024may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components1026may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components1038may include biometric components1028, motion components1030, environmental components1032, or position components1034, among a wide array of other components. 
For example, the biometric components1028include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1030include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope). The environmental components1032include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detection concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front facing cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front facing cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. In the examples where the front facing camera is used with a viewfinder ring flash described herein, the user has the ability to use augmented reality face filters in low light conditions, even in complete darkness, as the viewfinder ring flash illuminates the user's face without obscuring the output of the digital image sensor. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102. These multiple cameras systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example. The position components1034include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1038further include communication components1036operable to couple the machine1000to a network1020or devices1022via respective coupling or connections. 
For example, the communication components1036may include a network interface Component or another suitable device to interface with the network1020. In further examples, the communication components1036may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1022may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components636may detect identifiers or include components operable to detect identifiers. For example, the communication components636may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF410, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1036, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory1012, static memory1014, and memory of the processors1002) and storage unit1016may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1008), when executed by processors1002, cause various operations to implement the disclosed examples. The instructions1008may be transmitted or received over the network1020, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components1036) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions1008may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices1022. Glossary “Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. 
“Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. 
Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. 
For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
65,159
11943563
FIG.1ashows a schematic perspective view of a videoconferencing terminal100. The videoconferencing terminal100comprises at least one camera102positioned behind a display104. The display104is configured to display an image500of a remote user to a local user106who is positioned in front of the display104. The local user106is positioned in close proximity to the videoconferencing terminal100and the camera102is configured to capture one or more images and/or videos of the local user106. For example, the local user106is in the same room as the videoconferencing terminal100. In contrast, the remote user is not in close proximity to the videoconferencing terminal100or the local user106and the video stream and/or images of the local user106are transmitted to a videoconferencing terminal (not shown) associated with the remote user. In the embodiments described with reference to the Figures there are two users: a local user106and a remote user. In other embodiments (not shown), there may be any number of local users106and remote users on the videoconference call. The process of receiving and transmitting video and image data between videoconferencing terminals100is carried out according to known techniques and will not be discussed in any further detail. In some embodiments, the remote user has an identical videoconferencing terminal100to the videoconferencing terminal100of the local user106. However, this is not necessary, and only one of the users participating in the videoconference can have the videoconferencing terminal100according to the embodiments described with reference to the Figures. In a preferred embodiment, all users participating in the videoconference have a videoconferencing terminal100according to the embodiments. FIG.1bshows a schematic side view of a videoconferencing terminal100. The camera102comprises an axis A-A which is in some embodiments arranged substantially perpendicular to the plane of the surface of the display104.FIG.1bshows that the axis A-A is in alignment with the eyes108of the local user106. In this way, axis A-A is an “eye-contact” axis. In this arrangement, the local user106is looking directly along the axis of the camera102. This means that the camera102will capture an image or a video of the local user106looking directly at the camera102. This means the remote user will receive an image of the local user106with the eyes108of the local user in the correct direction to simulate a face-to-face meeting. In some alternative embodiments, the camera102is moveable with respect to the display104and the axis of the camera102can be positioned at an angle with respect to the plane of the display104. WhilstFIGS.1aand1bshow one camera102, in some embodiments there can be a plurality of cameras102for capturing an image or a video of a plurality of local users106or for capturing an image or a video of a large room. The embodiments described hereinafter are only described with reference to using one camera, but in some embodiments a plurality of cameras102are used instead. The camera102as shown inFIG.1is static and positioned in the centre of the display104. However, in some embodiments, the camera102is moveable with respect to the display104. The display104in some embodiments is a transparent OLED display104. The display104is substantially planar and can be any suitable size for the videoconferencing call. In other embodiments any other suitable transparent display can be used. 
For example, infrared cameras (not shown) can be used and the infrared cameras can see the local user106through the display104. In this way, the display104is transmissive to electromagnetic radiation which can be in the visible spectrum, near-visible, infrared or ultraviolet or any other suitable frequency of electromagnetic radiation. Turning toFIG.7, the videoconferencing terminal100will be described in further detail.FIG.7shows a schematic view of a videoconferencing terminal100according to some embodiments. As previously mentioned, the videoconferencing terminal100comprises a camera102and a display104. The videoconferencing terminal100selectively controls the activation of the camera102and the display104. As shown inFIG.7, the camera102and the display104are controlled by a camera controller702and a display controller704respectively. The videoconferencing terminal100comprises a videoconferencing controller700. The videoconferencing controller700, the camera controller702and the display controller704may be configured as separate units, or they may be incorporated in a single unit. The videoconferencing controller700comprises a plurality of modules for processing the videos and images received remotely from an interface706and the videos and images captured locally. The interface706and the method of transmitting and receiving videoconferencing data are known and will not be discussed any further. In some embodiments, the videoconferencing controller700comprises a face detection module710for detecting facial features and an image processing module712for modifying an image to be displayed on the display104. The face detection module710and the image processing module712will be discussed in further detail below. One or all of the videoconferencing controller700, the camera controller702and the display controller704may be at least partially implemented by software executed by a processing unit714. The face detection module710and the image processing module712may be configured as separate units, or they may be incorporated in a single unit. One or both of the modules710,712may be at least partially implemented by software executed by the processing unit714. The processing unit714may be implemented by special-purpose software (or firmware) run on one or more general-purpose or special-purpose computing devices. In this context, it is to be understood that each “element” or “means” of such a computing device refers to a conceptual equivalent of a method step; there is not always a one-to-one correspondence between elements/means and particular pieces of hardware or software routines. One piece of hardware sometimes comprises different means/elements. For example, a processing unit714may serve as one element/means when executing one instruction but serve as another element/means when executing another instruction. In addition, one element/means may be implemented by one instruction in some cases, but by a plurality of instructions in some other cases. Naturally, it is conceivable that one or more elements (means) are implemented entirely by analogue hardware components. The processing unit714may include one or more processing units, e.g. a CPU (“Central Processing Unit”), a DSP (“Digital Signal Processor”), an ASIC (“Application-Specific Integrated Circuit”), discrete analogue and/or digital components, or some other programmable logical device, such as an FPGA (“Field Programmable Gate Array”). 
The processing unit714may further include a system memory and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory may include computer storage media in the form of volatile and/or non-volatile memory such as read only memory (ROM), random access memory (RAM) and flash memory. The special-purpose software and associated control parameter values may be stored in the system memory, or on other removable/non-removable volatile/non-volatile computer storage media which is included in or accessible to the computing device, such as magnetic media, optical media, flash memory cards, digital tape, solid state RAM, solid state ROM, etc. The processing unit714may include one or more communication interfaces, such as a serial interface, a parallel interface, a USB interface, a wireless interface, a network adapter, etc., as well as one or more data acquisition devices, such as an A/D converter. The special-purpose software may be provided to the processing unit714on any suitable computer-readable medium, including a record medium and a read-only memory. FIGS.1aand1bshow the videoconferencing terminal100which is operating optimally, and the remote user and the local user106can make eye contact. However, calibration of the videoconferencing terminal100and dynamic modification of the displayed image500may be required in order for the local user106to experience a good connected feel during a video conference call. Calibration of the videoconferencing terminal100will now be discussed in reference toFIGS.2,3,4a,4b,4cand9.FIG.2shows a schematic cross-sectional side view of a videoconferencing terminal.FIG.3shows a schematic perspective view of a videoconferencing terminal.FIGS.4a,4b, and4cshow a schematic view of a processing sequence for a captured camera image400on the videoconferencing terminal100.FIG.9shows a flow diagram of the operation of a videoconferencing terminal. During operation of the camera102and the display104the videoconferencing controller700can optionally interleave operation of the camera102and the display104. In this way, the camera102and the display104sequentially operate so that the camera102captures an image of the local user106when the display104is off. Likewise, the camera102is not capturing an image when the display104is displaying an image. For example, the camera102can be turned off or the shutter is closed when not capturing an image of the local user106. This means that the camera102takes an image when the display104is dark. As mentioned previously, in some embodiments the display is an OLED display. The OLED display has a low persistence, and this reduces pixel artifacts300originating from the display104that are received and captured by the camera102. However, the camera102may still receive light from pixel artifacts300from the display104. This can be a function of the display image500being displayed on the display104as well as the properties of the display104itself. Turning toFIG.2, the display104will be described in further detail. The display104comprises an LED matrix200of selectively operable pixels202. For the purposes of clarity, only one pixel202has been labelled inFIG.2. The LED matrix200can comprise any number of pixels202to achieve the required resolution for the videoconferencing call. 
An optically transmissive cover204such as a glass sheet, a transparent film or another clear medium is placed over the LED matrix200. In some circumstances, one or more light rays B can be reflected back from the optically transmissive cover204towards the camera102. In some embodiments, the videoconferencing controller700is configured to determine one or more pixel artifacts300captured by the at least one camera102from the display104as shown in900ofFIG.9. Once the pixel artifacts300have been determined, the videoconferencing controller700is configured to compensate the captured camera image400to remove the mapped one or more pixel artifacts300.FIG.3shows a perspective schematic representation of the video conferencing terminal100. The display104is shown with exemplary pixel artifacts300and occlusion artifacts302on the display104.FIG.4ashows the captured camera image400including a local user captured image406of the local user106together with the pixel artifacts300and/or occlusion artifacts302. Whilst the pixel artifacts300and occlusion artifacts302are represented by a series of vertical lines, the pixel artifacts300and occlusion artifacts302can have any distribution across the display104. In some embodiments, in order to compensate for the pixel artifacts300from the display104in the captured camera image400, the contribution from each pixel202of the display104in the captured camera image400is determined as shown in step900. Optionally, this is achieved with per-pixel information of the LED matrix200which maps the pixel output to the contribution as a pixel artifact map402in the captured camera image400. The pixel output is a function of the digital RGB (red green blue) colour output of the display image500and properties of the display104. The videoconferencing controller700uses information relating to the displayed image500and the display104properties and determines each display pixel's contribution in the captured camera image400. In this way, the videoconferencing controller700determines a pixel artifact map402as shown inFIG.4b. The videoconferencing controller700then subtracts the contribution of all display pixels202in the pixel artifact map402to obtain a compensated camera image404as shown inFIG.4cand step902ofFIG.9. The videoconferencing controller700then determines the compensated camera image404as it would have looked without any light contribution of pixel artifacts300from the pixels202. The compensated camera image404comprises the local user captured image406as well. The videoconferencing controller700receives information relating to the digital RGB colours of the display image500sent to the display104. This means that the information relating to the digital RGB colours is directly available to the videoconferencing controller700for carrying out the compensation algorithm as shown inFIG.9. In some embodiments, the display104properties can optionally be determined by the videoconferencing controller700in a calibration step. In the calibration step the videoconferencing controller700selectively controls the LED matrix200to light up each pixel202individually, at different illumination levels, to learn the mapping from digital RGB colour output to contribution in the captured camera image400. After the display pixel artifacts300have been removed, in some circumstances the captured camera image400may still have occlusion artifacts302from elements of the display104. 
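Before the occlusion artifacts302are addressed, the per-pixel compensation of steps900and902can be summarised in a short sketch. The following Python fragment is illustrative only: it assumes the camera image and the display image are pixel-aligned arrays of the same size, and that a per-pixel response array has already been learned in the calibration step described above; the function and variable names are hypothetical and not taken from the disclosure.

import numpy as np

def compensate_pixel_artifacts(captured, displayed_rgb, response):
    # captured:      HxWx3 float array, the captured camera image 400
    # displayed_rgb: HxWx3 float array, the digital RGB output of display image 500
    # response:      HxWx3 float array, per-pixel mapping from RGB output to its
    #                contribution in the camera image (learned in calibration)
    artifact_map = displayed_rgb * response      # pixel artifact map 402 (step 900)
    compensated = captured - artifact_map        # subtract the display light (step 902)
    return np.clip(compensated, 0.0, None)       # compensated camera image 404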
The occlusion artifacts302arise from one or more elements of the display104in front of the camera102which block light from the local user106. The occlusion artifacts302can be described as having an occlusion factor between 0.0 and 1.0 wherein 0.0 indicates total occlusion and 1.0 indicates no occlusion. In some embodiments, the videoconferencing controller700determines the occlusion factors of the occlusion artifacts302in a calibration step, when the camera102is directed at a uniform (e.g., all white) and evenly illuminated target. This means that the camera image pixel levels are uniform if no occlusion artifacts302are present. FIG.4balso represents the determined occlusion artifact map408of occlusion artifacts302on the occluded image after the calibration step. As mentioned above, in the calibration step the camera102is looking at a smooth white surface. The videoconferencing controller700determines the maximum pixel level of a particular pixel202in the LED matrix200. For each other pixel in the LED matrix200, the videoconferencing controller700divides its pixel value by the maximum pixel value to get the occlusion factor for each particular pixel202. In this way, the videoconferencing controller700sets a notional “correct” level to be that of the maximum pixel. The videoconferencing controller700implicitly assumes that the maximum pixel is unoccluded. If this is not the case, the effect is a uniformly darker image, but this is not an effect that is apparent to the local user106, and not experienced as a significant artifact. Accordingly, the videoconferencing controller700determines one or more occlusion artifacts302as shown in step904ofFIG.9. In a similar way, it may be the case that the target and illumination properties during calibration are such that the ideal, unoccluded, image is not uniform, but has slight variations. Typically, such variations are of low spatial frequency, and will cause low frequency artifacts in the compensated results that are either not noticeable at all to the user or not experienced as significant artifacts to the local user106. The videoconferencing controller700assumes that occlusions are not severe enough to completely occlude a camera pixel (not shown) (e.g. occlusion factor 0.0), but only occlude part of the incoming light for each camera pixel. In some embodiments, at least some of the occluding display elements are out-of-focus. In some embodiments, the optics of the camera102are designed to keep occluding display elements out of focus. The occlusion factor is the value by which the “correct”, unoccluded pixel value is multiplied: 0.0 gives total occlusion and 1.0 gives no occlusion. In this way, by having information relating to the occlusion factor for each pixel202, the videoconferencing controller700can determine the compensated camera image404according to step906inFIG.9by dividing each pixel value by its occlusion factor, obtaining an unoccluded and compensated camera image404as shown inFIG.4c. Optionally the steps900,902relating to the compensation of the pixel artifacts300and steps904,906relating to the compensation of the occlusion artifacts302can be carried out in a different order than as shown inFIG.9. Furthermore, optionally one, some or all of the steps900,902relating to the compensation of the pixel artifacts300and steps904,906relating to the compensation of the occlusion artifacts302can be omitted. For example, compensation for pixel artifacts300can be omitted. 
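Where the occlusion compensation of steps904and906is used, it can be sketched under the same illustrative assumptions as the pixel-artifact sketch above: the occlusion factors are derived from a calibration frame of the uniform white target, and the compensation divides each camera pixel by its factor. The small epsilon guard against near-zero factors is an added assumption, not taken from the disclosure.

import numpy as np

def occlusion_factors(calibration_frame):
    # calibration_frame: camera image of the uniform, evenly illuminated white target.
    # A factor of 1.0 means unoccluded; values towards 0.0 mean stronger occlusion.
    return calibration_frame / calibration_frame.max()

def compensate_occlusion(image, factors, eps=1e-3):
    # Step 906: divide each pixel value by its occlusion factor.
    return image / np.maximum(factors, eps)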
Likewise, additionally or alternatively, compensation for occlusion artifacts302can be omitted. Steps900,902,904,906are dependent on the position of the camera102with respect to the display104. Accordingly, the compensation of the pixel artifacts300and the compensation for occlusion artifacts302are based on the relative position of the camera102with respect to the display104. This means that if the camera102moves with respect to the display104, one or more of the steps as shown inFIG.9are repeated to recalibrate the video conferencing terminal100. In this way, the videoconferencing controller700modifies an image based on the camera position of the at least one camera102with respect to the display. Another embodiment will now be described in reference toFIGS.5,6and8.FIGS.5and6show a schematic perspective view of a videoconferencing terminal100andFIG.8shows a flow diagram of the operation of a videoconferencing terminal. Optionally, the method steps discussed with respect toFIG.9can be used together with the method steps inFIG.8, but this is not necessary. Turning toFIG.5, again the axis A-A of the camera102is in alignment with the eyes108of the local user106. InFIG.5the eyes108of the local user106are aligned with eyes502of the displayed image500of the remote user. Accordingly, the local user106and the remote user are able to make direct eye contact. As can be seen fromFIG.5, if the local user106moves with respect to the display104, the local user106is no longer aligned with the axis A-A of the camera102.FIG.5shows one possible new position of the local user106represented by a dotted outline. In the new position, the local user's106line of sight B-B is still focused on the eyes502of the displayed image500of the remote user. However, the local user106is no longer looking directly at the camera102due to the parallax error introduced by the local user106also moving with respect to the camera102. This means that the captured camera image400of the local user106will not be looking directly at the camera102. However,FIG.6shows the local user106in the new position shown inFIG.5. Here the position of the local user106is offset by a distance D1from the axis A-A of the camera102. This means that the eyes108of the local user106have moved from the axis A-A by a distance D1. Specifically, as shown inFIG.6, the local user106is lower than the axis A-A. However, in other embodiments the local user106can be offset from the axis A-A of the camera102in any direction. For example, the local user106may have moved sideways with respect to the axis A-A or may be standing and the eyes108of the local user are above the axis A-A. The videoconferencing controller700sends the image500of the remote user to be displayed to the face detection module710. The face detection module710determines the position of the eyes502of the displayed image500of the remote user as shown in step800inFIG.8. The face detection module710uses feature detection on an image500of the remote user to detect where the eyes502of the displayed image500of the remote user are located. The face detection module710then sends position information of the eyes502of the displayed image500of the remote user to the videoconferencing controller700. Then the videoconferencing controller700determines the position of the camera102with respect to the display104. If the camera102is fixed with respect to the display104, the videoconferencing controller700can store the position of the camera102and the axis of the camera102in memory. 
Alternatively, the videoconferencing controller700can determine the relative position of the camera102with respect to the display104based on movement information of the camera102. For example, the videoconferencing controller700determines the position of the camera102from servo information on a mechanism for moving the camera102. Alternatively, the videoconferencing controller700determines the position of the camera102based on reference points in the captured camera image400. For example, a reference point could be a QR code fixed to a wall behind the local user106. In this way, the videoconferencing controller700determines the position and orientation of the camera102and the axis A-A of the camera102as shown in step802ofFIG.8. Then the videoconferencing controller700sends a captured camera image400of the local user106to the face detection module710. The face detection module710determines the position of the eyes108of the local user in the image400as shown in step804inFIG.8. The face detection module710uses feature detection on the image400of the local user106to detect where the eyes108are in the image400. This is similar to the step800inFIG.8for determining the position of the eyes502of the displayed image500of the remote user. The videoconferencing controller700then determines a position of the eyes108of the local user106with respect to the display104. Based on the determined position of the camera102, the videoconferencing controller700determines an offset D1between the position of the eyes108of the local user106and an axis A-A of the at least one camera102. In this way, the videoconferencing controller700determines how much the local user106has moved from the axis A-A of the camera102. This means that the videoconferencing controller700determines a new axis A′-A′ of the camera102based on a light ray from the new position of the local user106and the position of the camera102. Accordingly, A′-A′ is the new eye contact axis. The videoconferencing controller700determines a position of the eyes502of the displayed image500of the remote user with respect to the display104. That is, the videoconferencing controller700determines where the image500would be positioned on the display104with no modification to the image500. The videoconferencing controller700then determines whether the position of the eyes502of the displayed image500of the remote user is offset D2from the new axis A′-A′ based on the new position of the local user106. If the videoconferencing controller700determines that the displayed image500is offset greater than a predetermined threshold, the videoconferencing controller700sends an instruction to the image processing module712to modify the image500as shown in step806inFIG.8. InFIG.6, the eyes502of the displayed image500of the remote user are translated downwards by a distance of D2to intersect the new axis A′-A′. In some embodiments, the videoconferencing controller700instructs the image processing module712to modify the image500when the new position of the local user106requires the local user106to adjust their line of sight through an arc having an angle greater than 10 degrees. In some embodiments, the image processing module712modifies the image500when the local user106adjusts their line of sight through an arc having an angle greater than 10 degrees in a horizontal and/or a vertical direction from the axis A-A. 
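The re-alignment described above can be illustrated with a simplified sketch that handles only a vertical offset D1 and translates the remote image by a given D2 when the 10-degree threshold is exceeded. The OpenCV-based translation, the parameter names and the assumption that D2 has already been determined by the controller are illustrative choices made for the example; the disclosure leaves the concrete implementation of the image processing module712open.

import math
import cv2
import numpy as np

def realign_remote_image(remote_img, d1_mm, viewing_distance_mm, d2_px,
                         threshold_deg=10.0):
    # d1_mm: offset D1 of the local user's eyes 108 from the axis A-A.
    # d2_px: offset D2 (in pixels) of the eyes 502 from the new axis A'-A',
    #        as determined by the videoconferencing controller 700.
    angle = math.degrees(math.atan2(abs(d1_mm), viewing_distance_mm))
    if angle <= threshold_deg:
        return remote_img                        # within tolerance, no modification
    h, w = remote_img.shape[:2]
    m = np.float32([[1, 0, 0], [0, 1, d2_px]])   # positive y moves eyes 502 downwards, as in FIG. 6
    return cv2.warpAffine(remote_img, m, (w, h)) # translated image, cf. modified image 600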
In this way, if the local user106is required to move their head or the eyes108of the local user to maintain eye contact with the eyes502of the displayed image500of the remote user, the videoconferencing controller700modifies the image500and returns a modified image600. This means that there is no parallax error that prevents direct eye contact between the local user106and the remote user because the videoconferencing controller700modifies an image based on the position of the camera102and the local user106with respect to the displayed image500. In some embodiments, the videoconferencing controller700sends an instruction that a co-ordinate corresponding to the centre of the eyes502of the displayed image500of the remote user is translated to a new position. The image processing module712returns a modified image600to the videoconferencing controller700. The modified image600of the remote user is shown inFIG.6. In this way, the eyes502of the displayed image500of the remote user are moved to intersect with the new axis A′-A′. In this way, the image processing module712modifies the image500such that the eyes502of the displayed image500of the remote user intersect with the new axis A′-A′. In the new position, the local user's106line of sight B-B is focused on the eyes502of the displayed image500of the remote user and aligned with the new axis A′-A′. In some embodiments, the image processing module712modifies the image500by translating, scaling, transforming, or any other suitable image modification to move the position of the eyes502of the displayed image500of the remote user. In this way, the videoconferencing controller700modifies an image based on the camera position of the at least one camera102with respect to the display104and on the user position of the local user106with respect to the display104. As mentioned above, in some embodiments, there is only one video conferencing terminal100with a videoconferencing controller700and the image processing module712as discussed with reference to the previous embodiments. In these embodiments, the videoconferencing controller700performs the image processing as discussed with reference to embodiments as shown in the Figures, e.g.FIGS.8and9, for both the local video conferencing terminal100and the remote video conferencing terminal. This means that the advantages of the invention can be achieved for both sides of the video conference with only one video conferencing terminal100, e.g. the local video conferencing terminal100, according to the present invention. When the local video conferencing terminal100is modifying the image for both the local and the remote video conferencing terminals100, the videoconferencing controller700performs the methods described with reference to the Figures for both the local and the remote video conferencing terminals. The local videoconferencing controller700then sends instructions for modifying the displayed image to the remote video conferencing terminal. For example, translation coordinates for modifying the displayed image on the remote video conferencing terminal are sent by the local video conferencing controller700to the remote video conferencing terminal100. In another embodiment, two or more embodiments are combined. Features of one embodiment can be combined with features of other embodiments. Embodiments of the present invention have been discussed with particular reference to the examples illustrated. 
However, it will be appreciated that variations and modifications may be made to the examples described within the scope of the invention.
27,939
11943564
DETAILED DESCRIPTION In this specification, reference is made in detail to specific embodiments of the invention. Some of the embodiments or their aspects are illustrated in the drawings. For clarity in explanation, the invention has been described with reference to specific embodiments, however it should be understood that the invention is not limited to the described embodiments. On the contrary, the invention covers alternatives, modifications, and equivalents as may be included within its scope as defined by any patent claims. The following embodiments of the invention are set forth without any loss of generality to, and without imposing limitations on, the claimed invention. In the following description, specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the invention. In addition, it should be understood that steps of the exemplary methods set forth in this exemplary patent can be performed in different orders than the order presented in this specification. Furthermore, some steps of the exemplary methods may be performed in parallel rather than being performed sequentially. Also, the steps of the exemplary methods may be performed in a network environment in which some steps are performed by different computers in the networked environment. Some embodiments are implemented by a computer system. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein. During remote video sessions, lighting may be an issue for some users. When users are outside, for example, the video could appear heavily contrasted due to the bright sunlight. The opposite problem occurs when a user is in an environment which is not properly lit, such that the user and background both appear dark and unlit. Simply increasing or decreasing the brightness of the video to adjust for such conditions may lead to the user's skin tone appearing unnatural and no longer accurate. Thus, the user wishes to adjust the lighting of the video as if a light is being shined on their natural skin tone color, rather than their skin tone color being modified. In both cases, the user may want such configuration tools to adjust the appearance of the video being presented. However, they may have a preference to only have a slight amount of their appearance be touched up, or to only have a slight amount of the lighting adjusted. Not simply having a binary state of adjustment or non-adjustment, but rather having a granular level of control over the appearance, is desirable. In addition, the changes being made to the video should be made in real time as the user plays with this granular control within a setting, so that the user can instantly see the changes that take effect and dial in the exact amount of adjustment depth (e.g., the degree to which the adjustment is implemented) desired. In some cases, the user may wish to have such changes be automatically applied when the need for them is detected by the system, but within a certain range of adjustment depth that the user has preconfigured. 
Thus, there is a need in the field of digital media to create a new and useful system and method for providing video appearance adjustments within a video communication session. The source of the problem is a lack of ability for participants to granularly adjust the appearance of themselves and/or the lighting within a video in real time while retaining their natural skin tones. The invention overcomes the existing problems by providing users with the ability to adjust their appearance within a video. The user can select one or more video settings options to touch up the user's appearance and/or adjust the video for low light conditions. The settings include a granular control element, such as a slider, which allows the user to select a precise amount of appearance adjustment depth and/or lighting adjustment depth. The system then performs the modification of the user's appearance or adjustment for low lighting in real time or substantially real time upon the user selecting the adjustment option. As the user adjusts the depth (e.g., by dragging the depth slider left or right), a preview window reflects the change to the video that results in real time or substantially real time. The adjustments are also performed in such a way that the user's natural skin tones are preserved. One embodiment relates to a method for providing video appearance adjustments within a video communication session. First, the system receives video content within a video communication session of a video communication platform, with the video content having multiple video frames. The system then receives an appearance adjustment request comprising an adjustment depth, and detects imagery of a user within the video content. The system then detects a face region within the video content. The system segments the face region into a number of skin areas. For each of the plurality of skin areas, the system classifies the skin area as a smooth texture region or rough texture region. If the skin area is classified as a smooth texture region, the system modifies the imagery of the user in real time or substantially real time by applying a smoothing process to the skin area, where the amount of smoothing applied corresponds to the adjustment depth. In some embodiments, methods and systems provide for low lighting adjustments within a video communication session. First, the system receives video content within a video communication session of a video communication platform, the video content having multiple video frames. The system then receives or generates a lighting adjustment request including a lighting adjustment depth, then detects an amount of lighting in the video content. The system then modifies the video content to adjust the amount of lighting, wherein the amount of adjustment of lighting corresponds to the adjustment depth, and wherein adjusting the amount of lighting is performed in real time or substantially real time upon receiving the lighting adjustment request. Further areas of applicability of the present disclosure will become apparent from the detailed description, the claims and the drawings. The detailed description and specific examples are intended for illustration only and are not intended to limit the scope of the disclosure. FIG.1Ais a diagram illustrating an exemplary environment in which some embodiments may operate. In the exemplary environment100, a user's client device is connected to a processing engine102and, optionally, a video communication platform140. 
The processing engine102is connected to the video communication platform140, and optionally connected to one or more repositories and/or databases, including a participants repository130, skin area repository132, and/or a settings repository134. One or more of the databases may be combined or split into multiple databases. The user's client device150in this environment may be a computer, and the video communication platform server140and processing engine102may be applications or software hosted on a computer or multiple computers which are communicatively coupled via remote server or locally. The exemplary environment100is illustrated with only one user's client device, one processing engine, and one video communication platform, though in practice there may be more or fewer client devices, processing engines, and/or video communication platforms. In some embodiments, the client device, processing engine, and/or video communication platform may be part of the same computer or device. In an embodiment, the processing engine102may perform the exemplary method ofFIG.2, the exemplary method ofFIG.3, or other method herein and, as a result, provide video appearance adjustments within a video communication session. In some embodiments, this may be accomplished via communication with the user's client device, processing engine, video communication platform, and/or other device(s) over a network between the device(s) and an application server or some other network server. In some embodiments, the processing engine102is an application, browser extension, or other piece of software hosted on a computer or similar device, or is itself a computer or similar device configured to host an application, browser extension, or other piece of software to perform some of the methods and embodiments herein. The user's client device150is a device with a display configured to present information to a user of the device. In some embodiments, the client device presents information in the form of a user interface (UI) with multiple selectable UI elements or components. In some embodiments, the client device150is configured to send and receive signals and/or information to the processing engine102and/or video communication platform140. In some embodiments, the client device is a computing device capable of hosting and executing one or more applications or other programs capable of sending and/or receiving information. In some embodiments, the client device may be a computer desktop or laptop, mobile phone, virtual assistant, virtual reality or augmented reality device, wearable, or any other suitable device capable of sending and receiving information. In some embodiments, the processing engine102and/or video communication platform140may be hosted in whole or in part as an application or web service executed on the client device150. In some embodiments, one or more of the video communication platform140, processing engine102, and client device150may be the same device. In some embodiments, the user's client device150is associated with a user account within a video communication platform. In some embodiments, optional repositories can include one or more of a participants repository130, skin area repository132, and/or settings repository134. 
The optional repositories function to store and/or maintain, respectively, participant information associated with a video communication session on the video communication platform140, segments of skin areas present within video feeds of users within a video communication session, and settings of the video communication session and/or preferences of users within a video communication platform. The optional database(s) may also store and/or maintain any other suitable information for the processing engine102or video communication platform140to perform elements of the methods and systems herein. In some embodiments, the optional database(s) can be queried by one or more components of system100(e.g., by the processing engine102), and specific stored data in the database(s) can be retrieved. Video communication platform140is a platform configured to facilitate video communication between two or more parties, such as within a conversation, video conference or meeting, message board or forum, virtual meeting, or other form of digital communication. The video communication session may be one-to-many (e.g., a speaker presenting to multiple attendees), one-to-one (e.g., two friends speaking with one another), or many-to-many (e.g., multiple participants speaking with each other in a group video setting). FIG.1Bis a diagram illustrating an exemplary computer system150with software modules that may execute some of the functionality described herein. User interface display module152functions to display a UI for each of the participants within the video communication session, including at least a settings UI element with configuration settings for video broadcasting within the video communication platform, participant windows corresponding to participants, and videos displayed within participant windows. Video display module154functions to display the videos for at least a subset of the participants, which may appear as live video feeds for each participant with video enabled. Adjustment selection module156functions to receive, from a client device, a selection of one or more video appearance adjustment elements within a settings UI. Segmentation module158functions to segment a face region of a user that appears within a video feed being broadcasted within a video communication session that corresponds to the user. The face region is segmented into multiple skin areas. Classification module160functions to classify the segmented skin areas of the face region as smooth texture regions or rough texture regions based on a received adjustment depth. Modification module162functions to modify the imagery of the user by applying a smoothing process to the skin area based on the received adjustment depth. The modification is performed in real time or substantially real time upon receiving an appearance adjustment request. The above modules and their functions will be described in further detail in relation to an exemplary method below. FIG.2is a flow chart illustrating an exemplary method that may be performed in some embodiments. At step210, the system receives video content within a video communication session of a video communication platform. In some embodiments, the video content has multiple video frames. In some embodiments, the video content is generated via an external device, such as, e.g., a video camera or a smartphone with a built-in video camera, and then the video content is transmitted to the system. 
In some embodiments, the video content is generated within the system, such as on the user's client device. For example, a participant may be using her smartphone to record video of herself giving a lecture. The video can be generated on the smartphone and then transmitted to the processing system, a local or remote repository, or some other location. In some embodiments, the video content is pre-recorded and is retrieved from a local or remote repository. In various embodiments, the video content can be streaming or broadcasted content, pre-recorded video content, or any other suitable form of video content. The video content has multiple video frames, each of which may be individually or collectively processed by the processing engine of the system. In some embodiments, the video content is received from one or more video cameras connected to a client device associated with the first participant and/or one or more client devices associated with the additional participants. Thus, for example, rather than using a camera built into the client device, an external camera can be used which transmits video to the client device. In some embodiments, the first participant and any additional participants are users of a video communication platform, and are connected remotely within a virtual video communication room generated by the video communication platform. This virtual video communication room may be, e.g., a virtual classroom or lecture hall, a group room, a breakout room for subgroups of a larger group, or any other suitable video communication room which can be presented within a video communication platform. In some embodiments, the video content is received and displayed on a user's client device. In some embodiments, the system displays a user interface for each of a plurality of participants within the video communication session. The UI includes at least a number of participant windows corresponding to participants, and video for each of at least a subset of the participants to be displayed within the corresponding participant window for the participant. In some cases, a participant may wish to not enable a video feed to be displayed corresponding to himself or herself, or may not have any video broadcasting capabilities on the client device being used. Thus, in some instances, for example, there may be a mix of participant windows with video and participant windows without video. The UI to be displayed relates to the video communication platform140, and may represent a “video window”, such as a window within a GUI that displays a video between a first participant, with a user account within the video platform, and one or more other user accounts within the video platform. The first participant is connected to the video communication session via a client device. In some embodiments, the UI includes a number of selectable UI elements. For example, one UI may present selectable UI elements along the bottom of a communication session window, with the UI elements representing options the participant can enable or disable within the video session, settings to configure, and more. For example, UI elements may be present for, e.g., muting or unmuting audio, stopping or starting video of the participant, sharing the participant's screen with other participants, recording the video session, displaying a chat window for messages between participants of the session, and/or ending the video session. A video settings UI element may also be selectable, either directly or within a menu or submenu. 
One example of a communication interface within a video communication platform is illustrated inFIG.4A, which will be described in further detail below. In some embodiments, one included UI element is a selectable video settings UI window. An example of this UI window is illustrated inFIG.4B, which will be described in further detail below. Examples of selectable settings within a video settings UI window may include, e.g., options to enable high-definition (HD) video, mirror the user's video, touch up the user's appearance within the video, adjust the video for low light, and more. In some embodiments, settings such as touching up the user's appearance and adjusting the video for low light may include UI elements for adjusting the depth of the effect. In some examples, such UI elements may be sliders. Another portion of the UI displays a number of participant windows. The participant windows correspond to the multiple participants in the video communication session. Each participant is connected to the video communication session via a client device. In some embodiments, the participant window may include video, such as, e.g., video of the participant or some representation of the participant, a room the participant is in or a virtual background, and/or some other visuals the participant may wish to share (e.g., a document, image, animation, or other visuals). In some embodiments, the participant's name (e.g., real name or chosen username) may appear in the participant window as well. One or more participant windows may be hidden within the UI, and selectable to be displayed at the user's discretion. Various configurations of the participant windows may be selectable by the user (e.g., a square grid of participant windows, a line of participant windows, or a single participant window). The participant windows are also configured to display imagery of the participant in question, if the participant opts to appear within the video being broadcasted, as will be discussed in further detail below. Some participant windows may not contain any video, for example, if a participant has disabled video or does not have a connected video camera device (e.g. a built-in camera within a computer or smartphone, or an external camera device connected to a computer). The videos displayed for at least a subset of the participants appear within each participant's corresponding participant window. Video may be, e.g., a live feed which is streamed from the participant's client device to the video communication session. In some embodiments, the system receives video content depicting imagery of the participant, with the video content having multiple video frames. The system provides functionality for a participant to capture and display video imagery to other participants. For example, the system may receive a video stream from a built-in camera of a laptop computer, with the video stream depicting imagery of the participant. At step212, the system receives an appearance adjustment request, including an adjustment depth, e.g., an adjustment amount or the degree to which the adjustment is implemented. In some embodiments, the request is received from a client device associated with a user. The client device in question may be, e.g., the user's client device150, where the user is a participant of the video session. 
In some embodiments, the user may have navigated within a user interface on their client device to the video settings UI window, and then checked a “touch up my appearance” checkbox or manipulated another such UI element. In some embodiments, the UI element may be selected by a participant by, e.g., clicking or holding down a mouse button or other component of an input device, tapping or holding down on the UI element with a finger, stylus, or pen, hovering over the UI element with a mouse or other input device, or any other suitable form of selecting a UI element. In some embodiments, upon selecting the UI element, a slider element, sub window, or other secondary UI element appears which provides the participant with the ability to granularly adjust the depth of the video appearance adjustment which is to be performed on the video of the participant. Upon selecting the desired adjustment depth, or simply allowing for the default adjustment depth without selecting one (the default depth may be, e.g., 100% or 50% depth), the selection of UI element(s) is sent to the system (e.g., the processing engine102) to be processed. In various embodiments, the appearance adjustment request may be related to, e.g., one or more of: making adjustments to the user's facial shape, applying virtual makeup or other beautification or aesthetic elements to the user's face, teeth whitening, teeth shape alteration, hairstyle modification, hair texture modification, addition of an accessory such as a hat or glasses, changes to the user's clothing, or any other suitable adjustment which may be contemplated. In some embodiments, rather than receiving the appearance adjustment request from a client device, the system detects that an appearance adjustment should be requested based on one or more adjustment detection factors, then automatically generates an appearance adjustment request including an adjustment depth. In these embodiments, a user does not, e.g., select a UI element within a Video Settings UI window in order to enable an appearance adjustment. Instead, the user may enable a setting to turn on automatic appearance adjustment. The system then detects when an appearance adjustment may be needed based on one or more factors. In some embodiments, such adjustment detection factors may include, e.g., detected facial features visible in the video content such as wrinkles, spots, blemishes, or skin non-uniformities. In some embodiments, a user may specify parameters for when the system should detect that an appearance adjustment is needed. For example, a user may specify in a video setting that the system should automatically adjust appearance when skin blemishes show up on the screen. In some embodiments, the user may be able to select a range of skin tones that applies to them, and then the appearance adjustment can detect when there are discolorations, blemishes, spots, or skin non-uniformities based on those preselected skin tones. The appearance adjustment techniques can also preserve the user's skin tone based on the selected range of skin tones. At step214, the system detects imagery of a user within the video content. In some embodiments, the imagery of the user is detected via one or more video processing and/or analysis techniques. In some embodiments, the detection of the user's imagery may be performed by one or more Artificial Intelligence (AI) engines. 
Such AI engine(s) may be configured to perform aspects or techniques associated with, e.g., machine learning, neural networks, deep learning, computer vision, or any other suitable AI aspects or techniques. In some embodiments, such AI engine(s) may be trained on a multitude of differing images of user imagery appearing within video content, as well as images where user imagery does not appear within video content. In some embodiments, the AI engine(s) are trained to classify, within a certain range of confidence, whether a user appears or does not appear within a given piece of video content. In some embodiments, the system crops the video content to include only a head region of the user. In some embodiments, the system generates new video content and/or multiple new frames from the video content, with the video content or frames cropped to isolate the region of the user's imagery to just the user's head. As in detecting the imagery of the user above, one or more AI engine(s) may be utilized to perform this cropping of the video content or frames to just the user's head. In some embodiments, the system first determines a boundary about the user in the video frames in order to separate the user image from the background of the video, where the boundary has an interior portion and an exterior portion. In some embodiments, determining the boundary may partially or fully involve “image masking” techniques and/or backdrop removal techniques, whereby an image is separated from its background. Each of the video frames is a still image depicting the user. The outline of the user is detected by the system and used as the boundary about the user. The boundary has an interior portion, consisting of everything inside of the boundary or outline of the user; and an exterior portion, consisting of everything outside of the boundary or outline of the user. In some embodiments, the interior portion and exterior portion of the boundary each constitute layers which are separated into different images for each video frame. In various embodiments, image masking techniques used may include, e.g., layer masking, clipping mask, alpha channel masking, or any other suitable image masking techniques. In some embodiments, the boundary is updated each time the user moves, i.e., as additional video frames are received, such that the user moving around in the frame of the video leads to the boundary being updated. In some embodiments, once the boundary has been determined, the interior portion of the boundary is cropped to include just the head of the user. At step216, the system detects a face region within the video content. In some embodiments, as in previous steps, the system may detect the face region using one or more aspects or techniques of AI engine(s). For example, in some embodiments a deep learning model may be used for face detection. Such a deep learning model may be trained based on, e.g., a multitude of images of users' faces within cropped and/or uncropped images from video content. In some embodiments, one or more facial recognition algorithms are used. In some embodiments, feature-based methods may be employed. In some embodiments, statistical tools for geometry-based or template-based face recognition may be used, such as, e.g., Support Vector Machines (SVM), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Kernel methods or Trace Transforms. Such methods may analyze local facial features and their geometric relationships. 
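The disclosure leaves the choice of face detector open (deep learning models, SVM, PCA, LDA, and so on). As one concrete, conventional possibility, the sketch below uses the Haar-cascade face detector shipped with OpenCV to obtain a face region for the subsequent steps; it is an illustrative substitute, not the detector required by the embodiments.

import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_region(frame_bgr):
    # Returns the cropped region of the largest detected face, or None if no
    # face (and hence no user imagery) is found in the frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return frame_bgr[y:y + h, x:x + w]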
In some embodiments, techniques or aspects may be piecemeal, appearance-based, model-based, template matching-based, or any other suitable techniques or aspects for detecting a face region. At step218, the system segments the face region into multiple skin areas. In some embodiments, as in previous steps, the system may segment the face region into multiple skin areas using one or more aspects or techniques of AI engine(s). In some embodiments, one or more algorithms are used to implement human face and facial feature detection. In some embodiments, various techniques or aspects may be employed, including, e.g., template matching, Eigen faces, neural network models, deformable templates, combined facial features methods, or any other suitable techniques or aspects. In some embodiments, the face region is segmented into discrete regions representing, e.g., mouth, eyes, hair, nose, chin, forehead, and/or other regions. In some embodiments, the system detects skin color. In some embodiments, the system then segments the face region into multiple skin areas based on the detected skin color. In some embodiments, skin color may be a range of skin colors or skin tones which are determined for a user. Skin color may be detected based on various color spaces, such as, e.g., RGB, XYZ, CIE-Lab, HSV, or YCbCr. In some embodiments, hue and saturation domains are utilized in order to classify skin color, and one or more thresholds are set for these domains. For example, the hue and saturation values of each pixel in the image may be tested, and if they are within the interval formed by the thresholds, then the pixel is identified as a skin pixel. If the values are outside of the interval, then the pixel is not identified as a skin pixel. At step220, for each of the skin areas, the system classifies the skin area as either a smooth texture region or a rough texture region. In some embodiments, this classification is based on the adjustment depth which was provided along with the appearance adjustment request. The adjustment depth determines the threshold for whether a given skin area is to be classified as a smooth texture region as compared to a rough texture region. For example, if the adjustment depth received is 20% (i.e., the appearance adjustment should only be applied at 20% intensity to the user's image), then the system sets the threshold for a skin area to be classified as rough relatively high. The system then accordingly determines that most skin regions are to be classified as smooth (and thus do not need to be smoothed further). In contrast, if the appearance adjustment should be applied at 90% or 100% intensity, then the threshold for a skin area to be rough will be relatively low, such that most skin regions are to be classified as rough and in need of smoothing. In some embodiments, bilateral filtering may be employed to classify the skin areas. In some embodiments, segmenting the face region into multiple skin areas is based on a determined set of skin tones. For example, upon determining a set of skin tones for a user, the system can then separate out skin areas as differing from non-skin areas for the imagery of the user. In one example, the system first searches for a face region based on the skin color information, then identifies skin areas based on the skin color information. 
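A sketch of the hue/saturation skin test and the depth-dependent smooth/rough decision described for steps218and220. The specific hue and saturation intervals, the Laplacian-variance roughness measure and the mapping from adjustment depth to roughness threshold are illustrative assumptions; the disclosure only specifies that such thresholds exist and that a deeper adjustment lowers the threshold for a skin area to be treated as rough.

import cv2
import numpy as np

def skin_mask(frame_bgr, hue_range=(0, 25), sat_range=(40, 200)):
    # A pixel is treated as skin if its hue and saturation fall inside the
    # configured intervals (illustrative values).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s = hsv[..., 0], hsv[..., 1]
    return ((h >= hue_range[0]) & (h <= hue_range[1]) &
            (s >= sat_range[0]) & (s <= sat_range[1]))

def classify_skin_area(gray_area, adjustment_depth):
    # adjustment_depth in [0, 1]; a deeper adjustment lowers the roughness
    # threshold, so more areas are classified as rough (step 220).
    roughness = np.var(cv2.Laplacian(gray_area, cv2.CV_64F))
    threshold = np.interp(adjustment_depth, [0.0, 1.0], [500.0, 50.0])
    return "rough" if roughness > threshold else "smooth"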
At step222, if the given skin area is classified as a smooth texture region, then the system modifies the imagery of the user in real time or substantially real time by applying a smoothing process to the skin area based on the adjustment depth. The smoothing process has the effect of appearing to smooth over certain irregularities visible on a face, such as, e.g., wrinkles, blemishes, spots, and skin non-uniformities. The smoothing process also restores or preserves the texture of rough edges within or adjacent to the skin area. In some embodiments, bilateral filtering may be employed to smooth the face of the participant and preserve edges of the skin areas. Within traditional bilateral filtering, each pixel is replaced by a weighted average of its neighboring pixels. Each neighboring pixel is weighted by a spatial component that penalizes distant pixels and a range component that penalizes pixels with a different intensity. The combination of both components ensures that only nearby similar pixels contribute to the final result. In some embodiments, variants of bilateral filtering or similar techniques may be efficient enough with available computing resources to enable the smoothing process to occur in real time or substantially real time upon the system receiving an appearance adjustment request. In some embodiments, the modification of the imagery is performed such that as soon as a user selects the UI element for touching up the user's appearance, a preview video is displayed in real time or substantially real time showing the user's video if the appearance adjustment is applied. The user may then, e.g., select different adjustment depths, or drag a slider UI element for the adjustment depth left or right, with the preview video registering the modifications and updated adjustments in real time or substantially real time. If a user selects a confirmation UI element, then the user's video appearance is adjusted accordingly for the video communication session, until the session ends or the user disables the appearance adjustment setting. In some embodiments, one or more corrective processes are applied to restore the skin tones in the imagery to a set of detected skin tones in the imagery. In some embodiments, the system may utilize edge-aware smoothing filters, such as bilateral filtering, in order to preserve facial feature structures while smoothing blemishes. For example, bilateral filtering techniques can be applied to preserve the edge of the user's eyes and nose, as well as the facial boundary, while smoothing areas adjacent to them. In some embodiments, one or more skin-mask generation algorithms may be applied, including, e.g., color pixel classification, Gaussian Mixture Model (GMM) methods, and/or deep learning-based facial feature segmentation approaches. In some embodiments, the techniques used are robust to skin tone variation. In some embodiments, the techniques used in steps222and224are configured to smooth over the low gradient parts in the image or video. Thus, the smoothing can be applied in a gradient, such that the smoothing is applied to a lesser degree to areas closer to rough sections of the face, and the smoothing is applied to a greater degree to areas closer to smooth sections of the face. FIG.3is a flow chart illustrating an exemplary method for providing video lighting adjustment that may be performed in some embodiments. 
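The edge-preserving smoothing blended by adjustment depth can be illustrated with OpenCV's bilateral filter, which is one of the filters named above; the filter parameters and the linear blend are example choices rather than required values.

import cv2

def smooth_skin(region_bgr, adjustment_depth, d=9, sigma_color=75, sigma_space=75):
    """Apply bilateral filtering to a skin region and blend the result with the
    original according to the adjustment depth (0.0 to 1.0)."""
    filtered = cv2.bilateralFilter(region_bgr, d, sigma_color, sigma_space)
    # A 20% depth moves each pixel only 20% of the way toward the smoothed value,
    # keeping the adjustment subtle; 100% applies the fully filtered result.
    return cv2.addWeighted(filtered, adjustment_depth, region_bgr, 1.0 - adjustment_depth, 0)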
In some embodiments, the exemplary method begins at the point after step210is performed (i.e., after the system receives the video content within the video communication session). In some embodiments, at least part of the exemplary method is performed concurrently to one or more steps ofFIG.2. At step310, the system receives video content within a video communication session of a video communication platform, as described above with respect to step210ofFIG.2. At step312, the system receives a lighting adjustment request, including a lighting adjustment depth. In some embodiments, the lighting adjustment request and lighting adjustment depth are received from a client device associated with a user. In some embodiments, the user may have navigated within a user interface on their client device to the video settings UI window, and then checked an “adjust for low light” checkbox or manipulated another such UI element. In some embodiments, the UI element may be selected by a participant by, e.g., clicking or holding down a mouse button or other component of an input device, tapping or holding down on the UI element with a finger, stylus, or pen, hovering over the UI element with a mouse or other input device, or any other suitable form of selecting a UI element. In some embodiments, upon selecting the UI element, a slider element, sub window, or other secondary UI element appears which provides the participant with the ability to granularly adjust the depth of the lighting adjustment which is to be performed on the video of the participant. Upon selecting the desired lighting adjustment depth, or simply allowing for the default adjustment depth without selecting one (the default depth may be, e.g., 100% or 50% lighting adjustment depth), the selection of UI element(s) is sent to the system (e.g., the processing engine102) to be processed. In some embodiments, rather than receiving the lighting adjustment request from a client device, the system detects that a lighting adjustment should be requested based on one or more lighting adjustment detection factors, then automatically generates a lighting adjustment request including a lighting adjustment depth. In these embodiments, a user does not, e.g., select a UI element within a Video Settings UI window in order to enable lighting adjustment. Instead, the user may enable a setting to turn on automatic lighting adjustment. The system then detects when a lighting adjustment may be needed based on one or more factors. In some embodiments, such lighting adjustment detection factors may include, e.g., detected low light past a predetermined threshold on a user's face, in the background, or throughout the video. In some embodiments, factors may also include a detected video quality of the video content, and detection of relative lighting on the subject compared to the background of the video. In some embodiments, a user may specify parameters for when the system should detect that a lighting appearance adjustment is needed. For example, a user may specify in a video setting that the system should automatically adjust lighting only when the light in the room goes below a certain level. In some embodiments, the user may be able to select a range of skin tones that applies to them, and then the lighting adjustment can detect when there is low lighting based on those preselected skin tones. The lighting adjustment techniques can also preserve the user's skin tone based on the selected range of skin tones. 
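A minimal sketch of the automatic low-light detection factor described above (low light past a predetermined threshold on the user's face or throughout the video) might look as follows in Python/OpenCV; the single mean-luminance test and the threshold value are assumptions for illustration, and a deployed system could combine several detection factors.

import cv2
import numpy as np

def needs_lighting_adjustment(frame_bgr, face_box=None, luma_threshold=60.0):
    """Return True when mean luminance falls below the threshold, computed over
    the whole frame or over a face box (x, y, w, h) if one is supplied."""
    luma = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
    if face_box is not None:
        x, y, w, h = face_box
        luma = luma[y:y + h, x:x + w]
    return float(np.mean(luma)) < luma_threshold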
At step314, the system detects an amount of lighting in the video content. In some embodiments, the system may employ one or more AI engines or AI techniques to detect the amount of lighting in the video content. In some embodiments, the video is analyzed using one or more image processing or image analysis techniques or methods. In some embodiments, a scene may be interpreted from the two-dimensional image or video content, and geometric reconstruction may occur based on the interpreted scene. In some embodiments, one or more light sources may be detected within the image or video content. In some embodiments, one or more positions, directions, and/or relative intensities of one or more light sources may be determined or estimated. At step316, the system modifies the video content to adjust the amount of lighting in real time or substantially real time based on the lighting adjustment depth. In some embodiments, the lighting is adjusted based on one or more AI engines or AI techniques, such as, e.g., deep learning techniques. In some embodiments, a convolutional neural network may be used to perform this adjustment. In various embodiments, the system may perform the lighting adjustment using processes or techniques such as, e.g., a dehazing based method, a naturalness preserved enhancement algorithm (NPE), an illumination map estimation based algorithm (LIME), a camera response based algorithm, a multi-branch low-light enhancement network (MBBLEN), and/or a bio-inspired multi-exposure fusion algorithm. In some embodiments, the system receives one or more detected lighting sources from step312and enhances the lighting in the image or video content such that it appears to be sourced from the detected lighting sources. In some embodiments, the depth or intensity of the lighting adjustment corresponds to the lighting adjustment depth that was received by the system. In some embodiments, the system adjusts the lighting while preserving natural elements of the image or video content. In some embodiments, the system has detected skin color or a range of skin tones of the participant appearing in the video, and the adjustment of lighting is performed such that the range of skin tones is preserved. For example, lighting may increase in an image or video, while a user's skin tone is still accurately represented in the image or video. Thus, in some cases the user's natural skin tone may appear brighter as the lighting changes, but does not appear lighter (i.e., the skin tone itself does not become lighter). The effect may therefore be as if a light or multiple lights are being shone on the user's natural skin, rather than the user's skin appearing as a different set of tones. In some embodiments, this is performed by modifying a Y′ amount of a YUV color space within the image or video corresponding to lightness, without changing the color tone(s) of the skin, and modifying a UV amount of the image or video corresponding to color. In some embodiments, the system may separate skin areas from the background of the video. In some embodiments, the system separates the imagery of the user from the background of the video content, and then modifies the video content to adjust the amount of lighting differently for the background compared to the imagery of the user. In some embodiments, the low light adjustment can be performed according to one or more themes which can be configured by the user.
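The Y′-only adjustment described above, which brightens the image without shifting skin tones, can be sketched as follows; the gain curve and its maximum are illustrative assumptions, and any suitable low-light enhancement technique named above could be substituted.

import cv2
import numpy as np

def adjust_lighting(frame_bgr, lighting_depth, max_gain=1.8):
    """Scale only the Y' (luma) plane of a YUV representation, leaving the U/V
    chroma planes untouched so the skin tones keep their color."""
    yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    gain = 1.0 + (max_gain - 1.0) * float(lighting_depth)   # depth in [0, 1]
    yuv[:, :, 0] = np.clip(yuv[:, :, 0] * gain, 0, 255)
    return cv2.cvtColor(yuv.astype(np.uint8), cv2.COLOR_YUV2BGR)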
For example, a user may wish for the lighting in the video to appear as if a spotlight is directed on the user, with all else outside the spotlight appearing darkened. In another example, a user may wish to appear as if they are on a theater stage during a performance. Many such possibilities can be contemplated. FIGS.4A-4Gare diagrams illustrating various aspects of the systems and methods herein through different example embodiments. FIG.4Ais a diagram illustrating one example embodiment of a video settings UI element within a video communication session. User interface400depicts a UI that a particular participant is viewing on a screen of the participant's client device. A bar at the bottom of the UI presents a number of selectable UI elements within the UI. These elements include Mute, Stop Video, Security, Participants, Chat, and Share Screen. An up arrow element appears on some of the elements, including the Stop Video element. The user has clicked on the up arrow for the Stop Video element, and a sub menu has been displayed in response. The submenu includes a number of video-based elements, including an HD Camera, Choose Virtual Background, and Video Settings. The user is about to click on the Video Settings sub menu item. FIG.4Bis a diagram illustrating one example embodiment of appearance adjustment UI elements within a video communication session. The user fromFIG.4Ahas selected the sub menu element appearing as "Video Settings . . . ". The system responds by displaying a Video Settings UI window. The UI window includes a number of selectable elements for configuring video settings for the video communication session. One of the options appears as "Touch up my appearance" along with a checkbox UI element402. Next to this element, an additional slider element404is displayed for allowing the user to select an adjustment depth as needed. The user can optionally drag the slider left or right to have granular control over the precise amount of adjustment depth desired. FIG.4Cis a diagram illustrating one example embodiment of an unselected appearance adjustment UI element within a video communication session. Similarly toFIG.4B, a Video Settings UI window is displayed, including a "Touch Up My Appearance" element and an unchecked checkbox UI element408. No slider UI element has appeared yet. A preview window406appears as well, showing unmodified imagery of a user. FIG.4Dis a diagram illustrating one example embodiment of a selected appearance adjustment UI element within a video communication session. The user inFIG.4Chas opted to select the checkbox element408which was unchecked. The system responds by registering the checkbox element as a checked checkbox410. The slider element appears now that the checkbox has been checked, and the user is able to adjust the appearance adjustment depth. The preview window412now shows a modified image of a user, as the system has performed the steps of the smoothing process for adjusting the user's appearance in real time or substantially real time. FIG.4Eis a diagram illustrating a video showing a low lighting environment within a video communication session. The imagery of the user in the video content is hard to see and poorly defined. The user's face is barely visible, and his expressions are difficult to ascertain for other users. A light source appears to be originating from behind the user, thus contributing to the darkened view of the user.
FIG.4Fis a diagram illustrating a video with lighting adjustment applied within a video communication session. After the lighting has been adjusted, the user is now much more visible, and his face and facial expressions are now clearly ascertainable. The lighting has been adjusted such that the lighting no longer appears to be solely located behind the user, but instead is diffuse and/or spread out around the room in an even or semi-even fashion. The user himself appears to be lit from the front rather than the back, as if a light is shining on his face in order to light him professionally. This lighting adjustment is performed in real time or substantially real time upon the system receiving a lighting adjustment request. FIG.4Gis a diagram illustrating one example embodiment of an unselected lighting adjustment UI element within a video communication session. The Video Settings UI Window is once again shown, as inFIG.4B. An “adjust for low light” video setting is visible along with an unchecked checkbox420. FIG.4His a diagram illustrating one example embodiment of a selected lighting adjustment UI element within a video communication session. The user fromFIG.4Ghas opted to check the checkbox420, and the system responds by presenting the checked checkbox422for adjusting the low lighting of the video, as well as a slider UI element for adjusting the lighting adjustment depth in a granular fashion. FIG.5is a diagram illustrating an exemplary computer that may perform processing in some embodiments. Exemplary computer500may perform operations consistent with some embodiments. The architecture of computer500is exemplary. Computers can be implemented in a variety of other ways. A wide variety of computers can be used in accordance with the embodiments herein. Processor501may perform computing functions such as running computer programs. The volatile memory502may provide temporary storage of data for the processor501. RAM is one kind of volatile memory. Volatile memory typically requires power to maintain its stored information. Storage503provides computer storage for data, instructions, and/or arbitrary information. Non-volatile memory, which can preserve data even when not powered and including disks and flash memory, is an example of storage. Storage503may be organized as a file system, database, or in other ways. Data, instructions, and information may be loaded from storage503into volatile memory502for processing by the processor501. The computer500may include peripherals505. Peripherals505may include input peripherals such as a keyboard, mouse, trackball, video camera, microphone, and other input devices. Peripherals505may also include output devices such as a display. Peripherals505may include removable media devices such as CD-R and DVD-R recorders/players. Communications device506may connect the computer100to an external medium. For example, communications device506may take the form of a network adapter that provides communications to a network. A computer500may also include a variety of other devices504. The various components of the computer500may be connected by a connection medium such as a bus, crossbar, or network. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. 
These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying” or “determining” or “executing” or “performing” or “collecting” or “creating” or “sending” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description above. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. 
In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
52,054
11943565
DETAILED DESCRIPTION FIG.1schematically illustrates an example of a video system. The video system10comprises a dynamic 3D virtual environment50which comprises a number of virtual video cameras100, a video management system300, a display250and a number of peripheral devices500,600,700(here, physical peripheral devices500and virtual peripheral devices600,700). In the example ofFIG.1the peripheral devices500,600,700are shown as being remote from the video management system300, in particular as being executed by a data processing system remote from the video management data processing system on which the video management system is executed. In particular, embodiments are described where the peripheral devices are cloud-based or simulated. It will be appreciated, however, that in some embodiments, one or more, or even all, peripheral devices may be integrated into the video management system or otherwise executed on the same data processing system as the video management system. Optionally, the video system10may comprise additional components, such as a video analytics system and/or other video-receiving systems. These may be implemented on a data processing system remote from the video management system, as illustrated inFIG.1, or they may be executed on the same data processing system as the video management system. The number of virtual video streams displayed on the operator's display250may vary upon selection of a video stream by the operator. For instance, video streams may initially be presented to the operator in a 3×3 or 4×4 grid view. Upon selection of a video stream by the operator in a 3×3 grid view, the four lower screens may be converted into a single screen showing the selected video stream, and the remaining 5 screens may be used (in full or in part) to show relevant video streams from the video stream or new video stream recommendation. Note that the choice of the layout (3×3, 4×4 etc.) may depend on the size of the operator's display. The operator may also have multiple screens, and each video stream might appear on a different screen, in a full screen view. WhileFIG.1illustrates a system with three virtual video cameras, it will be appreciated that other examples may include fewer than three or more than three virtual video cameras. Thus, a dynamic 3D virtual environment50may include tens or even hundreds of virtual video cameras. The virtual video cameras are advantageously distributed across an area where surveillance or monitoring is desired e.g. across the premises of a building such as a virtual hospital or virtual shopping mall. The number and positions of virtual video cameras as well as the types of virtual video cameras to be installed may be selected based on factors such as the level of surveillance desired, the size of the facility and/or the complexity of the layout of the facility. It will be appreciated that the present disclosure is not limited to a closed or indoor environment or facility but also applies to open environments such as outdoor environments. Examples of closed environment include indoor spaces inside airports, busses, airplanes, manufacturing plants, nuclear power plants, schools, hospitals, prisons, shopping malls, office spaces and the like. Examples of open environments include freeways, intersections, outdoor car parks, streets, recreational parks, and the like. 
It will be appreciated that some environments may include closed and open areas, and that some of these areas may not always be clearly defined by boundaries such as walls, doors, barriers, and the like. Although the present disclosure particularly relates to the field of video surveillance, other typical purposes for monitoring video streams may be documentation, medical observation, building management, production, traffic control and/or process control. The virtual video cameras may correspond to simulations of conventional video cameras known as such in the art of video surveillance. It will be appreciated that the dynamic 3D virtual environment50may include a plurality of virtual video cameras of the same type, i.e. virtual video cameras having the same capabilities, providing the same type of video output, in the same format etc. Alternatively, the dynamic 3D virtual environment50may include cameras of different types, e.g. virtual video cameras having different capabilities, providing video streams of different resolution, in different formats or outputting additional metadata associated with the video. Examples of capabilities of virtual video cameras may include one or more of the following: audio recording, video recording in visible wavelength ranges and/or in infrared wavelength ranges, such as near-infrared wavelength ranges, control functions such as pan, tilt or zoom, image processing capabilities, motion detection, etc. The virtual video cameras100are communicatively connected to the video management system300. To this end, the virtual video cameras100may be connected to the video management system via a local area network200or in a different suitable manner, e.g. via simulated point-to-point wired and/or wireless connections, or the like. For example, the virtual video cameras may be connected to the video management system via a simulated Ethernet connection. An example of a simulated wireless connection includes a 5G network. Within the context of the present disclosure, the term “peripheral devices” (whether virtual or physical) should be understood as comprising devices for generating signals and/or data, such as monitoring signals and/or data streams. Typical peripheral devices which may be used or simulated include audio recording equipment, or the like, traditional measurement or sensing devices, such as sensors for biological, chemical, or physical quantities/parameters, electrical, magnetic or optical sensors, temperature or wind sensors, light detecting sensors, motion detecting sensors such as passive infrared (PIR) sensors, sensors which use microwave or ultrasonic pulses, or vibration sensors, biometric sensors or systems, access control and alarm equipment or systems, virtual door access control equipment or systems, and production process parameter sensors. The present disclosure is not limited to any particular types of peripheral devices. Preferably, the peripheral devices comprise a combination of devices such as access control and alarm equipment or systems or virtual door access control equipment or systems. In the example ofFIG.1, the dynamic 3D virtual environment50includes, a virtual loudspeaker600and a virtual door access control equipment700. As can be seen onFIG.1, the virtual door access control equipment700is virtually directly connected to a virtual video camera100. 
In this way, the virtual video stream generated by this video camera is input into the virtual door access control equipment700, which may in turn restrict or allow access to a virtual person in a part or area of the dynamic 3D virtual environment based on a recognition functionality embedded in the virtual door access control equipment700. For instance, the virtual door access control equipment700may be configured to detect a virtual tag attached to or worn by a virtual person. This configuration may allow to detect events of interest more easily, for instance, in this example, someone trying to access a restricted area. Alternatively, the virtual door access control equipment700may activate a virtual video camera upon detection of such a virtual tag and/or change a field of view, pan, tilt, zoom and/or resolution setting of at least one virtual camera which observes the virtual peripheral device700in the dynamic 3D virtual environment. In this way, the operator may get a better or different view of the scene comprising the virtual peripheral device700and/or of the virtual person accessing the virtual peripheral device700. According to another alternative, the virtual peripheral device700may stop the recording of a virtual video camera upon detection of such a virtual tag. It will also be appreciated that the virtual video stream can be input in a different type of virtual peripheral device, such as for instance a simulated virtual broadcasting device, which in turn broadcasts the virtual video stream to the video management system. The virtual data stream or signal generated by the virtual peripheral device within the dynamic 3D virtual environment may then be input into the video management system, which may in turn output on the display at least one virtual data stream and/or at least one alert, alarm and/or message based on the virtual data stream. The virtual data stream output on the display may for instance be a compressed video stream of the virtual video stream input into the virtual peripheral device, which in this case is a virtual video compressor. As another example, the virtual peripheral device may be any other type of video editing or video filtering device. It will be appreciated that the output from the virtual peripheral device may also be an alert, alarm and/or message, for instance a notification that the virtual video stream generated by the virtual video camera input into the virtual peripheral device contains an event of interest, detected by the virtual peripheral device. For instance, the virtual peripheral device may be a device configured to count people in a room and to send a signal to the video management system when the room's capacity has been reached, which in turn causes the computer to display an alert on the display and emit a sound (or any other kind of appropriate alarm). In the above-mentioned examples, it may be advantageous to receive a command from the operator indicative of an instruction to turn off the at least one alert, alarm and/or message output from the virtual management system. This makes it possible to evaluate if the operator reacts appropriately or if the operator reacts at all in case of a false alarm corresponding to the event of interest (if, for example, they have interrupted a fire alarm in the presence of a fire) while they should have done something else. 
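As a non-limiting sketch of a virtual peripheral device of the kind described above, the following Python fragment models a people-counting peripheral that signals the video management system when a room's capacity is reached. The class and callback names are hypothetical; the disclosure does not prescribe this structure.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class VirtualPeopleCounter:
    """Hypothetical virtual peripheral: tracks people reported in a room and
    notifies the VMS when the configured capacity is reached."""
    room_id: str
    capacity: int
    notify_vms: Callable[[str], None]           # stands in for the signal path to the VMS
    _present: List[str] = field(default_factory=list)

    def on_person_detected(self, person_id: str) -> None:
        if person_id not in self._present:
            self._present.append(person_id)
        if len(self._present) >= self.capacity:
            self.notify_vms(f"Room {self.room_id}: capacity {self.capacity} reached")

    def on_person_left(self, person_id: str) -> None:
        if person_id in self._present:
            self._present.remove(person_id)

The notify_vms callback stands in for whatever transport carries the virtual data stream or signal to the video management system, which would then display the alert and emit the alarm as described above.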
It will be appreciated that some examples of video systems may include virtual peripheral devices600,700configured to operate without any video input, such as sensors providing sensor signals and/or media streams different from video streams, such as audio signals, radar signals, Lidar signals, etc., as described above. Note that the virtual peripheral devices600,700do not need to be connected or to communicate with each other and can be connected to the video management system via a communications network400described below with reference toFIG.1. The communications network400may be the Internet or another suitable communications network. Alternatively, some or all of the virtual peripheral devices600may communicate with each other, as appropriate. For instance, a virtual peripheral device may update its operating parameters by downloading new parameters from another virtual peripheral device through the communications network400. Communication between the virtual peripheral devices may be virtualized or simulated. The virtual peripheral devices may advantageously be operable and/or controllable from the video management system. For instance, they may be turned on/off from the video management system and/or controlled from the video management system. Such control may consist in choosing one or more parameters from the video management system or from a management device for operating the virtual peripheral device in the dynamic 3D virtual environment. The virtual data streams and/or signals may be recorded in real-time in a recording server320which will be described below with reference toFIG.1. The video system10may also advantageously include one or more physical peripheral devices500, which communicate with the video management system and/or the dynamic 3D virtual environment (optionally, through the video management system). Examples of such physical peripheral devices500include devices configured to alter a functioning of the dynamic 3D virtual environment, for instance an access control device which is configured to open or close doors in the virtual world, or a smoke detector configured to trigger a fire alarm in the virtual world through the virtual loudspeaker600. In this way, it is possible to trigger specific events of interest in the dynamic 3D virtual world and/or measure an operator's response time to a particular situation. This allows for instance to test video management system functionality and/or to improve operator training. It will be contemplated that the data streams and/or signals generated by the physical peripheral devices500may be processed and/or used as the virtual data streams and/or signals generated by the virtual peripheral devices600,700described above. For instance, the data streams and/or signals generated by the physical peripheral devices500may be input into the video management system300via the communications network400, input to any other appropriate virtual peripheral device (such as to the virtual door access control equipment700illustrated inFIG.1), input to any of the virtual video cameras100and/or stored in the recording server320, and so forth. Further description of the data streams and/or signals generated by the physical peripheral devices500is therefore omitted. The signals and/or data streams generated by the peripheral devices500,600,700can be segmented into data segments of manageable sizes in order to be stored on recording servers. 
The data streams can then be retrieved from the recording servers for live or playback streaming for viewing and/or analysis at a client side. The video management system300receives virtual video streams from the virtual video cameras100and, optionally, input signals from other sources (as described above with reference to the peripheral devices500,600,700). The video management system may be configured to store the received virtual video streams in a media repository350, and provides an interface360for accessing the live virtual video streams, and to access virtual video streams stored in the media repository350. The media repository350may be a media database or any other suitable storage device for storing media content. The video management system may include a user interface allowing users to view the live virtual videos and/or store virtual videos and/or to control operation of one or more of the virtual video cameras. The video management system300may be embodied as a software program executed by a suitable data processing system, e.g. by one or more computers, each having one or more processors, and preferably by one or more server computers, each having one or more processors. For instance, the video management system may be the XProtect® software program developed by Milestone Systems®. The video management system may comprise one or more camera drivers310for providing interfaces to respective types of virtual video cameras. Different virtual video cameras may provide their virtual video streams in different formats, e.g. using different encoding schemes and/or different network protocols. Similarly, different virtual video cameras may provide different interfaces for video camera control such as zoom, tilt or pan. Accordingly, the video management system300may include a plurality of different camera drivers310configured to cooperate with respective virtual video camera types. In particular, the camera drivers310may implement one or more suitable network protocols and/or other communications standards for communicating with virtual video cameras and/or other surveillance equipment. Examples of such protocols and standards include the Open Network Video Interface Forum (ONVIF) standard and the Real Time Streaming Protocol (RTSP). It will be appreciated that the camera drivers310may be simulated and/or virtualized as appropriate, to simulate as well as possible a system that would observe a real dynamic environment. In this way, the VMS operator can use or configure the VMS as in real life conditions. This also improves operator training. The camera drivers310further add one or more time stamps to the received virtual video streams101so as to ensure that the virtual video streams, which are stored and subsequently supplied by the video management system, include a uniform time stamp. The added time stamp will also be referred to as a canonical time stamp. The canonical time stamp is indicative of the time of receipt, by the video management system, of the virtual video streams101from the respective virtual video cameras100. The camera drivers thus provide uniformly time-stamped input virtual video streams311, each time-stamped input virtual video stream311corresponding to a respective one of the received virtual video streams101. The video system10or video management system300may advantageously comprise a recording server320. The recording server may be embodied as a software program module executed by a suitable data processing system, e.g. by one or more server computers. 
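The canonical time stamping performed by the camera drivers can be illustrated with a small Python sketch; the data structure and function names are assumptions for the example, and a real driver would also handle the camera-specific protocol (e.g. RTSP/ONVIF), which is omitted here.

import time
from dataclasses import dataclass
from typing import Any

@dataclass
class TimestampedFrame:
    frame: Any            # decoded image or encoded packet from the virtual camera
    camera_id: str
    canonical_ts: float   # time of receipt by the video management system

def stamp_on_receipt(frame: Any, camera_id: str) -> TimestampedFrame:
    """Attach a canonical (receipt-time) timestamp so that all stored input
    virtual video streams share a uniform time base."""
    return TimestampedFrame(frame=frame, camera_id=camera_id, canonical_ts=time.time())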
The recording server receives the input virtual video streams311originating from the respective virtual video cameras100from the corresponding camera drivers310. The recording server stores the received input virtual video streams in a suitable media storage device, such as a suitable media database. It will be appreciated that the media repository350may be part of the video management system300or it may be separate from, but communicatively coupled to the video management system. The media repository350may be implemented as any suitable mass storage device, such as one or more hard disks or the like. The storing of the received input virtual video streams is also referred to as recording the received input virtual video streams. The recording server may receive additional input signals. The additional input signals may originate from the virtual video cameras100and/or from the peripheral devices500,600,700and/or from any additional monitoring or surveillance sensors. The video management system may store the additional input signals in the media repository350and/or in a separate storage device. The recording server320may further be configured to selectively provide the live input virtual video streams311and/or previously stored input virtual video streams retrieved from the media repository350via a suitable interface360to one or more of the peripheral devices500,600,700, respectively (as described above). To this end, the interface360may provide a network interface for providing live virtual video streams and/or previously stored virtual video streams via a communications network400to one or more peripheral devices500,600,700, such as cloud-based peripheral devices. To this end, the interface360may be configured to establish respective video tunnels and/or other communications sessions with the peripheral devices500,600,700. The interface360may implement one or more suitable network protocols and/or other communications standards for communicating with other surveillance equipment. Examples of such protocols and standards include the Open Network Video Interface Forum (ONVIF) standard and the Real Time Streaming Protocol (RTSP). Optionally, the interface360may implement different communications channels to other types of external entities. Examples of external entities include a video-receiving system (not shown), which may receive virtual video streams and provide functionality for viewing and/or processing the virtual video streams. Other examples of external entities include a video analytics system, which may receive virtual video streams and perform virtual video processing for analysing the virtual video streams. To this end, the video analytics system may perform object detection, object recognition, facial recognition, motion detection and/or other types of video analytics. The video analytics system may create video metadata indicative of the results of the video analytics performed. For example, the video analytics systems may create video metadata indicative of recognized objects in a virtual video stream. The metadata may include information about the spatial and temporal positions of recognised objects in the virtual video stream and/or information about the identity of the recognized object. The analytics systems may store the generated metadata in a suitable metadata repository. In some embodiments, the analytics systems may communicate the generated metadata back to the video management system. 
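A minimal sketch of the analytics metadata and its indexing is given below, under the assumption of a simple in-memory structure; a real deployment would use the metadata repository and index server described below, and all names here are illustrative.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class DetectionMetadata:
    """Illustrative record for one recognized object in a virtual video stream."""
    stream_id: str
    frame_ts: float                    # canonical timestamp of the frame
    bbox: Tuple[int, int, int, int]    # spatial position: x, y, width, height
    label: str                         # e.g. "person", "vehicle"
    object_id: str                     # identity assigned by the analytics system

class MetadataIndex:
    """Toy index keyed by stream, standing in for the index server's repository."""
    def __init__(self) -> None:
        self._by_stream: Dict[str, List[DetectionMetadata]] = {}

    def add(self, record: DetectionMetadata) -> None:
        self._by_stream.setdefault(record.stream_id, []).append(record)

    def query(self, stream_id: str, t0: float, t1: float) -> List[DetectionMetadata]:
        return [r for r in self._by_stream.get(stream_id, []) if t0 <= r.frame_ts <= t1]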
The video management system may store the returned metadata in a suitable metadata repository340, such as a suitable metadata database, which may be separate from or integrated into the media repository350. To this end, the video management system may include an index server330. The index server may be embodied as a software program module executed by a suitable data processing system, e.g. by one or more server computers. The index server may receive metadata and store the received metadata in the metadata repository340. The index server may further index the stored metadata so as to allow faster subsequent search and retrieval of stored metadata. Metadata received from the external analytics systems may be received by the recording server320and forwarded to the index server330. Alternatively or additionally, the index server may receive metadata directly from one or more analytics systems. The interface360may implement different types of interfaces. For example, the interface may provide an application interface, e.g. in the form of a software development kit and/or one or more communication protocols, such as a suitable messaging protocol, e.g. SOAP, XML, etc. Accordingly, the interface may operate as a gateway to different types of systems. The communications network400may be the Internet or another suitable communications network. It will be appreciated that at least some of the physical peripheral devices500may reside on the same data processing system as the video management system or on a data processing system connected to the video management system via a local area network, instead. FIG.2is a flowchart illustrating a computer implemented method according to the present disclosure, which comprises three essential steps. The method comprises, in a first step S1, inputting, into the video management system, a plurality of virtual video streams generated by virtual video cameras within a dynamic 3D virtual environment as described above. Within the context of the present disclosure, the term "dynamic 3D virtual environment" should be understood as meaning a computer-generated environment comprising virtual objects such as virtual people, animals, vehicles, and/or any other simulated property, with variable changing conditions and/or variable changing objects. The changing conditions may for instance be related to the weather and/or lighting conditions, e.g. day and night simulations. The changing objects may for instance be related to a number and/or a behaviour of the objects. For instance, it is possible to populate the dynamic 3D virtual world with dynamic models of humans, animals and/or vehicles. As an example, it is possible to simulate different scenarios and/or events of interest such as people loitering and/or theft in a shopping mall. Another example would be to simulate a car going in the wrong direction on a freeway, since such a situation is difficult to recreate in the real world for training purposes. Accordingly, the disclosure makes it possible to set up and test a VMS before a building is even built or a full surveillance system is even installed. It also makes it possible to test the response procedures of VMS operators for scenarios that would be difficult to recreate in the real world, such as fires, explosions, and assaults. It further makes it possible to test operators or Artificial Intelligence (AI) systems and their responses under variable changing conditions such as high/medium/low density of people and/or traffic.
The dynamic 3D virtual environment can be generated through the use of an appropriate game engine known to the skilled person such as the Unreal Engine®, the Unity Engine® and/or the Unigine Engine®. It will be contemplated that the game engine may comprise any other appropriate engines known to the skilled person, such as a physics engine which will determine how objects collide with each other within the dynamic 3D virtual environment and a rendering engine which will determine how to render textures of objects within the dynamic 3D virtual environment based on, for instance, variable lighting conditions. It will also be contemplated that the realism of the dynamic 3D virtual environment may be enhanced through machine learning, for instance by inputting the virtual video streams in a convolutional network trained with real-life videos and/or pictures, as described for instance in a 2021 paper titled “Enhancing Photorealism Enhancement” by Richter et al. (https://arxiv.org/abs/2105.04619). It will further be contemplated that the engines may be supplemented with various libraries and/or various 3D models. For instance, 3D animations created with Mixamo® and customized with Adobe Fuse®, and/or computer-generated objects generated with Blender®, may be integrated into the game engine. Various 3D models and/or environments may also be obtained from the Unreal Marketplace. It will further be contemplated that the dynamic 3D virtual environment may be configured to change based on external input received by the computer, for instance from an operator or a trainer of an operator. For instance, a scenario or event of interest can be triggered or pushed into the dynamic 3D virtual environment by any appropriate API such as the REST API or gRPC. The dynamic 3D virtual environment may also change based on a simulation-centric algorithm, which may be added to the dynamic 3D virtual environment or the video management system. In all of the above-mentioned cases, it may be advantageous to measure an intervention time representing a time elapsed between the triggering of the event of interest and the receiving of the command. It thus becomes possible to evaluate if the operator reacts too slowly (or even too quickly), or if they react at all in case of a false alarm corresponding to the event of interest (if, for example, they have interrupted a fire alarm in the presence of a fire) while they should have done something else. Each virtual video camera may capture what is in front of it by its location, rotation and field-of-view, just like a real-world camera. Each video feed can then be received outside the virtual environment as if the video was recorded by physical video cameras. From a video management system perspective, the video received from the dynamic 3D virtual environment is not distinctly different from a video originating from a regular video camera. The method also comprises, in a second step S2, receiving, in the computer, at least one command from a user indicative of an instruction to alter a functioning of the dynamic 3D virtual environment and/or a functioning of the video management system. Within the context of the present disclosure, the term “alter” should be understood as meaning a change having a visible effect on a normal or predetermined operation of the dynamic 3D virtual environment and/or video management software. 
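The scenario triggering and intervention-time measurement described above can be sketched as follows. The HTTP endpoint URL, the payload format and the use of the requests library are assumptions for illustration; the disclosure only requires that a scenario can be pushed through an appropriate API (e.g. REST or gRPC) and that the time elapsed until the operator's command is measured.

import time
import requests   # assumes the environment exposes an HTTP API; the URL below is hypothetical

class ScenarioSession:
    """Trigger a scenario in the dynamic 3D virtual environment and measure the
    intervention time between the trigger and the operator's first command."""
    def __init__(self, trigger_url: str = "http://localhost:8080/scenario"):
        self.trigger_url = trigger_url
        self.triggered_at = None

    def trigger(self, scenario: str) -> None:
        requests.post(self.trigger_url, json={"scenario": scenario}, timeout=5)
        self.triggered_at = time.monotonic()

    def on_operator_command(self) -> float:
        """Call when the operator's command arrives; returns the intervention time in seconds."""
        if self.triggered_at is None:
            raise RuntimeError("no scenario has been triggered")
        return time.monotonic() - self.triggered_at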
For instance, a change in the dynamic 3D virtual environment caused by an existing or predetermined functionality in the video management software and/or a normal operation of the dynamic 3D virtual environment would not qualify as an alteration. Similarly, a change caused by an artificial intelligence and/or a simulation-centric algorithm which operate in a predetermined manner would also not qualify as an alteration. Conversely, a change in a source code of the dynamic 3D virtual environment, a change in a source code of a simulation-centric algorithm as described above and/or in a source code of the video management software, which visually affects the functioning of the dynamic 3D virtual environment and/or video management software, would qualify as an alteration. It will be contemplated that the command may include a plurality of commands indicative of a plurality of instructions. For instance, the command may directly correspond to the instruction or may be received in a particular programming language (for instance, in a more user-friendly language such as Google Golang® or Python®) and converted into another programming language (for instance, in a more machine-friendly language such as C++). It will also be contemplated that the command may be received in any appropriate API supporting, for instance, HTTP(S), gRPC, Websocket (etc.) and different versions or improvements thereof. The method also comprises, in a third step S3, displaying on a display, from the video management system, at least one virtual video stream and/or at least one alert, alarm and/or message implementing at least one alteration caused by the received command. For instance, the last step may comprise rendering, on a display, an updated view of the dynamic 3D virtual environment reflecting the alteration caused to the dynamic 3D virtual environment and/or video management system. For instance, an updated view of one virtual video stream implementing a new video filter newly implemented in the video management system. The last step may also comprise rendering, on a display, an alert, alarm and/or message reflecting the alteration, for instance, a notification reflecting a number of people in a certain area of the dynamic 3D virtual environment generated by a set of instructions added to the video management system. The altering may also comprise adding at least one new virtual video stream to the plurality of virtual video streams by adding a new virtual video camera within the dynamic 3D virtual environment. The altering may also comprise modifying at least one virtual video stream of the plurality of virtual video streams by modifying a setting of one of the virtual video cameras. The altering may also comprise adding a new processing step of a virtual video stream. The present disclosure also provides a non-transitory computer readable storage medium storing a program for causing a computer to execute a computer implemented method of operating a video management system according to any one of the above-mentioned embodiments and examples. 
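As a non-limiting sketch, the handling of such commands can be modelled as a dispatcher that maps a received command to the alteration it causes (adding a virtual camera, changing a camera setting, inserting a processing step). The dispatcher and handler names are hypothetical and do not correspond to any particular VMS API.

from typing import Any, Callable, Dict

class AlterationDispatcher:
    """Hypothetical dispatcher mapping operator commands to alterations of the
    dynamic 3D virtual environment and/or the video management system."""
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[..., Any]] = {}

    def register(self, command: str, handler: Callable[..., Any]) -> None:
        self._handlers[command] = handler

    def dispatch(self, command: str, **params: Any) -> Any:
        if command not in self._handlers:
            raise ValueError(f"unknown command: {command}")
        return self._handlers[command](**params)

# Example wiring (the handlers are placeholders for real environment/VMS calls):
# dispatcher.register("add_virtual_camera", environment.add_camera)
# dispatcher.register("set_camera_resolution", vms.set_resolution)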
The present disclosure also provides an apparatus for operating a video management system, comprising a display250, a video management system300and a computer having at least one processor configured to input, into the video management system, a plurality of virtual video streams generated by virtual video cameras within a dynamic 3D virtual environment; receive, in the computer, at least one command from a user indicative of an instruction to alter a functioning of the dynamic 3D virtual environment and/or a functioning of the video management system; and display on a display, from the video management system, at least one virtual video stream and/or at least one alert, alarm and/or message implementing at least one alteration caused by the received command. Advantageously, this apparatus can consist of a client device as mentioned above or of a combination of different electronic devices. Thus, the various embodiments of the present disclosure allow an operator to test or improve new functionality of a video management system, to train configurators of video management systems to efficiently configure a VMS for a concrete environment, and to train VMS operators to act appropriately in atypical situations, such as situations or scenarios involving alarms and catastrophes. While the present disclosure has been described with reference to exemplary embodiments, the scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
32,163
11943566
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS 1. Use Mode of Dashboard Camera Recorder By referring toFIG.1, a use mode of a dashboard camera recorder1as an example of a configuration of a communication system of the present disclosure will be described. The dashboard camera recorder1is used by being mounted to a vehicle100, and has a function of capturing images of the surroundings of the vehicle100and inside the vehicle100by a camera. The vehicle100corresponds to a mobile body of the present disclosure. The dashboard camera recorder1has a function of performing cellular communication and Wi-Fi (registered trademark) communication. The dashboard camera recorder1performs communication with an external communication device (corresponds to a communication device of the present disclosure) via a wide area network500by performing cellular communication with a base station300of each cell or by performing Wi-Fi communication with a router310placed at a Wi-Fi spot near a road. InFIG.1, an image management server510and an information providing server520are illustrated as the external communication devices. Furthermore, the dashboard camera recorder1has a function of a Wi-Fi router that establishes Wi-Fi communication with mobile communication terminals51,52that are used by users U1, U2in the vehicle100and implements an in-vehicle Wi-Fi network environment. The mobile communication terminals51and52are smartphones, mobile phones, tablet terminals, mobile game machines, and the like having a Wi-Fi communication function. Cameras51aand52aare provided in the mobile communication terminals51and52, respectively. The dashboard camera recorder1functions as the Wi-Fi router to enable communication between the mobile communication terminals51,52and the external communication devices such as the image management server510and the information providing server520via the in-vehicle Wi-Fi network and the wide area network500. The users U1and U2can acquire information by having communication with the information providing server520, for example, by using the in-vehicle Wi-Fi network environment provided by the dashboard camera recorder1, even when the mobile communication terminals51and52do not have the cellular communication function. Moreover, by using the in-vehicle Wi-Fi network environment of the dashboard camera recorder1, it is also possible to communicate with external mobile communication terminals53,54, and55near the vehicle100.FIG.1illustrates a state where wireless communication via the in-vehicle Wi-Fi is established between the dashboard camera recorder1and the mobile communication terminal53used by a user U3standing on a sidewalk and the mobile communication terminals54,55used by users U4, U5riding in an oncoming vehicle110. Cameras53ato55aare provided in the mobile communication terminals53to55, respectively. Furthermore, the dashboard camera recorder1transmits the captured image that is captured by the camera to the image management server510via the wide area network500. The image management server510uses the captured image received from the dashboard camera recorder1to perform processing such as analysis of an accident, fixed point observation, inspection of road equipment, and the like. 
Note here that wireless communication between the dashboard camera recorder1and the mobile communication terminals51to55corresponds to a first communication of the present disclosure, and wireless communication between the dashboard camera recorder1and the external communication devices such as the image management server510and the information providing server520corresponds to a second communication of the present disclosure. Relay communication relaying the communication between the mobile communication terminals51to55and the external communication devices is executed by the first communication and the second communication. The cameras51ato55aprovided in the mobile communication terminals51to55, respectively, correspond to a second imaging device of the present disclosure. 2. Configuration of Dashboard Camera Recorder By referring toFIG.2, the configuration of the dashboard camera recorder1will be described. The dashboard camera recorder1includes a processor10, a memory20, a NAD (Network Access Device)30, an antenna31, a front camera32, a rear camera33, an in-vehicle camera34, a GNSS (Global Navigation Satellite System) sensor35, an acceleration sensor36, a switch37, and a display38. The front camera32, the rear camera33, and the in-vehicle camera34correspond to a first imaging device of the present disclosure. The NAD30is a chip in which a cellular communication module and a Wi-Fi communication module are integrated. The antenna31is a dual-use antenna corresponding to both cellular communication and Wi-Fi communication. The NAD30and the antenna31form a communication unit of the present disclosure. The front camera32captures an image of the surroundings of a front view of the vehicle100and outputs the captured image to the processor10. The rear camera33captures an image of the surroundings of a rear view of the vehicle100and outputs the captured image to the processor10. The in-vehicle camera34captures an image of the inside of the vehicle100and outputs the captured image to the processor. Note that it is also possible to employ a configuration that includes not all of the front camera32, the rear camera33, and the in-vehicle camera34but only the front camera32, for example. The GNSS sensor35receives a radio wave from a positioning satellite to detect the current position (latitude, longitude) of the dashboard camera recorder1, and outputs a position detection signal to the processor10. The acceleration sensor36detects the acceleration generated in the dashboard camera recorder1, and outputs an acceleration detection signal to the processor10. The acceleration sensor36detects the acceleration in three orthogonal axis directions, for example. The switch37outputs operation signals corresponding to operations of the users U1and U2to the processor10. The display38displays an operation state and the like of the dashboard camera recorder1in accordance with control input from the processor10. The processor10functions as an imaging control unit11, a communication control unit12, an image processing unit14, a synchronous imaging time point determination unit15, a mobile communication terminal position recognition unit17, a timer unit18, and a speed recognition unit19by reading and executing a control program21for the dashboard camera recorder1saved in the memory20. 
The imaging control unit11captures an image by at least one of the front camera32, the rear camera33, and the in-vehicle camera34at a prescribed imaging timing, and saves data of a captured image (a first captured image) in a captured image area22of the memory20. As the imaging timing, timings of (1) to (3) as follows are set, for example. (1) When Vehicle100Encounters Accident The imaging control unit11recognizes that the vehicle100has encountered an accident by detecting the acceleration in a level of a prescribed threshold or more by the acceleration sensor36. It is also possible to recognize an accident of the vehicle100based on an impact detection signal acquired by an impact sensor provided for an airbag or the like mounted to the vehicle100. (2) When Vehicle100Travels Through Fixed Point The imaging control unit11recognizes that the vehicle100is traveling through a fixed point set in advance from the current position of the dashboard camera recorder1detected by the GNSS sensor35. As the fixed point, for example, a point where a traffic jam is likely to occur, a point where road equipment (road sign, utility pole, or the like) as the target of maintenance is located, a sightseeing spot, or the like may be set. (3) When Vehicle100Travels Through Image Capturing Request Point The imaging control unit11recognizes the image capturing request point by receiving information of the image capturing request point transmitted from the image management server510, and recognizes that the vehicle100is traveling through the image capturing request point from the current position of the dashboard camera recorder1detected by the GNSS sensor35. As the image capturing request point, for example, an accident site of another vehicle, a fire site, a site where a child is lost, or the like is set. The communication control unit12performs control of communication established with the external communication device via the in-vehicle Wi-Fi communication and the wide area network500by the NAD30. The communication control unit12packetizes data that is a combination of information data of the in-vehicle Wi-Fi and data of the captured image, and performs communication with the external communication device via the wide area network500by packet communication. With this processing, the overhead of control data added to the information data of the captured image and the like transmitted by packet is shrunk and the communication volume is reduced, so that it is possible to reduce the communication cost borne by the users U1and U2. The communication control unit12receives data of images (second captured images) captured by the mobile communication terminals51and52from the mobile communication terminals51and52by the first communication. The processing performed by the communication control unit12for acquiring the data of the second captured images will be described later. The image processing unit14generates integrated image data in which the data of the images (first captured images) captured by the front camera32, the rear camera33, and the in-vehicle camera34and the data of the second captured images received by the communication control unit12are associated and recorded. The processing of generating the integrated image data performed by the image processing unit14corresponds to prescribed image processing of the present disclosure. 
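The imaging timings (1) to (3) described above reduce to a threshold check on the acceleration sensor 36 and proximity checks on the position from the GNSS sensor 35. The following Python sketch illustrates one way such a trigger check could be organized; it is not the patent's implementation, and the threshold value, the proximity radius, the distance helper, and all function and variable names are illustrative assumptions.

```python
import math

ACCEL_THRESHOLD_G = 2.5      # assumed impact threshold (g); the text only says "prescribed threshold"
POINT_RADIUS_M = 50.0        # assumed radius for "traveling through" a point

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (latitude, longitude) pairs."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_imaging_timing(accel_g, position, fixed_points, request_points):
    """Return which imaging timing (1)-(3) applies, or None.

    accel_g        -- magnitude of acceleration from the acceleration sensor, in g
    position       -- (lat, lon) from the GNSS sensor
    fixed_points   -- preset (lat, lon) fixed points (timing (2))
    request_points -- (lat, lon) points received from the image management server (timing (3))
    """
    if accel_g >= ACCEL_THRESHOLD_G:
        return "accident"                      # timing (1): the vehicle encountered an accident
    lat, lon = position
    if any(distance_m(lat, lon, p[0], p[1]) <= POINT_RADIUS_M for p in fixed_points):
        return "fixed_point"                   # timing (2): traveling through a preset fixed point
    if any(distance_m(lat, lon, p[0], p[1]) <= POINT_RADIUS_M for p in request_points):
        return "request_point"                 # timing (3): traveling through a requested point
    return None

# Example: an acceleration spike triggers the accident timing.
print(check_imaging_timing(3.1, (35.0, 139.0), [], []))   # -> "accident"
```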
The synchronous imaging time point determination unit15determines the synchronous imaging time point for executing imaging by synchronizing the imaging time point of the front camera32, the rear camera33, and the in-vehicle camera34of the dashboard camera recorder1with the imaging time point of the cameras51aand52aof the mobile communication terminals51and52. The communication control unit12transmits the integrated image data generated by the image processing unit14to the image management server510via the wide area network500by the NAD30. The mobile communication terminal position recognition unit17determines whether or not the mobile communication terminals51and52used inside the vehicle are located at prescribed positions from an image of the inside of the vehicle100captured by the in-vehicle camera34. The prescribed position may be a mount position of the holder of the mobile communication terminal (the neck part of a headrest of a seat, or the like), for example.FIG.1illustrates a state where the mobile communication terminal52is located at the prescribed position. The timer unit18performs processing for counting the current date and time. The imaging control unit11saves the data of the first captured images, which is acquired by adding information of the date and time that is the imaging timing counted by the timer unit18to the captured images that are captured by the front camera32, the rear camera33, and the in-vehicle camera34, in the captured image area22of the memory20. The speed recognition unit19recognizes the traveling speed (moving speed) of the vehicle100by receiving a speed detection signal Vcar from a car speed sensor provided in the vehicle100. The dashboard camera recorder1receives the speed detection signal Vcar by performing wired or wireless communication with an ECU (Electronic Control Unit) that is provided in the vehicle100. The speed recognition unit19may recognize the traveling speed (moving speed) of the vehicle100by performing prescribed image processing on the captured image. 3. Captured Image Providing Processing According to the flowcharts illustrated inFIG.3andFIG.4, captured image providing processing executed by the dashboard camera recorder1will be described. When determined in Step S1ofFIG.3that it is one of the imaging timings (1) to (3), the imaging control unit11proceeds the processing to Step S2. In Step S2, the imaging control unit11determines whether there is any mobile communication terminal having in-vehicle Wi-Fi communication established with the NAD30. Then, the imaging control unit11proceeds the processing to Step S3when there is a mobile communication terminal having the established in-vehicle Wi-Fi communication, and proceeds the processing to Step S20when there is no mobile communication terminal having the established in-vehicle Wi-Fi communication. In Step S20, the imaging control unit11captures an image by at least one of the front camera32, the rear camera33, and the in-vehicle camera34, and saves the data of the acquired first captured image in the captured image area22of the memory20. In a following Step S21, the communication control unit12transmits the data of the first captured image to the image management server510by the NAD30, and proceeds the processing to Step S11ofFIG.4. In this case, only the data of the first captured image that is captured by the dashboard camera recorder1is transmitted to the image management server510. 
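Steps S1, S2, S20, and S21 amount to a simple branch: once an imaging timing occurs, the recorder either captures and uploads on its own (no in-vehicle Wi-Fi terminal connected) or proceeds to the synchronized procedure of Step S3 onward. The sketch below mirrors that branch only; the class and method names are placeholders standing in for the units described above, not actual firmware APIs.

```python
class DashcamFlowSketch:
    """Toy stand-in for the recorder's processor; names are illustrative only."""

    def __init__(self, wifi_terminals):
        self.wifi_terminals = wifi_terminals   # terminals with in-vehicle Wi-Fi established (Step S2)
        self.captured_image_area = []          # stands in for the captured image area 22

    def capture_first_image(self):
        # Step S20 / S31: capture by at least one of the front, rear, or in-vehicle cameras.
        return {"source": "front_camera", "data": b"..."}

    def transmit_to_image_server(self, payload):
        print("uploading:", list(payload))

    def on_imaging_timing(self):
        # Step S1 has already fired; Step S2 checks for connected in-vehicle Wi-Fi terminals.
        if not self.wifi_terminals:
            image = self.capture_first_image()            # Step S20
            self.captured_image_area.append(image)
            self.transmit_to_image_server(image)          # Step S21: only the first captured image is sent
        else:
            self.start_synchronized_capture()             # Steps S3 onward (position check, sync notification)

    def start_synchronized_capture(self):
        print("proceeding to Steps S3-S9 with", self.wifi_terminals)

# No terminal connected: the recorder captures and uploads on its own.
DashcamFlowSketch([]).on_imaging_timing()
# A terminal is connected: the synchronized path is taken.
DashcamFlowSketch(["terminal 52"]).on_imaging_timing()
```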
Furthermore, in Step S3, the mobile communication terminal position recognition unit17extracts the image part of the mobile communication terminal from the captured image of the in-vehicle camera34to recognize the position of the mobile communication terminal inside the vehicle100. In the case ofFIG.1, the positions of the mobile communication terminals51and52used inside the vehicle100are recognized. In a next Step S4, the mobile communication terminal position recognition unit17determines whether or not the positions of the mobile communication terminals correspond to the prescribed position described above. In the case ofFIG.1, it is determined that the mobile communication terminal52is located at the prescribed position. Then, the mobile communication terminal position recognition unit17proceeds the processing to Step S5when the position of the mobile communication terminal is the prescribed position, and proceeds the processing to Step S20when the position of the mobile communication terminal is not the prescribed position. In Step S5, the synchronous imaging time point determination unit15determines the synchronous imaging time point for synchronizing imaging performed by the dashboard camera recorder1with imaging performed by the mobile communication terminal according to the imaging timing recognized in Step S1. In a following Step S6, the communication control unit12transmits, to the mobile communication terminal52, synchronous imaging time point notification information for notifying the synchronous imaging time point via the in-vehicle Wi-Fi communication by the NAD30. Upon receiving the synchronous imaging time point notification information transmitted from the dashboard camera recorder1, the mobile communication terminal52recognizes the synchronous imaging time point from the synchronous imaging time point notification information, and captures an image by the camera52awhen it comes to the synchronous imaging time point. Then, the mobile communication terminal52transmits the data of the acquired second captured image to the dashboard camera recorder1via the in-vehicle Wi-Fi communication by the NAD30. Note that images captured by the dashboard camera recorder1and the mobile communication terminal52may be moving images or sequential images captured in a period including before and after the synchronous imaging time point. Returning to the flowchart ofFIG.3, the communication control unit12and the imaging control unit11execute the processing of Steps S7, S8and the processing of Steps S30, S31in parallel. Upon receiving the data of the second captured image from the mobile communication terminal52in Step S7, the communication control unit12proceeds the processing to Step S8to save the data of the second captured image in the captured image area22of the memory20, and proceeds the processing to Step S9ofFIG.4. Furthermore, when determined in Step S30that it is the synchronous imaging time point, the imaging control unit11proceeds the processing to Step S31to capture the image by at least one of the front camera32, the rear camera33, and the in-vehicle camera34. Then, the imaging control unit11saves the data of the acquired first captured image in the captured image area22of the memory20, and proceeds the processing to Step S9ofFIG.4. In Step S9, the image processing unit14generates integrated image data in which the data of the first captured image and the data of the second captured image saved in the captured image area22by the imaging control unit11are associated and recorded. 
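Steps S5 to S9 notify the terminal of a synchronous imaging time point and then run the local capture (Steps S30, S31) and the reception of the second captured image (Steps S7, S8) in parallel before associating the two. A minimal sketch of that pattern, using Python threads in place of the actual in-vehicle Wi-Fi exchange, is shown below; the two-second offset and all names are assumptions.

```python
import threading
import time

def determine_sync_time(offset_s=2.0):
    """Step S5: pick a synchronous imaging time point slightly in the future (offset assumed)."""
    return time.time() + offset_s

def capture_first_image_at(sync_time, out):
    """Steps S30-S31: wait for the sync time, then capture with the recorder's own camera."""
    time.sleep(max(0.0, sync_time - time.time()))
    out["first"] = {"camera": "front", "t": sync_time}

def receive_second_image(sync_time, out):
    """Steps S7-S8: stand-in for receiving the terminal's image over in-vehicle Wi-Fi."""
    time.sleep(max(0.0, sync_time - time.time()) + 0.1)   # pretend transfer delay
    out["second"] = {"camera": "terminal 52a", "t": sync_time}

def synchronized_capture():
    sync_time = determine_sync_time()
    # Step S6 would transmit sync_time to the terminal here; omitted in this sketch.
    results = {}
    threads = [
        threading.Thread(target=capture_first_image_at, args=(sync_time, results)),
        threading.Thread(target=receive_second_image, args=(sync_time, results)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Step S9: associate the first and second captured images into integrated image data.
    return {"sync_time": sync_time, "images": results}

print(synchronized_capture())
```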
In a following Step S10, the communication control unit12transmits the integrated image data generated by the image processing unit14to the image management server510via the wide area network500by the communication control unit12and the NAD30. The integrated image data contains the data of the first captured image that is captured by the front camera32, the rear camera33, or the in-vehicle camera34of the dashboard camera recorder1and the data of the second captured image that is captured by the camera52aof the mobile communication terminal52at the same timing (at the synchronous imaging time point, period including before and after the synchronous imaging time point). Therefore, from the integrated image data, the image management server510can acquire the captured images in the surroundings of the vehicle100or inside the vehicle100from various angles captured at the same timing. Furthermore, the image management server510can acquire more detailed information regarding an accident and the like occurring in the vehicle100by analyzing the data of the first captured image and the data of the second captured image. 4. Other Embodiments While the four-wheeled vehicle100is described in the embodiment above as the mobile body to which the dashboard camera recorder1is mounted, the mobile body to which the dashboard camera recorder1is mounted may also be a two-wheeled vehicle, a flying object, a boat, or the like. While the cellular or Wi-Fi communication is used for accessing the wide area network500in the embodiment described above, it is also possible to use other communication schemes. Furthermore, while the in-vehicle Wi-Fi (corresponds to a second communication network of the present discloser) is used to establish communication between the mobile communication terminals51,52used by the users U1, U2of the vehicle100and the dashboard camera recorder1, it is also possible to use other communication specifications such as Bluetooth (registered trademark). While the case of configuring the communication system of the present disclosure with the dashboard camera recorder1is described in the embodiment above, the communication system of the present disclosure may also be configured with a communication terminal having a camera (a smartphone, a mobile phone, a tablet terminal, a camera with communication function, or the like). In the embodiment described above, by the flowchart ofFIG.3, the communication control unit12transmits the synchronous imaging time point notification information only to the mobile communication terminal located at the prescribed position of the vehicle100(the mobile communication terminal52in the case ofFIG.1) to acquire the data of the second captured image that is captured by the mobile communication terminal. As another configuration, the synchronous imaging time point notification information may be transmitted targeted to the mobile communication terminals (the mobile communication terminals51and52in the case ofFIG.1) used in the vehicle100without making determination whether those are located at the prescribed position so as to acquire the data of the second captured images that are captured by the mobile communication terminals. Furthermore, the mobile communication terminal position recognition unit17may set the prescribed position for recognizing the position of the mobile communication terminal in accordance with the number of mobile communication terminals having the in-vehicle Wi-Fi communication established with the dashboard camera recorder1. 
For example, when there is a single mobile communication terminal having the first communication established for the relay communication, the prescribed position is set at a position capable of imaging the rear side of the vehicle 100. When there are two mobile communication terminals, the prescribed positions may be set at a position capable of imaging the right rear side of the vehicle 100 and at a position capable of imaging the left rear side of the vehicle 100. When there are three mobile communication terminals, the prescribed positions may be set at a position capable of imaging the right rear side of the vehicle 100, at a position capable of imaging the rear center, and at a position capable of imaging the left rear side of the vehicle 100. In this way, the imaging range in the surroundings of the vehicle 100 can be expanded. Furthermore, upon recognizing that the mobile communication terminals are located at the prescribed positions of the vehicle 100, the mobile communication terminal position recognition unit 17 recognizes the posture of the mobile communication terminals, and the communication control unit 12 may receive the data of the second captured images via the in-vehicle Wi-Fi communication when the mobile communication terminal position recognition unit 17 recognizes that the mobile communication terminals are located at the prescribed positions in a prescribed posture. In the embodiment described above, a mobile communication terminal performance recognition unit that recognizes the performance of the mobile communication terminals may be provided, and the communication control unit 12 may receive the data of the second captured images from the mobile communication terminals via the in-vehicle Wi-Fi communication when the performance of the mobile communication terminals having the in-vehicle Wi-Fi communication established with the NAD 30 is determined to be a prescribed performance. In the embodiment described above, the synchronous imaging time point determination unit 15 is provided to capture images by the dashboard camera recorder 1 and by the mobile communication terminals 51, 52 synchronously. However, the synchronous imaging time point determination unit 15 may be omitted, and the data of the second captured images that are captured by the mobile communication terminals 51 and 52 at arbitrary timings may be acquired. Alternatively, the synchronous imaging time point notification information may be transmitted not only to the mobile communication terminals used in the vehicle 100 but also to the other mobile communication terminals having the in-vehicle Wi-Fi communication established with the NAD 30 (the mobile communication terminals 51 to 55 in the case of FIG. 1) to acquire the data of the second captured images that are captured by each of the mobile communication terminals. In the embodiment above, described is the configuration in which the dashboard camera recorder 1 communicates with the mobile communication terminals 51 to 55 via wireless communication. However, as illustrated in FIG. 1, when the mobile communication terminals 51 and 52 are used inside the vehicle 100, the mobile communication terminals 51 and 52 may also be connected to the dashboard camera recorder 1 via a communication cable to perform wired communication.
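The mapping from the number of connected terminals to the prescribed positions described at the start of this passage can be expressed as a small lookup. The sketch below is only an illustration of that mapping; the position labels follow the text, while the fallback behavior for more than three terminals is an assumption.

```python
def prescribed_positions(num_terminals):
    """Return assumed prescribed positions for the given number of connected terminals."""
    mapping = {
        1: ["rear"],
        2: ["right rear", "left rear"],
        3: ["right rear", "rear center", "left rear"],
    }
    # Assumption: with more than three terminals, reuse the three-position layout.
    return mapping.get(num_terminals, mapping[3]) if num_terminals >= 1 else []

for n in range(1, 4):
    print(n, "terminal(s):", prescribed_positions(n))
```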
In the embodiment described above, as the prescribed image processing based on the first captured image data and the second captured image data, the image processing unit14executes the processing of generating the integrated image data containing the data of the first captured image and the data of the second captured image in an associated manner. As another image processing based on the first captured image data and the second captured image data, for example, processing such as recognizing an object existing in the surroundings of the vehicle100, determining the risk of having contact with the object, and the like may be executed. Note thatFIG.2is a schematic diagram illustrating the configuration of the dashboard camera recorder1by sectioning it in accordance with the main processing contents in order to facilitate understanding of the present invention, and the configuration of the dashboard camera recorder1may also be formed by other sectioned blocks. Furthermore, the processing of each structural element may be executed by a single hardware unit or may be executed by a plurality of hardware units. Moreover, the processing of each structural element according to the flowcharts illustrated inFIG.3andFIG.4may be executed by a single program or may be executed by a plurality of programs. 4. Configurations Supported by the Embodiments The above-described embodiments support the following items. (Item 1) A communication system including: a first imaging device; a communication unit configured to execute a first communication with a mobile communication terminal and a second communication with a communication device other than the mobile communication terminal to execute relay communication relaying the communication between the mobile communication terminal and the communication device; a communication control unit configured to receive, from the mobile communication terminal via the first communication, data of a second captured image that is captured by a second imaging device provided in the mobile communication terminal having the first communication established for the relay communication; and an image processing unit configured to execute prescribed image processing based on the data of a first captured image that is captured by the first imaging device and the data of the second captured image. According to the communication system of item 1, it is possible to acquire the data of the second captured image that is captured by the second imaging device of the mobile communication terminal in various angles along with the data of the first captured image that is captured by the first imaging device of the communication system with a simple configuration with low cost. 
(Item 2) The communication system according to item 1, including a synchronous imaging time point determination unit configured to determine a synchronous imaging time point for synchronizing imaging performed by the first imaging device and the second imaging device, in which the communication control unit: transmits, via the first communication, synchronous imaging time point information indicating the synchronous imaging time point to the mobile communication terminal having the first communication established for the relay communication; and receives, via the first communication, the data of the second captured image that is captured by the second imaging device at the synchronous imaging time point according to the synchronous imaging time point information, and the image processing unit executes the prescribed image processing based on the data of the first captured image that is captured by the first imaging device at the synchronous imaging time point and the data of the second captured image that is captured by the second imaging device at the synchronous imaging time point. According to the communication system of item 2, recognizing an object, determining the risk of having contact with the object, and the like can be done by the image processing performed by using the data of the first captured image that is captured by the first imaging device of the communication system and the data of the second captured image that is captured by the second imaging device of the mobile communication terminal at the same time point. (Item 3) The communication system according to item 1 or 2, in which, as the prescribed image processing, the image processing unit executes processing of generating integrated image data containing the data of the first captured image and the data of the second captured image in an associated manner. According to the communication system of item 3, it is possible to acquire the integrated image data containing, in an associated manner, the data of the first captured image that is captured by the first imaging device of the communication system and the data of the second captured image that is captured by the second imaging device of the mobile communication terminal at the same time point. (Item 4) The communication system according to item 3, in which the communication control unit transmits the integrated image data to the communication device via the second communication. According to the communication system of item 4, it is possible to provide the integrated image data containing the data of the second captured image that is captured by the second imaging device of the mobile communication terminal to the communication device. (Item 5) The communication system according to any one of items 1 to 4, the communication system being used in a mobile body and including a mobile communication terminal position recognition unit configured to recognize whether or not the mobile communication terminal is located at a prescribed position of the mobile body, in which the communication control unit receives the data of the second captured image via the first communication, when the mobile communication terminal position recognition unit recognizes that the mobile communication terminal is located at the prescribed position. 
According to the communication system of item 5, it is possible to capture and acquire the image inside the mobile body or in the vicinity of the mobile body by the second imaging device of the mobile communication terminal used inside the mobile body or in the vicinity of the mobile body. (Item 6) The communication system according to item 5, in which the mobile communication terminal position recognition unit sets the prescribed position in accordance with the number of the mobile communication terminals having the first communication established for the relay communication. According to the communication system of item 6, it is possible to set the prescribed position in accordance with the imaging range covered by the second imaging device of the mobile communication terminal, depending on the number of the mobile communication terminals having the first communication established for the relay communication. (Item 7) The communication system according to item 5 or 6, in which the mobile communication terminal position recognition unit recognizes a posture of the mobile communication terminal upon recognizing that the mobile communication terminal is located at the prescribed position of the mobile body, and the communication control unit receives the data of the second captured image via the first communication, when the mobile communication terminal position recognition unit recognizes that the mobile communication terminal is located at the prescribed position in a prescribed posture. According to the communication system of item 7, by recognizing the posture of the mobile communication terminal, it is possible to acquire the data of the second captured image that is captured by the second imaging device by designating the imaging range and the imaging direction of the second imaging device of the mobile communication terminal in a more detailed manner. (Item 8) The communication system according to any one of items 1 to 7, including a mobile communication terminal performance recognition unit configured to recognize a performance of the mobile communication terminal, in which the communication control unit receives the data of the second captured image from the mobile communication terminal via the first communication, when the mobile communication terminal performance recognition unit recognizes that the performance of the mobile communication terminal having the first communication established for the relay communication is a prescribed performance. According to the communication system of item 8, it is possible to receive the data of the second captured image that is captured by the second imaging device from the mobile communication terminal, only in a case where the communication performance of the mobile communication terminal, the performance of the second imaging device provided in the mobile communication terminal, and the like are equal to or more than a prescribed level. (Item 9) The communication system according to any one of items 1 to 8, in which the communication unit executes the relay communication for a plurality of the mobile communication terminals via the first communication established with the plurality of mobile communication terminals, and the communication control unit receives the data of the second captured images that are captured by the second imaging devices provided in the plurality of mobile communication terminals having the first communication established for the relay communication. 
According to the communication system of item 9, it is possible to collect the data of the captured images in a wide range by receiving the data of the second captured images that are captured by the second imaging devices of the plurality of mobile communication terminals. (Item 10) The communication system according to any one of items 1 to 9, the communication system being configured with a camera or a dashboard camera recorder. According to the communication system of item 10, it is possible to achieve the communication system according to items 1 to 9 as functions of the camera or the dashboard camera recorder. REFERENCE SIGNS LIST 1Dashboard camera recorder10Processor11Imaging control unit12Communication control unit14Image processing unit15Synchronous imaging time point determination unit17Mobile communication terminal position recognition unit18Timer unit19Speed recognition unit20Memory21Control program22Captured image area30NAD31Antenna32Front camera33Rear camera34In-vehicle camera35GNSS36Acceleration sensor37Switch38Display51to55Mobile communication terminal51ato55aCamera100Vehicle (mobile body)110Another vehicle300Cellular communication base station310Wi-Fi spot router500Wide area network510Image management server520Information providing serverU1to U5User
These and other aspects of the present disclosure are disclosed in the following detailed description of the embodiments, the appended claims, and the accompanying figures. It will be appreciated that aspects can be implemented in any convenient form. For example, aspects may be implemented by appropriate computer programs which may be carried on appropriate carrier media which may be tangible carrier media (e.g., disks) or intangible carrier media (e.g., communications signals). Aspects may also be implemented using a suitable apparatus, which may take the form of programmable computers running computer programs arranged to implement the methods and/or techniques disclosed herein. Aspects can be combined such that features described in the context of one aspect may be implemented in another aspect. DETAILED DESCRIPTION It is critical that a patient and/or the patient's room be monitored for adverse conditions that may negatively impact the patient. For example, a lowered bed rail of the patient's bed presents the risk that the patient may fall off the bed. For example, if the patient remains lying on the same side of his/her body for over a certain period of time (e.g., more than two hours) without being repositioned, then bedsores (also known as pressure ulcers) may develop. Bedsores are a common, painful, debilitating, and potentially deadly condition. For example, if the patient gets out of his/her bed, such as to use the restroom, but does not return within a reasonable period of time, then there is a risk that the patient may have fallen and/or is unable to return to the bed. For example, a steep bed incline presents the risk that the patient's breathing may be obstructed. Proper patient care can include fall prevention, bed sore prevention, bed incline monitoring for breathing monitoring, and/or the detection or prevention of other adverse conditions (e.g., monitored conditions). When an adverse condition and/or the potential for an adverse condition is detected, a care provider (e.g., a nurse, etc.) can be notified so that the care provider can take appropriate corrective and/or preventative measures. To illustrate, and without loss of generality, if a patient has been lying on his/her back for more than two hours, then a notification (e.g., an alert, a message, etc.) may be sent to a nurse so that the nurse can reposition the patient. The alert can be sent in one or more ways to a user device. For example, a text message may be sent to a hand-held device of the nurse. For example, the alert may be displayed on a display at a nurses station. Other ways of alerting care providers are possible. Detection of (potential) adverse conditions according to implementations of this disclosure relies on actively monitoring bed states and/or states of other aspects of a patient's room (collectively, room state or, simply, state). An in-room monitoring device, which includes a camera, can be used to actively monitor the room state. Image processing can be used to detect (e.g., infer, calculate, obtain, output, etc.) the room state. For example, a machine learning (ML) model can be trained to detect the room state. In an example, the ML model can be a multi-label image classification model. Implementations according to this disclosure can detect a state (e.g., a room state) of a monitored environment (e.g., a hospital room) and/or a part thereof (e.g., a hospital bed, a patient, etc.). The room state can be detected without any special hardware sensors.
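The repositioning example above (a patient lying in the same position for more than two hours triggering an alert to a nurse) reduces to tracking how long a monitored condition has persisted. The sketch below illustrates that bookkeeping; it is not the disclosed implementation, and the threshold constant, class name, and notification stub are assumptions.

```python
from datetime import datetime, timedelta

REPOSITION_AFTER = timedelta(hours=2)   # assumed threshold from the two-hour example above

class RepositionMonitorSketch:
    """Tracks how long the patient has been in the same lying position (illustrative only)."""

    def __init__(self):
        self.position = None
        self.since = None

    def observe(self, position, now):
        """position: e.g. 'back', 'left', 'right'; now: datetime of the observation."""
        if position != self.position:
            # Position changed (the patient was repositioned); restart the clock.
            self.position, self.since = position, now
            return None
        if now - self.since >= REPOSITION_AFTER:
            return f"Patient has been lying on his/her {position} since {self.since:%H:%M}; please reposition."
        return None

def send_alert(message):
    # Stand-in for a text message to the nurse's hand-held device or a nurses-station display.
    print("ALERT:", message)

monitor = RepositionMonitorSketch()
t0 = datetime(2024, 1, 1, 8, 0)
monitor.observe("back", t0)
alert = monitor.observe("back", t0 + timedelta(hours=2, minutes=5))
if alert:
    send_alert(alert)
```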
A monitoring device that includes a camera can be used to monitor the monitored environment and determine the states using machine learning and computer vision. Traditionally, and with respect to detecting different states of a hospital bed, existing beds may be retrofitted with specialized hardware sensors or new beds (which may be referred to as smart beds) that already include such sensors may be used. However, these can be costly propositions for hospitals. Another traditional approach for monitoring a room is to rely on a human who would regularly visually inspect a patient's room to determine the room state. However, this approach is not efficient, is prone to mistakes, and is expensive. Traditionally, a human may be tasked with monitoring several monitored environments simultaneously. For example, a nurse may be tasked with monitoring the rooms of 15, 20, or more patients simultaneously. Video feeds from each of the monitored environments may be displayed on a user device (e.g., a monitoring station, a nurses station, etc.) of the human. The human has to attempt to watch for adverse, or potentially adverse, states (e.g., conditions, occurrences, etc.) in all of the monitored environments at the same time by simultaneously monitoring all the video feeds. Such traditional approaches can present several problems. The user device must have sufficient computational resources to receive, process (e.g., decode, etc.), and display several video streams simultaneously. The computing infrastructure (e.g., network) must have sufficient bandwidth to support the video streams. The degraded performance and increased usage of compute and/or network resources may also entail increased investment in processing, memory, and storage resources for the user device and network, which results in increased energy expenditures (needed to operate those increased processing, memory, and storage resources, or for the network transmission of the intermediate data) and associated emissions that may result from the generation of that energy. Furthermore, and more importantly, the human may suffer from information overload, causing the human to overlook or miss critical states in the monitored environments. Implementations according to this disclosure can focus the attention of a human who may be monitoring multiple monitored environments. The attention of the human can be focused on (e.g., directed to, etc.) those monitored environments currently exhibiting certain active states. The monitoring device can monitor for (e.g., detect, infer, etc.) several states in/of the monitored environment. A state that is detected is referred to herein as an active state. An active state is a state that has a certain value. If the state has another value, then it may not be considered active. A state, as used herein, can refer to a condition of interest of the monitored environment. The state can have one or more values. The states of interest can include conditions, events, occurrences, and the like of the monitored environment. A state of interest that is detected is referred to herein as an active state. A state of interest that is not detected is referred to herein as an inactive state. For example, a state of interest may be whether the patient is waving.
If the patient is determined to be waving, then the patient-waving state may have a value of “yes,” “1,” “true,” “waving” or any other value indicating that the patient is waving; if the patient is not waving, then the patient-waving state may have a value of “no,” “0,” “false,” “not waving” or any other value indicating that the patient is not waving. Furthermore, if the human is to be notified of a state when the state (e.g., the patient-waving state) has a certain value (e.g., “yes”), then if the state is detected to have that certain value, the state is referred to herein as an active state. A detected state that persists for a predetermined duration is referred to herein as a persistent state or as a state that persists for the predetermined duration. Monitored environments can be monitored by respective monitoring devices. The monitoring devices can be communicatively connected to a server that can in turn be communicatively connected to a user device. A monitoring device that is monitoring a monitored environment can obtain images (e.g., an image stream or a video stream) from a camera. Images of the monitored environment can be continuously captured using the camera. The monitoring device can apply a machine learning model to at least some of the images to determine respective states of the monitored environment. The monitoring device can record the states. Responsive to detecting a state change (e.g., a state becoming active or persistent) from one image to a next image, the monitoring device can transmit a notification to a central server. The notification can include a snapshot of the monitored environment. The notification can include a list of the detected active states and the persistent states. Attention focusing for multiple monitored environments can minimize information overload, direct the focus (e.g., attention) of the human to a subset of the monitored environments, and require fewer compute and network resources than traditional approaches. In an implementation, and as further described herein, implementations according to this disclosure enable humans (e.g., heath care professionals) to accurately provide proper attention to the patients that need it. Attention focusing for multiple monitored environments also reduces the need for live (e.g., streaming, etc.) feeds of the monitored environments. Details of room state detection via camera and attention focusing for multiple patients monitoring are described herein with initial reference to a system in which the teachings herein can be implemented. FIG.1is a schematic of an example of a system100according to implementations of this disclosure. The system100includes a monitored environment102, a monitoring device104, a user device106, and a server108. The monitored environment102can be a patient hospital room, a nursing home room, a room of a home patient, a manufacturing line, a workstation, a laboratory, and the like. The monitored environment102includes and/or can be viewed using the monitoring device104. The monitored environment102can be remotely monitored from the user device106. The user device106can be one or more of a desktop computer106A, a mobile device106B (such as tablet, a smart phone, and the like), a laptop computer106C, or some other device that can be used to access, communicate with, and/or control (directly or indirectly) the monitoring device104. A user (not shown) of the user device106can monitor the monitored environment102via the monitoring device104. 
That the monitored environment102is remotely monitored by the user means that the user may not physically be in the monitored environment102while performing the monitoring. In the case that the monitored environment102is a patient hospital room, the user can be a physician, a nurse, another health-care practitioner, a family member of the patient, and/or the like. For example, the physician may be remotely responding to (e.g., diagnosing, mitigating, assessing, etc.) a patient emergency or remotely performing patient rounds. The nurse may be monitoring patients, including the monitored environment102from a nurses station to, for example, ensure that no patient is falling, is in need of help, is distressed, and/or the like. The family member of the patient may remotely visit with the patient using the monitoring device104. The monitoring device104can be configured to and/or used to capture video, images, audio, environmental conditions, or other characteristics of the monitored environment. The characteristics of the monitored environment can be transmitted to one or more users of the user devices106. Via the user device106, the user can interact with the monitoring device, such as by sending and/or receiving captured video and/or audio, sending commands to the monitoring device104, and the like. The user device106and the monitoring device104can communicate via the server108. For example, the user device106can send commands to the server108, which relays the command to the monitoring device. Similarly, the monitoring device104can send information to the server108, which relays the information to the user device106. To illustrate, the monitoring device104can include a camera that is configured to view the monitored environment102. The user device106can issue a request to the server108to establish a connection with the monitoring device104. The server108can establish the connection. Issuing a request to the server108to establish a connection can include, for example, the user device106connecting to a patient by the patient's room number or name; the server108determining the monitoring device104of the patient (i.e., the monitoring device that is in the patient's room); and the server108connecting the user device106and the monitoring device104. The connection session may be an video communication session during which the user can communicate visually and/or verbally with a person in the patient's room. The user device106, may during the connection session, send a pan, tilt, or zoom (PTZ) command to the camera of the monitoring device104via the server108. The monitoring device104can update the view of the monitored environment according to the PTZ command and send back, via the server108, a video and/or image of the updated view of the monitored environment, which can then be displayed on a display of the user device106. In an example, the server108can allow certain users to control monitoring device and not allowing other user devices to control the monitoring device. In another example (not shown), the user device106can establish a peer-to-peer communication channel with the monitoring device104. For example, in response to the connection request, the server108can facilitate the establishment of the peer-to-peer (e.g., direct) communication between the user device106and the monitoring device104. The server108can be deployed (e.g., physically located) on premise at the location of the monitored environment. 
The server108can be deployed on a same local area network (LAN) of the monitoring device104. The server108can be deployed on a same wide area network (WAN) of the monitoring device104. The server108can be a cloud-based server. Other deployments of the server108are possible. The monitoring device104, the user device106, and the server108can communicate over any suitable network. The network (not shown) can be, for example, the Internet or an Internet Protocol (IP) network, such as the World Wide Web. The network can be a LAN, a WAN, a virtual private network (VPN), cellular telephone network, a private network, an extranet, an intranet, any other means of transferring information (e.g., video streams, audio streams, images, other information), or a combination thereof from one end point to another end point. In an example, the user device106and the monitoring device104may communicate using a real-time transport protocol (RTP) for transmission of the media content, which may be encoded, over the network. In another implementation, a transport protocol other than RTP may be used (e.g., a Hypertext Transfer Protocol-based (HTTP-based) streaming protocol). For example, the user device106can transmit and/or receive media content (e.g., audio and/or video content) to and/or from the monitoring device104via WebRTC, which provides web browsers and mobile applications with real-time communication. However, the disclosure herein is not so limited and any other real-time transmission protocol can be used. FIG.2is a block diagram of an example of a computing device200. Each of the monitoring device104, the user device106, or the server108can be implemented, at least partially, by the computing device200. The computing device200can be implemented by any configuration of one or more computers, such as a microcomputer, a mainframe computer, a supercomputer, a general-purpose computer, a special-purpose/dedicated computer, an integrated computer, a database computer, a remote server computer, a personal computer, a laptop computer, a tablet computer, a cell phone, a personal data assistant (PDA), a wearable computing device, or a computing service provided by a computing service provider, for example, a web host or a cloud service provider. In some implementations, the computing device can be implemented in the form of multiple groups of computers that are at different geographic locations and can communicate with one another, such as by way of a network. While certain operations can be shared by multiple computers, in some implementations, different computers are assigned to different operations. In some implementations, the system100can be implemented using general-purpose computers/processors with a computer program that, when executed, carries out any of the respective methods, algorithms, and/or instructions described herein. In addition, or alternatively, for example, special-purpose computers/processors including specialized hardware can be utilized for carrying out any of the methods, algorithms, or instructions described herein. The computing device200can have an internal configuration of hardware including a processor202and a memory204. The processor202can be any type of device or devices capable of manipulating or processing information. In some implementations, the processor202can include a central processor (e.g., a central processing unit or CPU). In some implementations, the processor202can include a graphics processor (e.g., a graphics processing unit or GPU). 
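The server-mediated connection described above, in which a user device addresses a patient by room, the server resolves the monitoring device in that room, and PTZ commands and the updated view are relayed through the server, can be sketched as a small in-memory relay. None of the classes or method names below correspond to an actual product or protocol API; they are assumptions for illustration, and the real exchange would run over a transport such as RTP or WebRTC as noted above.

```python
class MonitoringDeviceStub:
    """Stand-in for a monitoring device with a PTZ-capable camera."""

    def __init__(self, room):
        self.room = room
        self.view = {"pan": 0, "tilt": 0, "zoom": 1}

    def apply_ptz(self, pan=0, tilt=0, zoom=0):
        # Update the camera view according to the PTZ command and return the updated view,
        # which stands in for the image or video sent back to the user device.
        self.view["pan"] += pan
        self.view["tilt"] += tilt
        self.view["zoom"] = max(1, self.view["zoom"] + zoom)
        return {"room": self.room, "view": dict(self.view)}

class RelayServerStub:
    """Stand-in for the server that connects user devices to monitoring devices."""

    def __init__(self):
        self.devices_by_room = {}

    def register(self, device):
        self.devices_by_room[device.room] = device

    def connect(self, room):
        # The server resolves the patient's room to the monitoring device in that room.
        return self.devices_by_room[room]

    def relay_ptz(self, room, **ptz):
        # The user device's PTZ command is relayed to the device; the updated view is relayed back.
        return self.connect(room).apply_ptz(**ptz)

server = RelayServerStub()
server.register(MonitoringDeviceStub(room="312B"))
print(server.relay_ptz("312B", pan=10, zoom=2))
```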
Although the examples herein can be practiced with a single processor as shown, advantages in speed and efficiency can be achieved by using more than one processor. For example, the processor202can be distributed across multiple machines or devices (each machine or device having one or more processors) that can be coupled directly or connected via a network (e.g., a local area network). The memory204can include any transitory or non-transitory device or devices capable of storing executable codes and data that can be accessed by the processor (e.g., via a bus). The memory204herein can be a random-access memory (RAM) device, a read-only memory (ROM) device, an optical/magnetic disc, a hard drive, a solid-state drive, a flash drive, a security digital (SD) card, a memory stick, a compact flash (CF) card, or any combination of any suitable type of storage device. In some implementations, the memory204can be distributed across multiple machines or devices, such as in the case of a network-based memory or cloud-based memory. The memory204can include data (not shown), an operating system (not shown), and an application (not shown). The data can include any data for processing (e.g., an audio stream, a video stream, a multimedia stream, user commands, and/or other data). The application can include programs that permit the processor202to implement instructions to generate control signals for performing functions of the techniques in the following description. In some implementations, in addition to the processor202and the memory204, the computing device200can also include a secondary (e.g., external) storage device (not shown). When present, the secondary storage device can provide additional memory when high processing needs exist. The secondary storage device can be a storage device in the form of any suitable non-transitory computer-readable medium, such as a memory card, a hard disk drive, a solid-state drive, a flash drive, or an optical drive. Further, the secondary storage device can be a component of the computing device200or can be a shared device accessible via a network. In some implementations, the application in the memory204can be stored in whole or in part in the secondary storage device and loaded into the memory204as needed for processing. In addition to the processor202and the memory204, the computing device200can include input/output (I/O) devices. For example, the computing device200can include an I/O device206. The I/O device206can be implemented in various ways, for example, it can be a display that can be coupled to the computing device200and configured to display a rendering of graphics data. The I/O device206can be any device capable of transmitting a visual, acoustic, or tactile signal to a user, such as a display, a touch-sensitive device (e.g., a touchscreen), a speaker, an earphone, a light-emitting diode (LED) indicator, or a vibration motor. The I/O device206can also be any type of input device either requiring or not requiring user intervention, such as a keyboard, a numerical keypad, a mouse, a trackball, a microphone, a touch-sensitive device (e.g., a touchscreen), a sensor, or a gesture-sensitive input device. If the I/O device206is a display, for example, it can be a liquid crystal display (LCD), a cathode-ray tube (CRT), or any other output device capable of providing a visual output to an individual. In some cases, an output device can also function as an input device. 
For example, the output device can be a touchscreen display configured to receive touch-based input. The I/O device206can alternatively or additionally be formed of a communication device for transmitting signals and/or data. For example, the I/O device206can include a wired means for transmitting signals or data from the computing device200to another device. For another example, the I/O device206can include a wireless transmitter or receiver using a protocol compatible to transmit signals from the computing device200to another device or to receive signals from another device to the computing device200. In addition to the processor202and the memory204, the computing device200can optionally include a communication device208to communicate with another device. Optionally, the communication can be via a network. The network can be one or more communications networks of any suitable type in any combination, including, but not limited to, networks using Bluetooth communications, infrared communications, near-field communications (NFCs), wireless networks, wired networks, local area networks (LANs), wide area networks (WANs), virtual private networks (VPNs), cellular data networks, or the Internet. The communication device208can be implemented in various ways, such as a transponder/transceiver device, a modem, a router, a gateway, a circuit, a chip, a wired network adapter, a wireless network adapter, a Bluetooth adapter, an infrared adapter, an NFC adapter, a cellular network chip, or any suitable type of device in any combination that is coupled to the computing device200to provide functions of communication with the network. The computing device200can also include or be in communication with an image-sensing device (not shown), for example a camera, or any other image-sensing device now existing or hereafter developed that can sense an image such as the image of a user operating the computing device200or a view of a monitored environment. The image-sensing device can be positioned such that it is directed to capture a view of the monitored environment. For example, the image-sensing device can be directed toward a patient and/or a patient bed in a hospital room. In an example, the position and optical axis of the image-sensing device can be configured and/or controlled such that the field of vision (i.e., the view) includes an area of interest. The computing device200can also include or be in communication with a sound-sensing device, for example a microphone, or any other sound-sensing device now existing or hereafter developed that can sense sounds near the computing device200. The sound-sensing device can be positioned or controlled to be positioned such that it is directed toward a monitored environment so as to capture speech, other utterances, or other sounds within the monitored environment. The sound-sensing device can be configured to receive sounds, for example, speech or other utterances made by the user while the user operates the computing device200. The computing device200can also include or be in communication with a sound playing device. 
The computing device200(and any algorithms, methods, instructions, etc., stored thereon and/or executed thereby) can be realized in hardware including, for example, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, firmware, microcontrollers, servers, microprocessors, digital signal processors, or any other suitable circuit. In this disclosure, the term “processor” should be understood as encompassing any the foregoing, either singly or in combination. The terms “signal,” “data,” and “information” are used interchangeably. FIG.3is a block diagram of an example of a monitoring device300according to implementations of this disclosure. The monitoring device300can be the monitoring device104ofFIG.1.FIG.3shows a front view301and a top view303of the monitoring device300. The front view301faces the monitored environment. The monitoring device300includes a camera302, a fish-eye camera304, microphone arrays306A,306B, infra-red light sensors308A,308B, a light sensor310, a multi-color LED strip312, a mounting device (i.e., a mount314), a speaker316, and a control panel318. However, a monitoring device according to this disclosure is not so limited and can include fewer, additional, other sensors and/or components, or a combination thereof. While not specifically shown, the monitoring device300can also include a processor, as described with respect to the processor202ofFIG.2. The monitoring device300can also include a memory, such as the memory204ifFIG.2. The camera302can be used to view the monitored environment. The camera302can include pan, tilt, zoom capabilities so that a remote user, via a user device, such as the user device106ofFIG.1, can control the camera302to pan, tilt, and/or zoom (PTZ) in order to adjust the view of the monitored environment to a desired view. That is, the monitoring device300can receive PTZ commands from the user device. The camera302can be capable of a magnification zoom factor of 10×, 12×, 20×, or some other magnification zoom factor. The fish-eye camera304can provide a 180° view of the monitored environment. The microphone arrays306A,306B can be used to capture sounds in the monitored environment. The infra-red light sensors308A,308B can be used to improve viewing of the monitored environment, such as the monitoring device104, under low light conditions, such as at night. The light sensor310can be used to sense the ambient light present in the monitored environment. In an example, the amount of detected ambient light can be used to adjust an intensity of a display that may connected to the monitoring device300. The multi-color LED strip312can be used to give a visual indication to an occupant of the monitored environment of an incoming video and/or audio call, that a video and/or audio call is ongoing, or that a video and/or audio call is not active. The multi-color LED strip312can be used to provide other visual indicators to the occupant of the monitored environment. The mount314can be used to mount the monitoring device on top of a monitor or a television. In an example, the monitor can be a portable computing device, such as a tablet. In an example, the monitoring device300may not itself include a processor. 
However, via an external connection (not shown), such as a USB connection, a firewire connection, a Bluetooth connection, or the like, the monitoring device 300 can be connected to a general purpose computer to enable the general purpose computer to perform monitoring functions of the monitored environment. As such, by connecting the monitoring device 300 to any processing unit, the processing unit can be turned into a telehealth end point. In such a configuration, the monitoring device encompasses the processor-less monitoring device plus the processor to which the processor-less monitoring device is connected. The speaker 316 can be used to output sounds (e.g., voice, speech, etc.), such as those received from a user device, such as the user device 106 of FIG. 1. The control panel 318 can include controls for muting, unmuting, and controlling the volume of the speaker 316. The control panel 318 can also include controls for controlling whether the camera 302 is enabled or disabled. When the camera 302 is disabled, the camera 302 does not visually (via video or images) capture (e.g., view) the monitored environment. FIG. 4 is an example of a flowchart of a technique 400 for state detection according to implementations of this disclosure. The technique 400 can be used to detect the state of a monitored environment and/or a portion therein. The monitored environment can be a hospital room, a portion thereof or therein, such as a hospital bed, a chair, and/or other objects or persons therein. The technique 400 monitors for changes in the state. The technique 400 uses images of the monitored environment captured by a camera (such as a camera of a monitoring device) to detect state changes. Image analysis can be used to detect the states. Upon detecting a change in the state, the technique 400 can send a notification of the state change. The notification can be sent to a server, such as the server 108 of FIG. 1. The monitoring device performing the technique 400 need only send notifications of the state changes to the server (such as for logging or further processing), thereby reducing network traffic. The technique 400 can be implemented by a monitoring device, such as the monitoring device 104 of FIG. 1 or the monitoring device 300 of FIG. 3, which can be placed in the monitored environment, such as the monitored environment 102 of FIG. 1. The technique 400 can be implemented, partially or fully, by a computing device, such as the computing device 200 of FIG. 2. The technique 400 can be implemented as computer instructions that may be stored in a memory, such as the memory 204 of FIG. 2. The computer instructions can be executed by a processor, such as the processor 202 of FIG. 2. As mentioned above, the monitoring device may not itself include a processor but may be connected to a processor. Thus, the technique 400 can be implemented, partially or fully, by the processor to which the monitoring device is connected. At 402, the technique 400 receives an image. The image can be received from a camera, which may be part of or connected to the monitoring device. The image can be a frame of a video stream received from the camera. While not specifically shown in FIG. 4, the technique 400 can be performed on successive images received from the camera. In an example, the camera can be directed, such as by the monitoring device, to capture single images, such as every certain period of time (e.g., 500 milliseconds, 1 second, 2 seconds, or some other period of time). In the case of a video stream, the technique 400 can be carried out on every frame of the video stream.
In another example, the technique400can be carried out on less than all the frames of the video stream. For example, the technique400may process a certain frequency of frames of the video stream, such as every 10th frame, 20th frame, or some other frequency. In an example, at least the images from which state information is obtained can be saved in a memory of the monitoring device. At404, the technique400obtains a current state of the monitored environment. In an example, the current state can be obtained as a set of state labels where each state label corresponds to a value of the respective state. The current state labels can be obtained from an ML model, such as a multi-label image classification model. The current state of the monitored environment (also referred to, simply, as state or room state), as used herein, refers to the collection of individual states, or a subset thereof, to be inferred (e.g., is of interest) and that the ML model is trained to detect. To illustrate, and without loss of generality, with respect to a hospital room that includes a patient bed, the room state can include respective states of one or more of the bed rails, respective states of inclining sections of the bed, a bed sheet state, a food tray state, patient position states, more states, fewer states, other states, or a combination thereof. The states of one or more of the bed rails describe whether one or more of the bed rails are up (i.e., raised) or down (i.e., lowered). For example, the states of one or more of the bed rails can include respective states for each of the rails (e.g., a top-right rail, a top-left rail, a bottom-right rail, and/or a bottom-left rail). The bed-incline state can include whether the section of the bed supporting the patient's head is up or down. In an example, the bed-incline state can include an estimate of the inclination angle. The bed sheet state can indicate whether the bed sheets are on or off the bed and/or whether the patient is covered or not. The food tray state can indicate whether the food tray is within a threshold distance from the bed. The patient position states can indicate the position of the patient on the bed. That is, on which side of his/her body the patient is lying. The patient position states can indicate one or more of whether the patient is lying down on his/her left side, his/her right side, or his/her back, is getting out of the bed, is out of bed, more patient position states, fewer patient positions, other patient positions, or a combination thereof. As mentioned above, the ML model can be a multi-label image classification model. In the ML model, an output may be associated with each possible state label. In an example, the ML model can output a first value (e.g., 1, YES, TRUE, etc.) for a label if the state associated with the label is detected in an image; and can output a second value (e.g., 0, NO, FALSE, etc.) if the state is not detected. The ML model can be thought of as outputting, for each state (i.e., a label) of the state model, a corresponding value. To illustrate, and without loss of generality, assume that the room state includes a first state (corresponding to whether the patient is lying on his/her back), a second state (corresponding to whether the patient is lying on his/her left side), and a third state (corresponding to whether the bottom-left rail of the bed is up or down).
As such, when an image of the room (e.g., an image of a part of the room) that shows the patient lying on his/her back and the bottom-left rail in the down position is input to the ML model, the ML model outputs the tuple (1, 0, 1) corresponding, respectively, to a first state value (i.e., 1) indicating that the patient is on his/her back, a second state value (i.e., 0) indicating that the patient is not lying on his/her left side, and a third state value (i.e., 1) indicating that the bottom-left rail is in the down position. The values output by the ML model are not particularly limited. For example, instead of (1, 0, 1), the ML model can output (Yes, No, Yes), ("on back," "not on left side," "down"), (TRUE, FALSE, TRUE), or some other values. In an example, one output label can correspond to several states of the room state. For example, one output can correspond to both the first state (e.g., whether the patient is lying on his/her back) and the second state (e.g., whether the patient is lying on his/her left side). As such, the output label can have the values "back," "left," and "neither;" or some other similar labels. In an example, the outputs of the ML model can be translated into human-readable values (or labels) and include only those states that are actually identified. For example, instead of the tuple (1, 0, 1), the human-readable output can be: Patient_on_bed_back and Bed_rails_bottom_left_down, as described below with respect to Table I. The human-readable output can be more descriptive, such as "The patient is on the bed lying on his/her back, and the bottom left rail is down." In an example, the states that are not detected can be omitted from the human-readable output. In another example, the values corresponding to all detectable states can be output. Table I illustrates an example of label classifications that can be detected (e.g., inferred) using the ML model with respect to a monitored environment that is a hospital room. That is, Table I describes an example of the states that the ML model may be trained to detect. It is noted that the disclosure herein is not limited to the states described with respect to Table I and implementations according to this disclosure can infer fewer states, more states, other states, or a combination thereof.

TABLE I
Label - Description
Patient_on_bed_back - The patient is on the bed lying on his/her back
Patient_on_bed_right - The patient is on the bed lying on his/her right side
Patient_on_bed_left - The patient is on the bed lying on his/her left side
Patient_getting_out_bed - The patient is about to get out of bed
Patient_on_bed_down - The patient is scooched to the bottom of the bed
Patient_on_bed_above_rails - The patient is on the bed with limbs over the rails
Patient_out_of_bed - The patient is out of the bed
Patient_standing - The patient is standing up
Patient_on_chair_normal - The patient is sitting on a chair
Patient_getting_out_chair - The patient is about to get out of the chair
Patient_out_of_chair - The patient is out of the chair
Patient_on_floor - The patient is on the floor
Staff_in_room - Hospital staff is with the patient
Bed_empty - The bed is empty
Chair_empty - The chair is empty
Bed_inclined - The bed is inclined above 30%
Bed_rails_top_right_down - The top right rail is down
Bed_rails_top_left_down - The top left rail is down
Bed_rails_bottom_right_down - The bottom right rail is down
Bed_rails_bottom_left_down - The bottom left rail is down
Visitor_in_room - Non-staff person is in the room

In another example, the bed rails can be associated with states of being up as opposed to being down.
As such, the labels would be Bed_rails_top_right_up, Bed_rails_top_left_up, Bed_rails_bottom_right_up, and Bed_rails_bottom_left_up. It is noted that at least some of the states (e.g., state labels) may be mutually exclusive while others may not be. For example, the patient cannot be both on the bed lying on his/her back (state label Patient_on_bed_back) and out of bed (state label Patient_out_of_bed) at the same time. Some of the labels can be simultaneously detected in the same image. While binary values are described above as being output from the ML model, in another example, the ML model may be trained to output a confidence level (such as a percent value) for each state. As such, the patient may be inferred to be both on his/her back and out of bed, with different degrees of confidence. In an example, if the confidence level is below a certain confidence threshold (e.g., 30% or some other percent), then the detected state can be ignored. At406, the technique400stores the current state. In an example, the technique400can store the outputs of the ML model. In an example, the technique400can store the current state labels corresponding to the output values of the states. A timestamp of obtaining the state can be associated, and stored, with the state. The timestamp can be associated with each of the state values. The timestamp can be the time of receiving the image from the camera, the time that the camera captured the image, the time that the state was obtained at404, or a combination thereof. The state (e.g., the state labels) and associated timestamp(s) can be stored in a memory, such as the memory204ofFIG.2. At408, the technique400retrieves the last previously saved states from the memory. The last previously saved states are retrieved so that they can be compared to the states obtained at404. In some situations, last previously saved states may not be available. Such may be the case when the image being processed at402is a first image received for the monitored environment. For example, when a new patient is in the room, any stored states may be reset (e.g., archived, deleted, etc.) and obtaining current states at404begins anew. For example, when a new monitoring shift for the same patient is started or the monitoring device is reset, there may not be last previously saved states available. As such, the last previously saved states may be an empty state or some value indicating that last previously saved states do not exist. At410, in some implementations, the technique400may determine at least one state based on labels or states obtained from the image. As further described below with respect to the state of "Reposition," such a state cannot be obtained directly from the image. Rather, such a state is inferred based on further processing (e.g., rules and/or configurations) of the state or state labels obtained from the image. In some implementations, and further described below, the further processing may be performed by/at a server. In some implementations, the monitoring device and the server may perform further processing to infer different states from the states obtained using the ML model. At412, the technique400determines whether there are any state changes. To illustrate, and without loss of generality, assume that the last previously saved states include the labels Patient_on_bed_back and Bed_inclined, and the current state includes Patient_on_bed_back and Bed_rails_top_left_up. As such, there are state changes corresponding to the labels Bed_inclined and Bed_rails_top_left_up.
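A minimal sketch, in Python, of translating the ML model's per-label outputs into state labels and of the state-change comparison at412is given below. The function names, the subset of Table I labels, the 0.3 confidence threshold, and the use of a symmetric difference to represent the state changes are illustrative assumptions based on the description above, not a definitive implementation.

```python
# Illustrative sketch only; names, thresholds, and the label subset are assumptions.
STATE_LABELS = [
    "Patient_on_bed_back",         # first state: patient lying on his/her back
    "Patient_on_bed_left",         # second state: patient lying on his/her left side
    "Bed_rails_bottom_left_down",  # third state: bottom-left rail down
    # ... remaining labels of Table I
]

CONFIDENCE_THRESHOLD = 0.3  # e.g., 30%; outputs below this are ignored


def labels_from_outputs(outputs):
    """Translate per-label outputs (0/1 values or confidences) into detected state labels."""
    return {
        label
        for label, value in zip(STATE_LABELS, outputs)
        if value >= CONFIDENCE_THRESHOLD
    }


def state_changes(current_labels, previous_labels):
    """Labels whose detection status changed between two images (block 412).

    When no previously saved states exist, every current label counts as a change.
    """
    if previous_labels is None:
        return set(current_labels)
    return set(current_labels) ^ set(previous_labels)


current = labels_from_outputs((1, 0, 1))            # the (1, 0, 1) example above
previous = {"Patient_on_bed_back", "Bed_inclined"}  # hypothetical last saved state
print(sorted(state_changes(current, previous)))
# ['Bed_inclined', 'Bed_rails_bottom_left_down']
```

In this sketch, a label that disappears between images (e.g., Bed_inclined) counts as a change just as a newly detected label does, which matches the illustration given above for block 412.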
In the case that last previously saved states do not exist, the technique400determines that there is a change with respect to each of the labels of the current state. At414, the technique400sends a notification of the state changes. In an example, the notification can be sent to a server, such as the server108ofFIG.1. In an example, a notification may be sent directly to a user device, such as one of the user devices106ofFIG.1. In an example, the notification may be sent to a user and received by the user at the user's user device106. The server may perform additional processing (e.g., further state detection) based on the received notification. For example, the server may perform additional processing with respect to monitored conditions, as described below. In an example, the server can determine how to handle the state changes based on configurations and settings for alerts, documentation, audit reporting, some other purpose, or a combination thereof. FIG.5is an example of a flowchart of a technique500for handling a monitored condition according to implementations of this disclosure. As alluded to above, some of the states can be directly determined by the classification labels of an image or video frame. However, some states require further processing. Such processing can be carried out at a server, such as the server108ofFIG.1. In another example, such further processing can be performed at the monitoring device. The technique500can be implemented, partially or fully, by a computing device, such as the computing device200ofFIG.2. The technique500can be implemented as computer instructions that may be stored in a memory, such as the memory204ofFIG.2. The computer instructions can be executed by a processor, such as the processor202ofFIG.2. To illustrate, and without loss of generality, a patient is to be monitored to make sure that the patient will not develop bedsores (e.g., a monitored condition). If a patient lies on one side of his/her body (e.g., the back) for more than a threshold time (e.g., two hours), then the patient should be repositioned to another side (e.g., the left side) and must remain on the other side (e.g., the left side) for at least another threshold time (e.g., 15 minutes) before returning to the one side (e.g., the back). If the patient remains on the other side (e.g., the left side) for longer than the threshold time, then the clock resets with respect to developing bedsores. That is, the clock resets with respect to detecting the monitored condition (e.g., detecting for the possibility of bedsores) with respect to the patient being on the one side (e.g., the back). If the patient returns to the one side (e.g., the back) within the threshold time (e.g., in less than 15 minutes), then any additional time on the one side (e.g., the back) would be added to the time that the patient was on the one side (e.g., the back) before being repositioned to the other side (e.g., the left side). While further processing of state changes is described, for illustrative purposes, with respect to bedsores, the disclosure is not so limited, and further and other processing is contemplated with respect to other state changes.
Thus, the technique500, with respect to a monitored condition that is bedsores, can be summarized as getting a last position (i.e., a state) of the patient; adding the time that the patient has been in this state (position); if the patient has been in this state for more than a first threshold time (e.g., two hours or some other time), then recording a state of "Reposition" as the patient needs to be repositioned to prevent bedsores; and if the patient is in a new position, determining whether the patient has been in the new position for at least a second threshold time (e.g., 15 minutes or some other time) and, if so, resetting the tracking times. Recording a state of "Reposition" can include sending an alert, such as to a nurse, to reposition the patient. At502, the technique500receives a first state. The first state can be received from the monitoring device as described above with respect toFIG.4. At504, the technique500stores the first state. A timestamp can be stored with the first state. The timestamp can be the time that the first state was received at502. The timestamp can be received at502with the state, as described above. If the technique500determines (not shown) that the first state relates to a monitored condition, then the technique500proceeds simultaneously to506and512; otherwise, the technique500proceeds only to512. At506, the technique500sets a first tracking time (a first timer) for the first state. That is, the technique500sets a clock to track the amount of time that the monitored state is set. If a first timer is already associated with (e.g., started for, etc.) the first state, then no new timer is set (e.g., initiated, activated, enabled, etc.). Rather, the first timer can be restarted if the first timer is paused. In an example, the first timer may be paused when a second state is received. In another example, the first timer may not be paused when the second state is received. The first timer is reset as described below with respect to524ofFIG.5. At508, the technique500monitors the duration of the first state. For example, in a continuous manner (e.g., every 30 seconds, 1 minute, 5 minutes, or some other time), the technique500determines whether, for example, the difference between a current time and the timestamp associated with the first state is greater than a threshold time (TH1). If the first state has been active for more than the threshold time, the technique500proceeds to510; otherwise, the technique500can sleep until the next time that it performs the block508. At510, the technique500sends an alert of the state. For example, with respect to the monitored condition being related to bedsores, the alert can be according to the template "the patient has been in the state <state> for more than <TH1>," where <state> and <TH1> are placeholders. As such, the alert can be "the patient has been in the state Patient_on_bed_back for more than 2 hours." The alert can simply be "Reposition the patient." Other alerts are possible. In an example, the technique500can regularly resend (not shown) the alert until the technique500receives a change in the state. At512, the technique500receives a second state. The second state can be received from the monitoring device, as described with respect toFIG.4. If the second state relates to the monitored condition, then the technique500proceeds to516-520, which are similar to506-510, respectively. For example, the second state can be that the patient is now on his/her right side whereas the first state can be that the patient was on his/her back.
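A minimal sketch of this repositioning-timer logic, covering blocks 502 through 524 (including the reset at 522/524 that is described in the next paragraph), is given below, assuming a Python implementation on the server. The class and function names, the choice to pause the first timer while a different position is reported, and the carry-over of time already spent in the new position are illustrative assumptions, not a definitive implementation of technique 500.

```python
# Illustrative sketch only; names and implementation choices are assumptions.
import time

TH1 = 2 * 60 * 60   # first threshold time, e.g., two hours in the tracked position
TH2 = 15 * 60       # second threshold time, e.g., 15 minutes in a new position


class RepositionTracker:
    """Tracks how long the patient stays in one position (e.g., Patient_on_bed_back)."""

    def __init__(self):
        self.tracked = None        # the position tracked against TH1 (the first state)
        self.tracked_time = 0.0    # the first timer
        self.new_since = None      # when a different position was first reported
        self.prev_label = None
        self.prev_time = None

    def on_position(self, label, now=None):
        now = time.time() if now is None else now
        elapsed = 0.0 if self.prev_time is None else now - self.prev_time

        if self.tracked is None:
            self.tracked = label                       # blocks 502-506: start tracking
        if label == self.tracked:
            if self.prev_label == self.tracked:
                self.tracked_time += elapsed           # block 508: accumulate time in position
            self.new_since = None                      # the paused first timer resumes
            if self.tracked_time > TH1:
                send_alert("Reposition the patient")   # block 510
        else:
            if self.new_since is None or label != self.prev_label:
                self.new_since = now                   # blocks 512/516: a new position appears
            elif now - self.new_since >= TH2:
                # Blocks 522/524: the new position persisted for TH2; reset the tracking
                # time and track the new position (carrying over time already spent in it).
                self.tracked = label
                self.tracked_time = now - self.new_since
                self.new_since = None

        self.prev_label, self.prev_time = label, now


def send_alert(message):
    """Placeholder for sending an alert, e.g., to a nurse's station."""
    print(message)
```

For example, feeding on_position("Patient_on_bed_back", t) once per minute would trigger the block 510 alert once the accumulated back-lying time exceeds two hours, while a switch to "Patient_on_bed_right" that persists for at least 15 minutes resets the tracking.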
At522, the technique500determines whether the second state has been active for longer than a second threshold time (TH2). If so, then the technique500proceeds to524to reset the tracking time (e.g., the first timer) associated with the first state. If the technique500does not determine that the second state has been active for longer than the second threshold time, then the technique500can sleep for a period of time and then return to522. FIGS.6A-6Billustrate examples of images and corresponding state labels according to implementations of this disclosure. When images610-660are presented to an ML model, which is as described above, the ML model can output the indicated labels of Table I. With respect to the image610, at least the labels Patient_on_bed_right, Bed_rails_top_right_down, and Bed_rails_bottom_right_down are output. As is shown in the image610, a top-right rail612and a bottom-right rail614of a bed615, which are hidden from view, are down. On the other hand, a top-left rail616and a bottom-left rail618of the bed615are up. With respect to the image620, at least the label Bed_empty is output because the patient is not in the bed615. With respect to the image630, at least the labels Bed_inclined (because a head-support section632is inclined up over 30 degrees), Patient_on_bed_back (because a patient634is lying on his back), and Chair_empty (because, even though a chair636is partially in the image630, the ML model infers that it is empty) are output. If the bed rail states are described in terms of whether they are up, as mentioned above, then the ML model would output the labels Bed_inclined, Patient_on_bed_back, Bed_rails_top_right_up, Bed_rails_top_left_up, Bed_rails_bottom_right_up, Bed_rails_bottom_left_up, and Chair_empty because the top-right rail612, the bottom-right rail614, the top-left rail616, and the bottom-left rail618are all in the up (i.e., raised) position. With respect to the image640, at least the labels Patient_getting_out_bed and Bed_rails_bottom_left_down are output. Alternatively, if the bed rail states are described in terms of whether they are up, then the labels Patient_getting_out_bed, Bed_rails_top_right_up, Bed_rails_top_left_up, and Bed_rails_bottom_right_up can be output. With respect to the image650, at least the label Patient_getting_out_chair is output. With respect to the image660, at least the labels Patient_on_bed_back and Chair_empty may be output. FIG.7is an example of a flowchart of a technique700for monitoring a room of a patient according to an implementation of this disclosure. The technique700can be used to detect the state of a hospital room of the patient or a portion therein. The technique700monitors for changes in the state. The technique700uses images of the room, which are captured by a camera (such as a camera of a monitoring device), to detect state changes. Image analysis can be used to detect the states. The image analysis can be performed by an ML model, which can be a multi-label classification model. Upon detecting a change in the state, the technique700can send a notification of the state change. The notification can be sent to a server, such as the server108ofFIG.1. The monitoring device performing the technique700need only send notifications of the state changes to the server (such as for logging or further processing), thereby reducing network traffic.
The technique700can be implemented by a monitoring device, such as the monitoring device104ofFIG.1or the monitoring device300ofFIG.3, which can be placed in the monitored environment, such as the monitored environment102ofFIG.1. The technique700can be implemented, partially or fully, by a computing device, such as the computing device200ofFIG.2. The technique700can be implemented as computer instructions that may be stored in a memory, such as the memory204ofFIG.2. The computer instructions can be executed by a processor, such as the processor202ofFIG.2. As mentioned above, the monitoring device may not itself include a processor but may be connected to a processor. Thus, the technique700can be implemented, partially or fully, by the processor to which the monitoring device is connected. At702, the monitoring device obtains a video stream of at least a part of the room of the patient. In an example, the video stream may be a sequence of images that are captured at regular time intervals. At704, the monitoring device obtains from a first picture of the video stream a first state of the part of the room of the patient. As described above, the first state can include respective states associated with different aspects of the room. As such, the first state can include states related to the patient, different parts of the patient's bed, and so on as described above. At706, the monitoring device obtains, from a second picture of the video stream, a second state of the part of the room of the patient. The second state can be as described with respect to the first state. At708, in response to identifying by the monitoring device a difference between the first state and the second state, the technique700sends a notification based on the difference, such as described with respect toFIG.4. In an example, the first state and the second state can each be obtained using a multi-label picture classification model, as described above. In an example, the first state or the second state can include at least one of bed-rail states, bed-incline states, or patient-position states. The bed-rail states can include respective states indicating positions of a top right rail, a top left rail, a bottom right rail, or a bottom left rail. In an example, the patient-position states can include respective states indicating whether the patient is lying down on a left side of the patient, whether the patient is lying on a right side of the patient, whether the patient is lying on a back of the patient, whether the patient is getting out of a bed, or whether the patient is out of the bed. In an example, and as described with respect toFIG.5, the technique700can further include setting a monitored condition of the patient based on the first state; and resetting the monitored condition in response to determining that the second state persists for a threshold time. In an example, the monitored condition can relate to bedsores. In an example, the technique700can store images from which state information is obtained (i.e., images that are input to the ML model) in a memory of the monitoring device. The images can be stored in association with the state. For example, and referring toFIG.4again, the image can be stored at406. In an example, one or more of the stored images can be retrieved from the storage. For example, in response to a request (such as from a server and/or a user device) for a state stored at or within a certain time, the corresponding image(s) may also be returned to the requestor. 
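A minimal sketch of the monitoring-device side of technique 700 (blocks 702-708) is given below in Python. The camera, classifier, storage, and notification callables are hypothetical stand-ins for the components described above; only the overall flow (classify successive pictures with a multi-label model and notify the server only when the obtained state differs from the previous one) follows the description, and the shape of the notification payload is an illustrative assumption.

```python
# Illustrative sketch only; camera, classify_picture, notify_server, and store_state are
# hypothetical stand-ins, not part of this disclosure.
def monitor_room(camera, classify_picture, notify_server, store_state):
    """camera yields successive pictures; classify_picture returns a set of state labels."""
    previous_state = None
    for picture in camera:
        state = classify_picture(picture)        # blocks 704/706: multi-label classification
        store_state(picture, state)              # optionally keep the picture with its state
        if previous_state is None or state != previous_state:
            # Block 708: only state differences are reported, reducing network traffic.
            changed = sorted(state ^ (previous_state or set()))
            notify_server({"changed": changed, "current": sorted(state)})
        previous_state = state
```

With a camera that yields, for example, one picture per second and a classifier trained on the Table I labels, this loop sends at most one notification per state change rather than one per picture, which is the bandwidth-saving behavior described above.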
Another aspect of the disclosed implementations includes a system that includes a server and a monitoring device. The monitoring device can be configured to obtain, at a first time, a first image of at least a part of the room; identify a first state of the patient based on the first image; obtain, at a second time, a second image of the at least the part of the room; identify a second state of the patient based on the second image; and, in response to the first state being different from the second state, send a first notification to the server. The server can be configured to, in response to receiving the first notification, set a monitored condition of the patient to a first value. In an example, the monitored condition can relate to bedsores, the first state can indicate whether the patient is lying on a first body side, and the second state can indicate whether the patient is lying on a second body side that is different from the first body side. In an example, the server can be further configured to, in response to the monitored condition having the first value for more than a threshold amount of time, send an alert. In an example, the server can be further configured to receive a second notification that includes a third state of the patient obtained at a third time; and determine whether to set the monitored condition to a second value based on whether a time difference between the first time and the third time exceeds a threshold. In an example, the system can further include a user device that is configured to display changes over time of at least one of the first state or the second state. FIG.8is an example of a display800of state information according to implementations of this disclosure. The display800can be displayed on a user device, such as the user device106ofFIG.1. For example, the display800can be displayed on a display at a nurses' station. The display800can be generated based on the state change information received by a server, such as the server108ofFIG.1. In an example, a user action at the user device can cause the display800to be generated at the server and displayed at the user device. While the display800ofFIG.8includes certain information and has a certain layout, the disclosure herein is not so limited and a display according to implementations of this disclosure can include more, fewer, or other information, or a combination thereof and/or can have a different layout. The display800includes identification information802, which can include the name of the patient for whose room the state information is being displayed. The display800includes an abstract view803of the room of the patient. The abstract view803can be displayed instead of a real image of the room for privacy reasons. In another example, actual images captured by the camera of the monitoring device can be displayed in the display800. The abstract view803can be generated from one or more templates corresponding to different states. For example, if the state obtained from the ML model includes the labels Patient_out_of_bed, Bed_inclined, and Chair_empty, then the abstract view803can include an image template804of a bed that is empty and inclined and an empty chair template805. The image templates that are used can be laid out according to the actual arrangement in the actual image. The display800includes a history806. The history806can be a scrollable table that displays the room states over time, which are saved by the server. The history806of the display800has a unit of measure of 1 hour.
However, a user of the display800can zoom in and out to show more granular (e.g., down to the minute or less) or coarser state information. In an example, the history806can include a row for each of the states (e.g., labels) that can be obtained from the ML model. The history806can include rows for states that are further determined by the server based on the state changes received (i.e., states that require server processing, such as described with respect toFIG.5). The time periods during which the state was detected can be highlighted in the history806. For example, a row808shows that the Top-Right Bed Rails were up (i.e., the label Bed_rails_top_right_up) from 8:00 AM to 12:00 PM; and a row810shows that the patient needed repositioning during the 8:00 AM hour. The patient could have needed repositioning for reasons described with respect toFIG.5. The abstract view803can be displayed based on the particular time point selected by the user. In an example, the display800can be automatically updated, such as when a state change is received at the server. The display800can be updated according to the state change information. In an example, the display800can include video-player-like controls allowing the user to play, rewind, or pause the display800. For example, the user may click to select 8:00 AM in the history806and then select the play control. The abstract view803can then update to display views corresponding to the state changes starting at 8:00 AM. FIG.9is an example of a flowchart of a technique900for monitoring a room of a patient according to implementations of this disclosure. The technique900can be used to detect active states (e.g., conditions of interest) relating to the patient room, objects therein, the patient, other persons therein, or other aspects of the patient room (collectively, and for brevity, active states of the patient room). Detecting an active state can mean identifying (e.g., determining, inferring, etc.) that a state of interest has changed from an inactive state (e.g., not detected) to active (e.g., detected). The technique900can detect the active states by examining images of an image stream of the patient room. An active state can be a condition of interest regarding the patient room such that the condition was not detected in an examined image of the image stream but is detected in the next image to be examined. In an example, examining an image can mean using the image as an input to a machine learning model, as described herein. While the technique900is described with respect to monitoring a room of a patient, the technique900can be used to monitor any type of monitored environment. The technique900monitors for changes in the state. The technique900uses images of the monitored environment captured by a camera (such as a camera of a monitoring device) to detect active and persistent states. Image analysis can be used to detect the active and persistent states. Upon detecting an active or a persistent state, the technique900can send a notification of the active or persistent state. The notification can be sent to a server, such as the server108ofFIG.1. The monitoring device performing the technique900need only send notifications of the active or persistent state to the server, thereby reducing network traffic and reducing human overload, as described herein.
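The distinction drawn here between a mere state change and an active state (a transition from not detected to detected) can be sketched as follows. Unlike the symmetric comparison sketched above for block 412, only transitions to detected count. The function name and the example labels (of the kind listed in Table II below) are illustrative assumptions.

```python
def newly_active(current_labels, previous_labels):
    """Labels that transitioned from not detected to detected between two examined images.

    If no previously saved states exist, every currently detected label is treated as
    newly active.
    """
    if previous_labels is None:
        return set(current_labels)
    return set(current_labels) - set(previous_labels)


# Hypothetical examples using labels of the kind listed in Table II below:
print(newly_active({"Patient_Sitting", "Patient_Drinking"}, {"Patient_Sitting"}))
# {'Patient_Drinking'}
print(newly_active({"Patient_Sitting"}, {"Patient_Sitting", "Patient_Waving"}))
# set()  (the patient stopped waving: a change, but not a newly active state)
```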
The technique900can be implemented by a monitoring device, such as the monitoring device104ofFIG.1or the monitoring device300ofFIG.3, which can be placed in the monitored environment, such as the monitored environment102ofFIG.1. The technique900can be implemented, partially or fully, by a computing device, such as the computing device200ofFIG.2. The technique900can be implemented as computer instructions that may be stored in a memory, such as the memory204ofFIG.2. The computer instructions can be executed by a processor, such as the processor202ofFIG.2. As mentioned above, the monitoring device may not itself include a processor but may be connected to a processor. Thus, the technique900can be implemented, partially or fully, by the processor to which the monitoring device is connected. At902, the technique900receives an image. The image can be received from a camera, which may be part of or is connected to the monitoring device. The image can be an image of an image stream received from the camera. While not specifically shown inFIG.9, the technique900can be performed on successive images of the image stream received from the camera. In an example, the camera can be directed, such as by the monitoring device, to capture single images, such as every certain period of time (e.g., 500 milliseconds, 1 second, 2 seconds, or some other period of time). In the case that the image stream is a video stream (e.g., images captured at a rate of 24 frames per second or some other rate), the technique900can be carried out on every frame of the video stream. In another example, the technique900can be carried out on less than all the frames of the video stream. For example, the technique900may process a subset of the images of the video stream, such as every 10th frame or 20th frame. In an example, at least the images from which state information is obtained can be saved in a memory of the monitoring device. At904, the technique900applies image classification to an image to obtain current states of the monitored environment. Obtaining current states means obtaining state values of the states. In an example, the current states (i.e., the values of the current states) can be obtained as a set of state labels where each state label corresponds to a value of a respective monitored condition (i.e., the monitored state). The current state labels can be obtained from an ML model, such as a multi-label image classification model, which can be as described herein. The current states of the monitored environment (also referred to, simply, as state or room state), as used herein, refer to the collection of individual states, or a subset thereof, to be inferred (e.g., is of interest) and that the ML model is trained to detect. As mentioned above, the ML model can be a multi-label image classification model. In the ML model, an output may be associated with each possible state label. In an example, the ML model can output a first value (e.g., 1, YES, TRUE, etc.) for a label if the state associated with the label is detected in an image; and can output a second value (e.g., 0, NO, FALSE, etc.) if the state is not detected. The ML model can be thought of as outputting, for each state of the state model, a corresponding value (i.e., a label).
To illustrate, and without loss of generality, assume that the current states include a first state (corresponding to whether the patient is sitting down), a second state (corresponding to whether the patient is lying down), and a third state (corresponding to whether the patient is getting up from sitting or lying down). As such, when an image of the room (e.g., an image of a part of the room) that shows the patient lying down is input to the ML model, the ML model outputs the tuple (0, 1, 0). The values output by the ML model are not particularly limited. For example, instead of (0, 1, 0), the ML model can output (No, Yes, No), ("not sitting down," "lying down," "not getting up"), (FALSE, TRUE, FALSE), or some other values. In an example, the outputs of the ML model can be translated into human-readable values (or labels) and include only those states that are actually identified. For example, instead of the tuple (0, 1, 0), the human-readable output can be: "Lying down," as described below with respect to Table II. The human-readable output can be more descriptive, such as "The patient is lying down on the bed." In an example, the states that are not detected can be omitted from the human-readable output. In another example, the values corresponding to all detectable states can be output. Table II illustrates an example of states (and corresponding label classifications) that can be detected (e.g., inferred) using the ML model with respect to a monitored environment that is a hospital room. That is, Table II describes an example of the states that the ML model may be trained to detect. More accurately, Table II describes the active states corresponding to monitored states. The states can be easily deduced from Table II and are not specifically described herein. For example, it can be easily inferred from the state label Patient_not_visible that the state is, or corresponds to, whether the patient is visible. It is noted that the disclosure herein is not limited to the state labels described herein and implementations according to this disclosure can infer fewer states, more states, other states, or a combination thereof. In an example, the ML model can be trained to detect at least some of the states of the union of the states of Table I and Table II.

TABLE II
Label - Description
Patient_Stationary_on_Bed - The patient is in the same previously detected position
Patient_not_visible - The patient is not visible
Others_in_Room - There are multiple people in the room
Patient_Drinking - The patient is drinking
Patient_Eating - The patient is eating
Patient_Getting_up - The patient is about to get up from sitting or lying down
Patient_Lying_Down - The patient is lying down on the bed
Patient_Nude - The patient seems to be partially or fully nude
Patient_on_Floor - The patient has fallen down
Patient_Sitting - The patient is sitting down
Patient_Standing - The patient is standing up
Patient_Walking - The patient is walking
Patient_Waving - The patient is waving at the camera
Unknown - The ML model could not identify any states

With respect to the Patient_Stationary_on_Bed state, one or more previous images may also be used as input to the ML model in addition to a current image. The ML model can be trained to output whether the patient is still in the same position as in the one or more previous images. It is noted that at least some of the states (e.g., state labels) may be mutually exclusive while others may not be. For example, the patient cannot be both lying down (state label Patient_Lying_Down) and not in view (state label Patient_not_visible) at the same time.
Some of the labels can be simultaneously detected in the same image. In an example, the ML model may be trained to output a confidence level (such as a percent value) for each state. As such, the patient may be inferred to be both on his/her back and out of bed, with different degrees of confidence. In an example, if the confidence level is below a certain confidence threshold (e.g., 30% or some other percent), then the detected state can be ignored. At906, the technique900stores the current detected states. The technique900can store the outputs of the ML model. For example, the technique900can store the current state labels corresponding to the output values of the states. In an example, a timestamp of obtaining the current states can be associated, and stored, with the states. The timestamp can be associated with each of the state values. The timestamp can be the time of receiving the image from the camera, the time that the camera captured the image, the time that the state was obtained at904, or a combination of timestamps thereof. The states (e.g., the state labels or state values) and associated timestamp(s) can be stored in a memory, such as the memory204ofFIG.2. At908, the technique900retrieves the last previously saved states (e.g., state values) from the memory. The last previously saved states are retrieved so that they can be compared to the states obtained at904. In some situations, last previously saved states may not be available. Such may be the case when the image being processed at902is a first image received for the monitored environment. For example, when a new patient is in the room, any stored states may be reset (e.g., archived, deleted, etc.) and obtaining current states at904begins anew. For example, when a new monitoring shift for the same patient is started or the monitoring device is reset, there may not be last previously saved states available. As such, the last previously saved states may be an empty state or some value indicating that last previously saved states do not exist. As mentioned above, some conditions of interest can include a temporal element. That is, the conditions (e.g., states) may be identified as active states if they persist for respective durations of time. For example, a state may include whether the patient has moved within the last two hours. As mentioned above, if this state is active, then the patient should be repositioned to prevent bedsores. For example, a state may include whether the patient has not been detected in the images for a specified duration of time (e.g., 15 minutes or some other duration of time). As the patient may have fallen (such as in the bathroom), it is critical to identify such an active state. In an example, the ML model may have an architecture that includes a memory, such as a recurrent neural network, which can be trained to identify a state as active if the state persists for a duration of time. In another example, a respective time duration can also be associated with at least some of the states. The technique900can reset to zero the time duration associated with a state responsive to the value output by/from the ML model being different from the immediately preceding output for the state. The technique900can add the time between the immediately preceding output and a current output to the time duration.
For example, assume that images are processed at time steps of Δt and that at times 0, Δt, 2Δt, 3Δt, and 4Δt the patient was detected to be visible, visible, visible, not visible, and not visible, respectively, with respect to the state "is the patient visible." As such, at the time 2Δt, a total duration of 2Δt can be associated with the value Patient_visible of the state "is the patient visible"; at time 3Δt, the total duration of the Patient_not_visible value is reset to zero; and at time 4Δt, a total duration of 2Δt can be associated with the value Patient_not_visible of the state "is the patient visible." At910, in some implementations, the technique900may determine at least one state based on labels or states obtained from the image and the stored states. As described herein, whether a state is active may not be obtained directly from the image. Rather, such a state is inferred based on further processing (e.g., rules and/or configurations) of the state or state labels obtained from the image. At912, the technique900determines whether there are any state changes. To illustrate, and without loss of generality, assume that the last previously saved states include the label Patient_Sitting, and the current state includes Patient_Sitting and Patient_Drinking. As such, there is a state change corresponding to the label Patient_Drinking. In the case that last previously saved states do not exist, the technique900determines that there is a change with respect to each of the labels of the current state. Additionally, the technique900determines whether persistent states are identified by comparing the total durations associated with monitored states with the respective stored durations. At914, the technique900sends a notification of the state changes. More specifically, the technique900sends a notification of detected active states or persistent states. In an example, the notification can be sent to a server, such as the server108ofFIG.1. In an example, a notification may be sent directly to a user device, such as one of the user devices106ofFIG.1. The notification can include an image of the monitored environment. The image can be the image that caused the active states or persistent states to be detected. The notification can include the image processed at902. In another example, the technique900can obtain another image from the camera and transmit the new image in the notification. In an example, the technique900can transmit, in the notification, the active and the persistent states identified. In an example, the notification can be transmitted to a user device, such as the user device106ofFIG.1. From the perspective of the server and the user device, there may not be any distinction between an active state and a persistent state. As such, and for brevity, an active state and a persistent state are both referred to as an active state. As such, in the case of a persistent state, the technique900can transmit an active state to the server. The server may perform additional processing (e.g., further state detection) based on the received notification. For example, the server may perform additional processing with respect to monitored conditions, as described below. In an example, the server can determine how to handle the state changes based on configurations and settings for alerts, documentation, audit reporting, some other purpose, or a combination thereof. The server can transmit the notification to the user device.
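Referring back to the Δt example above, the per-state duration tracking can be sketched as follows. The ordering applied here (reset on a value change, then accrue the elapsed interval toward the new value) is an assumption that reproduces the total of 2Δt at time 4Δt; it is not the only possible reading of the description, and the function name is illustrative.

```python
# Illustrative sketch only; the reset-then-accrue ordering is an assumption.
def track_duration(observations, dt):
    """observations: successive values of one state (e.g., True/False for "is the patient
    visible") taken every dt seconds. Returns (value, total_duration) after each observation."""
    totals = []
    previous = None
    duration = 0.0
    for value in observations:
        if previous is None:
            duration = 0.0   # first observation: nothing has elapsed yet
        elif value != previous:
            duration = 0.0   # the value changed: reset the duration to zero...
            duration += dt   # ...then accrue the elapsed interval toward the new value
        else:
            duration += dt   # same value as the immediately preceding output
        totals.append((value, duration))
        previous = value
    return totals


# The example above (visible at 0, Δt, 2Δt; not visible at 3Δt, 4Δt), with Δt = 1:
print(track_duration([True, True, True, False, False], dt=1.0))
# [(True, 0.0), (True, 1.0), (True, 2.0), (False, 1.0), (False, 2.0)]
```

A persistent state can then be identified at block 912 when the duration associated with a value (e.g., Patient_not_visible) reaches the respective stored duration for that monitored state.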
In an example, the server can transmit instructions to the user device to display at least one of the image or the active state on a display of the user device. The instructions can include instructions to highlight the image on the display of the user device. In an example, if an image is classified as including nudity (e.g., that the patient seems to be partially or fully nude), then the monitoring device can blur (or obscure) at least the private parts of the patient in the image before storing or transmitting the image. In an example, if the server receives an image with an active state of Patient_Nude, the server may blur (or obscure) at least the private parts of the patient in the image (even if the monitoring device already blurred (or obscured) the private parts of the patient). FIG.10is an example of a user interface1000for attention focusing for multiple-patient monitoring according to implementations of this disclosure. The user interface1000can be displayed on a display of a user device, such as the user device106ofFIG.1. The user interface1000displays images of patient rooms, such as images1002(patient room number110),1004(patient room number112), and1006(patient room number114). The user interface1000illustrates that the user device received instructions to display notifications related to the patient room numbers112and114(i.e., images1004and1006, respectively). As mentioned, in an example, the instructions can be received from the server. In another example, notifications can be received from the monitoring devices of respective patient rooms (i.e., the monitoring devices in the patient rooms numbered112and114). The image1004is the image that the technique900executing in the monitoring device of the patient room number112transmitted in response to detecting an active state (i.e., that the patient has not moved in 2 hours). In an example, an indication or a description of the active state can be displayed in the user interface1000. In a non-limiting example, the indication or the description of the active state can be overlaid on the image, as shown with respect to an active state description1014. Other ways of displaying or indicating the active states in the user interface1000are possible. The image1006is the image that the technique900executing in the monitoring device of the patient room number114transmitted in response to detecting an active state (i.e., that the patient has not moved in 2 hours). An active state description1018is shown as overlaid on the image1006. To focus the attention of the user monitoring the user interface1000, the images1004and1006can be highlighted. In an example, the highlight can be a solid border that is displayed around an image to be highlighted, such as borders1012and1016. In another example, the border can be a blinking border. In an example, the highlight can depend on the active state. For example, different border colors may be used for different active states. Other ways to draw the attention of the user to newly updated (e.g., received and displayed) images are possible. In an example, the highlight may persist for a predefined period of time (e.g., 10 seconds, 15 seconds, or some other time). In another example, the highlight persists until cleared by the user. For example, the user may single click on an image to disable (e.g., hide, turn off, etc.) the highlight of the image. Other ways of disabling a highlight of an image are possible.
In an example, a reset user interface component1024may be available, which, when pressed, disables all highlights on all images. In an example, the user can obtain an image feed from a patient room. For example, in response to double clicking the image1004(or some other user interface action), an image feed can be displayed in a window1026. In another example, the image feed can be displayed in place of the image1004. While not specifically shown inFIG.10, the user may be able to display multiple image feeds in the user interface1000. An image feed from a patient room can be received from the monitoring device of that patient room. In an example, the user device can receive the image feed from the monitoring device via peer-to-peer communications. In another example, in response to a user action, the user device sends a request for the image feed to the server. The server can in turn request the image feed from the monitoring device. The server then transmits the image feed to the user device. In an example, the user interface1000can include a control1020and a control1022. In other examples, the user interface1000can include other controls. In response to the user exercising (e.g., pressing, clicking, etc.) the control1020, a list of all rooms that the user can monitor may be displayed and the user can select the rooms for which monitoring images are to be displayed in the user interface1000. In response to the user exercising the control1022, all highlights on all images of the user interface1000can be disabled. FIG.11is an example of a flowchart of a technique1100for monitoring a room of a patient according to an implementation of this disclosure. The technique1100detects active and persistent states of the room of the patient or a portion thereof. The technique1100uses images of the room, which can be captured by a camera (such as a camera of a monitoring device), to detect state changes. Image analysis can be used to detect the active and persistent states. The image analysis can be performed by an ML model, which can be a multi-label classification model. Upon detecting a change in the state, the technique1100can send a notification of the state change. The notification can be sent to a server, such as the server108ofFIG.1. The monitoring device performing the technique1100need only send notifications of the state changes to the server (such as for logging or further processing), thereby reducing network traffic. The technique1100can be implemented by a monitoring device, such as the monitoring device104ofFIG.1or the monitoring device300ofFIG.3, which can be placed in the monitored environment, such as the monitored environment102ofFIG.1. The technique1100can be implemented, partially or fully, by a computing device, such as the computing device200ofFIG.2. The technique1100can be implemented as computer instructions that may be stored in a memory, such as the memory204ofFIG.2. The computer instructions can be executed by a processor, such as the processor202ofFIG.2. As mentioned above, the monitoring device may not itself include a processor but may be connected to a processor. Thus, the technique1100can be implemented, partially or fully, by the processor to which the monitoring device is connected. At1102, the monitoring device obtains an image stream of at least a part of the room of the patient. The image stream can be as described above. The technique1100can process (e.g., use, etc.) images of the image stream, as they are received, to identify active and persistent states, as described herein.
At1104, the technique1100obtains, from a first picture of the image stream, first states of the part of the room of the patient. As described above, the first states can include, or can mean, respective state values associated with different aspects of the room. As such, the first states can include state values related to the patient (e.g., a state of the patient or an activity of the patient), different parts of the bed of the patient, other persons in the room, and so on, as described above. At1106, the monitoring device obtains, from a second picture of the image stream, second states of the part of the room of the patient. The second states can be as described with respect to the first states. At1108, responsive to identifying a state difference between the first states and the second states, the technique1100transmits a first notification to a server. The first notification can include the second picture and the state difference, such as described with respect toFIG.9. The state difference, as used herein, refers to active or persistent states. For example, and as described above with respect to the state of whether the patient is waving, an active state may be that the patient is waving. If the patient is subsequently detected to not be waving, which is a change from the previous value of the state, the change is not identified as a state difference that is transmitted to the server. In an example, the first states and the second states can each be obtained using a multi-label image classification model, as described above. In an example, the first states and the second states can each include at least one of an activity of the patient (e.g., values of states of the activity of the patient) and a state of the patient (e.g., values of states of the state of the patient). In an example, the state of the patient can include respective states indicating whether the patient is sitting, whether the patient is lying down, whether the patient is getting out of a bed, whether the patient is standing, whether the patient is walking, whether the patient is on a floor, more state values, fewer state values, or a combination thereof. In an example, the activity of the patient can include respective states indicating whether the patient is eating, whether the patient is drinking, whether the patient is waving, more state values, fewer state values, or a combination thereof. In an example, and as described with respect toFIG.9, the technique1100can further include obtaining, from a third image of the image stream, third states of the part of the room of the patient; responsive to determining that the third states include a monitored condition, recording a time associated with the monitored condition; obtaining, from a fourth image of the image stream, fourth states of the part of the room of the patient; and, responsive to determining that the fourth states include the monitored condition and that the monitored condition persisted for a threshold duration of time, transmitting a second notification to the server. The second notification can include an indication of the monitored condition. In an example, the second notification can also include the threshold duration of time of the monitored condition.
In an example, and as described with respect toFIG.9, the technique1100can further include obtaining, from a fifth image of the image stream, fifth states of the part of the room of the patient, where the fifth image is subsequent to the fourth image in the image stream; and, responsive to the fifth states not including the monitored condition, resetting the time associated with the monitored condition. Another aspect of the disclosed implementations includes a system that includes a server, a user device, and a monitoring device. The monitoring device can be configured to obtain an image stream of at least a part of the room, where the image stream includes a first image and a second image that is subsequent to the first image in the image stream; identify first states based on the first image; identify second states based on the second image; compare the first states to the second states to identify a first active state; and, in response to identifying the first active state, transmit a first notification to the server. The first notification can include the second image. The server can be configured to, responsive to receiving the first notification, transmit the second image to the user device. As mentioned above, active states encompass both active states and persistent states. In an example, the first notification can include the first active state and the server can be further configured to transmit, to the user device, the first active state. The second image can be displayed with a highlight on the user device. In an example, the image stream further includes a third image and a fourth image. The monitoring device can be further configured to identify a second active state in the third image; record a time of identifying the second active state; and, responsive to identifying the second active state in the fourth image and the second active state persisting for at least a threshold duration of time, transmit a second notification to the server. The second notification can include an indication (e.g., a description) of the second active state. In an example, the server can be further configured to receive, from the user device, a first request to display the image stream on the user device; transmit, to the monitoring device, a second request to transmit the image stream to the server; and transmit the image stream to the user device. In an example, the image stream can further include a third image subsequent to the second image. The monitoring device can be further configured to identify third states based on the third image; responsive to determining that the third states do not differ from the second states, not transmit the third image to the server; and, responsive to determining that the third states differ from the second states, transmit a second notification to the server, wherein the second notification comprises the third image. Another aspect is an apparatus for monitoring a monitored environment. The apparatus includes a camera and a processor. The processor can be configured to obtain an image stream of at least a part of the monitored environment; apply image classification to a first image of the image stream to obtain first classification labels; apply the image classification to a second image of the image stream to obtain second classification labels; identify state differences by comparing the first classification labels to the second classification labels; and, responsive to identifying state differences, transmit the state differences to a server.
The first classification labels and the second classification labels can each be obtained using a multi-label image classification model. In an example, the processor can be further configured to set a monitored condition of the monitored environment based on the first classification labels; apply the image classification to a third image of the image stream to obtain third classification labels; and, responsive to the third classification labels including an indication of the monitored condition and the monitored condition persisting for a threshold duration of time, transmit a notification of the monitored condition. In an example, the monitored condition indicates whether a patient has not moved in at least the threshold duration of time. In an example, the monitored condition indicates whether the patient has not been detected in an image in at least the threshold duration of time. As mentioned above with respect to FIGS. 4 and 9, an ML model (e.g., a multi-label classification model) can be used to infer the state of a monitored environment. In an example, the ML model can be a deep-learning convolutional neural network (CNN). In a CNN, a feature extraction portion typically includes a set of convolutional operations, which apply a series of filters to an input (e.g., an image), where each filter is typically a square of size k (without loss of generality). For example, in machine vision (i.e., the processing of an image of a patient's room), these filters can be used to find features in an input image. The features can include, for example, edges, corners, endpoints, and so on. As the number of stacked convolutional operations increases, later convolutional operations can find higher-level features. In the CNN, a classification portion is typically a set of fully connected layers. The fully connected layers can be thought of as looking at all the input features of an image in order to generate a high-level classifier. Several stages (e.g., a series) of high-level classifiers eventually generate the desired classification output. In a multi-label classification network, the number of outputs from the output layer can be equal to the number of desired classification labels. In an example, and as described above, each output can be a binary value indicating whether the state corresponding to the binary value is set or not set (e.g., on or off). As mentioned, a typical CNN network is composed of a number of convolutional operations (e.g., the feature-extraction portion) followed by a number of fully connected layers. The number of operations of each type and their respective sizes are typically determined during a training phase of the machine learning. As a person skilled in the art recognizes, additional layers and/or operations can be included in each portion. For example, combinations of Pooling, MaxPooling, Dropout, Activation, Normalization, BatchNormalization, and other operations can be grouped with convolution operations (i.e., in the feature-extraction portion) and/or the fully connected operations (i.e., in the classification portion). The fully connected layers may be referred to as Dense operations. As a person skilled in the art recognizes, a convolution operation can use a SeparableConvolution2D or Convolution2D operation.
A convolution layer can be a group of operations starting with a Convolution2D or SeparableConvolution2D operation followed by zero or more operations (e.g., Pooling, Dropout, Activation, Normalization, BatchNormalization, other operations, or a combination thereof), until another convolutional layer, a Dense operation, or the output of the CNN is reached. A convolution layer can use (e.g., create, construct, etc.) a convolution filter that is convolved with the layer input to produce an output (e.g., a tensor of outputs). A Dropout layer can be used to prevent overfitting by randomly setting a fraction of the input units to zero at each update during a training phase. A Dense layer can be a group of operations or layers starting with a Dense operation (i.e., a fully connected layer) followed by zero or more operations (e.g., Pooling, Dropout, Activation, Normalization, BatchNormalization, other operations, or a combination thereof) until another convolution layer, another Dense layer, or the output of the network is reached. The boundary between feature extraction based on convolutional networks and feature classification using Dense operations can be marked by a Flatten operation, which flattens the multidimensional matrix from the feature extraction into a vector. In a typical CNN, each of the convolution layers may consist of a set of filters. While a filter is applied to a subset of the input data at a time, the filter is applied across the full input, such as by sweeping over the input. The operations performed by this layer are typically linear/matrix multiplications. The activation function may be a linear function or a non-linear function (e.g., a sigmoid function, an arctan function, a tanh function, a ReLU function, or the like). Each of the fully connected operations is a linear operation in which every input is connected to every output by a weight. As such, a fully connected layer with N inputs and M outputs can have a total of N×M weights. As mentioned above, a Dense operation may generally be followed by a non-linear activation function to generate an output of that layer. An example of training the ML model is now described. In a first step, a respective number of images (e.g., 100, 1000, or any number of images) of every state that the ML model is to detect are collected. In a second step, each of the images is labeled (such as by a human) with the multiple labels that apply to the image. In a third step, a label list file that contains the image file names and associated labels is generated. In a fourth step, a certain percentage of the images (e.g., 10% of the images, or some other percentage) is allocated for training validation of the ML model. A certain percentage of the images can also be allocated to testing of the ML model. In a fifth step, the architecture of the ML model is defined. That is, a number of convolution layers, a number of fully connected layers, a size of the output layer, activation functions, and other parameters of the ML model are defined. It is noted that this step can be repeated until the ML model converges. In a sixth step, the training images are run through the defined model. In a seventh step, the trained model (e.g., the parameters and weights) is saved. The saved model can then be included in the monitoring device to perform, inter alia, the technique 900 of FIG. 9. In an example, the images may be pre-processed before being input to the ML model. In an example, the images may be resized.
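For illustration only, the training steps described above might be sketched as follows, assuming a simple comma-separated label list file, a roughly 10% validation split, and the input size and dense-layer sizes given in the next paragraph; the state names, filter counts, and file paths are assumptions of this sketch, not requirements of the disclosure:

```python
# Illustrative sketch of the seven training steps, not the disclosed pipeline.
import csv
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from PIL import Image

STATE_LABELS = ["sitting", "lying_down", "getting_out_of_bed", "standing", "walking",
                "on_floor", "eating", "drinking", "waving", "not_detected"]

def load_label_list(path):
    """Steps 1-3: read the label list file ("filename,label1|label2|...")."""
    images, labels = [], []
    with open(path, newline="") as fh:
        for filename, label_str in csv.reader(fh):
            img = Image.open(filename).convert("RGB").resize((300, 300))
            images.append(np.asarray(img, dtype=np.float32) / 255.0)
            active = set(label_str.split("|"))
            labels.append([1.0 if s in active else 0.0 for s in STATE_LABELS])
    return np.stack(images), np.array(labels, dtype=np.float32)

def build_model(num_states=len(STATE_LABELS)):
    """Step 5: define the architecture (convolutional feature extraction,
    dense layers of 128/64/32 units, and a num_states-unit output layer)."""
    return tf.keras.Sequential([
        layers.Input(shape=(300, 300, 3)),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        # The description uses a softmax output; independent per-state
        # (multi-label) outputs are more commonly given a sigmoid activation.
        layers.Dense(num_states, activation="softmax"),
    ])

def train(label_list_path, epochs=20):
    x, y = load_label_list(label_list_path)
    split = int(0.9 * len(x))                   # step 4: ~10% held out for validation
    model = build_model()
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x[:split], y[:split],             # step 6: run the training images
              validation_data=(x[split:], y[split:]), epochs=epochs)
    model.save("room_state_model.keras")        # step 7: save parameters and weights
    return model
```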
In an example, the images can be resized to a size of 300×300. In an example, the ML model can include the following layers: flattening layers to reshape an input image into a format suitable for the convolutional layers and one or more fully connected layers; one or more convolutional layers; dense layers having respectively 128, 64, and 32 units and using the Rectified Linear Unit (ReLU) function as an activation function; and a dense layer having 10 units and using the softmax function as an activation function. The training process can be iterative and continuous. As more images and more patient room environments become available, the ML model can be retrained. Additionally, in order to optimize the accuracy of the human pose detection, the training images used can be changed using respective hospital room images. That is, for each hospital or each set of similar hospital room setups, a different trained model can be obtained. In an example, as part of an initial process of deploying a system according to implementations of this disclosure at a medical facility (e.g., a hospital), images of existing hospital rooms are taken and fed into the training set and the ML model is retrained. For example, hospitals may have certain bed models that require training the ML model to detect states. For simplicity of explanation, the techniques 400, 500, 700, 900, and 1100 of FIGS. 4, 5, 7, 9, and 11, respectively, are depicted and described as a series of blocks, steps, or operations. However, the blocks, steps, or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter. The word "example" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" is not necessarily to be construed as being preferred or advantageous over other aspects or designs. Rather, use of the word "example" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise or clearly indicated otherwise by the context, the statement "X includes A or B" is intended to mean any of the natural inclusive permutations thereof. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more," unless specified otherwise or clearly indicated by the context to be directed to a singular form. Moreover, use of the term "an implementation" or the term "one implementation" throughout this disclosure is not intended to mean the same implementation unless described as such. Implementations of the monitoring device 300, and/or any of the components therein described with respect to FIG. 3 (and the techniques, algorithms, methods, instructions, etc., stored thereon and/or executed thereby) can be realized in hardware, software, or any combination thereof.
The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors, or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, in one aspect, for example, the monitoring device300can be implemented using a general purpose computer or general purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms, and/or instructions described herein. In addition, or alternatively, for example, a special purpose computer/processor can be utilized which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein. Further, all or a portion of implementations of this disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
DESCRIPTION This disclosure, its aspects and implementations, are not limited to the specific components, assembly procedures or method elements disclosed herein. Many additional components, assembly procedures and/or method elements known in the art consistent with the intended systems and methods for facilitating a virtual visit to a gravesite will become apparent for use with particular implementations from this disclosure. Accordingly, for example, although particular implementations are disclosed, such implementations and implementing components may comprise any shape, size, style, type, model, version, measurement, concentration, material, quantity, method element, step, and/or the like as is known in the art for such systems and methods for facilitating virtual visits to gravesites, and implementing components and methods, consistent with the intended operation and methods. In the field of personal bereavement, when a grieving person desires to visit their deceased loved one, they physically travel to the gravesite or memorial site where they can view the grave, hear the ambient sounds in the environment, and converse with the departed by speaking towards the burial site. The only way to implement this method of visitation is that the grieving person is required to be physically present at the gravesite, which is not always possible or practical due to reasons such as adverse health, inclement weather conditions, cemetery operating hours, and long travel distances. Various systems and methods disclosed herein utilize aerial and/or land-based drones to allow users to conduct virtual visits to a gravesite or gravesites in one or more cemeteries. As used herein, the term “gravesite” includes all types of systems for housing human or animal remains including, by non-limiting example, mausoleums, burial plots, burial sites, memorials, crypts, vaults, urns, columbariums, graves, caves, or any other fixed physical location for interring or storing human or animal remains. As used herein, the term “drone” refers to any autonomously or remotely controllable vehicle capable of traveling in the air, land, and/or water that does not physically carry a human operator including, by non-limiting example, unmanned aerial vehicles, quadcopters, hexacopters, rovers, bipedal robots, quadrupedal robots, wheeled vehicles, tracked vehicles, treaded vehicles, boats, or any other unmanned autonomously or remotely controllable vehicle type. Referring toFIG.1, a diagram of the layout of a cemetery2is illustrated. As illustrated, the cemetery includes a plurality of gravesites B1, B2, B3and B4distributed across the physical land occupied by the cemetery. Also placed within the bounds of the cemetery are various other features and structures like trees C, lake D, and mausoleum E. A base location/base station A is also located on the grounds of the cemetery which houses one or more drones, which may be aerial or ground-based or a combination of both types in various system implementations. Four different travel paths F1, F2, F3, and F4are illustrated inFIG.1, each designed to reach gravesites B1, B3, B2, and B4, respectively. Travel paths F2and F3illustrate paths that a land-based drone would take to navigate around gravesites and physical obstacles like trees C to reach gravesites B3and B2. Travel path F4illustrates a path that an aerial drone could take to travel over various gravesites and avoid trees C while crossing over lake D to reach gravesite B4. 
Travel path F1shows a path that could be taken by either a ground-based drone/vehicle or an aerial drone/vehicle to reach gravesite B1. The various travel paths F1, F2, F3, and F4represent a travel path that takes the drone to the respective gravesite; the travel paths for the return trip back to the base station A for each drone may follow the same path or a different, newly calculated one, depending on the method implementation and whether any new obstacle(s) like visitors, workers, or equipment are now detected in the original travel path or whether another gravesite needs to be visited by the drone prior to returning to base station A (for the same user or a different user). The use of land-based and aerial drones (and even water-based drones in some implementations) can allow drones in the base station A to physically reach every gravesite in a cemetery. This ability for the drones to reach any gravesite can allow family members and friends who are physically unable to visit the gravesite of a loved one, friend, celebrity, or notable person to visit virtually at their convenience without having to travel to the gravesite. Reasons to visit a gravesite may include spiritual reasons/remembrances/ceremonies, monitoring its upkeep, and other personal reasons that may include marking birthdays, anniversaries, holidays, and other occasions. Many cemeteries are open during certain hours or require on-site staff to be on the grounds during visitations. The use of the drones to do virtual visitation does not require the cemetery to be physically open or have staff on-site for the drone to be utilized because the base station A is already located on the grounds or off-site. For cemeteries that are enclosed by gates or fencing, the drone could fly over the gate/fence to the intended gravesite for the user's visit if the base station is located outside the fencing. This ability for the system to facilitate visitation outside ordinary operating hours of the cemetery means that additional staff do not need to be hired just to facilitate extended visitation hours. Referring toFIG.2, a block diagram of an implementation of a system4for facilitating virtual visits to a gravesite using a drone6is illustrated. Here the system includes a computing device8associated with user10who interacts with instructing module12which provides instructions to control system14. Instructing module12may include one or more webservers and one or more databases designed to connect with computing device8across a telecommunication channel and receive instructions from the user via the computing device8. In various implementations, the instructing module12operates a website or backend service for an application operating on a desktop, laptop, computer, remote control, handheld computer, tablet, smartphone or other device that can send and receive audio and video signals that allows the user to, by non-limiting example, sign up to use the system, provide payment information, apply to get access to a particular gravesite(s) or cemeteries, download software associated with the system, and get help with operating the system. 
The various one or more databases of the instructing module 12 may store a wide variety of data associated with one or more cemeteries that utilize the system including, by non-limiting example, physical address information, global positioning system (GPS) coordinates of one or more gravesites in each of the one or more cemeteries, images of one or more gravesites in each of the one or more cemeteries, permitted operating hours for drone visitation, permitted weather conditions for drone visitation, off-limits areas, geo-fencing, height limitations within the one or more cemeteries for drone operations, data from the one or more cemeteries, data from third party databases, or any other desired data attribute that enables/affects the virtual visitation capabilities of the drones. Instructing module 12 is in operating communication with control system 14 located at a cemetery using one or more telecommunication channels which may be, by non-limiting example, cellular, satellite, microwave, light, a wired telecommunication channel; a wireless telecommunication channel; a wireless telecommunication channel using the wireless protocol marketed under the tradename WI-FI by the Wi-Fi Alliance of Austin, Texas; the internet; a local area network; a wireless telecommunication channel using the wireless protocol marketed under the tradename BLUETOOTH by Bluetooth SIG, Inc. of Kirkland, Washington; a wireless telecommunication channel using the wireless protocol marketed under the tradename ZIGBEE by Connectivity Standards Alliance of Davis, California; or any other wired, wireless, or other electromagnetic wave communication system. The one or more telecommunication channels may also include non-electromagnetic wave communication systems including, by non-limiting example, acoustic, compression wave, or other momentum-transfer communication systems and methods. The control system 14 may be included, at least partly or entirely, in the base station/base location and works to operate the drones 6 in response to instructions received from the instructing module 12. Implementations of the control system 14 use navigational drone software and communications technology to communicate and direct the travel path of the drones in combination with software and communications technology included on each of the drones themselves. The base station/base location may be on the cemetery grounds themselves or outside at another location in the vicinity of the cemetery within travel distance of the drones. As illustrated in FIG. 10, the base station 16 includes an energy replenishment apparatus 18 such as, by non-limiting example, a drone battery charging unit, refueling system, plug-in station, docking station, wireless charging station, or any other power transfer station. While the base station 16 in FIG. 10 is illustrated as a box with a lid, a wide variety of base station designs may be employed in various implementations. Additional operational characteristics of the control system 14 will be described with respect to the various method implementations disclosed later in this document. Referring to FIG. 2, the aerial or land-based drones 6 utilized in the implementations disclosed herein may be configured with various sensing devices such as, by non-limiting example, one or more cameras, one or more microphones, one or more speakers, a privacy sensor, a 360-degree privacy sensor, ultrasonic object-detection sensor(s), obstacle avoidance sensors, light sensors, light detection and ranging (LIDAR), or any other desired sensor type.
In FIG. 2 and with reference to the larger view in FIG. 3, the particular drone 6 is a quadcopter drone that includes camera 20, microphone 22, and speaker 24 operably coupled with the frame of the drone. In various drone implementations like the drone 6 illustrated in FIG. 2, an illuminating light projecting arrangement such as a spotlight 26 may be incorporated into the drone 6 to illuminate a gravesite, portions thereof, or the surrounding area at nighttime or in low light conditions. This ability to employ illumination may enable a remotely located grieving individual to visit a burial site at night or under other low light conditions when many cemeteries may be closed for public visitation. Other components that may be included in various drone implementations include a projector device 28 designed to display one or more images onto the gravesite. One or more antennas 30 may also be included with the drone to enable communication between the drone 6 and the control system 14 during operation and communication with the various sensor and projection components. Also, the drone 6 illustrates how, in some implementations, a hook 32 or other mechanical device or system may be attached to the aerial drone for carrying physical objects to a gravesite as will be discussed hereafter. The aerial drone implementations may operate autonomously or semi-autonomously depending on the configuration of the control system 14 and the nature of the air space regulations present in the area where the cemetery is located. In autonomous operation, the drone receives a travel path with waypoints to the gravesite, or, in some implementations, just the desired coordinates of the gravesite, and the drone then pilots itself to the gravesite. In semi-autonomous operation, the drone may receive the travel path step-by-step from the control system as the drone indicates each waypoint has been reached until the drone reaches the gravesite. In some implementations where semi-autonomous operation is employed, the drone may have an auto-return function if the drone loses its connection with the base station mid-travel to prevent it from being lost or damaged. User interaction with the gravesite may include reciting prayers and/or having a conversation with the user's loved one. To ensure privacy, the drone may also include privacy sensors like any disclosed in this document (see FIG. 6, privacy sensor 98, by non-limiting example) such as, by non-limiting example, 360-degree infrared or motion sensors that detect body heat or motion from one or more humans near the gravesite within a predetermined distance from the drone. The use of the privacy sensors allows the drone/system to alert the user that their conversation may not be private when one or more humans are within the predetermined distance from the drone.
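As a non-limiting sketch, the privacy-alert behavior described above might be implemented along the following lines; the detection inputs and the predetermined distance are assumptions of this sketch:

```python
# Illustrative sketch, not the disclosed implementation: warn the remote user
# when a detected person is within a predetermined distance of the drone.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g., "person", as reported by an IR/motion/vision sensor
    distance_m: float  # estimated distance from the drone, in meters

def privacy_alert(detections: List[Detection],
                  predetermined_distance_m: float = 10.0) -> bool:
    """Return True when the user should be warned that people are nearby."""
    return any(d.label == "person" and d.distance_m <= predetermined_distance_m
               for d in detections)

if __name__ == "__main__":
    readings = [Detection("person", 6.5), Detection("tree", 3.0)]
    if privacy_alert(readings):
        print("Privacy notice: another visitor is near the gravesite.")
```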
The various obstacle avoidance systems work to prevent the drones from colliding with people, trees, rocks, and other physical bodies during travel on the ground as in the case of drone 38 or travel in the air as in the case of aerial drone 6. The drone 38 also includes one or more supports 48 for one or more cameras 50, a projection system 52, and an illumination device 54. The drone also includes an arm or other support 56 (movable or stationary) designed to carry an object (in this case a bouquet of flowers 58) with it to the gravesite. In various system implementations, the object may be manually placed/attached to the support 56 or the drone may retrieve the object on its own at or near the base station or other location where the objects are stored. Referring to FIG. 5, another implementation of a land-based drone 60 is illustrated that is similar to the one of FIG. 4 except that instead of independently moving wheels, tracks or treads 62 are used to allow the drone to move across the surface of the ground in the cemetery. Similar to the implementation illustrated in FIG. 4, the drone 60 includes a support 64 for a proximity/privacy sensor 66 and an obstacle avoidance system 68. The obstacle avoidance system 68 may be any disclosed in this document including any LIDAR, acoustic sensor, infrared sensor, camera, or other electromagnetic wave or compression wave-based system. The drone 60 also includes one or more supports 70 for one or more cameras 72, a projection system 74, and an illumination device 76. The drone also includes an arm or other support 78 (movable or stationary) with a container 80 designed to carry an object (in this case a bouquet of flowers 82) with it to the gravesite. This drone includes a microphone 84 and speaker 86 as well. Various system implementations may use not just wheeled or tracked land-based drones but may also employ bipedal drones, quadrupedal drones, or drones with other numbers of legs or support appendages. Referring to FIG. 6, a bipedal drone 88 implementation is illustrated. Similar to the other drone implementations, the drone 88 includes video camera 90, microphone 92, speaker 94, projector system 96, privacy sensor 98, and obstacle avoidance sensor 100, each of which may be and function like any similar sensor or system disclosed in this document. In this drone implementation, the drone includes an arm 102 that contains a coupling system (a hand in this case) that allows it to carry container 104 to transport objects to the gravesite. In various system and method implementations, the drone navigation software of the control system 14 may utilize navigation technologies that rely on, by non-limiting example, physical location, geographical mapping coordinates, GPS, grid patterns, location markers, way points, beacons, image recognition, acoustic waves, or any other method employing visual cues, electromagnetic waves, or compression waves to guide the drone to and from a gravesite. In response to receiving the instruction from the instructing module 12 that a user wishes to visit a gravesite, the control system software may use a lookup table or database to retrieve geographic positioning coordinates that correspond to the gravesite of interest that the user has subscribed to or has permission to visit. As part of an implementation of a method of preparing a cemetery for virtual visitation using drones, GPS mapping of some or all of the gravesites in the cemetery may be carried out.
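For illustration only, the lookup-table retrieval just described might be sketched as follows; the database schema, the sample record, and the permission check are assumptions of this sketch rather than the disclosed control system software:

```python
# Illustrative sketch, not the disclosed software: retrieve stored GPS
# coordinates for a gravesite the user is permitted to visit.
import sqlite3
from typing import Optional, Set, Tuple

def build_demo_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE gravesites
                    (name TEXT, cemetery TEXT, lat REAL, lon REAL)""")
    conn.execute("INSERT INTO gravesites VALUES ('Jane Doe', 'Oak Hill', 40.7128, -74.0060)")
    return conn

def lookup_coordinates(conn: sqlite3.Connection, name: str,
                       permitted_names: Set[str]) -> Optional[Tuple[float, float]]:
    """Return (lat, lon) for a gravesite the user has permission to visit."""
    if name not in permitted_names:
        return None  # the user has not subscribed to / been granted this gravesite
    return conn.execute(
        "SELECT lat, lon FROM gravesites WHERE name = ?", (name,)).fetchone()

conn = build_demo_db()
coords = lookup_coordinates(conn, "Jane Doe", permitted_names={"Jane Doe"})
print(coords)  # e.g., (40.7128, -74.006), handed to the path planner
```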
The mapping may take place as gravesite virtual visitation requests are received from users or mapping may take place using any other strategy, such as, by non-limiting example, newest gravesites to oldest, section by section, ad hoc, or systematic mapping. In various mapping method implementations, referring to FIG. 11, an individual 34 may momentarily place a GPS receiver device 36 at a specified location for a gravesite (here on top of a headstone located on the gravesite) to obtain/record the GPS location coordinates of that gravesite at the desired level of GPS resolution needed to allow for accurate drone travel to the gravesite. In some method implementations, at the time of recording of the GPS location coordinates, corresponding details of the associated deceased individual interred at the gravesite, such as, by non-limiting example, the name of the deceased as is typically found on a grave marker or register at the cemetery, may be recorded for storage in a database of the instructing module 12 along with the GPS location coordinates. This ability to link GPS location coordinates with a person's name at a given cemetery may allow a grieving individual to simply look up a deceased person's name using a computing interface generated by the computing device 8 in communication with the instructing module 12 to retrieve from the database the deceased individual's associated GPS location coordinates. These coordinates can then be provided to the control system by the instructing module 12 and used by the navigational software to calculate a travel path for a drone to the desired gravesite. The travel path begins where the drone is currently located (i.e., another gravesite, the base station, or another intermediate location) and ends at the desired location of the gravesite. The use of GPS location coordinates for navigation does not require any electronic equipment or anything else to be pre-positioned or permanently installed or mounted at a burial site as a prerequisite for a grieving individual to remotely visit a deceased loved one. In other words, a grieving individual may remotely and spontaneously visit a deceased individual without the need for the burial site to be preconfigured with any installed electronic hardware. This is similar to the traditional method of visiting a local gravesite "on-the-fly" or "on the spur of the moment" without requiring any preparation work with the cemetery. In other mapping method implementations, in conjunction with GPS coordinate mapping or instead of GPS coordinate mapping, grave stone image recognition may be employed. As used herein, the term "grave stone" refers to any object used to identify the location of and/or identity of a person interred at a gravesite; it thus includes the terms "grave marker" and "grave monument." In implementations where grave stone image recognition is employed, the drones themselves may be used to acquire images of grave stones and to process the images, or send them for processing, for data extraction. A non-limiting example of a computer vision application that could be utilized in a method implementation is the machine vision library marketed under the tradename OPENCV by the Open Source Vision Foundation. Other machine vision systems, libraries, and methods could also be employed in various implementations. FIG. 12 illustrates a drone 106 taking images of grave stones 108, 110.
Here, as illustrated by the dotted sight lines, the drone 106 includes two cameras to enable taking images in a wider field of view for faster image acquisition. After the drone takes one or more images of the grave stone(s), the physical characteristics such as, but not limited to, size, shape, material, color, and imprinted/engraved text and images are noted, processed, and stored in the database to create a record of the visually unique and distinguishable characteristics of the grave stone that set it apart from the other grave stones in the cemetery. Whether by use of drone data collection as in FIG. 12, or by manual use of a digital camera, at least one photograph is taken of at least the content (often front) surface of some or all grave stones in a cemetery. Each digital image along with the corresponding details of the associated deceased individual (i.e., the name of the deceased as is typically found on the content/front side of a grave marker) is then recorded into a database associated with the instructing module 12. This information, in combination with the creation of a relational map of each grave stone in a cemetery using a map or with respect to another coordinate system, allows for the unique positioning of each grave stone in relation to all other grave stones in the cemetery. With the relational map and grave stone images stored in the database, a grieving individual may simply look up a deceased person's name using a computer interface of computing device 8 to retrieve from the database the associated unique grave stone image(s) and then instruct a drone to navigate to the desired grave stone. In various method implementations where digital images are employed, some implementations may utilize way points and travel path calculations like those previously discussed where the relational map data and grave stone data are employed instead of using GPS coordinate information. In other implementations, as the drone travels, the drone continuously assesses its location by viewing grave stone digital images in its surroundings and comparing these with stored images of grave stones. This information, coupled with the above referenced pre-determined relational location map of grave stones in a cemetery, enables the drone to move towards the direction of a desired gravesite and to confirm when it arrives by again using image comparison. This method of navigation and the corresponding image-based method of cemetery mapping also does not require any electronic equipment or anything else to be installed at a gravesite or elsewhere to allow a grieving individual to remotely visit their deceased loved one. The various methods of drone navigation previously disclosed may be augmented with auxiliary navigational aids in various situations. Referring to FIG. 8, in the case of GPS navigation, if GPS satellite signal reception is unreliable or becomes unreliable at a cemetery, then a navigational aid(s) such as, by non-limiting example, a homing beacon(s) 112 may be pre-positioned at or near a gravesite or elsewhere in various location(s) in the cemetery. Such a beacon(s) may simply be appended to the top end of a rod or stake 114 and may face skyward for use with an aerial drone 116. The bottom end of the rod or stake 114 is hammered or otherwise inserted into the ground 117. Such a beacon(s) may be installed and removed quickly and be present either temporarily or permanently.
One type of such a beacon emits a unique light pattern118which may be detected overhead by the camera120of an aerial drone116and used to guide the drone towards the beacon allowing the drone to locate its position and move appropriately toward the base station or gravesite. In the case of image recognition-based navigation, such a beacon(s) or other navigation aid(s) may be appropriate in situations such as, but not limited to, where a cemetery has frequent low visibility due to persistent conditions such as fog or rain which may impact grave stone clear image capture from a drone camera. In the various methods of drone navigation, geofencing and/or height limitations may be employed to ensure that the various drones do not cross specific boundaries of the cemetery or rise above predetermined or regulatory height thresholds. In the various system and method implementations disclosed herein, when the drone reaches the intended gravesite, the onboard camera, microphone, and speaker device are automatically or remotely activated and communicate video and/or audio signals with the user's computing device8. The camera provides images and the microphone picks up ambient sounds around the gravesite and the drone communicates the images and audio over a telecommunications channel to the base station which then relays the images and audio to the instructing module12. The instructing module12then sends the images and audio to the computing system8associated with the user10who then uses a computer interface on the computing system8to view the images and listen to the audio. In a similar way, the onboard speaker device allows the user to be heard at the gravesite when speaking into a microphone associated with the computing device so the user may interact with the gravesite to recite prayers or have a conversation with their loved one, friend or acquaintance, just as if they were visiting in person. The various privacy sensor implementations previously disclosed may also be used to sense the presence of one or more humans within a predetermined distance from the drone and alert the user if one or more humans are detected. This will allow the user to tune the speaker volume as described hereafter to ensure what is being spoken is not overheard. Although in various system and method implementations, the complete operation of the drones may be programmed to be automatic and not require any intervention by the grieving user or others, the user may also be provided the ability to control certain aspects of their visit to the gravesite. Referring toFIG.9, an implementation of a computer interface122generated by computing device124associated with user126is illustrated. As illustrated, the computer interface122includes various controls for camera and speaker functions, including, by non-limiting example, controlling the positioning/pointing of the angle of camera towards the gravesite or surrounding areas (up/down132) and controlling other aspects of the camera operation such as panning128, tilting130, zooming134or any other desired camera position adjustment. Similarly, the user may control the sound level of the audio speakers135, the intensity of the projecting light sources131, and/or the adjustment of the gain of the microphone129. 
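As a non-limiting sketch of the grave stone image comparison described earlier (for which the OPENCV library is mentioned as one non-limiting option), a drone camera frame might be compared against the stored grave stone images roughly as follows; the ORB feature matching, the distance threshold, and the file names are assumptions of this sketch rather than the disclosed method:

```python
# Illustrative sketch, not the disclosed algorithm: compare a drone camera
# frame against stored grave stone images and pick the best match.
import cv2

def match_score(frame_gray, reference_gray, orb, matcher) -> int:
    """Count reasonably close ORB feature matches between two grayscale images."""
    _, des_frame = orb.detectAndCompute(frame_gray, None)
    _, des_ref = orb.detectAndCompute(reference_gray, None)
    if des_frame is None or des_ref is None:
        return 0
    matches = matcher.match(des_frame, des_ref)
    return sum(1 for m in matches if m.distance < 40)  # threshold is an assumption

def identify_grave_stone(frame_path: str, reference_paths: dict) -> str:
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    scores = {name: match_score(frame,
                                cv2.imread(path, cv2.IMREAD_GRAYSCALE),
                                orb, matcher)
              for name, path in reference_paths.items()}
    return max(scores, key=scores.get)

# Usage (hypothetical files recorded during cemetery mapping):
# best = identify_grave_stone("drone_frame.jpg", {"Jane Doe": "jane_doe_marker.jpg"})
# print(best)
```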
In some implementations, the user may also be able to input commands that adjust the position of the drone either among a set of fixed positions relative to the gravesite or a set of continuous positions within limits relative to the gravesite (height, angle, position to grave stone, etc.). The ability to control such features remotely using the interface ofFIG.9enables the user to more realistically simulate an actual in-person experience at a gravesite. For example, if the privacy sensor (which may be any disclosed in this document) detects nearby mourners or other persons, the remote user of this system may lower the output volume of the speaker illustrated such that the user may speak towards the gravesite in a whispering volume similar to the sound level they would project to a gravesite in person when other mourners/persons are nearby. The computer interface122enables a user to control such functions by manipulating scroll-bars or other icons to control the various functions. While the drone is present at the gravesite after being navigated thereto, the user is able to have an interactive audiovisual experience with the gravesite at their convenience. The video camera provides images of the grave stone, monument, flowers, or other aspects of the gravesite. This same function exists even where the gravesite is a crypt, inside a mausoleum, or inside a columbarium provided the drone is able to navigate into the building. The microphone captures sounds that may include birds, water features, or other mourners visiting the gravesite which are the same sounds that a person may hear when visiting a gravesite in person. The ability to visit virtually via drone may be coordinated with other mourners, allowing the virtual user to participate with those mourners at a gravesite in person. In various system or method implementations, when arriving at the gravesite, the onboard video camera, microphone, and audio speaker device may be automatically activated or may be remotely activated by the user when ready. Referring toFIG.7, the onboard projector136may also be activated during a gravesite visit either by the user or automatically so that the user may view virtual flowers138on or near the grave stone140at the gravesite. The onboard projector136may also project flowers or other images onto or at, by non-limiting example, the gravesite, monument or ground that the user is viewing. The projection unit136may use non-holographic, holographic, laser, diffuse light, or other projection systems and methods to carry out the projection. The drone142may also have the ability to deliver physical flowers during the visit and/or other items of remembrance to the gravesite using drone delivery technologies and methods disclosed in this document like arm143. When the visit is complete, the drone may communicate with the control system14to indicate the user is finished (or the user's time is up or time has expired due to battery life limitations, etc.). The control system14may then, in various system implementations, calculate a return travel path and communicate that to the drone, the drone may autonomously calculate a return travel path, or the drone may simply retrace its original steps back to the base station following the reverse of the original travel path. 
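For illustration only, the interface controls described above might be packaged into a command message as sketched below; the field names, value ranges, and the whisper-level cap applied when the privacy sensor reports nearby visitors are assumptions of this sketch:

```python
# Illustrative sketch, not the disclosed interface: clamp user control inputs
# and build a command message for the drone, lowering the speaker volume when
# the privacy sensor reports nearby visitors.
from dataclasses import dataclass, asdict
import json

def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

@dataclass
class VisitCommand:
    pan_deg: float = 0.0
    tilt_deg: float = 0.0
    zoom: float = 1.0
    speaker_volume: float = 0.5   # 0.0 (mute) .. 1.0 (full)
    light_intensity: float = 0.0  # 0.0 (off) .. 1.0 (full)
    mic_gain: float = 0.5

def build_command(raw: dict, visitors_nearby: bool) -> str:
    cmd = VisitCommand(
        pan_deg=clamp(raw.get("pan_deg", 0.0), -180, 180),
        tilt_deg=clamp(raw.get("tilt_deg", 0.0), -90, 90),
        zoom=clamp(raw.get("zoom", 1.0), 1.0, 8.0),
        speaker_volume=clamp(raw.get("speaker_volume", 0.5), 0.0, 1.0),
        light_intensity=clamp(raw.get("light_intensity", 0.0), 0.0, 1.0),
        mic_gain=clamp(raw.get("mic_gain", 0.5), 0.0, 1.0),
    )
    if visitors_nearby:
        # Keep the spoken audio at a "whisper" level while others are present.
        cmd.speaker_volume = min(cmd.speaker_volume, 0.2)
    return json.dumps(asdict(cmd))  # sent to the drone over the telecom channel

print(build_command({"speaker_volume": 0.9, "zoom": 3.0}, visitors_nearby=True))
```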
If another user is ready for a virtual visit, however, the control system 14 may direct the drone by sending it a path to the next gravesite or may simply send the coordinates/image of the next gravesite and the drone may then plot a new course to the gravesite. The various system implementations disclosed herein may utilize/enable various methods of facilitating a virtual visit to a gravesite. Referring to FIG. 13, a flowchart of a method implementation is illustrated. As illustrated, the method includes providing a drone including a video camera, a microphone, and a speaker along with a control system associated with a cemetery where the control system includes one or more processors and is operatively coupled with one or more telecommunication channels (step 146). The drones used in the various method implementations may be any aerial, land-based, or water-based ones disclosed in this document. The method also includes receiving, at the control system, from a computing device associated with a user, a request for a virtual visit to a gravesite (step 148). The computing device may be any disclosed in this document, including, by non-limiting example, a laptop, a desktop, a server, a smart phone, a tablet, or any other fixed or portable computing device capable of displaying, playing, and transmitting audio and video. The method also includes calculating, using the control system, a travel path for the drone to the gravesite from a base station located at the cemetery (step 150) and initiating autonomous travel by the drone from the base station along the travel path to the gravesite (step 152). The use of autonomous travel may significantly reduce the labor costs associated with manual driving of the drones and aid in allowing the system to have multiple drones out and returning at a time for a given cemetery. The method also includes confirming, using the control system, the arrival of the drone at the gravesite (step 154). The confirmation may come to the control system from the drone itself or the control system may independently monitor the position of the drone using a GPS position tracking system installed on the drone, by non-limiting example. The method also includes, while at the gravesite, receiving, at the control system or the base station over the one or more telecommunication channels, video and audio of the gravesite from the camera and microphone of/associated with the drone (step 156). The method includes, while at the gravesite, transmitting the video and audio of the gravesite to the computing device associated with the user using the one or more telecommunication channels (step 158). The method also includes receiving, at the control system, user audio from the computing device associated with the user (step 160) and sending, using the control system, the user audio to the drone using the one or more telecommunication channels (step 162). The method also includes playing the user audio using the speaker of the drone at the gravesite (step 164) and returning the drone to the base station (step 166).
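As a non-limiting sketch, steps 148 through 166 described above might be orchestrated along the following lines; the DroneClient and UserSession classes are hypothetical stand-ins for the drone and the user's computing device, not the disclosed control system:

```python
# Illustrative sketch of the visit flow: plan a path, dispatch the drone,
# relay gravesite A/V to the user, relay user audio to the drone, return.
class DroneClient:                      # hypothetical drone interface
    def fly(self, path): print(f"flying path {path}")
    def arrived(self): return True
    def stream_av(self): return b"<video+audio frames>"
    def play_audio(self, audio): print("playing user audio at gravesite")
    def return_to_base(self): print("returning to base station")

class UserSession:                      # hypothetical user-device interface
    def receive_av(self, av): print(f"user receives {len(av)} bytes of A/V")
    def get_audio(self): return b"<user speech>"

def plan_path(base, gravesite):
    # A real implementation would compute waypoints around obstacles; here the
    # "path" is simply the start and end coordinates.
    return [base, gravesite]

def run_virtual_visit(drone, user, base, gravesite, exchanges=3):
    path = plan_path(base, gravesite)          # step 150
    drone.fly(path)                            # step 152
    if drone.arrived():                        # step 154
        for _ in range(exchanges):             # interactive portion of the visit
            user.receive_av(drone.stream_av()) # steps 156-158
            drone.play_audio(user.get_audio()) # steps 160-164
    drone.return_to_base()                     # step 166

run_virtual_visit(DroneClient(), UserSession(),
                  base=(40.0, -74.0), gravesite=(40.001, -74.002))
```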
In places where the description above refers to particular implementations of systems and methods for facilitating a virtual visit to a gravesite and implementing components, sub-components, methods and sub-methods, it should be readily apparent that a number of modifications may be made without departing from the spirit thereof and that these implementations, implementing components, sub-components, methods and sub-methods may be applied to other systems and methods for facilitating a virtual visit to a gravesite.
DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well-known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but to be accorded the widest scope consistent with the claims. It will be understood that the term “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, section or assembly of different level in ascending order. However, the terms may be displaced by other expression(s) if they may achieve the same purpose. Generally, the word “module,” “unit,” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage device(s). In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices (e.g., processing device113and/or processing device122as illustrated inFIG.1) may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules (or units or blocks) may be included in connected logic components, such as gates and flip-flops, and/or can be included in programmable units, such as programmable gate arrays or processors. The modules (or units or blocks) or computing device functionality described herein may be implemented as software modules (or units or blocks), but may be represented in hardware or firmware. In general, the modules (or units or blocks) described herein refer to logical modules (or units or blocks) that may be combined with other modules (or units or blocks) or divided into sub-modules (or sub-units or sub-blocks) despite their physical organization or storage. 
It will be understood that when a unit, engine, module, or block is referred to as being “on,” “connected to,” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The terminology used herein is for the purposes of describing particular examples and embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “include” and/or “comprise,” when used in this disclosure, specify the presence of integers, devices, behaviors, stated features, steps, elements, operations, and/or components, but do not exclude the presence or addition of one or more other integers, devices, behaviors, features, steps, elements, operations, components, and/or groups thereof. An aspect of the present disclosure relates to an intercom system and methods using the same. The intercom system may include a first intercom including a first image acquisition device that has a first field of view (FOV), a second intercom operably connected to the first intercom, and one or more second image acquisition devices. Each second image acquisition device may be operably connected to the first intercom and have a second FOV different from the first FOV. During a video and/or audio intercom between the first intercom and the second intercom, the first intercom may receive a request for target image data captured by one or more target image acquisition devices from the second intercom. The target image acquisition device(s) may include at least one target second image acquisition device of the second image acquisition device(s) and optionally the first image acquisition device. The first intercom may obtain the target data from the target image acquisition device(s), and process the target image data to generate a signal encoding the target image data. The first intercom may further send the signal to the second intercom for display. For example, the target image data may include image data captured by a plurality of second image acquisition devices and be jointly displayed on the second intercom. According to some embodiments of the present disclosure, the first intercom may serve as an intermediary device for image data processing and transmission between the second intercom and the second image acquisition device(s). For example, the second intercom may obtain and display image data captured by a second image acquisition device via the first intercom without connecting to the second image acquisition device directly. In this way, the second intercom may not need to store authentication information (e.g., an IP address, a user name, and/or a password) of the second image acquisition device, which may avoid the leak of the authentication information and improve the information safety of the intercom system. In addition, in some embodiments, the first intercom may be operably connected to a plurality of second intercoms. 
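For illustration only, the intermediary role of the first intercom described above might be sketched as follows; the frame sources, the side-by-side composition for joint display, and the JPEG encoding are assumptions of this sketch rather than the disclosed processing:

```python
# Illustrative sketch, not the disclosed firmware: gather frames from the
# requested image acquisition devices, compose them for joint display, and
# encode the result into a signal for the second intercom.
import cv2
import numpy as np

def fetch_frame(device_id: str) -> np.ndarray:
    """Placeholder for reading a frame from a connected acquisition device."""
    return np.zeros((360, 640, 3), dtype=np.uint8)  # hypothetical blank frame

def handle_target_image_request(target_device_ids: list) -> bytes:
    # Obtain the target image data from each requested device.
    frames = [fetch_frame(device_id) for device_id in target_device_ids]
    # Jointly compose the frames (here, side by side); a real device might
    # tile, scale, or multiplex the streams instead.
    composed = cv2.hconcat(frames)
    # Generate a signal encoding the target image data (JPEG as an example).
    ok, encoded = cv2.imencode(".jpg", composed)
    if not ok:
        raise RuntimeError("encoding failed")
    return encoded.tobytes()  # transmitted to the second intercom for display

signal = handle_target_image_request(["door_camera", "entrance_camera"])
print(f"{len(signal)} bytes ready to send to the second intercom")
```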
Using the first intercom as a centralized image data processing and transmission device may obviate the need to establish a connection between each second intercom and each second image acquisition device. For example, there is no need to input authentication information of each second image acquisition device into each second intercom or to install a wire to connect each second image acquisition device with each second intercom. In this way, the complexity and cost of the intercom system may be reduced by, e.g., reducing the cost of wiring and installation. FIG. 1 illustrates a schematic diagram of an exemplary intercom system 100 according to some embodiments of the present disclosure. The intercom system 100 may allow a group of people located at different locations to communicate with each other. The intercom system 100 may be used as, for example, a communication system, a security system, a surveillance system, or the like, and be applied in various scenarios, such as a residential building, an office building, a shopping mall, a hospital, etc. For illustration purposes, the present disclosure is described with reference to an intercom system 100 applied in a residential building. This is not intended to limit the scope of the present disclosure, and the intercom system 100 may be applied in any other scenarios (e.g., an office building, a collection of buildings in a residential community or a business center). As shown in FIG. 1, the intercom system 100 may include a first intercom 110, a second intercom 120, and one or more second image acquisition devices 130. The first intercom 110 and the second intercom 120 may be located at two different positions in the residential building and operably connected to each other. A user of the first intercom 110 and a user of the second intercom 120 may communicate with each other, for example, have an audio intercom and/or a video intercom, via the first intercom 110 and the second intercom 120. For example, the first intercom 110 may be an outdoor intercom positioned at an entrance of the residential building or outside a house gate of a particular resident in the residential building. The second intercom 120 may be an indoor intercom positioned inside a house of the particular resident in the residential building. In such cases, the particular resident may use the second intercom 120 to communicate with a visitor outside the house gate or at the entrance of the residential building. Each second image acquisition device 130 may be configured to capture image data relating to the residential building. For example, a second image acquisition device 130 may be mounted outside the residential building (e.g., near an entrance of the residential building) to capture images of the outside environment of the residential building. As another example, a second image acquisition device 130 may be mounted at the house gate of a particular resident in the residential building to capture images of, e.g., a visitor to the particular resident. As illustrated in FIG. 1, the second image acquisition device(s) 130 may be operably connected to the first intercom 110. In some embodiments, the intercom system 100 may include a plurality of second image acquisition devices 130, all or a portion of which may be operably connected to the first intercom 110.
In some embodiments, in order to establish an operable connection between the first intercom110and a second image acquisition device130, a user (e.g., an administrator or a security guard of the residential building) may input authentication information of the second image acquisition device130into the first intercom110via an interface (e.g., a web interface) of the first intercom110. The authentication information may include, for example, an IP address, a user name, and/or a password of the second image acquisition device130. The first intercom110may establish an operable connection to the second image acquisition device130if the authentication information is valid. Optionally, the first intercom110may store the authentication information into a storage of the first intercom110. In some embodiments, the first intercom110may serve as an intermediary device between the second intercom120and the second image acquisition device(s)130. The image data captured by all or a portion of the second image acquisition device(s) may be transmitted to the first intercom110. The first intercom110may process the image data and transmit a signal encoding the image data to the second intercom120for display. Merely by way of example, the first intercom110may be mounted at the entrance of the residential building and the second intercom120may be mounted at the home of a particular resident. A visitor of the particular resident may use the first intercom110to initiate an intercom with the particular resident. The first intercom110may include a camera that can capture image data of the visitor standing in front of the first intercom110. The second image acquisition device(s)130may be mounted near the entrance of the residential building to capture image data of the visitor from different perspective(s). During the intercom, the second intercom120may display image data of the visitor captured by the first intercom110to the particular resident. If the particular resident wants to view target image data captured by a certain second image acquisition device130, he/she may input a request for the target image data via the second intercom120. The second intercom120may send the request to the first intercom110. In response to the request, the first intercom110may obtain the target image data from the certain second image acquisition device130and process the target image data to generate a signal encoding the target image data. The signal may be transmitted to the second intercom120, and the second intercom120may display the target image data to the particular resident. In some embodiments, the residential building may include a plurality of resident families. Each resident family may be configured with an individual intercom system100including, for example, an individual first intercom110and one or more second image acquisition devices130mounted at the house gate, and a second intercom120mounted at home. Alternatively, the intercom systems100of different resident families may share one or more devices. For example, one or more centralized first intercoms110may be mounted at, for example, an entrance or a stairway of the residential building and operably connected to the second intercoms120of all or a portion of the resident families. As another example, one or more centralized second image acquisition devices130may be mounted, for example, near an entrance or a stairway of the residential building and operably connected to the first intercoms110of all or a portion of the resident families. 
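As a non-limiting sketch, the entry, validation, and storage of authentication information described above might look like the following; the RTSP-style stream address and the validation rule are assumptions of this sketch, not the disclosed implementation:

```python
# Illustrative sketch, not the disclosed implementation: validate and store
# authentication information for a second image acquisition device so the
# first intercom can reach it later.
from dataclasses import dataclass
from typing import Dict

@dataclass
class DeviceCredentials:
    ip_address: str
    username: str
    password: str

class FirstIntercom:
    def __init__(self):
        self._devices: Dict[str, DeviceCredentials] = {}  # device id -> credentials

    def register_device(self, device_id: str, creds: DeviceCredentials) -> bool:
        """Validate the entered authentication information and store it."""
        if not (creds.ip_address and creds.username and creds.password):
            return False  # invalid authentication information; no connection made
        self._devices[device_id] = creds
        return True

    def stream_url(self, device_id: str) -> str:
        # Hypothetical RTSP-style address assembled from the stored credentials.
        c = self._devices[device_id]
        return f"rtsp://{c.username}:{c.password}@{c.ip_address}/stream1"

intercom = FirstIntercom()
intercom.register_device("door_camera",
                         DeviceCredentials("192.168.1.20", "admin", "secret"))
print(intercom.stream_url("door_camera"))
```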
In some embodiments, as illustrated inFIG.1, a first intercom110may include a first image acquisition device111, an input/output (I/O)112, a processing device113, and a communication port114. The first image acquisition device111may be configured to capture image data with a first FOV. For example, the first image acquisition device111may capture image data of a user (e.g., a security guard of the residential building or a visitor) in front of the first intercom110. In some embodiments, the first FOV may be a fixed FOV or a variable FOV. The first image acquisition device111may be and/or include any suitable device that is capable of acquiring image data. Exemplary first image acquisition devices111may include a camera (e.g., a digital camera, an analog camera, an IP camera (IPC), etc.), a video recorder, a scanner, a built-in camera of a terminal device (e.g., a mobile phone, a tablet computing device, or a wearable computing device), an infrared imaging device (e.g., a thermal imaging device), or the like. In some embodiments, the first image acquisition device111may include a gun camera, a dome camera, an integrated camera, a binocular camera, a monocular camera, etc. The image data acquired by the first image acquisition device111may include an image and/or data about the image, such as values of one or more pixels of the image (e.g., luma, gray values, intensities, chrominance, contrast of one or more pixels of an image), RGB data, audio information, timing information, location data, etc. In some embodiments, the first image acquisition device111may include a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, an N-type metal-oxide-semiconductor (NMOS), a contact image sensor (CIS), and/or any other suitable image sensor. The I/O112may enable user interaction with the first intercom110. For example, the I/O112may receive a request (e.g., a request for connecting the second intercom120) and/or data (e.g., authentication information of a second image acquisition device130) from a user of the first intercom110. As another example, the I/O112may output audio and/or video data received from the second intercom120to a user of the first intercom110. In some embodiments, the I/O112may include an input component and/or an output component. Exemplary input components may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Exemplary output components may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Exemplary display devices may include a liquid crystal display (LCD), a light-emitting diode (LED)-based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT), a touch screen, or the like, or a combination thereof. The processing device113may process information and/or data relating to the first intercom110to perform one or more functions of the first intercom110described in the present disclosure. For example, the processing device113may receive a user request from the second intercom120and process information and/or data to satisfy the user request. In some embodiments, the processing device113may include one or more processing devices (e.g., single-core processing device(s) or multi-core processor(s)).
Merely by way of example, the processing device113may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof. Merely for illustration, only one processing device may be described in the first intercom110. However, it should be noted that the first intercom110of the present disclosure may also include multiple processing devices, and thus operations and/or method steps that are performed by one processing device as described in the present disclosure may also be jointly or separately performed by the multiple processing devices. For example, if in the present disclosure the processing device of the first intercom110executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two or more different processing devices jointly or separately in the first intercom110(e.g., a first processing device executes operation A and a second processing device executes operation B, or vice versa, or the first and second processing devices jointly execute operations A and B). The communication port114may facilitate data communications between the first intercom110and one or more other components of the intercom system100. For example, the communication port114may establish a connection between the first intercom110and the second intercom120. As another example, the communication port114may establish a connection between the first intercom110and one or more of the second image acquisition device(s)130. As used herein, a connection between two connected components may include a wired connection, a wireless connection, or any other communication connection that can enable data transmission and/or reception, or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection may include, for example, a Bluetooth™ link, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G, etc.), or the like, or a combination thereof. In some embodiments, the communication port114may be and/or include a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port114may be a specially designed communication port. For example, the communication port114may be designed in accordance with analog signal transmission. In some embodiments, the communication port114may be connected to a network (not shown inFIG.1) to facilitate data communications. In some embodiments, the network may be any type of wired or wireless network, or combination thereof.
Merely by way of example, the network may include a cable network (e.g., a coaxial cable network), a wireline network, an optical fiber network, a telecommunications network, an intranet, an Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the first intercom110may be configured as a terminal including, for example, a tablet computer, a laptop computer, a mobile phone, a personal digital assistant (PDA), a smartwatch, a point of sale (POS) device, a virtual reality (VR), an augmented reality (AR), an onboard computer, an onboard television, a wearable device, or the like, or any combination thereof. For example, the first intercom110may be implemented by a terminal device800having one or more components as described in connection withFIG.8. In some embodiments, the first intercom110may be fixed at a certain location in the residential building or be a mobile device that can be carried by a user. In some embodiments, as illustrated inFIG.1, a second intercom120may include an input/output (I/O)121, a processing device122, and a communication port123. The I/O121may enable user interaction with the second intercom120. For example, the I/O121may receive a request (e.g., a request for connecting the first intercom110) from a user of the second intercom120. In some embodiments, the I/O121may include an input component (e.g., a keyboard, a touch screen, and/or a microphone) and/or an output component (e.g., a display). Exemplary input components and/or output components may be found elsewhere in this disclosure. The processing device122may process information and/or data relating to the second intercom120to perform one or more functions of the second intercom120described in the present disclosure. For example, the processing device122may receive a user request inputted via the I/O121and send the user request to the first intercom110. As another example, the processing device122may decode information and/or a signal received from the first intercom110. In some embodiments, the processing device122may be implemented on a same or similar type of device as the processing device113as described above. The communication port123may facilitate data communications between the second intercom120and one or more other components of the intercom system100. For example, the communication port123may establish a connection (e.g., a wired connection and/or a wireless connection) between the second intercom120and the first intercom110. The communication port123may be implemented on a same or similar type of device as the communication port114as described above. In some embodiments, similar to the first intercom110, the second intercom120may be configured as a terminal device, such as the terminal device800. The second intercom120may be fixed at a certain location in the residential building or be a mobile device. In some embodiments, each second image acquisition device130may have a second FOV (e.g., a FOV covering the surroundings of a user of the first intercom110) different from the first FOV. A second image acquisition device130may be and/or include any suitable device that is capable of acquiring image data as aforementioned.
For example, as illustrated inFIG.1, the second image acquisition device(s)130may include a gun camera130-1, a dome camera130-2, an integrated camera130-3, a binocular camera130-4, a monocular camera, etc. In some embodiments, a second image acquisition device130may have a fixed FOV or a variable FOV. It should be noted that the above description regarding the intercom system100is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more components of the intercom system100may be integrated into one component, or one component of the intercom system100may be divided into multiple components. For example, the communication port123may be integrated into the processing device122. Additionally or alternatively, one or more components of the intercom system100may be omitted or replaced by other component(s) that can realize the same or similar functions. For example, the first image acquisition device111of the first intercom110may be omitted. In some embodiments, the intercom system100may include one or more additional components. For example, the second intercom120may include a third image acquisition device, e.g., facing a user of the second intercom120. The image data captured by the third image acquisition device may be transmitted from the second intercom120to the first intercom110for display. As another example, the intercom system100may include a storage device, such as an independent storage device or a storage device integrated into one or more components of the intercom system100. Merely by way of example, the first intercom110may include a storage device configured to store data and/or instructions, such as authentication information of a second image acquisition device130, data and/or instructions for the processing device113to execute. In some embodiments, the storage device may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage devices may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage devices may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage device may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. FIG.2is a schematic diagram illustrating an exemplary processing device113of a first intercom according to some embodiments of the present disclosure.
As shown inFIG.2, the processing device113may include an acquisition module210, a signal generation module220, a transmission module230, and a connection module240. The acquisition module210may be configured to obtain and/or receive information, requests, and/or instructions from one or more components of the intercom system100. For example, the acquisition module210may receive a request from a second intercom120. The request may be a request for target image data captured by one or more target image acquisition devices or a connection request sent by the second intercom120. As another example, the acquisition module210may obtain the target image data from the target image acquisition device(s) in response to the request for the target image data. More descriptions regarding the request for the target image data and the obtaining of the target image data may be found elsewhere in the present disclosure. See, e.g., operations420and430inFIG.4and relevant descriptions thereof. The signal generation module220may be configured to generate a signal encoding the target image data. In some embodiments, the signal encoding the target image data may be a single-channel signal. In some embodiments, the signal generation module220may include one or more units as shown inFIG.3. More descriptions of the generation of the signal encoding the target image data may be found elsewhere in the present disclosure (e.g., operation440and the descriptions thereof). The transmission module230may be configured to transmit information, instructions, and/or requests to one or more other components of the intercom system100. For example, the transmission module230may send a connection request to a second intercom120. As another example, the transmission module230may send an approval regarding a connection request to the second intercom120. As still another example, the transmission module230may send a signal encoding target image data to the second intercom120in response to a request for the target image data received from the second intercom120. The connection module240may be configured to establish a connection and/or an intercom between the first intercom and one or more other components of the intercom system100, such as a second intercom120and/or a second image acquisition device130. More descriptions regarding the connection establishment and/or the intercom establishment may be found elsewhere in the present disclosure. See, e.g., operations410and430inFIG.4and relevant descriptions thereof. FIG.3is a schematic diagram illustrating an exemplary signal generation module220according to some embodiments of the present disclosure. As shown inFIG.3, the signal generation module220may include a decoding unit310and a recoding unit320. The decoding unit310may be configured to decode target image data obtained from one or more target image acquisition devices. In some embodiments, the decoding may be performed based on any image data decoding techniques. The recoding unit320may be configured to recode the decoded target image data to generate a signal encoding the target image data. Optionally, the recoding of the decoded target image data may include compressing the decoded target image data. In some embodiments, the recoding may be performed based on any image data compression and/or recoding techniques. 
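As a non-authoritative sketch of the decode-and-recode flow attributed to the decoding unit310and the recoding unit320, the Python below treats each per-device payload as a JSON list of frame identifiers, decodes the payloads, and recodes them into one compressed single-channel payload. Real implementations would rely on actual video codecs; the byte-level format and the zlib compression used here are assumptions made only to keep the example self-contained and runnable.

```python
import json
import zlib
from typing import Dict, List


def decode_unit(encoded_streams: Dict[str, bytes]) -> Dict[str, List[str]]:
    """Decode the target image data obtained from each target device.

    Each stream is assumed, for illustration only, to be a JSON list of frame
    identifiers; a real decoding unit would invoke a video decoder instead.
    """
    return {device_id: json.loads(payload.decode("utf-8"))
            for device_id, payload in encoded_streams.items()}


def recode_unit(decoded: Dict[str, List[str]]) -> bytes:
    """Recode the decoded frames of all target devices into one payload."""
    merged = json.dumps(decoded, separators=(",", ":")).encode("utf-8")
    return zlib.compress(merged)  # compression as part of the recoding


def decode_single_channel(signal: bytes) -> Dict[str, List[str]]:
    """What a receiving intercom might do: decompress and parse the signal."""
    return json.loads(zlib.decompress(signal).decode("utf-8"))


if __name__ == "__main__":
    streams = {
        "001": json.dumps(["frame-a1", "frame-a2"]).encode("utf-8"),
        "010": json.dumps(["frame-b1"]).encode("utf-8"),
    }
    single_channel_signal = recode_unit(decode_unit(streams))
    print(decode_single_channel(single_channel_signal))
```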
It should be noted that the above descriptions of the processing device113and the signal generation module220are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various modifications and changes in the forms and details of the application of the above method and system may occur without departing from the principles of the present disclosure. However, those variations and modifications also fall within the scope of the present disclosure. In some embodiments, two or more of the modules (or units) may be combined into a single module (or unit), and any one of the modules may be divided into two or more units. For example, the transmission module230and the connection module240may be integrated into a single module. As another example, the connection module240may include a first unit to establish a connection between the first intercom and a second intercom120, and a second unit to establish a connection between the first intercom and a second image acquisition device130. In some embodiments, one or more of the modules mentioned above may be omitted and/or one or more additional modules may be added in the processing device113. For example, the processing device113may further include a storage module. FIG.4is a flowchart of an exemplary process for data transmission in an intercom system according to some embodiments of the present disclosure. In some embodiments, one or more operations in the process400may be implemented in the intercom system100illustrated inFIG.1. For example, one or more operations in the process400may be stored in a storage device (e.g., a storage device of a first intercom110) as a form of instructions, and invoked and/or executed by a processing device113(e.g., one or more modules illustrated inFIGS.2and3) of the first intercom110. The first intercom110may be operably connected to a second intercom120and one or more second image acquisition devices130. The first intercom110may include a first image acquisition device111that has a first FOV. Each second image acquisition device130may have a second FOV different from the first FOV. Merely by way of example, the first intercom110may be an outdoor intercom mounted at an entrance of a residential building, and the second intercom120may be an indoor intercom mounted inside a house of a particular resident of the residential building. In410, the first intercom110(e.g., the processing device113, the connection module240) may establish an intercom with the second intercom120. In some embodiments, before the intercom between the first intercom110and the second intercom120starts, a connection between the first intercom110and the second intercom120may need to be established. The connection may be initiated by a user of the first intercom110. For example, the user of the first intercom110may input or select the second intercom120which he/she wants to call via the I/O112(e.g., a keyboard or a touch screen) of the first intercom110. In response to the user input or selection, the first intercom110may send a connection request (also referred to as an intercom request herein) to the second intercom120. After the first intercom110receives an approval (e.g., a response) regarding the connection from the second intercom120, the connection between the first intercom110and the second intercom120may be established.
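A minimal sketch, assuming in-memory message passing in place of a real transport, of the connection-request and approval exchange described for operation410is given below; the class and method names are hypothetical and do not correspond to any specific implementation disclosed herein.

```python
from enum import Enum, auto


class ConnectionState(Enum):
    IDLE = auto()
    REQUESTED = auto()
    ESTABLISHED = auto()


class Intercom:
    """Tiny state machine for the connection request / approval exchange."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.state = ConnectionState.IDLE

    def send_connection_request(self, peer: "Intercom") -> None:
        self.state = ConnectionState.REQUESTED
        peer.receive_connection_request(self)

    def receive_connection_request(self, requester: "Intercom") -> None:
        # Approved automatically here; a real intercom might wait for its
        # user to approve the intercom before answering.
        self.state = ConnectionState.ESTABLISHED
        requester.receive_approval(self)

    def receive_approval(self, peer: "Intercom") -> None:
        self.state = ConnectionState.ESTABLISHED


if __name__ == "__main__":
    first = Intercom("first intercom")
    second = Intercom("second intercom")
    first.send_connection_request(second)
    print(first.state.name, second.state.name)  # ESTABLISHED ESTABLISHED
```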
In some embodiments, the intercom between the first intercom110and the second intercom120may be automatically established after the connection is established. Alternatively, the intercom between the first intercom110and the second intercom120may be established after the user of the second intercom120approves the intercom. Alternatively, the connection may be initiated by the second intercom120. For example, the user of the second intercom120may input or select the first intercom110which he/she wants to call via the I/O121(e.g., a keyboard or a touch screen) of the second intercom120. In response to the user input or selection, the second intercom120may send a connection request to the first intercom110. After the second intercom120receives an approval (e.g., a response) regarding the intercom from the first intercom110, the connection between the first intercom110and the second intercom120may be established. In some embodiments, the intercom between the first intercom110and the second intercom120may be automatically established after the connection is established. Alternatively, the intercom between the first intercom110and the second intercom120may be established after the user of the first intercom110approves the intercom. In some embodiments, the intercom may include an audio intercom, a video intercom, or the like, or any combination thereof. For example, each of the first intercom110and the second intercom120may include a microphone, which enables an audio intercom between the user of the first intercom110and the user of the second intercom120. Optionally, after the intercom is established, the first intercom110may transmit a signal encoding image data captured by one or more initial image acquisition devices of the first image acquisition device111and the second image acquisition device(s)130to the second intercom120for display. The initial image acquisition device(s) may be determined according to a default setting of the intercom system100(e.g., the first intercom110or the second intercom120) or be set by a user of the intercom system100(e.g., the first intercom110or second intercom120). For example, according to a default setting of the first intercom110, a signal encoding image data captured by the first image acquisition device111and optionally a certain second image acquisition device130may be transmitted to the second intercom120immediately after the intercom is established. In some embodiments, the second intercom120may include a third image acquisition device. If the third image acquisition device is turned on (e.g., by the user of the second intercom120or according to a default setting of the second intercom120), image data captured by the third image acquisition device may be transmitted to the first intercom110for display and a two-way video intercom may be established between the first intercom110and the second intercom120. If the third image acquisition device is turned off (e.g., by the user of the second intercom120or according to a default setting of the second intercom120), a one-way video intercom or an audio intercom may be established between the first intercom110and the second intercom120. In420, during the intercom with the second intercom120, the first intercom110(e.g., the processing device113, the acquisition module210) may receive a request for target image data captured by one or more target image acquisition devices. 
As used herein, the target image acquisition device(s) may include any image acquisition device selected from the first image acquisition device111and the second image acquisition device(s)130by the user of the second intercom120who inputs the request for the target image data. In some embodiments, the target image acquisition device(s) may include at least one target second image acquisition device of the second image acquisition device(s)130. The target image data of a target image acquisition device may refer to image data (e.g., one or more images and/or a video) captured by the target image acquisition device. For example, during an audio intercom, the user of the second intercom120may send a request for target image data captured by the first image acquisition device111and/or one or more of the second image acquisition device(s)130to the first intercom110via the second intercom120. As another example, as described in connection with410, the second intercom120may display image data captured by one or more initial image acquisition devices after the intercom is established. The user of the second intercom120may input the request to view the target image data captured by the target image acquisition device(s), wherein at least one of the target image acquisition device(s) may be different from each initial image acquisition device. Merely by way of example, the second intercom120may display image data captured by the first image acquisition device111after the intercom is established. In order to view image data captured from another perspective, the user of the second intercom120may send a request for target image data captured by a target second image acquisition device130to the first intercom110. In some embodiments, the request received by the first intercom110may include an identification of each target image acquisition device. An identification of a target image acquisition device may be used to uniquely identify the target image acquisition device from the first image acquisition device111and the second image acquisition device(s)130. For example, an identification of a target image acquisition device may include a channel number (e.g., 001, 010, or 011), a MAC address, a name, or the like, or any combination thereof, of the target image acquisition device. The identification of a target image acquisition device may be a default setting of the intercom system100or be set by a user of the intercom system100(e.g., the first intercom110or the second intercom120). In some embodiments, the user (e.g., a particular resident) of the second intercom120may input the request via the I/O121of the second intercom120. For example, the user of the second intercom120may press one or more keys (e.g., physical buttons, virtual buttons displayed on a touch screen) of the second intercom120to input the request. Each key may correspond to a target image acquisition device, for example, keys1,2, and3may correspond to target image acquisition devices with identifications 001, 010, and 011, respectively. The second intercom120may store a first corresponding relationship between keys and identifications of the image acquisition devices in the intercom system100. Based on the first corresponding relationship, the second intercom120may determine the identification of each target image acquisition device. Further, the second intercom120may transmit a signal encoding the request (which includes the determined identification of each target image acquisition device) to the first intercom110.
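The first corresponding relationship between keys and device identifications lends itself to a simple lookup table. The Python sketch below, using hypothetical names and the example channel numbers 001, 010, and 011, shows one way a second intercom might translate pressed keys into identifications and assemble the request transmitted to the first intercom.

```python
from typing import Dict, List

# First corresponding relationship: key pressed on the second intercom ->
# identification (here, a channel number) of an image acquisition device.
KEY_TO_IDENTIFICATION: Dict[str, str] = {"1": "001", "2": "010", "3": "011"}


def build_request(pressed_keys: List[str]) -> Dict[str, List[str]]:
    """Assemble a request carrying the identification of each target device."""
    identifications = [KEY_TO_IDENTIFICATION[key]
                       for key in pressed_keys if key in KEY_TO_IDENTIFICATION]
    return {"target_identifications": identifications}


if __name__ == "__main__":
    # The user presses keys 1 and 3 to request devices 001 and 011.
    print(build_request(["1", "3"]))  # {'target_identifications': ['001', '011']}
```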
Alternatively, the first intercom110may store a second corresponding relationship between signals triggered by the keys of the second intercom120and identifications of the image acquisition devices in the intercom system100. The first intercom110may receive a signal encoding the request from the second intercom120. Based on the second corresponding relationship and the signal, the first intercom110may determine the identification of each target image acquisition device. Exemplary signals encoding the request may include a Dual Tone Multi-Frequency (DTMF) signal. In430, in response to the request, the first intercom110(e.g., the processing device113, the acquisition module210) may obtain the target image data from the target image acquisition device(s). In some embodiments, the first intercom110may decode the request (e.g., a signal encoding the request) to obtain the identification of each target image acquisition device. The first intercom110may further identify each target image acquisition device based on its corresponding identification. For example, if the decoded request shows that the target image acquisition device(s) include the first image acquisition device111, the first intercom110may obtain image data from the first image acquisition device111as the target image data (or a portion thereof). As another example, the target image acquisition device(s) may include a certain target second image acquisition device130. The first intercom110may store a corresponding relationship between an IP address of each second image acquisition device in the intercom system100and its corresponding identification. The first intercom110may determine the IP address of the certain target second image acquisition device130based on its identification decoded from the request. The first intercom110may further retrieve image data of the certain target second image acquisition device130as the target image data (or a portion thereof) from the determined IP address. In some embodiments, the operable connection between the first intercom110and the certain target second image acquisition device130may be established in advance (e.g., when the intercom system100is mounted). The first intercom110may directly obtain image data from the IP address of the certain target second image acquisition device130. Alternatively, before obtaining image data from the certain target second image acquisition device130, the first intercom110may need to obtain authentication information (e.g., a login name and/or a password) of the certain target second image acquisition device130from, e.g., a storage of the first intercom110. The first intercom110may establish an operable connection to the certain target second image acquisition device130according to its authentication information. In some embodiments, the image data obtained from a target image acquisition device may include real-time image data captured by the target image acquisition device and/or historical image data captured by the target image acquisition device in a certain period (e.g., 0.5 seconds, 1 second, or 2 seconds) before the request is received by the first intercom110. Optionally, the target image data may be encoded in any signal (e.g., a digital signal, an analog signal, or a mixed signal of digital and analog signals) that can encode image data. In some embodiments, the second image acquisition device(s)130may include a plurality of second image acquisition devices130.
In response to the request, the first intercom110may obtain a signal encoding image data from each of the second image acquisition devices130via, e.g., a standard ONVIF protocol. The signal received from each second image acquisition device130may include an identification and/or authentication information of the second image acquisition device130. The first intercom110may identify the signal corresponding to the at least one target second image acquisition device among the signals received from the second image acquisition devices130, so as to determine the target image data. In some embodiments, in operation, the second image acquisition devices130may transmit captured image data to the first intercom110in real-time, periodically, or intermittently. After receiving the request, the first intercom110may determine the target image data captured by the at least one target second image acquisition device from the image data received from the second image acquisition devices130. In440, the first intercom110(e.g., the processing device113, the signal generation module220) may generate a signal encoding the target image data. In some embodiments, the signal encoding the target image data may be a single-channel signal. In some embodiments, the signal may encode target image data and also audio data. For example, the signal may include both the target image data and an audio recorded by a microphone of the first intercom110. In some embodiments, the target image acquisition device(s) may include a plurality of target image acquisition devices. The first intercom110may decode the target image data obtained from the target image acquisition devices, and generate the signal encoding the target image data by recoding the decoded target image data. Optionally, the recoding of the decoded target image data may include compressing the decoded target image data. In some embodiments, the signal generation may be performed based on any image data decoding, compression, and/or recoding techniques. In450, the first intercom110(e.g., the processing device113, the transmission module230) may send the signal encoding the target image data to the second intercom120for display. In some embodiments, the second intercom120may decode the signal to obtain the target image data and display the target image data. Optionally, the target image acquisition device(s) may include a plurality of target image acquisition devices. The second intercom120may jointly display the target image data captured by the target image acquisition devices. For example, the second intercom120may split its screen into a plurality of regions (e.g., 4 regions, 9 regions, or 16 regions). The count of the regions may be equal to the count of the target image acquisition devices, wherein each region may be used to display the target image data captured by one of the target image acquisition devices. Alternatively, the count of the regions may be greater than the count of the target image acquisition devices, wherein a portion of the regions may be used to display the target image data. Merely by way of example, the count of the target image acquisition devices may be equal to 3. The screen of the second intercom120may be split into 4 regions, 3 of which may be used to jointly display videos captured by the three target image acquisition devices. In some embodiments, the user of the second intercom120may input a response regarding the target image data displayed by the second intercom120.
For example, the user of the second intercom120may select a video captured by a particular target image acquisition device to enlarge the video and/or delete a video captured by a particular target image acquisition device. Alternatively, the user of the second intercom120may input a new request for target image data captured by one or more other target image acquisition devices. In some embodiments, during the intercom between the first intercom110and the second intercom120, one or more of operations420to450may be performed continuously so that real-time image data captured by the target image acquisition device(s) may be transmitted to the second intercom120for display. Alternatively, one or more of operations420to450may be performed periodically and/or intermittently so that image data captured by the target image acquisition device(s) may be transmitted to the second intercom120periodically and/or intermittently for display. It should be noted that the above description regarding the process400is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more additional operations may be added in the process400and/or one or more operations of the process400described above may be omitted. For example,410may be omitted. As another example, an additional operation may be added between operations420and430for establishing an operable connection between the first intercom110and a target second image acquisition device. FIG.5is a schematic diagram illustrating an exemplary processing device122of a second intercom according to some embodiments of the present disclosure. As shown inFIG.5, the processing device122may include an acquisition module510, a transmission module520, a display module530, and a connection module540. The acquisition module510may be configured to obtain and/or receive information, requests, and/or instructions from one or more other components of the intercom system100. For example, the acquisition module510may receive a connection request from a first intercom110. As another example, during an intercom between the first intercom110and the second intercom, the acquisition module510may receive a signal encoding target image data captured by one or more target image acquisition devices from the first intercom110. More descriptions regarding the signal encoding the target image data may be found elsewhere in the present disclosure. See, e.g., operation730inFIG.7and relevant descriptions thereof. The transmission module520may be configured to transmit information, instructions, and/or requests to one or more other components of the intercom system100. For example, the transmission module520may send a connection request to a first intercom110. As another example, the transmission module520may send an approval regarding a connection request to the first intercom110. As still another example, the transmission module520may send a request for target image data captured by one or more target image acquisition devices to the first intercom110. The display module530may be configured to display data via, for example, a display of the second intercom.
For example, the display module530may direct the display of the second intercom to jointly display target image data captured by a plurality of target image acquisition devices of the intercom system100. More descriptions regarding the display of the target image data may be found elsewhere in the present disclosure (e.g., operations450and740and the descriptions thereof). In some embodiments, the display module530may include a decoding unit610and a display unit620as shown inFIG.6. The decoding unit610may be configured to decode a signal encoding target image data that is captured by one or more target image acquisition devices to obtain the target image data. The display unit620may be configured to display the target image data via, for example, a screen of the second intercom. It should be noted that the above descriptions of the processing device122and the display module530are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various modifications and changes in the forms and details of the application of the above method and system may occur without departing from the principles of the present disclosure. However, those variations and modifications also fall within the scope of the present disclosure. In some embodiments, two or more of the modules (units) may be combined into a single module (unit), and any one of the modules may be divided into two or more units. For example, the transmission module520and the connection module540may be integrated into a single module. As another example, the transmission module520may include a first unit to send a connection request to a first intercom110and a second unit to send a request for target image data to the first intercom110. In some embodiments, one or more of the modules mentioned above may be omitted and/or one or more additional modules may be added in the processing device122. For example, the processing device122may further include a storage module. FIG.7is a flowchart of an exemplary process700for data transmission in an intercom system according to some embodiments of the present disclosure. In some embodiments, one or more operations in the process700may be implemented in the intercom system100illustrated inFIG.1. For example, one or more operations in the process700may be stored in a storage device (e.g., a storage device of a second intercom120) as a form of instructions, and invoked and/or executed by a processing device122(e.g., one or more modules illustrated inFIGS.5and6) of the second intercom120. The second intercom120may be operably connected to the first intercom110as described inFIG.5. In710, the second intercom120(e.g., the processing device122, the connection module540) may establish an intercom with the first intercom110. More descriptions regarding the establishment of the intercom between the first intercom110and the second intercom120may be found elsewhere in the present disclosure. See, e.g., operation410inFIG.4and relevant descriptions thereof. In720, the second intercom120(e.g., the processing device122, the transmission module520) may send a request for target image data captured by at least two target image acquisition devices to the first intercom110. In some embodiments, the at least two target image acquisition devices may include at least one target second image acquisition device of the second image acquisition device(s)130.
For example, the at least two target image acquisition devices may include the first image acquisition device111and one or more second image acquisition devices130. As another example, the at least two target image acquisition devices may include at least two second image acquisition devices130. More descriptions regarding the request, the target image data, and the target image acquisition device(s) may be found elsewhere in the present disclosure. See, e.g., operation420inFIG.4and relevant descriptions thereof. In730, the second intercom120(e.g., the processing device122, the acquisition module510) may receive a single-channel signal encoding the target image data from the first intercom110. More descriptions regarding the single-channel signal may be found elsewhere in the present disclosure. See, e.g., operations440and450inFIG.4and relevant descriptions thereof. In740, the second intercom120(e.g., the processing device122, the display module530) may jointly display the target image data of the at least two target image acquisition devices. In some embodiments, the second intercom120may decode the single-channel signal to obtain the target image data of the at least two target image acquisition devices. The second intercom120may split its screen into a plurality of regions to jointly display the target image data. More descriptions regarding the display of the target image data may be found elsewhere in the present disclosure. See, e.g., operation450inFIG.4and relevant descriptions thereof. It should be noted that the above description regarding the process700is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the second intercom120may send a request for target image data captured by one target image acquisition device to the first intercom110. For example, the second intercom120may send a request for target image data captured by a certain second image acquisition device130to the first intercom110. In some embodiments, one or more additional operations may be added in the process700and/or one or more operations of the process700described above may be omitted. For example,710may be omitted. As another example, an additional operation may be added between operations720and730for establishing an operable connection between the first intercom110and a target second image acquisition device. FIG.8is a schematic diagram illustrating exemplary hardware and/or software components of a terminal device800according to some embodiments of the present disclosure. In some embodiments, one or more components (e.g., the first intercom110, and/or the second intercom120) of the intercom system100may be implemented on the terminal device800. As illustrated inFIG.8, the terminal device800may include a communication port810, a display820, a graphics processing unit (GPU)830, a central processing unit (CPU)840, an I/O850, a memory860, and a storage890. In some embodiments, any other suitable component, including but not limited to a system bus, a controller or a camera (not shown), may also be included in the terminal device800. In some embodiments, a mobile operating system870(e.g., iOS™, Android™, Windows Phone™, etc.)
and one or more applications880may be loaded into the memory860from the storage890in order to be executed by the CPU840. The applications880may include a browser or any other suitable mobile apps for receiving and rendering information relating to the intercom system100. User interactions with the information stream may be achieved via the I/O850and provided to the intercom system100via a network. To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by the present disclosure, and are within the spirit and scope of the exemplary embodiments of the present disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment,” “one embodiment,” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or context including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in an implementation combining software and hardware, all of which may generally be referred to herein as a “block,” “module,” “engine,” “unit,” “component,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon. A computer-readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a software as a service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution—e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.
64,609
11943570
DETAILED DESCRIPTION In the drawings and descriptions set forth, identical reference numerals indicate those components that are common to different embodiments or configurations. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the presently disclosed subject matter. It is to be understood that the invention is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter. It will also be understood that the system according to the invention may be, at least partly, implemented on a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a non-transitory computer-readable memory tangibly embodying a program of instructions executable by the computer for executing the method of the invention. Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope, defined in and by the appended claims. Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “receiving”, “processing”, “classifying”, “determining” or the like, refer to the action(s) and/or process(es) of a computer that manipulate and/or transform data into other data, said data represented as physical, e.g. such as electronic or mechanical quantities, and/or said data representing the physical objects. The term “computer” should be expansively construed to cover any kind of hardware-based electronic device with data processing capabilities including a personal computer, a server, a computing system, a communication device, a processor or processing unit (e.g. digital signal processor (DSP), a microcontroller, a microprocessor, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), and any other electronic computing device, including, by way of non-limiting example, systems102,705,720,820,830,843and1210and processing circuitries730,824,834,840, and1220disclosed in the present application. The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes, or by a general-purpose computer specially configured for the desired purpose by a computer program stored in a non-transitory computer-readable storage medium. 
Embodiments of the presently disclosed subject matter are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the presently disclosed subject matter as described herein. The terms “non-transitory memory” and “non-transitory storage medium” used herein should be expansively construed to cover any volatile or non-volatile computer memory suitable to the presently disclosed subject matter. As used herein, the phrase “for example,” “such as”, “for instance” and variants thereof describe non-limiting embodiments of the presently disclosed subject matter. Reference in the specification to “one case”, “some cases”, “other cases”, “one example”, “some examples”, “other examples” or variants thereof means that a particular described method, procedure, component, structure, feature or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter, but not necessarily in all embodiments. The appearance of the same term does not necessarily refer to the same embodiment(s) or example(s). Usage of conditional language, such as “may”, “might”, or variants thereof should be construed as conveying that one or more examples of the subject matter may include, while one or more other examples of the subject matter may not necessarily include, certain methods, procedures, components and features. Thus such conditional language is not generally intended to imply that a particular described method, procedure, component or circuit is necessarily included in all examples of the subject matter. Moreover, the usage of non-conditional language does not necessarily imply that a particular described method, procedure, component or circuit is necessarily included in all examples of the subject matter. It is appreciated that certain embodiments, methods, procedures, components or features of the presently disclosed subject matter, which are, for clarity, described in the context of separate embodiments or examples, may also be provided in combination in a single embodiment or examples. Conversely, various embodiments, methods, procedures, components or features of the presently disclosed subject matter, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination. It should also be noted that each of the figures herein, and the text discussion of each figure, describe one aspect of the presently disclosed subject matter in an informative manner only, by way of non-limiting example, for clarity of explanation only. It will be understood that the teachings of the presently disclosed subject matter are not bound by what is described with reference to any of the figures or described in other documents referenced in this application. Bearing this in mind, attention is drawn toFIG.1, schematically illustrating an example generalized view of a vehicle inspection system, in accordance with some embodiments of the presently disclosed subject matter. A system such as disclosed herein may, in some examples, detect candidate damage regions, which are indicative of detected possible damage to a vehicle, and may verify the detected damage, by processing information indicative of multiple images of these regions and determining that these candidate regions meet defined detection repetition criteria.
The example system100illustrated inFIG.1is a computer-based vehicle inspection system for automatically inspecting a vehicle for damage or other anomalies. In some examples, system100is also referred to herein as image acquisition system100. System100comprises a computerized system102for object inspection, and target detection and verification. One non-limiting example of system102is for vehicle damage inspection, detection and verification. Examples of the computerized system are disclosed further herein with reference toFIGS.7,8and12. Example system100includes a set of image acquisition devices140,144,142,146,148(also referred to herein as imaging devices). In some examples, these devices are also referred to herein as capturing imaging devices, as distinguished from monitoring imaging devices136(disclosed below). System100can be configured to obtain, from the set of imaging devices such as140, a plurality of sets of images capturing/acquiring a plurality of segments or portions of a vehicle, e.g. images of the vehicle surface. The set of imaging devices such as140can be operatively connected to system102and the captured images can be transmitted to system102via wired or wireless communication104. The imaging acquisition devices used herein can refer to any kind of imaging devices or general-purpose devices equipped with image acquisition functionalities that can be used to capture vehicle images at a certain resolution and frequency, such as a digital camera with image and/or video recording functionalities. The set of imaging devices140,144,142,146,148can comprise multiple camera devices located (mounted or otherwise situated) on at least one side of a vehicle105(e.g. on at least one side of an inspection passage/lane through which the vehicle105passes), and they may be configured to capture a plurality of segments or portions of a vehicle. In some examples, some or all of the cameras are configured to capture still images. In other examples, some or all of the cameras are configured to capture image frames of a video recording. In other examples, some or all of the cameras are configured to capture both still and video images. In some embodiments, there are camera devices located on both sides of the vehicle, such that images of both sides of the vehicle can be simultaneously acquired and processed. Note also that the presently disclosed subject matter discloses the non-limiting example of two-dimensional (2-D) image frames, such as those shown inFIGS.3,4and5. However, the disclosure applies as well to one-dimensional image captures, such as those captured by a line scanner140,144. In some embodiments, image acquisition system100can also comprise a supporting structure. The supporting structure can comprise one or more poles125,127positioned on at least one side of the inspection passage.FIG.1illustrates an exemplary supporting structure that comprises two poles, both positioned on one side of the inspection passage. Each pole has a set of imaging devices attachable thereto. The imaging devices can be attached at an appropriate height and/or angle in relation to the vehicle so as to capture image frames covering a Field of View (FOV) corresponding to a predetermined region. In some examples, the imaging devices are mounted in a fixed manner, pointing in a fixed direction with a fixed FOV. In other examples, the imaging devices can move, e.g. along a rail, and/or can rotate so as to point in a different direction, in a pre-defined manner. 
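By way of non-limiting illustration only, the following short Python sketch shows one possible way of describing such a set of capturing imaging devices in software. It is a hypothetical configuration format; the field names, identifiers and numeric values are assumptions made for illustration and are not a data format defined by this disclosure. A movable or rotatable device, as mentioned above, could be represented by making the pose fields time-dependent rather than fixed.

# Hypothetical description of a multi-camera inspection rig (illustration only).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraConfig:
    camera_id: str                 # label for a capturing imaging device, e.g. one of devices 140-148
    pole_id: str                   # supporting pole the device is mounted on, e.g. pole 125 or 127
    height_m: float                # mounting height above the inspection passage floor
    yaw_deg: float                 # fixed pointing direction relative to the passage
    fov_deg: Tuple[float, float]   # horizontal and vertical field of view
    mode: str                      # "still", "video" or "both"
    frame_rate_hz: float           # capture rate during an inspection session

EXAMPLE_RIG = [
    CameraConfig("cam_A", "pole_1", 0.9, 90.0, (60.0, 45.0), "video", 30.0),
    CameraConfig("cam_B", "pole_1", 1.8, 80.0, (60.0, 45.0), "video", 30.0),
    CameraConfig("cam_C", "pole_2", 0.9, 90.0, (60.0, 45.0), "still", 10.0),
]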
In some cases, the vehicle105can be a moving110vehicle which passes through an inspection passage equipped with such imaging devices. In some other cases, object/vehicle105is not imaged while moving. Rather, it rests in one location, and images of it are captured. It may be, in some examples, that one or more devices of the set of imaging devices are mounted on a movable platform so as to move180relative to the vehicle, and are configured to capture image frames of the vehicle from multiple relative positions and angles. In some other examples, the object105rests in one location, and a plurality of imaging devices140,142,144,146,148capture images of the object from multiple positions and multiple angles relative to the object. Such an image capture by multiple devices is done simultaneously, in some examples. Since the image frames are captured by imaging devices located at different positions relative to the object, this can, in some cases, cause a change in the light reflection behavior from image to image (as disclosed with reference to, for example, reflection227ofFIG.2Aand reflection325appearance on image frame334ofFIG.3), without the vehicle being required to move. In some examples, there are sufficient imaging devices, and/or the image device(s) capture image frames from a sufficient number of different positions relative to the object, so as to capture images of the entire object. In still other examples, combinations of the vehicle and/or imaging devices move. As will be disclosed further herein with reference toFIGS.2A and2B, in some examples certain captured image frames will include appearances, on portions of the object, of possible targets of interest such as scratches or other possible damage, or labels. Such appearances may occur in certain image frames that include that portion of the object, and not in others. Regions on an image frame, where there has been detection of a possible target of interest, are referred to herein also as candidate target detection regions. There is, in some examples, a need to verify, in an automated fashion, whether such detected regions of possible damage to the vehicle are in fact regions where there is damage (or other targets of interest), or are merely false detections associated with, for example, reflections. In some examples, imaging such a detected region of possible damage from multiple relative positions, of the capturing imaging device(s) and of the relevant imaged portion of the object, may enable such verification. Example methods are disclosed herein, with reference toFIGS.2B,3,4,5,6,10A and10B, to perform such verification of target detection. These methods utilize for example a target detection verification system, to perform steps such as:(a) receiving information indicative of two or more image frames of the vehicle(s) or other object(s), where this information comprises one or more candidate target detection regions (e.g. candidate damage regions), the candidate target detection region(s) indicative of possible detected targets of interest associated with the at least one object(s),(b) processing the information indicative of the two or more image frames, to determine whether or not each candidate target detection region meets one or more detection repetition criteria;(c) in response to the one or more detection repetition criteria being met, classifying the relevant candidate target detection region as a verified target detection region (e.g. a verified damage region). 
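By way of non-limiting illustration only, a minimal Python sketch of steps (a)-(c) is given below. All names are hypothetical, the test for "same location on the data representation" is reduced to a simple tolerance check, and the matching of appearances between frames is abstracted away; it is a sketch of the flow, not a definitive implementation of the disclosed methods. The repetition threshold corresponds to the application-specific operating point discussed further herein.

# Minimal sketch of the receive / process / classify flow of steps (a)-(c).
# Names and thresholds are illustrative assumptions only.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CandidateRegion:
    frame_id: str                               # image frame in which the region was detected
    bbox: Tuple[int, int, int, int]             # (x, y, w, h) of the region within that frame
    object_location: Tuple[str, float, float]   # same spot on the data representation
                                                # of the object, e.g. (panel, u, v)

def same_object_location(a, b, tol: float = 0.02) -> bool:
    # Two detections are treated as the same physical spot when they map to the
    # same panel of the data representation, within a small coordinate tolerance.
    return a[0] == b[0] and abs(a[1] - b[1]) <= tol and abs(a[2] - b[2]) <= tol

def verify_candidates(candidates: List[CandidateRegion],
                      required_repeats: int = 2) -> List[CandidateRegion]:
    # Step (b): for each candidate, count how many other frames contain a
    # candidate at the same location on the data representation of the object.
    verified = []
    for candidate in candidates:
        repeats = sum(
            1 for other in candidates
            if other.frame_id != candidate.frame_id
            and same_object_location(candidate.object_location, other.object_location)
        )
        # Step (c): classify as verified when the repetition criterion is met.
        if repeats + 1 >= required_repeats:     # count the candidate's own frame
            verified.append(candidate)
    return verified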
In some cases, such a process can facilitate output of an indication of the verified target detection region(s). In some examples, the one or more detection repetition criteria are indicative of repetition of the candidate target detection region(s) in locations of the two or more image frames that are associated with the same location on a data representation of the object (e.g. “15 cm above the front left wheel”). Returning toFIG.1, in some examples, system100comprises one or more monitoring imaging devices, e.g. monitoring camera136. In some examples, the same monitoring imaging device, or set of devices, is used for both triggering and brightness/intensity recognition functions, as disclosed further herein with regard toFIGS.10A and11. In other examples, each of these functions has a different monitoring camera or set of cameras. In some examples, monitoring camera136is one or more of the capturing imaging devices140,142,146. In other examples, it is a device or set of devices that are not the capturing imaging devices, e.g. as shown inFIG.1. Monitoring camera136used for imaging triggering functions may in some cases monitor the vehicle-crossing entry indicator150(also referred to herein as trigger-on indicator or trigger-on line) and exit indicator157(also referred to herein as trigger-off indicator or trigger-off line). In some examples, these are virtual lines monitored by the system. In other examples, these indicators are actual painted or otherwise marked lines. The camera136may instead, or may additionally, monitor vehicle entry into and exit from an image capture region155. Region155may also be referred to herein as an imaging triggering region. Again, this region may be virtual, or may for example be physically marked on the ground or on the floor or lower surface of the system. Triggering functions of this camera, the indicators and the region are disclosed further herein with reference to blocks1004and1013ofFIG.10A. It is to be appreciated that the present disclosure is not limited by the specific number, type, coverage, and perspective of the imaging devices and/or the images being taken, nor by the specific generation methods of the images by the imaging devices. In some examples, the plurality of sets of images, as acquired by the set of imaging devices, are acquired at a plurality of time points during a relative movement between the vehicle and the set of imaging devices, such that: i) each set of images captures a respective segment or portion of the vehicle that falls within the predetermined region at a respective time point, and ii) the plurality of segments captured in the plurality of sets of images are partially overlapped in such a way that each given point of at least some of the portions of the vehicle is captured at least at two time points in at least two sets of images. The given point captured in the at least two sets of images is as if captured under different illumination conditions pertaining to different relative positions between the given point and the set of imaging devices at the two time points. In some embodiments, there can be provided one or more illumination devices or lights130,132located in close proximity to the imaging devices and which provide illumination covering the FOVs of the imaging devices, so as to enable images to be captured at high resolution and quality. By way of example, the illumination devices130,132can be positioned on the side of the passage, e.g. 
mounted on the poles, to provide peripheral illumination for image acquisition. Optionally, the image acquisition and/or the illumination can be triggered by an external sensing device which can detect the presence/approach of a vehicle (such as, e.g., road loop, IR beam, VMD, etc.). The imaging devices140(and the illumination devices, if any) can be controlled by system102. System102is operatively connected to the set of imaging devices (and the illumination devices, if any) and can be used for controlling the devices (e.g. synchronizing the image acquisition and illumination operation), calibrating the system during a set-up stage, and processing the acquired images of the vehicle so as to detect and verify damage. Also shown inFIG.1are a person162and an object165, e.g. a bag or box, that happen to be in the vicinity of system100. As will be disclosed further herein, these bodies may be the source of reflections that may appear in image frames. Note that the disclosure is for the example of a vehicle, and for verification of detected damage. In some examples, the methods and systems disclosed herein can be used to detect and verify detection of anomalies other than damage, e.g. dirt or manufacturing defects. Damage is disclosed herein as one non-limiting example of an abnormality or anomaly. Abnormalities and anomalies are disclosed herein as one non-limiting example of the more general case of detection of imaged targets, that is detection in an image of a target or feature of the object, or other item of interest to the user that is on the object. As one non-limiting example, the methods and systems disclosed herein may be used to detect the presence of a particular feature which is expected to be on the object. As one example of this, it may be that a particular decal or product label is supposed to appear on a particular location of the object, e.g. a decal with a company logo being present on the body of an automobile above the rear wheel. As another non-limiting example, a signature or logo or the like (e.g. a designer's signature) is supposed to appear on a particular location of an object such as a vase. In such cases, the absence of such features is considered a flaw or an otherwise undesirable situation, and thus it is desirable that the particular feature be detected positively on the relevant location on the object, and that this detection be verified. Thus, damage or other anomalies, labels, signatures and the like are in some cases all examples of a physical entity whose image or likeness appears on a captured image frame. Therefore, candidate damage regions are non-limiting examples of a candidate target detection region, which is indicative of possible detection of a target of interest associated with the object. As will be disclosed further herein, in some cases it is desired to verify that the candidate target detection region is indicative of actual detection of the target of interest, and that the detection on the image of the target is not a false positive. Thus the case of damage to objects, and of candidate damage regions, is disclosed herein as one non-limiting example of the more general case of detection of targets of interest and of candidate target detection regions, for ease of exposition. In some examples, the methods and systems disclosed herein can be used to detect and verify detection of anomalies on objects other than vehicles, e.g. furniture and other manufactured goods. 
Such objects may in some cases undergo inspection using imaging devices such as140,142. Thus vehicles are disclosed herein as one non-limiting example of the more general case of an object, for ease of exposition. Similarly, an automobile is disclosed herein as one non-limiting example of a vehicle. Other non-limiting examples of vehicles are ground vehicles such as, for example, trucks, buses and trains, watercraft, and aircraft. Attention is now drawn toFIG.2A, schematically illustrating an exemplary generalized depiction200of imaging angles, in accordance with some embodiments of the presently disclosed subject matter. Vehicle105is shown in three different example stages202,204,208of movement while in view of the imaging devices. Poles125,127, illumination device130, and imaging devices140,142,146, are shown, as non-limiting examples. Note that in some examples the vehicle105is not constrained to travel a pre-defined path of movement, such as a track, conveyor or rails. In some such examples, the vehicle can travel freely, veering or turning260,265, accelerating and decelerating—driven by e.g. a human driver or autonomously. Thus, the speed of travel, and angle of the vehicle with respect to capturing imaging device140at a particular point in time in an image capture session, may vary for different image capture sessions inspecting a particular vehicle. Therefore, the image captured by a particular camera140, at a particular point in time of an image capture session, may in some cases be of a different portion of the vehicle, and in some cases captured from a different relative imaging angle, compared to a similar image taken in a different inspection session. Thus, various captured images may in some cases be associated with different relative positions of the capturing imaging device(s) associated with the images and a particular imaged portion of the object associated with the images. Note that the turn radius of vehicle105is exaggerated in the figure, for clarity of exposition. In the example, vehicle105is depicted with a scratch220. Scratches are one non-limiting example of damage to the vehicle or other object. Another non-limiting example is a dent in the vehicle body. One example of a scratch is a paint scratch. Note that the relative positions of a particular capturing imaging device, such as e.g.140, and any particular imaged portion, point or region of the vehicle (such as the imaged portion containing scratch220) vary, as the vehicle moves forward and possibly turns and veers. This may in some cases be a function of at least the position of the imaging device, the direction in which it is pointed, and the speeds and directions of the relative motion between the imaging device and portions of the vehicle surface. For example, the lines of sight (LOS)250,254,258between the single camera140and scratch220are very different as the vehicle moves over time along the different positions and attitudes202,204,208. Similarly, these are different lines of sight than the line of sight274shown between camera142and scratch220. In some examples, the different capturing imaging devices140,142,146are configured to capture image frames substantially synchronously, at substantially the same time. Such synchronization may in some examples enable a more accurate comparison of images, for example as disclosed with reference toFIGS.4and5. In some examples, synchronization circuitry or similar techniques are used to trigger image capture by the cameras substantially synchronously. 
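By way of non-limiting illustration only, the following short Python sketch relates a synchronization tolerance to the relative speed of movement and to the distance an imaged portion of the object is permitted to move between captures. The speeds and distances used are assumptions made for illustration; the actual tolerances are an engineering decision, as discussed immediately below.

# Illustrative back-of-the-envelope calculation: the maximum capture skew
# between cameras such that an imaged portion of the object moves no more
# than a given distance between captures.  Values are illustrative only.
def max_capture_skew_s(relative_speed_kmh: float, max_displacement_m: float) -> float:
    speed_m_per_s = relative_speed_kmh / 3.6
    return max_displacement_m / speed_m_per_s

# A vehicle moving through the passage at 10 km/h advances roughly 2.8 mm per
# millisecond, so keeping displacement to a few millimeters implies roughly
# millisecond-level synchronization:
print(max_capture_skew_s(10.0, 0.003))   # ~0.0011 s, i.e. about one millisecond
# Allowing several centimeters of displacement relaxes this to tens of milliseconds:
print(max_capture_skew_s(10.0, 0.03))    # ~0.011 s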
The degree of synchronization is an engineering decision, in some examples based on one or more factors such as, for example, rate of image frame capture, speed of vehicle or system movement110,180, size and resolution of image frames, size of target areas to be detected and verified (e.g. large scratches vs small pits in paint). It is disclosed further herein that the geometric maps are calculated based on the relative positions of the various cameras and the imaged portions of the object at a particular time. For such a calculation to be possible, the synchronization between cameras must be such that imaging of a particular portion of the object (e.g. a car door handle) is captured by each relevant camera while the object is located at the same location within image acquisition system100. In some non-limiting examples, the cameras may be synchronized such that a particular imaged portion of the object has moved no more than several millimeters between capture by different cameras. In some non-limiting examples, the cameras may be synchronized such that a particular imaged portion of the object has moved no more than several centimeters between capture by different cameras. In some non-limiting examples, the cameras may be synchronized to within a couple of milliseconds. In some non-limiting examples, the cameras may be synchronized to less than one millisecond. Where the imaged object is static during image capture, in some non-limiting examples the synchronization may be to within 10 seconds. Person162is shown as a non-limiting example of a light source. Another non-limiting example of a light source is bag165. When the vehicle is in position204, light226is incident from light source162on a point217on the vehicle, shown as an oval, and it is reflected227. In some cases, this reflection227is captured in the image recorded by device140. In some examples, this captured reflection is detected by automated detection systems (examples of which are disclosed further herein) as damage, for example as an apparent scratch, when in fact there is no damage. Examples of this phenomenon are disclosed further herein with reference to reflection325appearance on image frame334ofFIG.3. Attention is now drawn toFIG.2B, schematically illustrating an example generalized depiction of captured image frames, in accordance with some embodiments of the presently disclosed subject matter. In some examples, cameras140,142capture a two-dimensional representation of the plane in their field of view. Thus, a viewed appearance of a target such as a scratch, that is captured in an image frame, may in some cases correspond to a real scratch220, while in other cases it may correspond to e.g. a reflection that merely appears in the image frame to be a scratch. Note that image frames are in some examples referred to herein also as images, for simplicity of exposition. In the example of the figure, the two image frames290,291are captured at two points204,208in time of vehicle movement. Looking at the position of the front of the vehicle105, it can be noted that in frame291, the vehicle has moved forward (closer to the left edge of the image frames) relative to its position in frame290. In both frames, scratch220appears at the same position over the front left wheel. That is, in both frames the scratch is associated with the same location on a data representation of the vehicle, e.g. “15 cm above the front left wheel”. Scratch220corresponds to a target of interest, and is not merely e.g. 
a reflection, since its location on the data representation of the object105remains fixed, and does not change, as the vehicle moves. The scratch “moves along with the vehicle”. Nor does the scratch's location on the data representation of the object change in image frames captured by different cameras. This location is thus not dependent on time of capture or on the imaging device used. Thus, more generally, the location of a target of interest220, on the data representation of the object, does not vary based on the relative position of the imaging device140and of the particular imaged portion of the object that contains the target of interest. By comparison, appearances241and245and243do not correspond to damage or to some other target of interest on the vehicle such as a label. Rather,241and245are both reflections242,246of the person162on the shiny or reflective surface of vehicle105, while appearance243is a reflection244of bag165. This is evident from a comparison of frames290and291. In290, reflection appearance241is on the rear door, while in291the reflection appearance245of the same person is on the rear panel. Similarly, in290reflection appearance243is on the rear panel, while in frame291the reflection248of the bag165has no appearance on the vehicle. Unlike the case of targets of interest such as scratch220, the reflections appear to “move” along the vehicle as the vehicle moves. In the two image frames290,291, the regions including the reflections241,245appear in locations of the images that are associated with different locations on the data representation of the vehicle. Note that reflections are disclosed as one example of an appearance in an image frame, the location of which on the data representation of the object varies, and does not stay the same, as the relative position of the imaging device140and of a particular imaged portion217of the object105changes. Other non-limiting examples include image acquisition noise and problems with the capturing imaging device140. Reflections, acquisition noise etc. thus are not in fact targets of interest on the object105, and if detected as such by automated detection systems, they represent false detections. Point217inFIG.2Ais thus an example of a point on the vehicle which receives a particular light reflection when the vehicle105is at certain positions204and angles, and does not receive that particular light reflection when the vehicle is at certain other positions206and angles. There is thus in some examples a need to verify, in an automated fashion, whether such detected regions of possible damage to the vehicle are in fact regions where there is damage or other targets of interest, or are merely false detections associated with, for example, reflections. In some examples, imaging such a detected region of possible damage from multiple relative positions, of the capturing imaging device(s) and of the relevant imaged portion of the object, may enable such verification. Example methods are disclosed herein, to perform such verification of target detection, utilizing for example a target detection verification system, to perform steps such as:(a) receiving information indicative of two or more image frames of the vehicle(s) or other object(s), where this information comprises one or more candidate target detection regions (e.g. 
candidate damage regions), the candidate target detection region(s) indicative of possible detected targets of interest associated with the at least one object(s),(b) processing the information indicative of the two or more image frames, to determine whether or not each candidate target detection region meets one or more detection repetition criteria;(c) in response to the one or more detection repetition criteria being met, classifying the relevant candidate target detection region as a verified target detection region (e.g. a verified damage region). In some cases, such a process can facilitate output of an indication of the verified target detection region(s). In some examples, the one or more detection repetition criteria are indicative of repetition of the candidate target detection region(s) in locations of the two or more image frames that are associated with the same location on a data representation of the object (e.g. “15 cm above the front left wheel”). In some examples, step (b) may include, instead of, or in addition to the above, processing the information indicative of the two or more image frames, to determine whether or not each candidate target detection region meets one or more non-detection repetition criteria. The non-detection repetition criteria may be indicative of repetition of the candidate target detection regions in locations241,245of two or more image frames that are associated with different locations on a data representation of the object. In such examples, step (c) may include, in response to one or more of the non-detection repetition criteria being met, classifying the candidate target detection region(s) as non-verified target detection region(s). In some examples, the images that are received and processed have at least partial overlap, constituting an overlap area610,620(see further herein with reference toFIG.6), and the candidate damage region(s) appear at least partially in the overlap area. In some examples, these image frames are associated with different relative positions of the capturing imaging device(s) that are associated with the two or more images and the relevant imaged portion of the object. These differences in relative position may in some cases be due to, among other factors, the position of the respective imaging device140, the position and orientation202,204,208of the vehicle or other object105at the moment of capture of a particular frame, and the position of the particular portion of the vehicle being imaged, in relation to the vehicle (for example, side-view mirror near the front vs rear door handle). The relative position of the portion of the vehicle will be reflected in the relative position, within a data representation of the vehicle, of the portion of the data representation corresponding to the portion of the vehicle that was imaged. These data representations can be used, in some examples, when determining to what portion of the vehicle (or other object) each captured image frame corresponds. Note that the lines of sight230,234,238between camera140and point217, and the lines of sight264,268between camera142and point217, are also different from each other as the vehicle moves over time. Also, at the same vehicle position208, for example, the lines of sight238,268,288to the single point217from the three imaging devices140,142,146differ, as indicated by the different angles A, B, C between the imaginary line205and the various LOS to217at position208. 
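By way of non-limiting illustration only, the following Python sketch shows one way the detection repetition and non-detection repetition criteria described above might be applied to a single candidate target detection region. The helper map_to_object stands in for a lookup (for example via the geometric maps disclosed further herein) that converts a location within an image frame to a location on the data representation of the object; that helper, the thresholds, and the simplification that appearances have already been matched between frames are all assumptions of this sketch.

# Sketch of applying the repetition / non-repetition criteria to one candidate.
# `map_to_object` is an assumed helper; appearance matching between frames is
# abstracted away, and `matched_detections` is taken to hold, for each other
# frame, the frame-space locations of detections matched to this candidate.
from typing import Callable, List, Tuple

FrameLoc = Tuple[str, Tuple[int, int, int, int]]   # (frame_id, bbox)
ObjectLoc = Tuple[str, float, float]               # location on the data representation

def classify_candidate(candidate: FrameLoc,
                       matched_detections: List[FrameLoc],
                       map_to_object: Callable[[FrameLoc], ObjectLoc],
                       min_repeats: int = 2,
                       tol: float = 0.02) -> str:
    """Return 'verified', 'non-verified' or 'undetermined' for one candidate."""
    target = map_to_object(candidate)
    same_spot = 0       # repetitions at the same location on the data representation
    moving_spot = 0     # repetitions at different locations (reflection-like behaviour)
    for detection in matched_detections:
        loc = map_to_object(detection)
        if loc[0] == target[0] and abs(loc[1] - target[1]) <= tol and abs(loc[2] - target[2]) <= tol:
            same_spot += 1
        else:
            moving_spot += 1
    if same_spot >= min_repeats:
        return "verified"          # detection repetition criteria met
    if moving_spot >= min_repeats:
        return "non-verified"      # non-detection repetition criteria met
    return "undetermined"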
In some examples, verification methods such as disclosed herein may have at least certain example advantages. In some typical image scans of vehicles, approximately 1000 to 1500 candidate damage regions may be detected per scan, most of these being false detections. Automated imaging detection, without verification, thus in many cases does not provide accurate results. Methods such as disclosed herein can in some cases filter out most of the false detections, and can output only a handful of verified damage regions, in some cases one verified region, indicative of actual damage, e.g. within a matter of seconds. A system such as720is in some examples capable of receiving image frames corresponding to relatively small areas of the object, and of automatically understanding the particular movement of the object based on the received image frames, and of determining the relative positions of the cameras and of each imaged portion of the object, so as to be able to identify the overlap areas and to look for matching candidate detection regions indicative of the same target appearance. For at least these reasons, using the system720can, in some non-limiting examples, increase the detection rate of defects and other targets by 60-80% compared to inspection and verification by a human. Moreover, the system100in some examples inspects the entire vehicle simultaneously, using information from consecutive image frames from adjacent cameras. This may enable verification in a more efficient manner than visual inspection by humans, in some cases taking only a matter of seconds. In some examples this can, in an efficient and automated manner, and with high accuracy, indicate to users exactly where there is damage, or other targets of interest. In one application thereof, a vehicle can be inspected when leaving the factory, arriving at and exiting the ship, arriving at and exiting a dealer showroom, and arriving at and exiting a repair garage. At each stage, such a scan can verify what damages did and did not exist, and determination can be made as to when the damage likely occurred, and under whose control. Note that not every verified target detection region will necessarily correspond to a true target of interest, from the point of view of the particular business application. For example, if there is dirt stuck on a vehicle to the right of the wheel, it will appear in multiple images that include the wheel area, and may in some cases be verified as damage by a damage verification system, even though the dirt is not true damage. Attention is now drawn toFIG.3, schematically illustrating an exemplary generalized depiction of captured image frames, in accordance with some embodiments of the presently disclosed subject matter. The figure shows images of a portion of vehicle105that includes point217. The upper three image frames330,334,338were captured by device140when the vehicle105was at each of positions202,204,208, the two image frames364,368were captured by device142when the vehicle was at each of positions204,208, and the lowest image frame388was captured by device146when the vehicle was at position208. The six example images correspond to the different lines of sight230,234,238,264,268,288respectively, which are shown inFIG.2A. In the example of the figure, region327of image frame338contains in it an appearance of the target location217on the surface of the vehicle. Point217has no damage in this image frame, and thus does not appear. 
The same is true of image frames364,368,388(for which the regions containing217are not shown on the figure). On the other hand, in image frame334there is a region360which contains the appearance of a mark325, for example what appears to be an appearance of a flaw on the vehicle surface's paint—perhaps a paint smudge or a scratch. Also, in image330the mark325appears, but in a region370that is on a different location on the data representation of the vehicle. However, in reality, there is no such damage on the vehicle, and mark325is a result of the reflected227light226which was captured by device140. An automated damage detection system710(disclosed further herein with reference toFIG.7) of system102may in some examples detect that region360of the image334contains an area of possible vehicle damage, while this is in fact a “false positive”, caused by a reflection (for example). Region360of the image frame334may be referred to herein as a candidate damage region, since it is indicative of detected possible damage to the at least one vehicle, but it has not yet been verified that the corresponding region on the vehicle in fact contains actual damage. Region327, by contrast, shows no indication of possible damage, and thus is not a candidate damage region. A candidate damage region is a non-limiting example of the more general term of a candidate target detection region. In the example, the images presented have an overlap area, and the portion of the automobile (region325) containing point217appears in this overlap area. Regions327and325of their respective images correspond to the same location area on the vehicle surface. For example, they are both in a relative position310from a common reference location on the two images, e.g. curved line320that represents the line separating the front and rear doors of the imaged automobile. Thus, regions327and325, and the corresponding regions (not shown) on the other image frames, are in locations of the at least two images that are associated with the same location on a data representation of the vehicle. However, the images containing region327and region325are associated with different relative positions of the camera and the imaged portion (regions327,325) of the object since they were captured at different times, when the vehicle105and camera140were e.g. at different distances and angles from each other. Thus the effect on the two image frames of reflections and the like is different. However, by comparing the image frames, it may be seen that the other cameras did not capture an appearance corresponding to the mark325near point217, nor did camera140itself capture such a mark in images captured at points in time before and after image334. Similarly, such an appearance325was captured in another image frame330, but in a region370, which is located at a distance312from curved line320that is different from the distance310. Thus region370, unlike regions325and327, corresponds to a location on the data representation that is different from point217. The mark325thus appears to “move” along the vehicle in different image frames. A target detection verification system720(disclosed further herein with reference toFIG.7) can thus determine, using one or more detection repetition criteria, that325is a “false positive” detection, and that360is in fact not a region on the image frame that indicates an actual target of interest such as vehicle damage or other anomaly. 
Since the candidate damage region is in fact indicative of a reflection of light from the vehicle, and not of actual vehicle damage, the region may not meet the detection repetition criteria. For example, the criteria may be whether there is repetition of a particular candidate damage region360in locations327of the images that are associated with a same location on a data representation of the vehicle. In one example, the detection repetition criterion used is determining whether the candidate damage region appears in all of the image frames processed. In this example, the candidate region360does not meet the detection repetition criteria, since damage is indicated in only one of six image frames checked, and therefore the damage is not verified. Note that the above description is a non-limiting example only. In other examples, the detection repetition criterion is met if the candidate damage region appears in at least two of the image frames processed. The number of repetitions needed to meet the criterion is an application-specific decision based on the business need, and may be based on the operating point of the particular application. If more repetitions are required, there will be fewer false alarms of target detection, and thus higher confidence that there was a detection. However, there will be fewer verifications, and thus the final result is a lower verified detection rate of targets. There is thus a trade-off between number of output775detections and the confidence in the verification. Recall that the example images ofFIG.3are associated with different relative positions, between the respective imaging devices and the respective imaged portion of the vehicle (the portion of the vehicle that includes point217). In the example of comparing the three image frames330,334,338, the different relative positions, associated with the multiple image frames, are associated at least partially with the images being captured at different points in time202,204,208—even in a case where they were all imaged by the same camera140. In such a case, the detection repetition criteria may include a temporal detection repetition criterion, indicative of the time of capture of each image, comparing images captured at different times. By contrast, in the example of comparing the three images338,368,388, or the two images334,364, the different relative positions, associated with the multiple image frames, are associated at least partially with the image frames being captured by different imaging devices140,142,146—even in a case where they were all captured at the same time208or204. The reason is that different cameras have different positions relative to a particular portion of the vehicle that is imaged at a particular time, and they have different orientations. Thus, again, the effects on the two images, of reflections and the like, are different. In such a case, the detection repetition criteria may include a spatial detection repetition criterion, comparing images captured by different capturing imaging devices, which are not co-located. In some examples, the comparison is done across both different imaging devices and different times, for example comparing image frames334(device140at time204) and368(device142at time208). In some examples, this may be referred to as a temporal-spatial detection repetition criterion, comparing images that were taken at different times by different capturing imaging devices. 
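By way of non-limiting illustration only, the Python sketch below shows how the temporal, spatial and temporal-spatial variants of the detection repetition criteria might select which image frames to compare against a candidate, assuming the frames are arranged in a grid indexed by capturing imaging device and capture time (similar to the matrix of image frames described further herein with reference toFIG.5). The grid layout, the matcher callback and the required fraction of matches are assumptions of this sketch; the required fraction corresponds to the operating-point trade-off noted above.

# Sketch of frame selection for temporal, spatial and temporal-spatial criteria.
# `frames[cam][t]` is assumed to be a grid of image frames indexed by capturing
# device and capture time; `matcher(candidate, frame)` is an assumed callback
# that reports whether the candidate repeats, at the same object location, in
# the given frame.

def comparison_frames(frames, cam, t, mode, reach=1):
    """Yield the frames a candidate from frames[cam][t] is compared against."""
    if mode in ("temporal", "temporal-spatial"):
        for dt in range(-reach, reach + 1):                 # same camera, other times
            if dt != 0 and 0 <= t + dt < len(frames[cam]):
                yield frames[cam][t + dt]
    if mode in ("spatial", "temporal-spatial"):
        for dc in range(-reach, reach + 1):                 # other cameras, same time
            if dc != 0 and 0 <= cam + dc < len(frames):
                yield frames[cam + dc][t]
    if mode == "temporal-spatial":                          # "diagonal" comparisons
        for dc in (-reach, reach):
            for dt in (-reach, reach):
                if 0 <= cam + dc < len(frames) and 0 <= t + dt < len(frames[cam + dc]):
                    yield frames[cam + dc][t + dt]

def meets_criterion(candidate, frames, cam, t, mode, matcher, required_fraction=0.75):
    """Operating-point style check: the candidate must repeat in at least
    `required_fraction` of the compared frames."""
    compared = list(comparison_frames(frames, cam, t, mode))
    if not compared:
        return False
    hits = sum(1 for frame in compared if matcher(candidate, frame))
    return hits / len(compared) >= required_fraction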
Attention is now drawn toFIG.4, schematically illustrating an example generalized depiction of captured image frames, in accordance with some embodiments of the presently disclosed subject matter. The figure shows images of a portion of vehicle105that includes scratch220. The upper three image frames450,454,458were captured by device140when the vehicle105was at each of positions202,204,208. The image frame474was captured by device142when the vehicle was at position204. The four example image frames correspond to the different lines of sight250,254,258,274, respectively, which are shown inFIG.2A. In the example, candidate damage region460of image frame450contains an appearance425of the scratch220on the vehicle. The same is true of image frames454,474, containing candidate damage regions463,467. An automated damage detection system710(seeFIG.7) of system102may have detected these regions. On the other hand, in image frame458the corresponding region468does not contain an appearance of damage corresponding to425, and thus is not a candidate damage region. The regions460,463,468,467all correspond to the same location area on the vehicle. For example, they are all in a relative position410from a common reference location on the images, e.g. curved line420that represents the line separating the front and rear doors of the imaged automobile. In some cases, target/damage verification system720determines, using detection repetition criteria, e.g. that the candidate damage image425is repeated in regions460,463,467that are associated with a same location on a data representation of the vehicle. The system may thus verify that425is indeed an imaged appearance corresponding to actual damage to vehicle105, and thus that460is a verified damage region. For example, the system720may compare the three image frames450,454,458, associated at least partially with the frames being captured by the same camera140at different points in time202,204,208, and may use a temporal detection repetition criterion. In another example, it may compare only the two image frames450,454, and may use a temporal detection repetition criterion. It will find that the candidate damage region appears in 100% of the processed images. In yet another example, the system compares the two image frames454,474, captured by different imaging devices140,142at the same time204, and applies a spatial detection repetition criterion. It will find that the candidate damage region appears in 100% of the processed image frames. In a fourth example, all four image frames ofFIG.4are compared, across both different imaging devices and different times, and the candidate damage region is found to appear in three out of four of the processed images. Note that the above description is a non-limiting example only. Note that if the system was configured to compare only image frame454and the image frame458following it temporally, there would not be a match, and the candidate damage region would, erroneously, not be verified. Thus, as indicated above, the choice of how many image frames to compare to verify a detected candidate target detection region, as well as the number or percentage of matches required for verification, is an engineering and business decision associated with the particular service or product application. 
The choice may be based on considerations such as, for example, the accuracies required when determining false-positives and false-negatives, the number of imaging devices, the number of image frames per camera, and the rate of imaging over time. Note thatFIGS.1-4provide only a general disclosure of the subject matter, for ease of exposition. The figures disclose non-limiting examples only. For example, the vehicle105may be photographed in more positions than the three shown; there may be more imaging devices and lights, as well as lines of sight, than are shown inFIGS.1-2. There may be more image frames available for processing, and more candidate regions per image frame, than those shown inFIGS.4-5. More examples of choosing image frames to process and compare are disclosed with regard toFIG.5. Attention is now drawn toFIG.5, schematically illustrating an exemplary generalized depiction of captured image frames, in accordance with some embodiments of the presently disclosed subject matter. The figure illustrates the possibility of the system102generating multiple image frames500, captured by multiple imaging devices 1 to n, each capturing multiple image frames at times before and after any given time. Assume that the current task of system720is to attempt to verify the two candidate damage regions515and550that appear on the image frame562, captured by Camera 1 at ti. For example, Camera 1 may be camera140. In some examples, this arrangement500may be referred to herein also as matrix500of image frames. In some examples one or more of the image frames are black and white, grayscale and/or color, depending for example on the specific capturing imaging devices utilized. In some examples, these image frames are input to damage verification system720, as disclosed further herein with reference toFIG.7. In some examples, this system has knowledge of the time of capture of each image frame (whether an absolute time, or a time relative to capture of other frames), and/or of the capturing imaging device associated with each image frame. In some examples, this information is sent along with the image frame to system720. In other examples, the information is indicative of these parameters, but is not transferred explicitly. For example, the system720may be configured such that the time of receiving of a particular image frame, or the order of the arrival of the image frame, is indicative of the time of capture and/or of the capturing imaging device. In some examples, the system receives information indicative of the relative position of the candidate damage region(s) on the image frame(s). For example, the system720may be able to analyze the region and determine its coordinates relative to the image frame. As another example, the system720may receive explicitly the coordinates of one or more of the candidate damage region(s) on an image frame(s). In one example, the system is configured to compare the candidate damage regions of image562to image564that immediately follows it in time. In another example,562is compared to image frame560that immediately precedes it in time. Image frames564and560are examples of temporally-consecutive captured image frames, with respect to image frame562. In a third example,562is compared to both564and560. In another example, more than three image frames are compared, e.g. comparing562also to image566. In another example, time ti+1is skipped, and image frame562of tiis compared only to image frame566captured at ti+2. 
In all of these examples, using temporal detection repetition criteria, candidate damage region515will be verified, since regions520,510,525are all found to contain the same appearance of apparent damage, and the damage in all of them is associated with the same location on a data representation of the vehicle, to which system720has access. Note that the regions510,515,520,525, associated with the same location on the data representation, appear in different locations within their respective image frames560,562,564,566, since the vehicle105moved as the images were captured. Similarly, in all of these examples, candidate damage region550will not be verified, since no regions are found to contain the same appearance of apparent damage, or other target of interest, associated with the same location on the data representation of the object105. In one example, the system compares the candidate damage regions of image frame562to image572captured at the same time by the “next” (e.g. adjacent physically) imaging device, Camera 2 (e.g. camera142). Image frame572has no region corresponding to515, and the verification fails. In another example,562is compared to image frame574that was captured at the same time by camera3(e.g.146), which may be several cameras distant from Camera 1. Image574has a region535corresponding to515, containing the same appearance of apparent damage, and the damage in both of them is associated with the same location on a data representation of the vehicle. In that example, the verification succeeds. In a third example,562is compared to both572and574. In another example, more than three image frames are compared, e.g. comparing562also to image576, captured by Camera n. In other examples, image frame572is skipped, and562is compared directly to574and/or to576. Various other combinations of cameras may be used in the comparison of image frames to those of Camera 1. In all of these examples, a spatial detection repetition criterion is used. Note also, that in all of these examples, candidate damage region550will not be verified, since using the spatial detection repetition criteria, no regions are found to contain the same image of apparent damage, associated with the same location. In some typical but non-limiting examples, target detection verification system720compares two to ten image frames that have an overlap area, to determine if there is repetition of one or more candidate detection regions. In other non-limiting examples, up to hundreds or more image frames can be compared to determine if there is such a repetition. In some examples, the comparison is done across both different imaging devices and different times, for example comparing image frames562(Camera 1 at time ti) and570(Camera 2 at time ti+1), using one or more criteria that may be referred to herein also as temporal-spatial detection repetition criteria. Thus, rather than merely going “across” or “up and down” along the image frames of matrix500, in some examples the method may compare diagonally. Various combinations of imaging devices and times are possible. One non-limiting example is to look also at four other cameras, and look at times ti−5, ti−10, ti+5, and ti+10for each camera, and to require a location match of the candidate regions in image frame562in a certain percentage of the respective images. Note that the disclosure above has been of a case of attempting to verify candidate regions515,550of image frame562. 
The same techniques may be used to verify, for example, candidate damage region520of image frame564, or any other candidate damage region (or other candidate target detection region) detected in one of the set of images captured during the inspection/imaging session. Note that the above description is a non-limiting example only. Attention is now drawn toFIG.6schematically illustrating an exemplary generalized depiction of image frame overlap, in accordance with some embodiments of the presently disclosed subject matter. In some examples, image frames taken consecutively in time should have at least partial overlap610,620. The example of the figure shows image frames560,562,564, captured at consecutive ti−1, tiand ti+1. However the same principle may apply as well to image frames captured at the same time by different image devices. For example, images562and572, captured by Cameras 1 and 2, may overlap at least partially (not shown inFIG.6). The configuration of cameras, of image frame capture rate, and of speed of motion of the vehicle or of the cameras or lights, should be such that each part of the vehicle has multiple image frames that include a capture thereof. This may help to enable determination whether the same appearance indicative of damage (e.g. a particular shape of a scratch) repeats itself in multiple regions515,520that have the same location on the data representation, or, alternatively whether a candidate damage region550has no corresponding image of the candidate damage in the overlap areas610,620of other images. In some examples, the candidate damage region(s) appear at least partially in the overlap area610,620. The regions510,515,520,525illustrate the partial overlap concept. They all show imaged appearances of the same scratch220, where220is on one particular location of the vehicle. However, each of those regions has a different position relative to the edges of the respective image frame, as can be seen inFIG.5. Similarly,FIG.5shows that region510which is “cut off” on its image frame560, corresponds to only part of the other regions, and is an example of a candidate damage region appearing only partially in the overlap area. In some cases, this may be enough to determine that the image of the scratch220, for example, repeats itself in multiple image frames, and that the detected scratch can thus be verified. Note that the above description is a non-limiting example only. In some examples, the computational cost of verifying candidate target detection regions using spatial detection repetition criteria (across different cameras) is higher than that of using temporal detection repetition criteria (across different capture times). Therefore, in some examples, the system720first verifies detected candidate damage regions using one or more temporal detection repetition criteria. This may in some examples winnow out a majority of the “false positives/false alarms”, for example 90%, in some cases even 100%. Some false alarms may still exist in some examples, since certain reflections may “follow” the vehicle, appearing in most or almost all of the image frames captured of that portion of the vehicle. The verification system may then take the verified damage regions, which are a smaller number of regions compared to the original candidate regions, and treat them as candidate damage regions that are to be verified. 
The system may analyze these verified damage regions again, using one or more spatial detection repetition criteria, to further winnow out the verified damage regions, and thus generate doubly-verified damage regions that meet both the temporal and the spatial detection repetition criteria. In some examples, it is more efficient to screen the regions515,550in this order, e.g. comparing image frames560,562,564,566, rather than first screening using spatial criteria, e.g. first comparing image frames562,572,574,576, or using both temporal and spatial criteria simultaneously.FIG.10discloses an example of such a process. Note that doubly-verified damage regions are a non-limiting example of the more general doubly-verified target detection regions. Attention is now drawn toFIG.7, illustrating a generalized example schematic diagram of an inspection, detection and verification system102, in accordance with certain embodiments of the presently disclosed subject matter. In one non-limiting example, system102is a vehicle damage inspection, detection and verification system. In some examples, system102comprises imaging system705. Further details of example architectures of system705are disclosed with reference toFIG.8. In some examples, system102comprises target detection system710. In the present disclosure, system710will be exemplified, for ease of exposition, by a damage detection system710. In some examples, target detection system710receives images707from imaging system705, and detects candidate target detection regions, for example candidate damage regions indicative of detected possible damage to the at least one vehicle. System710is in some examples a damage detection system known in the art, and in some examples may utilize machine learning, for example using a deep neural network. In some examples, it is a single-input single-frame damage (or other target) detection system. In other examples, system710is a multi-image-inputs single frame target detection system, such as for example system1210disclosed with reference toFIG.12. In some examples, system102comprises target detection verification system720, for example a damage verification system. System720may include a computer. It may, by way of non-limiting example, comprise processing circuitry730. Processing circuitry730may comprise a processor740and memory732. The processing circuitry730may be, in non-limiting examples, general-purpose computer(s) specially configured for the desired purpose by a computer program stored in a non-transitory computer-readable storage medium. They may be configured to execute several functional modules in accordance with computer-readable instructions. In other non-limiting examples, processing circuitry730may be a computer(s) specially constructed for the desired purposes. In some examples, target detection verification system720comprises input interfaces722. In some examples, interfaces722receive, from target detection system710, information711indicative of two or more images of the vehicle, where this information comprises one or more candidate target detection regions. Non-limiting examples of the format of such information are disclosed with reference to the matrix500of images ofFIG.5and data flow900ofFIG.9. Note that in some examples, target detection system710(e.g. system1210disclosed further herein with reference toFIG.12) is comprised within target detection verification system720. In some examples, interfaces722receive motion maps712, and/or geometric maps714, from imaging system705. 
In some examples, interfaces722receive geometric maps718from CAD Model716, in addition to, or instead of, receiving these maps from system705. In other examples, target detection verification system720creates these motion maps (e.g. using a vehicle motion estimation module, not shown in the figure) and/or geometric maps. Processor740may comprise at least one or more function modules. In some examples it may perform at least functions, such as those disclosed inFIGS.3,4and5, as well as those disclosed further herein with reference to, for example, block1030of flowchartsFIGS.10aand10b. This includes processing the information711indicative of the at least two images, to determine whether each candidate damage region (as a non-limiting example) meets one or more detection repetition criteria. In some cases, this processing also makes use of maps712,714,718. In response to the detection repetition criteria being met, the processor may classify the relevant candidate damage region as a verified damage region. In some examples, processor740comprises overlap determination module742. This module may be configured to determine which portions of the relevant image frames overlap, that is to determine what are the overlap areas610,620. Such a function is visualized schematically inFIG.6. In some examples, this module makes use of motion maps712and/or geometric maps714,718. In some examples, processor740comprises region position determination module742. This module may be configured to determine the relative location of candidate damage regions515,550,325,463etc. within their respective images, as well as identifying corresponding regions510,520,535,327,468,467etc. that are located in a relative position of a data representation of the vehicle that corresponds to the respective candidate damage regions. In some examples, this module makes use of motion maps712and/or geometric maps714,718. In some examples, processor740comprises damage region verification module746. This may in some examples compare candidate damage region(s), associated with a particular image, and their corresponding regions associated with other images, and use detection repetition criteria to verify whether the candidate damage region is indicative of actual vehicle damage, or is a false-alarm, such as an indication of reflection. Damage region verification module746is a non-limiting example of a more general target detection region verification module746. In some examples, damage verification system720comprises output interfaces724. For example, in response to the module746classifying the candidate damage region as a verified damage region, the system720may output775via interface724an indication of the verified damage region. Note that indication(s) of verified damage region(s) are one example of the more general indication(s) of verified target detection region(s). For example, the output may be to a display computer770, viewable by a human operator. In the non-limiting example of the figure, the display780on the computer770shows an output image, with the now-verified damage/target region784highlighted, marked, or otherwise indicated on the output image. The output image can be, for example, the relevant captured image frame (for example454,562), or a composite of several captured image frames. In such an example, the operator's attention is easily drawn to the damage782. In some examples, the output of the indication includes coordinates associated with the verified target detection region(s). 
For example, these may be 2-D or 3-D (three-dimensional) coordinates associated with the data representation of the vehicle or other object, that correspond to the damage or other target, e.g. in the reference frame of the data representation. In another example, these are coordinates associated with one or more of the images utilized to verify the target detection region, for example the coordinates of region535in the reference frame of 2-D image frame574. In another non-limiting example, the interface724outputs a text report of verified possible damages and/or other targets, and their locations on the vehicle. The report in some examples may also include an indication of the type of possible damage, e.g. a description of the type of each damage (long scratch, small scratch, dent etc.) and/or other target type (e. g. “label with Company X logo” or “signature of celebrity Y”). In some examples, an indication of classifying the candidate damage region as a verified damage region is not output. Instead, other systems can access this information from a database stored in memory732, or in a data store (not shown) of verification system720. Memory732may in some examples store data derived during the processing of image information, and used by the various processing stages. Non-limiting examples of such data include lists of candidate damage regions and of verified damage regions, overlap areas to determine, portions of motion maps712and geometric maps714,718that are being used by the modules, calculated positions310,410of candidate damage regions360,463and corresponding regions327,468,460,467, calculated counts and statistics of repetitions of candidate damage regions etc. In some examples, damage verification system720comprises a data store (not shown). One example use is to store data that may be used for later calculations, in a more long-term fashion. Attention is now drawn toFIG.8, illustrating a generalized exemplary schematic diagram of an imaging system705, in accordance with certain embodiments of the presently disclosed subject matter. In some examples, imaging system705comprises image capture system843. In some examples, image capture system843comprises processing circuitry840that serves as an image capture system. Processing circuitry840may comprise a processor850and a memory842. Processor850may in some examples comprise input module857. This may receive images from imaging devices such as140,142. In some cases the images are received in a synchronous fashion. The processor may also comprise image capture module852. Module852may in some examples receive images from input module857, as indicated by the arrow. Module852may adjust each image (e.g. image resolution), associate the image with information such as capture time and image device ID, and possibly store the information and/or image in memory842. Processor850may, in some examples, comprise motion map module854, which in some examples may also be referred to herein as vehicle motion estimation module854. This module may generate motion maps712, for example as disclosed with reference to block1015ofFIG.10. Processor850may in some examples comprise geometric map module856. This module may generate geometric maps714, for example as disclosed with reference to block1017ofFIG.10. In some examples, modules854and/or856receive images from image capture module852, as shown by the arrows. Processor850may, in some examples, comprise output module858. 
This module may output images, motion maps712and/or geometric maps714, for example to verification system720. In some examples, imaging system705comprises triggering system820. In some examples, system820comprises processing circuitry824. Processing circuitry824may comprise a processor826and a memory828. In some examples, processor826is configured to perform triggering functions such as those disclosed with reference to blocks1004and1013ofFIG.10. Memory828can store, for example, the current triggering state of the system. In some examples, imaging system705comprises image brightness detection or recognition system830, which detects or recognizes the brightness of the portion of the vehicle being imaged by specific cameras at specific times, and causes imaging adjustments to compensate for the brightness, such as, for example, adjustments in lighting. In some examples, this system830is referred to herein also as an image lighting compensation system, or as an image brightness compensation system. In some examples, system830comprises processing circuitry834. Processing circuitry834may comprise a processor836and a memory838. In some examples, processor836is configured to perform image adjustment functions such as those disclosed with reference to block1010and toFIG.11. Memory838can store, for example, the currently-determined degree of brightness of the vehicle being imaged by each camera140, the current light intensity setting of each illumination device130, and/or the current exposure/aperture settings of each camera140. In the non-limiting example ofFIGS.1,7and8, the imaging devices140etc. and the illumination devices130etc., as well as monitoring camera136, are shown as separate from imaging system705and as operatively coupled to it. In other examples, these devices are part of the imaging system705, e.g. as part of image capture system843. Similarly, in some examples, monitoring camera136can be part of triggering system820and/or brightness compensation system830, within imaging system705. FIGS.1,7and8illustrate only a general schematic of the system architecture, describing, by way of non-limiting example, certain aspects of the presently disclosed subject matter in an informative manner, for clarity of explanation only. It will be understood that the teachings of the presently disclosed subject matter are not bound by what is described with reference toFIGS.1,7and8. Only certain components are shown, as needed to exemplify the presently disclosed subject matter. Other components and sub-components, not shown, may exist. Systems such as those described with respect to the non-limiting examples ofFIGS.1,7and8, may be capable of performing all, some, or part of the methods disclosed herein. Each system component and module inFIGS.1,7and8can be made up of any combination of software, hardware and/or firmware, as relevant, executed on a suitable device or devices, which perform the functions as defined and explained herein. The hardware can be digital and/or analog. Equivalent and/or modified functionality, as described with respect to each system component and module, can be consolidated or divided in another manner. Thus, in some embodiments of the presently disclosed subject matter, the system may include fewer, more, modified and/or different components, modules and functions than those shown inFIGS.7and8. 
To provide one non-limiting example of this, in some examples there may be separate modules to perform verification based on temporal detection repetition criteria (across different capture times), and based on spatial detection repetition criteria (across different cameras)—rather than using the single module746. As another example, in some cases, the functions of triggering system820and brightness detection system830can be combined. As another example, one processing circuitry840can provide all the functions disclosed with reference to imaging system705. One or more of these components and modules can be centralized in one location, or dispersed and distributed over more than one location. Each component inFIGS.1,7and8may represent a plurality of the particular component, possibly in a distributed architecture, which are adapted to independently and/or cooperatively operate to process various data and electrical inputs, and for enabling operations related to damage detection and verification. In some cases, multiple instances of a component may be utilized for reasons of performance, redundancy and/or availability. Similarly, in some cases, multiple instances of a component may be utilized for reasons of functionality or application. For example, different portions of the particular functionality may be placed in different instances of the component. Those skilled in the art will readily appreciate that the components of systems102and705, for example, can be consolidated or divided in a manner other than that disclosed herein. Communication between the various components of the systems ofFIGS.1,7and8, in cases where they are not located entirely in one location or in one physical component, can be realized by any signaling system or communication components, modules, protocols, software languages and drive signals, and can be wired and/or wireless, as appropriate. The same applies to interfaces such as722,724,857,858. Note also that the above statements concerning the systems disclosed with reference toFIGS.1,7and8, apply as well to the systems ofFIG.12, mutatis mutandis. The non-limiting example of inputting to system720a set of image frames, each captured by a particular camera at a particular time, was disclosed with reference to the images matrix500ofFIG.5. In one example, the image frames are output by damage detection system710, including indications of candidate damage regions. In another example, the system710outputs only those images where candidate damage regions were detected, while imaging system705provides to system720the images that do not include candidate damage regions. In one example, the candidate damage regions are indicated on the image itself, e.g. by a defined border. In other examples, the captured images are sent to system720untouched, and the system710outputs metadata indicative of the candidate damage regions, e.g. their coordinates relative to the image and (in some examples) a description of the associated damage type (e.g. "small vertical scratch").FIG.9discloses still another example implementation of data input to damage verification system720. Attention is now drawn toFIG.9, illustrating a generalized exemplary schematic diagram of data input to damage verification system720, in accordance with certain embodiments of the presently disclosed subject matter. In example data flow900, sets of data are input for each detected candidate damage region. For the candidate regions510,515,550,535, the data is shown. 
The example data set also includes the time of capture and the imaging device used (e.g.510was captured at ti−1by camera1). The data set further includes the position information, e.g. coordinates, of the candidate region within its corresponding image frame. In the non-limiting example of the figure, the data sets for regions510,515,550,535include a portion of the image frame itself, corresponding to the relevant candidate region. Note that in this implementation, the entire image560need not be sent to verification system720, but rather only the image of the candidate damage region portion510. In some examples, this may reduce the amount of data sent to system720. In some other examples, the position information is sufficient, and the data set may not include images, for some or all candidate damage regions. This may be relevant, for example, where the repetition criteria are concerned only with whether or not some appearances are detected in multiple image frames, and the repetition criteria are not concerned with the particular category or the details of the detected target. For example, in some examples, detection of a scratch above the vehicle wheel in one image frame, and detection in a second image frame of a dent, corresponding to the same location above the wheel, may be considered to meet a repetition criterion, that is the candidate target detection region may be considered to be repeated in multiple image frames. Note also that in the example of the figure, candidate regions550and515are sent as separate data sets, with differing relative position information A and B, even though both appear on the same image562, since the image562itself is not input to the system720. Similarly, in such an implementation there is, in some examples, a need to send system720an indication of the relevant images which contain no candidate damage regions. Thus, data sets are shown for images570,572, providing capture camera and time information, and an indication that the particular image is not associated with any detected candidate damage region. In some examples, there is no need to send such data for image frames570,572. Rather, system720knows which cameras image a particular portion of the object at each point in time. Therefore, if no data is received for e.g. image frame570, system720can infer that Camera 2 did not capture any candidate damage regions associated with that portion of the object during the capture time ti−1. Note that the above description with reference to data flow900is a non-limiting example only. A number of exemplary flows are disclosed herein.FIGS.10A and10Bdisclose an exemplary method of damage inspection, detection and verification.FIG.11discloses an exemplary method for image adjustment and brightness compensation, based on the brightness of the imaged vehicle portion.FIG.13discloses a method for damage detection by a multi-frame-input single-frame-output damage detection system. Attention is now drawn toFIGS.10A and10B, illustrating one example of a generalized flow chart diagram of a process1000for damage inspection, detection and verification, in accordance with certain embodiments of the presently disclosed subject matter. This process is in some examples carried out by systems such as those disclosed with reference toFIGS.1,7and8. Damage inspection, detection and verification are disclosed here as one non-limiting example of the more general process of object inspection, and target detection and verification. 
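Before turning to the flowcharts, the per-region data sets of example data flow900can be sketched as simple records, as shown below. This is a hypothetical illustration of one possible format only; the field names (camera_id, capture_time, position, image_crop, damage_type) are placeholders and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np


@dataclass
class CandidateRegionRecord:
    """One data set of data flow900: a candidate damage region and its context."""
    camera_id: int                             # capturing imaging device, e.g. camera 1
    capture_time: float                        # capture time, e.g. t(i-1) as a timestamp
    position: Tuple[float, float]              # coordinates of the region within its image frame
    image_crop: Optional[np.ndarray] = None    # optional crop of the candidate region only
    damage_type: Optional[str] = None          # e.g. "small vertical scratch"


@dataclass
class EmptyFrameRecord:
    """A captured frame for which no candidate damage regions were detected."""
    camera_id: int
    capture_time: float
```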
At a high level,FIG.10can be seen as disclosing five example processes. Imaging process1005(blocks1004-1017) captures images of vehicle105, over time, from multiple cameras140,142,146. In block1024, candidate damage regions are detected. Verification process1030discloses two example sub-processes: a stage of verification based on temporal detection repetition criteria (e.g. in blocks1032-1044), followed by a stage of verification based on spatial detection repetition criteria (e.g. in blocks1050-1064). Example temporal detection repetition criteria are disclosed also with reference to images330,334,338, images450,454and458, and images560,562,564,566, inFIGS.3-5. Example spatial detection repetition criteria are disclosed also with reference to images334,364, images454,474, and images562,572,574,576, inFIGS.3-5. In the fifth example process, the verified damage regions are output (block1066). The flow starts at1004. According to some examples, and optionally, image acquisition or capture is triggered on (block1004). For example, processor826of triggering system820may perform this. Processor826may detect an image capture trigger condition, in response to capture of monitoring image(s) by one or more monitoring imaging devices136. For example, processor826may monitor images captured and received from monitoring camera136. The images may be referred to herein also as monitoring images. In some examples, the image capture trigger condition comprises the vehicle meeting a location condition. As a non-limiting example of a location condition, when processor826sees relative motion of vehicle105over vehicle-crossing entry indicator150, it may trigger on the image acquisition process, and it may instruct image capture module852to begin capture of vehicle images using capturing imaging devices140,142,146. In some examples, vehicle-crossing entry indicator150comprises a defined line, which has a defined positional relationship with the poles125,127, with the cameras such as140and with other components of image acquisition system100. As another non-limiting example of a location condition, when processor826detects relative motion of vehicle105into image capture region155, it may instruct image capture module852to begin capture of vehicle image frames. In some examples, the image capture region has a defined positional relationship with the poles125,127, the cameras such as140, and other components of system100. In some examples, the image capture region155has a rectangular shape. In some examples, the triggering system is configured to distinguish between the vehicle105and other objects, such as people, animals and tools that may be brought into the work area of image acquisition system100. In this way, it will not, for example, trigger image acquisition when a pet dog enters image capture region155. According to some examples, the multiple imaging devices140,142,146capture image frames, sending them to processing circuitry840which processes them (block1008). This is repeated, for example as the vehicle105passes through the system100, or as the imaging devices140,142,146pass by the vehicle105. In some examples, the various imaging devices may capture images synchronously. According to some examples, and optionally, in parallel with block1008, an image adjustment/brightness compensation process may occur (block1010). In some examples, this is carried out utilizing brightness detection system830. Example details of this block are disclosed further herein with reference toFIG.11. 
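Returning to the triggering of block1004, a minimal sketch of the kind of location condition described above is given below. It assumes, for illustration only, that the monitoring image has already been processed into a bounding box for the vehicle and that the vehicle-crossing entry indicator corresponds to a known pixel row of the monitoring image; the function and parameter names are hypothetical and not part of the disclosure.

```python
def should_trigger_capture(vehicle_bbox, entry_line_row, min_bbox_area=50_000):
    """Trigger image acquisition once the vehicle's leading edge crosses the
    entry indicator line. The area threshold is a crude, illustrative way to
    ignore small objects (e.g. a pet dog) entering the monitored area."""
    if vehicle_bbox is None:
        return False
    x_min, y_min, x_max, y_max = vehicle_bbox
    area = (x_max - x_min) * (y_max - y_min)
    return area >= min_bbox_area and y_max >= entry_line_row
```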
According to some examples, and optionally, image acquisition or capture is triggered off (block1013). As one non-limiting example, processor826of triggering system820may monitor images received from monitoring camera136. In some examples, when the processor sees relative motion of vehicle105over vehicle-crossing exit indicator157, it may trigger off the image acquisition process, and may instruct image capture module852to stop capture of vehicle image frames using capturing imaging devices140,142,146. In other examples, when processor826detects relative motion of vehicle105out of the imaging triggering region or image capture region155, it may instruct image capture module852to stop capture of vehicle images. One or more motion maps712may be created (block1015). In some examples, this is carried out utilizing motion map module854. In some other examples, this can be carried out in parallel, building the maps712as the image frames are captured in block1008. In some examples, a motion map712is indicative of motion of points on the data representation of the vehicle, e.g. corresponding to points on the vehicle surface, on captured image frames that include those points, over time. As one non-limiting example, a motion map712may indicate that the right end of the front door handle appears as pixel number100in image frame number3120, captured by camera140at time t, but that the same right end of the handle appears as pixel number500in a different image frame number3121, captured by the same camera140at the consecutive point in time t+1. Motion maps can be derived from the captured image frames, and thus can be associated with the capture of the images. For each imaging session of a vehicle105, its motion map may be different, since, for example, the driver may drive at a different speed, and make different turns, each time the driver drives the vehicle through image acquisition system100. Note that in some examples a different motion map exists for each capturing imaging device used in the image capture session. Note that in some other examples, generation of motion maps is carried out by system720, at a later stage of the flow. One non-limiting example of generating motion maps712is now disclosed. In some examples, the motion map module854analyzes the images to detect known features of the vehicle, such as headlights, grille, side lights, side mirrors, door handles and the recesses around them, the curving lines associated with the border between the front side panel and the front doors, the front and rear doors, and the rear door and the rear side panel, etc. It identifies, on images, point(s) indicative of one or more of these features. The positions and orientations of the cameras140,142may be known. By comparing different images that contain the same feature, and considering the position of the particular feature within each image frame, as well as the image capture times, the module can create a map over time of the motion between consecutive image frames of points indicative of the features. In some examples, based on these maps, the module can also map the motion of other points on the image frames, that are not points indicative of the detected features, for example based on positions of the other points relative to positions of the features. These other points can include, in some cases, areas of the vehicle that are associated with candidate damage regions or with other candidate target detection regions. 
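One way such frame-to-frame motion could be estimated in practice is with standard feature matching, as in the sketch below. This is not the disclosed method, only an illustration under the assumption that ORB features from OpenCV are sufficient for the vehicle features being tracked; the function name and parameters are hypothetical.

```python
import cv2
import numpy as np


def estimate_frame_to_frame_motion(frame_prev, frame_next, max_matches=200):
    """Match features between two consecutive frames from the same camera and
    return an array of (x, y, dx, dy) samples: a point in the previous frame
    and its displacement in the next frame (a crude motion-map sample)."""
    to_gray = lambda f: cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) if f.ndim == 3 else f
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(to_gray(frame_prev), None)
    kp2, des2 = orb.detectAndCompute(to_gray(frame_next), None)
    if des1 is None or des2 is None:
        return np.empty((0, 4))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    samples = []
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        samples.append((x1, y1, x2 - x1, y2 - y1))
    return np.array(samples)
```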
The module can thus know the position of these other points during the imaging session as a function of time. In some examples, the module can also generate a data representation of the vehicle, based on this information. Geometric maps714, or models, of the vehicle may be created (block1017). In some examples, this is carried out utilizing geometric map module856. In some other examples, this can be carried out in parallel, building the maps714as the images are captured in block1008. In some examples, the geometric map714is indicative of different relative positions of multiple capturing imaging devices140,142and of points on the data representation of the vehicle corresponding to a particular imaged portion of the object. In some examples, the generation of geometric maps714may be done in a manner similar to that disclosed above for motion maps712, with reference to block1015, but considering images captured at the same time by multiple cameras140,142,146. In such a case, each geometric map may be associated with a particular time of image capture, for example with the same value of a trigger ID field associated with the image frame. As one non-limiting example, a geometric map714may indicate that the right end of the front door handle appears as pixel number100in image frame number4120, captured by camera140at time t, but that the same right end of the handle appears as pixel number500in a different image frame number5120, captured by the adjacent camera142at the same time t. Note that in some examples a different geometric map exists for each point in time of the image capture session. Note that in some examples, the use of geometric maps requires at least two imaging devices that are calibrated together, and that there is a known transformation from the coordinates of e.g. camera142to the coordinates of camera140. Recall that the different capturing imaging devices140,142,146have known imaging device relative positions and known relative imaging device imaging angles, due to their known position and orientation. Note also that since the vehicle is three-dimensional, each point on it has an associated depth relative to each camera imaging that point. By detecting features on the vehicle, and analyzing where the features appear within the different image frames, and how they appear, the module can determine how each camera views these features, and thus how the cameras view other points on the vehicle at a particular time of image capture. In some examples, the model of the object in the picture is estimated from the image frames using techniques known in the art such as point matching. In other examples, the geometric maps718are pre-defined, and are known prior to the image acquisition. In some examples this requires registration. Such maps may be based, for example, on a CAD model point cloud received718from CAD Model716. Note that in some other examples, generation of geometric maps is carried out by system720, at a later stage of the flow. In some examples, the blocks1004-1017may be referred to, together, also as imaging process1005, carried out for example by imaging system705, along with the capturing cameras140,142,146,144,148, and in some cases with illumination devices130,132and/or monitoring cameras136. Captured images, motion maps and/or geometric maps may be output707,712,714(block1020). In some examples, this is carried out by output module858and output774. For example, the output may be sent to damage detection system710. 
One or more candidate damage regions may be detected (block1024). In some examples, this is carried out by damage detection system710. One non-limiting example of a target detection system is disclosed further herein with reference to system1210ofFIG.12, with an example method1300disclosed with reference toFIG.13. The candidate damage region(s) is output for verification, as a post-processing step that follows the detection process (block1026). The output711may be from system710to damage verification system720. The next blocks, from1032to1064, are in some examples carried out in target detection verification system720, e.g. a damage verification system. This is indicated by their inclusion in an overall block1030, indicating the target detection verification process carried out e.g. in damage verification system720. In block1032, the first candidate damage region is selected, for example region515. In block1034, it is determined whether, in earlier-captured images338,458,560, and/or in later-captured images330,450,564,566, captured by the same camera140, corresponding regions327,468,460,510,520,525show the same damage appearance325,425as appears in candidate damage region360,463,515, that is whether there is repetition of the particular candidate damage region360,463,515,550in locations of these image frames338,458,560,330,450,564,566that are associated with the same location on a data representation of the vehicle, as does the candidate damage region360,463,515. Such a determination1034may consider temporally-varying image frames. This block may include use of the overlap determination module742, to determine overlap areas610,620, within which repetitions of the candidate damage region will be sought. (Overlap areas are further disclosed with reference toFIG.6.) This block may also use region position determination module744, to determine the position (e.g.310,410) within each of the other images338,330,458,450where the candidate damage region360,463should appear. The block1034may also use damage region verification module746, to perform statistics on the number of repetitions found for the particular candidate damage region360,463,515,550. Examples of such region matches, choice of other image frames338,330,458,450,560,564,566to process and temporal detection repetition criteria, are disclosed with reference toFIG.5. Example temporal detection repetition criteria are disclosed also with reference to images330,334,338, images450,454and458, and images560,562,564,566, inFIGS.3-5. In some examples, block1034utilizes motion maps712to determine the positions of the portion of the vehicle(s) that is associated with the image frames being processed. The motion maps may be used as well, in some examples, when determining overlap areas610,620(further disclosed with reference toFIG.6), using for example overlap determination module742. The following is one non-limiting example of a method for determining1034the position310,410within each of the other image frames338,330,458,450of the images being used in the verification, where the candidate damage region360,463,515,550(associated with a first image frame334,454,562of the images being used in the verification) should appear, while utilizing a motion map712, within the process of block1034, carried out for example by region position determination module744:i. 
determine the positions of the portion of the data representation of the vehicle105that is associated with the images334,338,330,454,458,450being compared, and determine the different relative positions230,234,238,250,254,258of the capturing imaging devices associated with the image frames and the imaged portion of the object associated with the image frames. This determination, in some examples, is based, at least, on information indicative of the capturing imaging device140associated with each image of the two or more images334,338,330,454,458,450(e.g. the camera IDs associated with each image), on the positions and the orientations of the capturing imaging device(s)140, and on the motion map(s)712; ii. determine a position (e.g.310,410) of the candidate damage region360,463,515,550with respect to a position of that portion of the data representation of the vehicle which is associated with the first image frame334,454, which includes the candidate damage region. This position of the candidate damage region360,463is also referred to herein as the first relative position310,410; iii. determine one or more expected relative positions (e.g.310,410) of one or more expected corresponding candidate damage regions327,468,460on other image frames338,330,458,450(not the first image334,454). This determination may be based at least on the positions of that portion of the data representation of the vehicle105that is associated with the images334,338,330,454,458,450being processed, on the different relative positions230,234,238,250,254,258of the capturing imaging device(s) associated with the images and the imaged portion of the object associated with the images, and on the first relative position (e.g.310,410) of the candidate damage region360,463,515,550. In some examples this may make use of the motion map(s)712; and iv. determine whether the expected corresponding candidate damage region(s) appears on some or all of the other images at the expected relative position(s) (e.g.310,410). For example, at468,460the expected corresponding candidate damage region appears, while at327the expected corresponding candidate damage region does not appear. In the above process, as was indicated earlier, candidate damage regions are disclosed as an example of the more general candidate target detection regions. Note that in some examples, the determination, in block1034, whether there is repetition of the particular candidate damage region in locations of these images that are associated with the same location on a data representation of the vehicle as does the candidate damage region, implies that the damage appearances325,425are in the same location (e.g.310,410) within a certain tolerance. The size of the tolerance is in some cases based on factors such as, for example, rate of image frame capture, level of synchronization between cameras140,142, speed of vehicle or system movement110,180, size and resolution of image frames334,338,330,454,458,450, size of damage areas325,425to be detected and verified (e.g. giant scratches vs small pits in paint), and distances and angles230,234,238,250,254,258of cameras relative to the imaged portion of the vehicle. In some examples, the tolerance is dependent on the amount of movement of the object between adjacent image frames. In some non-limiting examples, this tolerance is on the order of magnitude of millimeters. 
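A minimal sketch of the position prediction and repetition test of block1034is given below, under the simplifying assumptions that the motion map is the array of (x, y, dx, dy) samples sketched earlier and that a fixed pixel tolerance stands in for the application-dependent tolerance discussed above; the helper names are hypothetical and this is not the disclosed implementation.

```python
import numpy as np


def expected_position(region_xy, motion_samples):
    """Predict where a candidate region of the first frame should appear in the
    next frame, using the nearest motion sample (x, y, dx, dy)."""
    if len(motion_samples) == 0:
        return None
    x, y = region_xy
    distances = np.hypot(motion_samples[:, 0] - x, motion_samples[:, 1] - y)
    _, _, dx, dy = motion_samples[np.argmin(distances)]
    return (x + dx, y + dy)


def repeats_in_frame(expected_xy, detections_in_other_frame, tol_px=25.0):
    """True if any detection in the other frame lies within the tolerance of
    the expected position, i.e. the candidate damage region is repeated."""
    if expected_xy is None:
        return False
    ex, ey = expected_xy
    return any(np.hypot(x - ex, y - ey) <= tol_px for x, y in detections_in_other_frame)
```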
In some examples, the tolerance is dependent on the desired sensitivity of the system, that is the tolerance of the application to having a larger or smaller percentage of false positive detections. In some other non-limiting examples, this tolerance is a few cm. In block1036, it is determined whether the relevant temporal detection repetition criteria are met. This may, in some examples, be performed by damage region verification module746. A non-limiting example criterion is that the candidate damage region360,463,515,550appears in the relevant location (e.g.310,410) in both the previous image338,458,560and the following image330,450,564that was captured by the same camera140that captured the candidate region360,463,515,550. In response to determination that the criteria are not met, the module may classify the particular candidate damage region360,550as not verified (block1038). In response to determination that the criteria are met, the damage region verification module746may classify the particular candidate damage region463,515as verified (block1040). These classifications could for example be stored in memory732. A determination may be made whether all candidate damage regions515,550have been processed for verification (block1042). In response to determination that not all have been processed, the next candidate damage region550may be selected for processing (block1044). This may in some examples be performed by damage region verification module746. The process loops back to block1034. In response to determination that all candidate damage regions515,550have been processed and evaluated, the process may continue to block1050. In block1050, a first temporally-verified damage region463is selected, e.g. by processor740, to constitute a candidate damage region463for further verification. In block1052, it is determined whether, in image frames474,572,574,576, captured at the same time by other cameras142,146, corresponding regions535,540show the same damage appearance425as appears in temporally-verified candidate damage region463,515, that is whether there is repetition of the particular candidate damage region463,515in locations of these images474,572,574,576that are associated with the same location on a data representation of the vehicle105as does the candidate damage region463,515. Such a determination1052may consider spatially-varying image frames, taken by other cameras142,146. This block may include use of the overlap determination module742, to determine overlap areas610,620, within which repetitions of the candidate damage region463,515will be sought. (Overlap areas are further disclosed with reference toFIG.6.) This block may also use region position determination module744, to determine the position (e.g.410) within each of the other image frames474,572,574,576where the candidate damage region463,515should appear. The block may also use damage region verification module746, to perform statistics on the number of spatial repetitions found for the first temporally-verified candidate damage region515. Examples of such region matches, choice of other image frames to process, and spatial detection repetition criteria, are disclosed with reference toFIG.5. Example spatial detection repetition criteria are disclosed also with reference to images334,364, images454,474, and images562,572,574,576, inFIGS.3-5. i. In some examples, the positions of the portion of the vehicle associated with the processed image frames are determined utilizing geometric map(s)714,718. 
For example, assume that cameras A and B (e.g.140,142) capture at the same moment image frames Fa (a first image frame) and Fb (a second image frame), respectively. Assume that in each of these two frames there is a single target detection in the overlap area, denoted Da and Db respectively. In some examples, the method may proceed as follows: transform the coordinates of the first candidate detection region Da, which are in the coordinate system of the first imaging device A, to the coordinate system of a second imaging device, camera B, using the known calibration and the geometric map model. ii. compare the transformed coordinates of first candidate detection region Da to the coordinates of the second candidate detection region Db on the object of interest. In block1054, it is determined whether the relevant spatial detection repetition criteria are met. This may, in some examples, be carried out by damage region verification module746. A non-limiting example criterion is that the candidate damage region463,515also appears in the relevant location (e.g.410) in image frames474,572,574,576of two nearby-positioned cameras142,146, that were captured at the same time206that the image454,562containing the candidate region463,515was captured. In the example above, the system may determine whether the two sets of coordinates Da (transformed coordinates of the first candidate detection region) and Db (coordinates of the second candidate detection region) match, within a defined tolerance. In response to determination that the criteria are not met, the module may classify the particular candidate damage region463,515as not verified, in that the spatial verification failed (block1056). In some examples, the region463,515could be classified as e.g. "partially verified", or "temporally-only-verified"—in that the region passed the temporal criteria of block1036, but failed the spatial criteria of block1054. This classification could for example be stored in memory732. In response to determination that the spatial criteria are met, the damage region verification module746may classify the particular candidate damage region as a doubly verified damage region (block1060). It can be classified as being a spatially-verified damage region as well as a temporally-verified damage region. This indication could for example be stored in memory732. A determination may be made whether all temporally-verified candidate damage regions463,515, that is candidate damage regions verified at block1036, have been processed for verification (block1062). In response to determination that not all have been processed, the next temporally-verified candidate damage region515may be selected for processing (block1064). This may in some examples be carried out by damage region verification module746. The process loops back to block1052. In response to determination that all temporally-verified candidate damage regions463,515have been processed and evaluated, the process may continue to block1066. In block1066, one or more of the doubly-verified damage regions463,515are output775, e.g. by output interface724, to an external system such as display system770, for display780(e.g. to a human operator). Note also that, for ease of exposition, the figure discloses the non-limiting example of first verifying for temporal detection repetition criteria, e.g. in blocks1032-1044, and only then verifying based on spatial detection repetition criteria, e.g. in blocks1054-1064. 
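A minimal sketch of the Da/Db coordinate comparison described above for the spatial criteria is given below. It assumes, for illustration only, that the calibrated transformation between the two cameras can be expressed as a planar homography; in practice the geometric map may be a fuller 3-D mapping, and the function and parameter names here are hypothetical.

```python
import numpy as np


def matches_across_cameras(det_a_xy, det_b_xy, H_a_to_b, tol_px=30.0):
    """Transform detection Da from camera A's image coordinates into camera B's
    image coordinates and test whether it coincides with detection Db within a
    defined tolerance (the spatial detection repetition check of block1054)."""
    xa, ya = det_a_xy
    p = H_a_to_b @ np.array([xa, ya, 1.0])       # homogeneous transform A -> B
    xb_pred, yb_pred = p[0] / p[2], p[1] / p[2]
    xb, yb = det_b_xy
    return bool(np.hypot(xb_pred - xb, yb_pred - yb) <= tol_px)
```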
Similarly, in the example process ofFIG.10the candidate damage regions360,463,515,550are processed one at a time. In other examples, the spatial criteria1054can be applied before application of the temporal criteria1036. In still other examples, only a single verification stage is carried out. For example, a mix of temporal1036and spatial1054detection repetition criteria can be applied concurrently. Also note that in some examples more than one candidate damage region360,463,515,550can be processed1030and verified in parallel, in either or both stages1032-1044,1054-1064of verification (or in the single stage, if relevant). Note also that the above description of process1000is a non-limiting example only. In still other examples, only one of temporal criteria and spatial criteria is applied. Note further that, in the disclosure of blocks1034-1040and1052-1060, the detection repetition criteria are indicative of repetition of candidate target detection region(s) in locations of the multiple image frames that are associated with a same location on the data representation of the object(s). In some other examples, the detection repetition criteria are indicative of repetition of the candidate target detection region(s) in locations of the frames that are associated with different locations on a data representation of the object(s). In such a case, blocks1034and1052are modified to look for such detection non-repetition criteria. Also in such a case, blocks1036and1054are modified, in that in response to the relevant detection non-repetition criteria being met, the damage region verification module746may classify the particular candidate damage region as a non-verified target detection region, e.g. in blocks1038and/or1056. An example of this is a reflection, whose appearance "moves" along the object from frame to frame, as the object moves202,204,208during the image capture process (temporally), and/or as the object is imaged at the same time by different image capture devices140,142that have different relative positions from the imaged portion of object105. Attention is now drawn toFIG.11, illustrating one example of a generalized flow chart diagram of a process1010for image adjustment, in accordance with certain embodiments of the presently disclosed subject matter. Process1010in this figure is an exemplary detailed breakdown of the image adjustment/brightness compensation process1010inFIG.10A. This process of image brightness compensation, or image adjustment, is in some examples carried out by systems such as disclosed with reference toFIGS.1,7and8. The flow starts at1110. According to some examples, at least one monitoring image of an imaged portion of the vehicle is captured (block1110). In some examples, the capture is by one or more monitoring imaging devices, e.g. monitoring camera136. The capture may be controlled by processor836of brightness detection or recognition system830, referred to herein also as an image brightness compensation system or a brightness compensation system. In some examples this is carried out by a brightness compensation module (not shown) of processor836. Note that in the example ofFIG.1, the same monitoring camera136captures images both for triggering1004,1013purposes and for brightness recognition in block1110. In other examples, separate monitoring imaging devices, or separate sets of monitoring imaging devices, are used for each of the two functions. 
According to some examples, the brightness of the particular portion of the vehicle, that appears in the monitoring image(s) that was imaged in block1110, is analyzed, to detect, recognize or identify the relevant level or degree of brightness or intensity (block1120). In some examples, the recognition is carried out by a monitoring module (not shown) of the processor836of brightness compensation system830. According to some examples, an imaging adjustment action is carried out based on at least the identified intensity level of the imaged portion of the vehicle (block1130). In some examples, the adjustment or compensation action is carried out by a brightness compensation module (not shown) of the processor836of brightness compensation system830. In some examples, the brightness compensation module may send instructions to the relevant illumination devices130,132to adjust their intensity, as a function of the brightness or intensity. In some examples, the device is a color camera, rather than a grayscale camera, and the brightness is also associated with the color of the imaged portion. For example, in some examples if the vehicle surface is white, light gray, yellow, light tan, the lighting intensity necessary is less than if the surface is black, dark blue or dark gray. Another example action is instructing relevant capturing imaging devices140,142,146to adjust their exposure time—for example a longer exposure time for a relatively dark object portion, e.g. dark gray or darker colors, to allow more total light during the image capture, and a shorter exposure time for a relatively light or bright object portion, e.g. light gray or lighter or brighter colors. In other examples, the module instructs a combination of adjustments to compensate for the brightness or intensity. Note that the exact adjustment levels to be performed, as a function of the particular vehicle portion brightness, are in some cases dependent on the components and geometries of the particular system100. For example, the speed of the vehicle movement110may in some examples require a relatively short exposure time, to help prevent motion blur. According to some examples, a determination is made whether there was a trigger indicating that capturing of vehicle images is to stop (block1140). In some examples, the determination is based on whether trigger1013ofFIG.10Aoccurred. In response to there not occurring such a capture-stop trigger, in some examples the brightness compensation process continues, looping back to block1110(block1150). Capture of monitoring images, brightness identification, and brightness compensation continue, with respect to another imaged portion of the vehicle. This other imaged portion of the vehicle constitutes the imaged portion of the vehicle for the purpose of blocks1110,1120,1130. In some examples, the frequency of performing brightness adjustments is based on the particular parameters of the system100, including for example vehicle movement speed110. Note that if a vehicle has mostly one level of brightness, there may be relatively few adjustments. If it has multiple levels of brightness, e.g. the roof has a different color from the body panels, there may be relatively more adjustments. In response to such a capture-stop trigger, the brightness compensation process stops (block1160). Note that the above description of block1010is a non-limiting example only. In some examples, such a color compensation process1010may have at least certain example advantages. 
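A minimal sketch of the brightness-to-adjustment mapping of block1130is given below. The exposure and illumination ranges are illustrative placeholders only; as noted above, the real values depend on the components and geometries of the particular system100, including the vehicle movement speed110.

```python
import numpy as np


def compute_adjustments(monitored_portion_gray,
                        min_exposure_ms=1.0, max_exposure_ms=8.0,
                        min_light=0.3, max_light=1.0):
    """Map the measured brightness of the imaged vehicle portion to a camera
    exposure time and an illumination intensity: darker surfaces get a longer
    exposure and stronger lighting, lighter surfaces the opposite."""
    brightness = float(np.mean(monitored_portion_gray)) / 255.0  # 0 = black, 1 = white
    exposure_ms = max_exposure_ms - brightness * (max_exposure_ms - min_exposure_ms)
    light_level = max_light - brightness * (max_light - min_light)
    return exposure_ms, light_level
```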
For example, if light of high intensity is aimed at a surface that is white or is of another very light shade, in some cases at least a portion of the captured images will show mostly reflected light, and will show very little content that represents the features on that portion of the vehicle. Such an image may be of little use for functions such as damage detection and verification. A brightness compensation process such as1010may in some examples prevent, minimize or ameliorate such undesirable phenomena. Attention is now drawn toFIG.12, illustrating a generalized example schematic diagram of a damage detection system, in accordance with certain embodiments of the presently disclosed subject matter. Damage detection system1210is shown within the context of computerized system1202for vehicle damage inspection, detection and verification. In some examples, system1202may have features similar to those of system102. In some examples, system1202comprises a multiple-image-input single-frame damage detection system1210. Note that multiple-image-input single-frame damage detection system1210is disclosed herein as one non-limiting example of, more generally, a multiple-image-input single-frame imaged target detection system1210. In some examples, multiple-image-input system1210may include a computer. It may, by way of non-limiting example, comprise processing circuitry1220. Processing circuitry1220may comprise a processor1230and memory1240. Processor1230may in some examples comprise a machine learning functionality such as neural network1235. Neural network1235may in some examples be a deep neural network, for example a convolutional neural network (CNN), as shown in the figure. Neural network1235may function to process input captured images, and to detect on them candidate damage regions. In some examples, the processor performs computations associated with the neural network1235. Memory1240may in some examples store data associated with CNN1235. As one example, in some embodiments the memory1240stores the weights and parameters associated with the neural network1235. In some examples, multiple-image-input single-frame damage detection system1210receives two or more image inputs, for example the three inputs1204,1206and1208. The example of the figure shows receipt of three image frames1250,1260,1270. The received image frames are of the vehicle or other object, e.g. still images and/or frames of a video capture. The received images are associated with different relative positions of the relevant capturing imaging device(s) and an imaged portion(s) of the object associated with the image (for example, as disclosed with reference to image matrix500ofFIG.5). In some examples, the received input images have at least partial overlap610,620. In some examples, the images are color images. In such a case, each image frame may comprise more than one component, for example components of different colors. In the example of the figure, each image has three components, e.g. red, green and blue (RGB) components, and thus the inputs1204,1206and1208in fact represent nine input channels, each of a different combination of image frame plus color component. Options other than RGB are possible. Similarly, in some examples one or more of the inputs are black and white or grayscale, rather than color. 
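A minimal PyTorch sketch of a multiple-image-input single-frame detector with nine input channels (three RGB frames) is given below. The layer sizes and the per-pixel score-map output are illustrative assumptions only, not the network of the disclosure.

```python
import torch
import torch.nn as nn


class MultiFrameDetector(nn.Module):
    """Three RGB frames are stacked into nine input channels; the network
    predicts a per-pixel candidate-damage score map for the first (detection)
    frame only."""
    def __init__(self, in_channels=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, detection_frame, support_frame_1, support_frame_2):
        x = torch.cat([detection_frame, support_frame_1, support_frame_2], dim=1)  # (N, 9, H, W)
        return torch.sigmoid(self.head(self.features(x)))


# Example: three synchronized or consecutive 3-channel frames of size 256x256.
model = MultiFrameDetector()
frames = [torch.rand(1, 3, 256, 256) for _ in range(3)]
score_map = model(*frames)  # shape (1, 1, 256, 256): scores for the detection frame
```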
In some examples, the system1210receives on inputs1204,1206,1208, in addition to image frames1250,1260,1270, information indicative of the different relative positions of the capturing imaging device(s) associated with the images and an imaged portion(s) of the object. In some examples different relative imaging angles and the different relative imaging positions are associated at least partially with image frames captured at different times. The system1210may receive indication of a relative order of capture of each of the multiple images1250,1260,1270. In some non-limiting examples, the system1210may receive indication of a time of capture of each of the multiple images. In some examples the different relative positions are associated at least partially with image frames captured by different capturing imaging devices. The system1210may in some examples receive indication of relative position of the capturing imaging device associated with each of the multiple images1250,1260,1270. In some non-limiting examples, the system1210may receive indication of a capturing imaging device associated with each of the multiple images. In some examples indication is received both of relative orders of capture and of capturing imaging devices. In some examples, the function of the system1210is to detect candidate damage region(s) on only one of the input images, for example the image frame received on the first input1204. This image1204on which detection is performed is referred to herein also as the detection image frame, as the detection image, or as the first image frame. In such an example, the images1260and1270, received respectively on second and third inputs1206,1208, may be referred to herein also as the non-detection image frames or supporting image frames (or images), or as the second image frames. In the example of the figure, for ease of exposition, it is shown that image1250and image1260show roughly the same scratch or other damage at corresponding locations1253and1263respectively. Similarly, it is shown that image1260and image1270show roughly the same scratch or other damage at corresponding locations1257and1277respectively. In some examples, the CNN1235processes the nine input components, representing the multiple input images, and outputs1280a single target output such as, for example, image frame1290. In some examples, image1290also comprises multiple color components. The network can be trained, and verified, using sets of relevant input images. Some of the sets will feature the damage in corresponding regions of all of the second images. In some of the sets, the damage appears in corresponding regions of only some of the second images (e.g. the damage appears in the image of input1206but not of1208, or vice versa). In some of the sets, the damage appears in only some of the corresponding regions (e.g. image1260has a correspondence between1253and1263, but has no correspondence for scratch1257). In some sets, there is only one appearance of the damage. In some sets, there is no damage appearance at all. If the sets used in training are associated with, for example, movement of the object and/or of the imaging devices during capture, the sequence (i.e. relative order) of images in the training sets should be the same sequence as will be used in operation. 
When used operationally, the trained neural network1235can be used to detect candidate damage region(s)327,463,515,550, on an operational input detection image frame1250,334,454, where the candidate damage region(s)327,463,515,550appears at least partially in an area of the detection image frame1250,334,454that is associated with the at least partial overlap610,620. Note that in some cases the ability to detect candidate damage regions is not a function of the degree of overlap. So long as some portion of the candidate damage region is present in one or more of the second, third etc. images (that is, the supporting image frames), in some examples detection may be possible. For example, it may be possible to perform detection with a 10% overlap, with a 25% overlap, with a 50% overlap, etc. In some examples, use of a multiple-image-input single-frame damage detection system1210to detect candidate damage regions may provide at least certain example advantages, as compared to using single-image-input single-frame damage detection systems. Instead of detecting candidate damage regions on the detection image by processing only that detection image in a neural network, a multiple-image-input system such as1210detects candidate damage regions on the detection image frame by processing with neural network1235a set of multiple image inputs1204,1206,1208, having for example characteristics as disclosed above. In some examples, the additional inputs make for a richer neural network, where the additional input of the second images allows for additional learning per input detection image. For example, in the figure, the network has the scratch1263on the second image to provide additional information when attempting to learn that the feature1253of the detection image is likely a scratch and should be detected as such. Therefore, in some examples a multiple-image-input single-frame damage detection system1210may enable a higher detection accuracy associated with detection of candidate damage regions, as compared to a lower detection accuracy associated with detection of candidate damage regions by a single-image-input single-frame damage detection system. In some examples the required detection accuracy for the particular application is defined in terms of required precision and/or recall scores. Other examples of advantages include: A. simultaneously inspecting several occurrences of the same target appearance; B. incorporating spatial information into the detection, which introduces more information to the neural network, information that does not exist in single-shot detectors; C. experiments show that such input can reduce false alarms of detections, in some non-limiting example cases, by 50-70%, depending on the application; and D. computational efficiency: processing two or more frames as input to a neural network is more efficient than processing the same inputs separately and then applying post-processing such as disclosed with reference to the flow ofFIG.10. In some non-limiting examples, this increase in efficiency is almost by a factor of two. In some examples, multiple-image-input single-frame target detection system1210is one example of the more general target detection system710ofFIG.7. Similar to reference711, in some examples damage detection system1210can therefore output1280indication(s) of detected candidate target detection region(s) such as, e.g., detected candidate damage region(s) (e.g. regions1293and1297in image1290) for post-processing by target detection verification system720. 
In some examples, the indication of the candidate damage region(s) comprises the detection image frame (i.e. first image)1290. Note that in the example, candidate damage region1293corresponds to damage1253, and candidate damage region1297corresponds to damage1257. System720can verify which candidate damage regions received from1210are in fact indicative of actual damage to the vehicle. Non-limiting examples of formats for the output1280indication are disclosed further herein with reference to image matrix500ofFIG.5and data flow900ofFIG.9. In other non-limiting examples, the output1280is a list of coordinates of candidate detection regions1293,1297. Other non-limiting examples are output formats similar to those disclosed with reference to output interface724inFIG.7. In some examples, indication(s) of candidate damage region(s) (e.g. regions1293and1297in image1290) can be output1280to display1285, for display to a user or operator. Note that indication(s) of candidate damage region(s) are one example of the more general indication(s) of candidate target detection region(s). In some examples, the indication of the candidate damage region(s) comprises the detection image1290. In some examples, the candidate damage region(s) is marked, highlighted or otherwise indicated on the detection image1290. In some examples, display1285may be the same as, or similar to, display770. In some other examples, only coordinates of the candidate damage region(s) are output, without outputting the image frame. In some non-limiting examples, the indication of a candidate damage region comprises an indication of a type of possible damage associated with the candidate damage region. In some other examples, the indication of a candidate target detection region comprises an indication of a type of possible target associated with the candidate target detection region, e.g. "label with Company X logo" or "signature of celebrity Y". In some examples, system1210can output1280to both720and1285. Note that the input and output interface modules of system1210are not shown in the figure, for simplicity of exposition. Note also that the above description with reference to system1202is a non-limiting example only. In some examples, the post-processing shown in the figure may have at least certain example advantages. For example, even though in some examples detection of candidate damage regions by a multiple-image-input single-frame damage detection system1210can be done with higher accuracy than detection by a single-image-input single-frame damage detection system, and may provide for the accuracy required of the particular application, the verification process performed by damage verification system720can provide verification of such detected candidate damage regions, using for example methods disclosed with reference to block1030ofFIG.10. This therefore in some cases may further increase the accuracy of the detection, in that only regions that are verified by system720are output to the user. Another example advantage is that the multi-frame detector system1210has limited input, according to the network configuration. One non-limiting example is an input of three frames, as disclosed inFIG.12. Applying post-processing enables combining several results of detection, from frames which are not visible to a single multi-frame detector1210, for example combining results of frames 1-2-3 with those of 2-3-4. In this way, the coverage of multi-frame information can be expanded. 
Attention is now drawn toFIG.13, illustrating one example of a generalized flow chart diagram of a process1300for target detection, in accordance with certain embodiments of the presently disclosed subject matter. This process is in some examples performed by systems such as disclosed with reference toFIG.12. The disclosure here will exemplify target detection by damage detection. The flow starts at1310. According to some examples, two or more images1250,1260,1270are received, via two or more image inputs1204,1206,1208(block1310). In some examples, this is done by multiple-image-input system1210. In some examples, the images have at least partial overlap610,620, and each of the images received within the set is associated with different relative imaging angles and with different relative imaging positions. According to some examples, the received two or more images are processed (block1320). In some examples, this is done by multiple-image-input system1210. In some examples this is done utilizing convolutional neural network1235. According to some examples, candidate damage region(s) are detected on the detection image1250(block1330). In some examples, this is done by multiple-image-input system1210. In some examples this is done utilizing convolutional neural network1235. According to some examples, an indication of the candidate damage region(s) is output1280(block1340). In some examples, this is done by multiple-image-input system1210. The output may be, in some examples, to system720and/or to system1285. Note that the above description of process1300is a non-limiting example only. In some embodiments, one or more steps of the various flowcharts exemplified herein may be performed automatically. The flow and functions illustrated in the various flowchart figures may for example be implemented in systems102,705,720,820,830,843and1210and processing circuitries730,824,834,840, and1220, and may make use of components described with regard toFIGS.1,7,8and12. It is noted that the teachings of the presently disclosed subject matter are not bound by the flowcharts illustrated in the various figures. The operations can occur out of the illustrated order. For example, steps1015and1017, shown in succession can be executed substantially concurrently, or in a different order. Similarly, some of the operations or steps can be integrated into a consolidated operation, or can be broken down to several operations, and/or other operations may be added. As one non-limiting example, in some cases blocks1032and1034can be combined. It is also noted that whilst the flowchart is described with reference to system elements that realize steps, such as for example systems102,705,720,820,830,843and1210and processing circuitries730,824,834,840, and1220, this is by no means binding, and the operations can be carried out by elements other than those described herein. In embodiments of the presently disclosed subject matter, fewer, more and/or different stages than those shown in the figures can be executed. In embodiments of the presently disclosed subject matter one or more stages illustrated in the figures can be executed in a different order and/or one or more groups of stages may be executed simultaneously. In the claims that follow, alphanumeric characters and Roman numerals, used to designate claim elements such as components and steps, are provided for convenience only, and do not imply any particular order of performing the steps. 
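The following non-limiting Python sketch summarizes blocks1310-1340of process1300as a single function. The detector, verifier and display callables are placeholders standing in for systems1210,720and1285and are assumptions of the sketch rather than defined interfaces.

```python
# Hedged sketch of process1300 (blocks 1310-1340); the helper callables are
# placeholders and are not defined by the disclosure.
def process_1300(image_inputs, detector, verifier=None, display=None):
    # Block 1310: receive two or more partially overlapping images
    detection_frame, *supporting_frames = image_inputs

    # Blocks 1320-1330: process the set and detect candidate damage regions
    candidate_regions = detector(detection_frame, supporting_frames)

    # Block 1340: output indication(s), e.g. to a verification system and/or a display
    if verifier is not None:
        candidate_regions = verifier(detection_frame, candidate_regions)
    if display is not None:
        display(detection_frame, candidate_regions)
    return candidate_regions
```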
It should be noted that the word “comprising” as used throughout the appended claims is to be interpreted to mean “including but not limited to”. While there have been shown and disclosed examples in accordance with the presently disclosed subject matter, it will be appreciated that many changes may be made therein without departing from the spirit of the presently disclosed subject matter. It is to be understood that the presently disclosed subject matter is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The presently disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter. It will also be understood that the system according to the presently disclosed subject matter may be, at least partly, a suitably programmed computer. Likewise, the presently disclosed subject matter contemplates a computer program product being readable by a machine or computer, for executing the method of the presently disclosed subject matter or any part thereof. The presently disclosed subject matter further contemplates a non-transitory machine-readable or computer-readable memory tangibly embodying a program of instructions executable by the machine or computer for executing the method of the presently disclosed subject matter or any part thereof. The presently disclosed subject matter further contemplates a non-transitory computer readable storage medium having a computer readable program code embodied therein, configured to be executed so as to perform the method of the presently disclosed subject matter. Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope, defined in and by the appended claims.
130,655
11943571
DETAILED DESCRIPTION Embodiments disclosed herein relate to systems, devices and methods that enable high data rate optical packet switches with all-optical packet buffering. FIGS.2A-2Bshow an all-optical packet switch according to some embodiments. As shown inFIG.2A, an N×N optical packet switch210may include N input ports212-1. . .212-N and N output ports214-1. . .214-N. Input data signals102may feed optical data packets into input ports212and these optical data packets may be routed to one of output ports214to be output as output data signals104. Data signals102,104are packet-based optical data signals. It should be appreciated that although optical switch210includes N input ports212and N output ports214, not all of the input ports212and output ports214are connected to other communication devices. Switch210includes a scheduler216for directing data packets within switch210. Packets may be directed by scheduler216into a buffer220to slow the throughput of a packet in order to prevent discarding of packets or packet loss due to contention. Scheduler216is a computing device as defined herein. Scheduler216is in data communication with the components of buffer220as described further below. In some embodiments, the components of optical packet switch210may be provided as an integrated circuit (IC) on a shared semiconductor substrate. It should be appreciated that other switch internal components aside from scheduler216and buffer220may be required for the operation of switch210but these are not shown in the figures in order to reduce the complexity of the figures. The components of buffer220are described in more detail with reference toFIG.2Band may include a clock generator221, an optical unbalanced Mach Zehnder Interferometer (MZI)245and an FDL247configured in a circuit arrangement. The components221,245, and247of buffer220as described below are exemplary implementations and other implementations may be contemplated. In some embodiments, the components of buffer220may be formed on a common semiconductor as an IC. In clock generator221, a tunable continuous wave (CW) laser222may provide a source laser signal (herein designated λ2) that may be modulated by an EO modulator228based on the signal from a clock226to form an optical clock signal (herein designated λ2 CLOCK). The wavelength of laser222may be chosen by scheduler216according to the output port wavelength required for switching purposes as described further below. The period of clock226may correspond to the data rate of signals102. A driver224may provide power to CW laser222. The optical clock signal from clock generator221may be fed into an unbalanced Mach Zehnder Interferometer (MZI)245acting as an optical logical AND gate as described, for example, in Singh, Pallavi, et al. “All-Optical Logic Gates: Designs, Classification, and Comparison.” Advances in Optical Technologies (2014), hereby incorporated by reference. Within MZI245, the optical clock signal may be split by a splitter230into an upper branch231and a lower branch233. The splitting ratio of splitter230into upper branch231and lower branch233may be controlled by scheduler216to optimize the functionality of buffer220. Non-limiting examples of split ratios include 50/50, 90/10 and 80/20. In some implementations, a clock generator221may be implemented using a tunable pulsed laser (not shown). In MZI245, the outputs of splitter230may be connected to circulators232-1and232-2. 
Circulators232-1and232-2may in turn be connected to, respectively, semiconductor optical amplifiers (SOAs)234-1and234-2. InFIGS.2B-2D, the circulators are shown as having a clockwise or anticlockwise direction, but it should be appreciated that the direction employed will be functional. SOAs234-1and234-2may be respectively provided with power by, respectively, drivers236-1and236-2. In some embodiments, SOAs234-1and234-2may be quantum dot SOAs. An input packet (herein designated λ1-S1) to be buffered is provided from a port212that may be in communication with MZI245at coupler238. Coupler238may be connected to a splitter240. The output of splitter240may be connected to circulators242-1and242-2. The splitting ratio of splitter240may be controlled by scheduler216in order to optimize the functionality of buffer220. Non-limiting examples of split ratios include 50/50, 90/10 and 80/20. Circulators242may include ports connected to SOAs234and to a coupler244. Coupler244may combine the signals from circulators242. The output of MZI245at coupler244may be connected to a splitter246. Splitter246may be connected to SOA252and FDL247. SOA252may be connected to one of output ports214. In some embodiments, MZI245may be implemented using a co-propagating scheme including tunable filters (not shown). In some embodiments, MZI245may be replaced with an ultra-nonlinear interferometer (UNI) configuration (not shown). In some embodiments, MZI245may be replaced with Sagnac interferometer (SI) gates. The length of FDL247substantially determines the delay introduced by a single circulation of a packet through FDL247in buffer220and hence determines the optical buffer memory size. As a non-limiting example, an FDL of 1 km will introduce a delay of approximately 5 μs. For a data rate of 100 Gbps, such a delay translates to an approximate optical buffer memory size of 0.5 Mbit. The output of FDL247may be connected to an SOA248. The output of SOA248may be connected to an optical dispersion management (DM) module250. DM250may correct for dispersion management introduced by FDL247. In some embodiments, DM250may include chirped Bragg gratings controlled by a temperature controller (not shown). The output of DM250may be connected to coupler238. In buffer220, scheduler216may monitor and control all components and provide for automatic adjustment of adjustable components such as laser222, SOA234-1, SOA234-2, SOA248, DM250, drivers224,236and SOA252. Scheduler216may be configured to ensure the synchronization of the clock signal and optical packet to be buffered. The signal path within buffer220is illustrated inFIGS.2C and2D. In use, as shown inFIG.2C, an optical clock signal at a second wavelength (herein designated λ2 CLOCK) may be fed into MZI245at splitter230. Splitter230may introduce a phase shift of π/2 into the clock signal such that upper branch231carries the original clock signal λ2 CLOCKwhile lower branch233carries the phase shifted clock signal (herein designated λ2 CLOCK-PS) or vice versa. The clock signals λ2 CLOCKand λ2 CLOCK-PSmay pass through circulators232-1and232-2into SOAs234-1and234-2respectively. An input optical data packet at a first wavelength (also referred to herein as a data signal and herein designated λ1-S1) to be buffered may be provided from port212to MZI245at coupler238. Scheduler216directs packet λ1-S1into buffer220for a period determined by scheduler216before the buffered packet is released from buffer220to an output port214. 
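The buffer-size example given above can be reproduced with the short calculation below, assuming a group velocity in fiber of roughly 2×10^8 m/s (about two thirds of the vacuum speed of light); the figures are illustrative only.

```python
# Back-of-the-envelope check of the FDL example in the text: a 1 km fiber
# delay line gives ~5 us of delay per circulation, which at 100 Gbps
# corresponds to ~0.5 Mbit of "in-flight" buffer capacity.
def fdl_buffer_bits(fdl_length_m: float, data_rate_bps: float,
                    group_velocity_mps: float = 2.0e8) -> float:
    delay_s = fdl_length_m / group_velocity_mps   # single-circulation delay
    return delay_s * data_rate_bps                # bits stored per circulation

delay_us = 1_000 / 2.0e8 * 1e6
print(f"delay per circulation: {delay_us:.1f} us")                    # ~5.0 us
print(f"buffer size: {fdl_buffer_bits(1_000, 100e9) / 1e6:.2f} Mbit") # ~0.50 Mbit
```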
Since the output port214may operate at a different wavelength to the input port, switch210may provide for wavelength conversion as part of the buffering process. The wavelength chosen by scheduler for tunable laser222may be the wavelength of the destination output port214. Packet λ1-S1may then be directed by splitter240towards both of circulators242-1and242-2. In some embodiments, splitter240may split the signal strength unequally between the two branches to thereby unbalance MZI245. Non-limiting examples of split ratios include 90/10 and 80/20. Circulators242may direct packet λ1-S1into SOAs234in a counter-propagating direction to that of λ2 CLOCKand λ2 CLOCK-PS. The interaction of data packets λ1-S1with the clock signals to λ2 CLOCKand λ2 CLOCK-PSmay result in cross-gain and cross phase modulation (XGM, XPM) of the clock signals with the packet signals resulting, in SOA234-1, in a modulated clock signal (herein designated λ2-XGMand, in SOA234-2, in a phase shifted modulated clock signal (herein designated λ2-XGM-PS). It should be appreciated that each XGM clock λ2-XGMand λ2-XGM-PSis essentially a partially reshaped wavelength-converted data packet. Similarly, in the counter-propagating direction, data packets may be modulated via XGM and XPM with the clock signals resulting in λ1-XGMand λ1-XGM-PS. The counter-propagating modulated data packets λ1-XGMand λ1-XGM-PSmay be directed by circulators232into terminators235. The modulated clock signals λ2-XGMand λ2-XGM-PSmay be combined in coupler244to form a reshaped, wavelength converted output packet, herein designated λ2-S1. Coupler244may introduce a further phase shift of π/2 into λ2-XGM-PSfor a complete phase shift of π in order to provide the required logical AND at the output (coupler244) of MZI245. The reshaped, wavelength converted output packet λ2-S1of coupler244may be directed by splitter246to SOA252and FDL247. When scheduler216determines that output packet λ2-S1has completed a required buffering period, scheduler216may power on SOA252to thereby “release” packet λ2-S1to one of connected output ports214. It should be appreciated that scheduler216may time the opening (powering on) of SOA252such that it coincides with an nT time period, where n is integer and T is the packet circulation time through buffer220, such that a buffered packet may be released through SOA252from the beginning to the end of buffered packet, and such that release of a partial packet through SOA252may be prevented. Once a packet has been released from buffer220, laser222and SOA248may be powered off by scheduler216in order to “empty” buffer220. Packet λ2-S1traverses FDL247until SOA248. SOA248may amplify the signal after FDL247to compensate for signal losses incurred in FDL247. The amplified signal may further be passed through DM250to compensate for dispersion introduced by FDL247. As shown inFIG.2D, the output signal from DM250, herein designated λ2-S1-REGEN, may be reintroduced into MZI245at coupler238. Packet λ2-S1-REGENmay then be directed by splitter240towards both of circulators242-1and242-2. In some embodiments, splitter240may split the signal strength unequally between the two branches to thereby unbalance MZI245. Non-limiting examples of split ratios include 90/10 and 80/20. Circulators242may direct packet λ2-S1-REGENinto SOAs234in a counter-propagating direction to that of λ2 CLOCKand λ2 CLOCK-PS. 
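As a non-limiting illustration of the nT release timing described above, the helper below picks the earliest release instant that is an integer multiple of the circulation time T and no earlier than the requested buffering period, so that only whole packets are released through SOA252. This is an assumption of the sketch, not the actual logic of scheduler216.

```python
# Illustrative scheduler timing helper (an assumption, not scheduler216's
# implementation): release only at integer multiples of the circulation time.
import math

def release_time(requested_buffer_s: float, circulation_time_s: float) -> float:
    n = max(1, math.ceil(requested_buffer_s / circulation_time_s))
    return n * circulation_time_s

T = 5e-6                        # e.g. one circulation through a 1 km FDL
print(release_time(12e-6, T))   # 1.5e-05 -> release after 3 full circulations
```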
The interaction of data packets λ2-S1-REGENwith the clock signals λ2 CLOCKand λ2 CLOCK-PSmay result in XGM and XPM of the clock signals with the circulated packet signals resulting, in SOA234-1, in a modulated clock signal (herein designated λ2-XGM) and, in SOA234-2, in a phase shifted modulated clock signal (herein designated λ2-XGM-PS). It should be appreciated that each XGM clock λ2-XGMand λ2-XGM-PSmay essentially be a partially reshaped circulated data packet. In the counter-propagating direction, data packets may be modulated via XGM and XPM with the clock signals resulting in λ2-XGMand λ2-XGM-PS. The counter-propagating modulated data packets λ2-XGMand λ2-XGM-PSmay be directed by circulators232into terminators235. The packet reshaping and regeneration process may thus enable several circulations of the packet through FDL247, with the packet being effectively regenerated on each circulation. As with the first circulation (FIG.2C), the modulated clock signals λ2-XGMand λ2-XGM-PSmay be combined in coupler244to form a regenerated output packet, herein designated λ2-S1. Coupler244may introduce a further phase shift of π/2 into λ2-XGM-PSfor a complete phase shift of π in order to provide the required logical AND at the output (coupler244) of MZI245. The regenerated output packet λ2-S1of coupler244may be directed by splitter246to SOA252and FDL247. When scheduler216determines that output packet λ2-S1has completed a required buffering period, scheduler216may power on SOA252to thereby “release” packet λ2-S1to one of output ports214as described above. As determined by scheduler216, packet λ2-S1may traverse FDL247for a second circulation until SOA248. SOA248may amplify the signal after FDL247to compensate for signal losses incurred in FDL247. The amplified signal may further be passed through DM250to compensate for dispersion introduced by FDL247. The output signal from DM250, herein designated λ2-S1-REGEN, may be reintroduced into MZI245at coupler238for regeneration and release (through SOA252) or further circulations. FIGS.3A-3Cshow an all-optical packet switch according to some embodiments. As shown inFIG.3A, an N×N optical packet switch310is the same as packet switch210described above, but additionally may include multiple all-optical buffers220-1,220-2. . .220-M and a shared FDL347. In some embodiments, shared FDL347may use a single core fiber optic cable. In some embodiments, shared FDL347may make use of multi-core fiber optic cable. In some embodiments, up to 64 wavelengths may be transmitted per fiber optic core in shared FDL347. As shown inFIGS.3B and3C, shared FDL347may be shared by multiple optical buffers220using wavelength division multiplexing (WDM). A WDM multiplexer (mux)312may receive a signal from splitter246-1of buffer220-1and may combine the received signal with signals received from splitters246of other buffers220for transmission over shared FDL347. Following transmission through shared FDL347, a WDM demultiplexer (demux)314may split the multiplexed signals for transmission to SOA248of each of buffers220. In the claims or specification of the present application, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the invention, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended. 
Implementation of the method and system of the present disclosure may involve performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of preferred embodiments of the method and system of the present disclosure, several selected steps may be implemented by hardware (HW) or by software (SW) on any operating system of any firmware, or by a combination thereof. For example, as hardware, selected steps of the disclosure could be implemented as a chip or a circuit. As software or algorithm, selected steps of the disclosure could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the disclosure could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions. Although the present disclosure is described with regard to a computing device, or a computer, it should be noted that optionally any device featuring a data processor and the ability to execute one or more instructions may be described as a computing device, including but not limited to any type of personal computer (PC), a server, a distributed server, a master control unit, a virtual server, a cloud computing platform, a cellular telephone, an IP telephone, a smartphone, a smart watch or a PDA (personal digital assistant). Any two or more of such devices in communication with each other may optionally form a “network” or a “computer network”. It should be understood that where the claims or specification refer to “a” or “an” element, such reference is not to be construed as there being only one of that element. In the description and claims of the present application, each of the verbs, “comprise” “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements or parts of the subject or subjects of the verb. While this disclosure describes a limited number of embodiments, it will be appreciated that many variations, modifications, and other applications of such embodiments may be made. The disclosure is to be understood as not limited by the specific embodiments described herein, but only by the scope of the appended claims.
16,294
11943572
DETAILED DESCRIPTION FIG.1ais a diagram showing an interface module105and an AED110according to an embodiment of the invention. AED110may be any type of automated external defibrillator from any manufacturer with a capability of communicating status information to an interface module such as interface module105. For example, AED110may be the AED Plus® or AED Pro® manufactured by ZOLL Medical Corporation of Chelmsford, Massachusetts Status information may be any type of information or data related to an AED and/or accessories such as but not limited to configuration information, diagnostic information, troubleshooting/repair information, usage information, patient information, location information, code event information and/or the like. For example, status information may comprise self-test diagnostic results of the AED, at least one electrode expiration date, an amount of energy delivered during a self-test shock delivered by an AED or information related to at least one code event. In some embodiments, an interface module may be completely embedded within an AED and non-removable. For example, an interface module may be permanently affixed directly on the AED's motherboard and contained within the chassis of the AED. In some embodiments, an interface module may be embedded within an AED and removable. In some embodiments, an interface module such as interface module105may be disposed at least partially external to an AED and removable. For example, an interface module may be a removable interface module such as interface module105and capable of electrically and physically coupling and decoupling with AED110. Interface module105may include a serial data interface, parallel data interface, Universal Serial Bus (USB) interface and/or like data interface including a connector such as connector115to enable communication of data with an AED and to stay physically coupled and electrically connected with the AED especially while the AED is being transported. Further, an AED such as AED110may comprise a slot such as slot120configured to receive connector115and allow interface module to reliably, electrically and physically couple with AED110. Other means may be employed to reliably couple an interface module to an AED such as using at least one screw, bolt or other fastener(s), for example, such as screws216ofFIG.3. If interface module connector115is a USB connector, the USB connector itself may under certain conditions provide a means reliable and secure enough to ensure the external interface module105stays electrically and physically connected with an AED. In other situations, a USB connector may need to be further secured to the AED with a more secure and reliable means such as screws216. FIG.1bis a diagram showing an interface module105electrically and physically coupled with AED110according to an embodiment of the invention. In the embodiment, removable interface module105has been inserted into AED110with at least a portion125of interface module105protruding beyond edge130of AED110. In some embodiments, a removable interface module may be fully inserted into AED110such that there are no external, protruding portions. FIG.2ais a diagram showing a system200comprising a removable interface module105electrically and physically coupled with an AED110and a wireless dock220mounted on a wall203according to an embodiment of the invention.FIG.2bis a diagram showing the system200ofFIG.2amounted in a wall cabinet according to an embodiment of the invention. 
In some embodiments, AED110may be powered off or in a sleep mode, ready to be used and mounted directly on wall203or within an AED cabinet212, for example, using a mounting or hanging means such as a mounting hook205. Wireless dock220may be mounted on wall203adjacent to AED110or behind AED110. In some embodiments, a wireless dock may be coupled with or integrated into at least part of an AED cabinet such as AED cabinet212. In some embodiments, wireless dock220along with AED110and interface module105are portable. For example, wireless dock220and AED110including interface module105may be stored together in a case, bag or other container, which may be carried by a user or located in a vehicle. In any case, wireless dock220may be located proximate to AED110so wireless dock220may communicate information wirelessly with interface module105and so wireless dock220may supply power wirelessly to interface module105. A “wireless” dock refers to a dock's ability to wirelessly communicate data with and/or deliver power to an AED interface module even though wireless dock may include wires such as a power cord and/or Ethernet cable, for example. FIG.3is a diagram showing a system200comprising an interface module105coupled with an AED110and a wireless dock220according to an embodiment of the invention. An interface module105such as interface module105may comprise a processor248, which may be any type of processor including but not limited to an Intel microprocessor or microcontroller and a memory such as a volatile memory256or a non-volatile memory254, for example, a FLASH memory or electrically erasable programmable read only memory (EEPROM) and/or the like. Interface module105may comprise at least one processor such as processor248and at least one other processing component. Processor248may comprise circuitry for implementing interface module features such as receiving status information from AED110, receiving and storing power wirelessly received from wireless dock220, determining a present location of AED110, wirelessly transmitting AED status information to wireless dock220as well as other interface module functionality. For example, at least one processor248may comprise a digital signal processor device, a microprocessor device, a digital to analog converter, other support circuits, and/or the like. Further, the processor248may comprise features to operate one or more software programs for implementing interface module105functionality. Volatile memory256and/or non-volatile memory254may comprise computer program code, which may be configured with processor248to execute one or more subroutines to receive status information via communication interface238from AED110or receive AED software from wireless dock220via interface235. For example, the processor and memory of the removable interface module may be configured to receive software for the AED from the wireless dock via wireless interface232-262. Volatile memory256may comprise a cache area for the temporary storage of data. Interface module105may use memory to store information including computer program code to implement one or more features of interface module105including but not limited to receiving status information from AED110, receiving and storing power wirelessly received from wireless dock220, determining a present location of AED110, wirelessly transmitting AED status information to wireless dock220, receiving AED software from wireless dock220as well as other interface module functionality. 
Volatile memory256and/or a non-volatile memory254may be removable by a user. Interface module105may comprise a data communication interface238such as a serial data interface, parallel data interface, Universal Serial Bus (USB) interface and/or the like including a connector such as connector120to allow communication of data with AED110and to stay physically coupled and electrically connected with the AED especially while the AED is being transported. In an embodiment, the interface module105comprises at least one antenna232for communicating with transmitter234and receiver236. Transmitter234and/or receiver236are coupled with interface235to enable processor248to communicate wirelessly through antenna232with devices such as wireless dock220. Transmitter234and receiver236may be packaged as a low-power radio transceiver such as a low-power Bluetooth transceiver and/or an IEEE 802.11 Wireless LAN transceiver. Further, transmitter234and/or receiver236coupled with interface235may be configured to communicate information such as status information and/or location information such as GPS location information with wireless dock220. The memory and low-power radio transceiver may be communicatively coupled with the processor and configured to receive status information from an AED. In some embodiments, interface module105further comprises a user interface250, which may include at least one input and/or output device coupled with processor248such as but not limited to a display, touch screen, keyboard, keypad, mouse and/or the like. In an embodiment, a display coupled with processor248may be capable of displaying status information and/or location information related to AED110, for example, AED electrode expiration data, AED capacitor discharge data, battery status, GPS location information and/or the like. In some embodiments, user interface250may comprise a visual or audible beacon such but not limited to a flashing light or alarm, which may be activated by a data center, network management station, interface module105or AED110, for example. In some embodiments, a touch screen, keypad, keyboard, buttons and/or other input features may be included on interface module105to enable a user to enter data such as query, configuration and/or status information. In some embodiments, interface module105comprises an accelerometer to detect movement of AED110and/or interface module105. Interface module105may be configured to wake-up from a sleep mode when accelerometer detects motion of AED110. In some embodiments, interface module105may be configured to send a signal to AED110to wake-up or power-on when motion is detected. In some embodiments, interface module105further comprises at least one energy storage device246, which may be rechargeable such as a rechargeable battery and/or capacitor for providing power to interface module105. For example, the capacitor may be a super capacitor, which may provide a faster charging time than a battery. In some embodiments, interface module105further comprises a wireless power receiver242and wireless power controller244configured to receive power from a wireless power transmitter such as wireless power transmitter268of wireless dock220. The rechargeable energy storage device may be electrically coupled with the wireless power receiver and configured to receive power wirelessly for the removable interface module. 
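By way of illustration only, the sketch below shows one conceivable way an interface module could append timestamped AED status records to non-volatile storage and decide on an accelerometer-based wake-up, consistent with the behavior described above. The file format, field names and motion threshold are assumptions of the sketch and are not defined by the disclosure.

```python
# Sketch only (names are assumptions): appending timestamped AED status
# records, and a simple accelerometer-based wake-up check.
import json, time

STATUS_LOG = "interface_module_status.jsonl"   # stands in for non-volatile memory254

def record_status(status: dict, log_path: str = STATUS_LOG) -> None:
    entry = {"received_at": time.time(), **status}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")      # one record per self-test report

def should_wake(acceleration_g: tuple, threshold_g: float = 0.2) -> bool:
    # Wake from sleep mode if any axis deviates noticeably from rest
    # (assumes gravity along the z axis when the module is stationary).
    x, y, z = acceleration_g
    return abs(x) > threshold_g or abs(y) > threshold_g or abs(z - 1.0) > threshold_g

record_status({"self_test": "pass", "electrode_expiration": "2025-06-01",
               "battery_pct": 87})
```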
In some embodiments, wireless power receiver242includes Qi-compliant wireless power technology developed by the Wireless Power Consortium (WPC), which was established in 2008. In some embodiments, wireless power receiver242may be the bq5101xB WPC 1.1 Compatible Fully Integrated Wireless Power Receiver IC manufactured by Texas Instruments Incorporated. In some embodiments, interface module105may receive at least some power from an AED such as AED110. In other embodiments, interface module105receives no power from an AED such as AED110. In some embodiments, the processor and memory of the interface module are configured to receive status information from the AED without the removable interface module utilizing power from the AED. Interface module105may further comprise a location determining unit252for determining the location of AED110. In some embodiments, location determining unit252may comprise a global positioning system (GPS) receiver for receiving a geographic location of AED110. A wireless dock such as wireless dock220comprises a processor286, which may be any type of processor including but not limited to an Intel microprocessor or microcontroller and a memory such as a volatile memory270or a non-volatile memory272, for example, a FLASH memory. Further, wireless dock220may comprise a plurality of processors and at least one other processing component. Processor286may comprise circuitry for implementing one or more wireless dock features. For example, at least one processor286may comprise a digital signal processor device, a microprocessor device, a digital to analog converter, other support circuits, and/or the like. Volatile memory270may comprise a cache area for the temporary storage of data. Non-volatile memory272may comprise an electrically erasable programmable read only memory (EEPROM), FLASH memory, and/or the like. In an embodiment, wireless dock220may use memory to store information including computer program code. Processor286coupled with volatile memory270and/or non-volatile memory272may be configured to implement one or more features of wireless dock220including, but not limited to receiving AED information wirelessly from interface module105, transmitting AED information to an external network element via networking interface282, providing power to interface module105via wireless power transmitter268as well as other wireless dock functionality. Volatile memory270and/or a non-volatile memory272may be removable and/or upgradable by a user. Wireless dock220may comprise a wireless or wired networking interface282such as an IEEE 802.11 Wireless LAN interface, cellular interface or Ethernet interface and/or the like, which may include a connector such as Ethernet connector284to allow wireless dock220to communicate information including software such as AED information with another networked element such as a data center computer via the Internet over Ethernet cable225, for example. For example, the processor and memory of the wireless dock may be configured to transmit status information to one of a data center and network management station. The wireless dock220may comprise at least one antenna262for communicating with transmitter266and receiver264. Transmitter266and/or receiver264are coupled with an interface265to enable processor286to communicate wirelessly through antenna262with devices such as interface module105. 
Transmitter266and receiver264may be packaged as a low-power radio transceiver such as a low-power Bluetooth transceiver and/or an IEEE 802.11 Wireless LAN transceiver. The memory and low-power radio transceiver may be communicatively coupled with the processor and configured to receive status information from the removable interface module when the AED is powered off and transmit the status information through a networking interface of the wireless dock, for example. Processor286may be configured to provide at least one signal to interface265and receive at least one signal from interface265. Further, transmitter266and/or receiver264coupled with interface265may be configured to transmit and receive information such as status information and/or location information such as GPS location information with interface module105. In some embodiments, wireless dock220further comprises a user interface276, which may include at least one input and/or output device coupled with processor286such as but not limited to a display, touch screen, keyboard, keypad, mouse and/or the like. In an embodiment, a display coupled with processor286may be capable of displaying status information and/or location information related to an AED such as but not limited to AED electrode expiration data, AED capacitor discharge data, battery status, location information and/or the like. In some embodiments, a touch screen, keypad, keyboard, buttons and/or other input features may be included on wireless dock262to enable a user to enter data such as query, configuration and/or status information. In some embodiments, wireless dock220further comprises a device management module274, which allows wireless dock220to be managed by a management system over a network such as the Internet using, for example, a Web-based management or Simple Network management Protocol (SNMP). A data center with access to the Internet, for example, may request information from wireless dock220via network interface282. If Web-based management is utilized, wireless dock220may further comprise an embedded Web Server, which may respond to Hyper Text Transfer Protocol (HTTP) browser requests for information related to wireless dock220and AED110. A data center may query wireless dock220in order to determine status information related to AED110. Device management module274of wireless dock220may respond by providing AED110status information received from interface module105related to AED110such as but not limited to data related to the result of the one or more self-tests performed by AED110, for example. In some embodiments, wireless dock220may comprise a power source interface280such as 120 A/C power interface including a transformer to receive and convert A/C power from line230to power wireless dock220. In some embodiments, wireless dock220may comprise at least one energy storage device278such as but not limited to a rechargeable battery as a backup in case A/C power from line230fails for any reason, for example. In some embodiments, wireless dock may receive power from an Ethernet cable such as Ethernet cable225using an IEEE Power over Ethernet or Power over Ethernet Plus Standard, for example. In some embodiments, wireless dock220comprises a wireless power transmitter268and wireless power controller269configured to transmit power to a wireless power receiver such as wireless power receiver242of interface module105. 
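For illustration, the following minimal sketch assumes a Web-based (HTTP/JSON) management interface on the wireless dock responding to a data-center query, as one possible realization of the device management described above. The endpoint path, payload fields and port are hypothetical, and an SNMP-based implementation would differ.

```python
# Minimal sketch, assuming an HTTP/JSON query interface on the wireless dock.
# The path "/aed/status" and the payload contents are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

AED_STATUS = {"aed_id": "AED110", "last_self_test": "pass",
              "electrode_expiration": "2025-06-01", "location": "Building 2, Lobby"}

class DockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/aed/status":                 # data-center query
            body = json.dumps(AED_STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DockHandler).serve_forever()
```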
In some embodiments, wireless power transmitter268includes Qi-compliant wireless power technology developed by the Wireless Power Consortium (WPC). In some embodiments, wireless power transmitter268may be the bq500410A Qi Compliant Free-Positioning Wireless Power Transmitter manufactured by Texas Instruments Incorporated. In some embodiments, wireless power transmitter is configured to be coupled with a power source such as power source230and transmit power wirelessly to the removable interface module when the removable interface module is within range. In some embodiments, the wireless power receiver is configured to receive power wirelessly from the wireless power transmitter and store the power in the rechargeable energy storage device of the removable interface module when the AED is powered off. An AED such as AED110may be stored in an AED cabinet such as AED cabinet212or hung on a wall when not in use. An AED may automatically power-on periodically while not in use and execute one or more self-tests. For example, AED110may execute one or more diagnostic self-tests which may include testing of one or more internal components such as a capacitor, battery, and/or memory, testing one or more external components such as electrodes and/or testing system functionality such as defibrillation energy delivery levels. In some embodiments, prior to executing one or more self-tests, AED110may send a wake-up signal to interface module105causing interface module105to power-on or wake-up from a sleep mode. In other embodiments, interface module105may be configured to power-on or wake-up when interface module105detects that AED110is on and/or when interface module105detects movement, for example, by receiving an indication of movement from an internal accelerometer embedded within interface module105. After performing one or more diagnostic self-tests, AED110may communicate information such as status information related to one or more self-tests to interface module105, which may be removably coupled with or permanently integrated into AED110. Interface module105may record the information in an internal memory such as non-volatile memory254. After interface module105has recorded the information received from AED110, interface module105may signal AED110that the status information has been stored. AED110and/or interface module105may then power off to conserve battery power. AED110may repeat the cycle of powering-on, running one or more diagnostic self-tests and sending status information to interface module105weekly or monthly, for example. Non-volatile memory254may over time comprise a table of status information related to AED110including a date/time that the status information was received and/or that at least one self-test related to the status information was executed. Further, if AED110comprises location determining functionality, AED may send location information such as map data related to the previous location of AED110to interface module105. In some embodiments, interface module105may comprise functionality, for example, a location determining unit252for determining one or more locations of AED110. In some embodiments, wireless dock220may automatically detect by using Qi-compliant wireless power technology whether interface module105coupled with AED110is within range to communicate with and/or supply power to interface module105. 
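The periodic self-test cycle described above may be summarized, purely as a non-limiting sketch, by the following loop. The object methods are assumed placeholders standing in for the AED firmware and interface module behavior, not a defined API.

```python
# Hedged sketch of the periodic self-test cycle; method names are assumptions.
import time

def self_test_cycle(aed, interface_module, period_s: float = 7 * 24 * 3600):
    """Wake the AED, run diagnostics, hand results to the interface module,
    wait for the storage acknowledgement, then power both devices down."""
    while True:
        aed.power_on()
        interface_module.wake_up()                 # wake signal prior to self-test
        results = aed.run_self_tests()             # capacitor, battery, electrodes...
        interface_module.store_status(results)     # recorded in non-volatile memory
        aed.receive_storage_confirmation()         # module signals storage is complete
        aed.power_off()
        interface_module.sleep()
        time.sleep(period_s)                       # e.g. weekly or monthly
```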
If wireless dock220is not within range of the interface module105, for example, when AED110including interface module105is not inside the AED cabinet212or located on the mounting hook205, wireless dock220may record a time and date that the attempt to communicate with or provide power to interface module105occurred. If the wireless dock220is within range of interface module105, the processor and memory of the wireless dock may be configured to cause the removable interface module105to power-on by wirelessly transmitting a power-on or wake-up command to interface module105. Further, wireless dock220may provide power to interface module105wirelessly and charge energy storage device246of interface module105using Qi-compliant wireless power technology, for example. In some embodiments, if AED110including interface module105is within range of wireless dock220and wireless dock220detects that interface module105has sufficient power, wireless dock220may periodically request and receive status information from interface module105related to AED110. One or more requests for status information may be transmitted wirelessly by wireless dock220to interface module105using a low-power radio transceiver, for example, using transmitter266packaged as a transceiver in an integrated circuit, and received by receiver236of interface module105, also packaged as a transceiver. Interface module105may receive the request and respond by transmitting status information wirelessly to wireless dock220. Requests for status information from wireless dock220to interface module105may occur periodically, for example, daily, weekly or monthly. Wireless dock220may transmit status information received from interface module105to a data center or management station, via the Internet for example, when requested or periodically. In some embodiments, wireless dock220may have continuous connectivity with a data center or management station. In some embodiments, a data center or a management station may send a request to one or more wireless docks, including wireless dock220, to send status information relating to AED110. When wireless dock220receives the request for AED status information, wireless dock220may respond by sending AED status information including date/time information and/or AED location information. FIG.4is a flow diagram depicting a method400according to an example embodiment of the invention. At405, power is received wirelessly from a wireless dock such as wireless dock220and stored in a rechargeable energy storage device such as energy storage device246of a removable interface module, which is communicatively coupled with an AED. Interface module105ofFIG.3may comprise at least one energy storage device246such as a rechargeable battery and/or super capacitor for providing power to interface module105. Interface module105may further comprise a wireless power receiver242and wireless power controller244configured to receive power from a wireless power transmitter such as wireless power transmitter268of wireless dock220. Power may be received wirelessly from the wireless dock220and stored in energy storage device246when the AED is either powered-on, powered-off or in a sleep mode. A wireless dock such as wireless dock220may receive power from at least one of an A/C power source and a battery. At410, status information is received from an AED in a memory of a removable interface module. 
For example, after one or more diagnostic self-tests, AED110may communicate information such as status information related to one or more self-tests to interface module105. Interface module105may receive the status information from AED110and record the information in an internal memory such as non-volatile memory254. After interface module105has recorded the information received from AED110, interface module105may signal AED110that the status information has been stored. AED110and/or interface module105may then power off to conserve battery power. At415, status information is transmitted from a first low-power radio transceiver such as transceiver234/236of the removable interface module105to a second low-power radio transceiver264/266of AED wireless dock220when the AED is powered off. Wireless dock220may receive status information from interface module105related to AED110, which may be collected over time. At420, the status information is transmitted through a networking interface such as networking interface282of the wireless dock220. For example, wireless dock220may transmit status information received from interface module105to a data center or management station via the Internet when requested or periodically. In some embodiments, wireless dock220may have continuous connectivity with a data center or management station. If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims. It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.
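Purely as a non-limiting illustration, the four blocks of method400 described above can be sketched as follows; the object interfaces are assumptions of the sketch and do not correspond to any defined API.

```python
# Compact sketch of method400 (blocks 405-420); method names are assumptions.
def method_400(interface_module, wireless_dock, aed_powered_off: bool = True):
    # 405: receive power wirelessly and store it in the rechargeable storage device
    interface_module.energy_storage.charge(wireless_dock.transmit_power())

    # 410: receive status information from the AED into the module's memory
    status = interface_module.receive_status_from_aed()

    # 415: transmit the status over the low-power radio link while the AED is off
    if aed_powered_off:
        wireless_dock.receive_status(interface_module.transmit_status(status))

    # 420: forward the status through the dock's networking interface
    wireless_dock.forward_to_data_center(status)
```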
26,283
11943574
DESCRIPTION OF EMBODIMENTS The present disclosure will be described hereinafter through embodiments according to the present disclosure. However, the present disclosure set forth in the claims are not limited to the following embodiments. Further, not all of the components described in the embodiments are necessarily indispensable for solving the problem. For clarifying the explanations, the following descriptions and the drawings are partially omitted and simplified as appropriate. The same reference symbols are assigned to the same elements throughout the drawings, and redundant descriptions thereof are omitted as appropriate. FIG.1shows a schematic view of a communication analysis system1. The communication analysis system1is one specific example of an information processing system. The communication analysis system1is a system for collecting and analyzing big data regarding communication among a plurality of participants2and creating new values. For example, the communication analysis system1obtains, for each speech, an evaluation value for this speech in communication among the plurality of participants2. The communication analysis system1includes a plurality of wearable terminals3and an analysis apparatus4. While the number of participants2who participate in one communication is four inFIG.1, this is merely one example. The number of participants who participate in one communication may be two, four or more, or ten or hundred, for example. The communication is typically conversational communication that is established by speeches made by the participants thereof. Examples of this type of communication include a debate, a round-table discussion, and a workshop (or seminar) meeting. However, the communication is not limited to those in which all participants meet in the same real space. That is, the communication may also include those in which all participants meet in an online virtual space. (Wearable terminal3) Each wearable terminal3is one specific example of a sensor terminal. As shown inFIG.1, the plurality of wearable terminals3are worn by and used by a plurality of respective participants2. That is, one participant2wears one wearable terminal3. In this example embodiment, each of the wearable terminals3is a badge that can be attached to or detached from a top worn on the upper body of a respective one of the participants2, and may be attached to a place above the pit of the stomach of the participant2. The wearable terminal3may instead be a headset, an earphone, glasses, a necklace, a pendant or the like in place of being the badge. FIG.2shows a functional block diagram of each of the wearable terminals3. As shown inFIG.2, the wearable terminal3includes a CPU (Central Processing Unit)3a, a readable/writable RAM (Random Access Memory)3b, and a read-only ROM (Read Only Memory)3c. The wearable terminal3further includes a terminal ID information storage unit10, an oscillation circuit11, and a sensor12. Then, the CPU3aloads a control program stored in the ROM3cand executes the loaded control program, whereby the control program causes hardware such as the CPU3ato function as a terminal clock unit15, a terminal time correction unit16, a generation interval counter17, a terminal data No. counter18, a terminal data generation unit19, and a data transmission/reception unit20. Each of the wearable terminals3can perform two-way radio communication with the analysis apparatus4through the data transmission/reception unit20. 
The terminal ID information storage unit10stores terminal ID information for identifying a corresponding wearable terminal3from other wearable terminals3. Typically, the terminal ID information may be a MAC address unique to each wearable terminal3. However, the terminal ID information may be a number or a character set by the analysis apparatus4when the wearable terminal3is started up, or a combination thereof. In this example embodiment, the terminal ID information is a natural number set by the analysis apparatus4when the wearable terminal3is started up. The oscillation circuit11outputs signals with a constant frequency. Typically, the oscillation circuit11is an oscillator that outputs signals with a constant frequency by supplying a predetermined voltage. The sensor12outputs sensing data regarding the participant2wearing the corresponding wearable terminal3. In this example embodiment, the sensor12includes a microphone12aand an acceleration sensor12b. The microphone12a, which is one specific example of a sound-collecting unit, converts a sound around the corresponding wearable terminal3into a voltage value, and accumulates the voltage value in the RAM3bas sound pressure data. In this example embodiment, the microphone12aconverts, for every 50 ms, for example, a sound around the corresponding wearable terminal3into a voltage value, and accumulates the voltage value in the RAM3bas the sound pressure data. The acceleration sensor12bconverts three-axis accelerations of the corresponding wearable terminal3into voltage values and accumulates the voltage values in the RAM3bas acceleration data. When the participant2wearing the corresponding wearable terminal3shakes his/her head “vertically”, the upper body of the participant2repeats flexion and extension about the roll axis (an axis parallel to the axis that connects the left and right shoulders). Therefore, in this case, of the output values of the acceleration sensor12b, the output value that corresponds to the vertical component value fluctuates in such a manner that it repeatedly increases and decreases within a predetermined range. On the other hand, when the participant2wearing the corresponding wearable terminal3shakes his/her head “horizontally”, the upper body of the participant2repeats twisting around the yaw axis (an axis parallel to the direction in which the spine extends). Therefore, in this case, of the output values of the acceleration sensor12b, the output value that corresponds to the horizontal component value fluctuates in such a manner that it repeatedly increases and decreases within a predetermined range. By referring to the acceleration data in this way, it is possible to detect the operation of the participant2wearing the corresponding wearable terminal3. In this example embodiment, the acceleration sensor12bconverts, for each 100 ms, for example, three-axis accelerations of the corresponding wearable terminal3into voltage values and accumulates the voltage values in the RAM3bas the acceleration data. The sensor12may include only one of the microphone12aand the acceleration sensor12binstead of including both the microphone12aand the acceleration sensor12b. The sensor12may be formed of, for example, another sensor such as a body surface temperature measurement sensor. The terminal clock unit15includes terminal time data and updates the terminal time data based on the signal output from the oscillation circuit11. 
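Purely for illustration, the following sketch classifies a head movement as a vertical nod or a horizontal shake by comparing which acceleration component fluctuates more over a short window, consistent with the behavior of acceleration sensor12bdescribed above. The threshold value and axis naming are assumptions of the sketch.

```python
# Illustrative sketch (threshold and axis naming are assumptions): distinguish
# a vertical nod from a horizontal shake by comparing component fluctuation.
import statistics

def classify_head_motion(vertical_g, horizontal_g, min_std_g: float = 0.05):
    """vertical_g / horizontal_g: sequences of acceleration samples (in g)."""
    v_std = statistics.pstdev(vertical_g)
    h_std = statistics.pstdev(horizontal_g)
    if max(v_std, h_std) < min_std_g:
        return "still"
    return "nod (vertical)" if v_std >= h_std else "shake (horizontal)"

print(classify_head_motion([0.0, 0.3, -0.3, 0.3, -0.2],
                           [0.0, 0.02, -0.01, 0.01, 0.0]))
# -> nod (vertical)
```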
Therefore, the terminal clock unit15may also be referred to as an internal clock unit of the corresponding wearable terminal3. The terminal time correction unit16regularly corrects the terminal time data of the terminal clock unit15based on the time data externally acquired. Typically, the terminal time correction unit16corrects the terminal time data of the terminal clock unit15based on the time data received from the analysis apparatus4. Alternatively, the terminal time correction unit16may correct the terminal time data of the terminal clock unit15based on the latest time data acquired through access to a Network Time Protocol (NTP) via the analysis apparatus4and the Internet. Here, the accuracy of the correction of the terminal time data by the terminal time correction unit16will be described. In this example embodiment, each of the wearable terminals3and the analysis apparatus4can perform two-way communication through radio communication. As is well known, a communication delay accidentally occurs in wireless communications. Therefore, the communication time required from the time when the time data is output from the analysis apparatus4to each of the wearable terminals3to the time when each of the wearable terminals3actually receives the time data is not constant and it exhibits a so-called normal distribution. Therefore, the time data of the terminal clock units15of the respective wearable terminals3do not coincide with each other. The generation interval counter17and the terminal data No. counter18are counters that are necessary for the terminal data generation unit19to generate the terminal data. Referring now toFIG.3, a configuration of the terminal data19awill be described. As shown inFIG.3, the terminal data19aincludes a “terminal ID” area, a “data No.” area, a “terminal time data” area, a “sound pressure data” area, and an “acceleration data” area. The “terminal ID” area stores terminal ID information held by the terminal ID information storage unit10. The “data No.” area stores numbers allocated to a large number of pieces of terminal data generated by the terminal data generation unit19, the numbers identifying each of the pieces of terminal data from other terminals. However, the “data No.” area may be omitted. The “terminal time data” area stores the terminal time data held by the terminal clock unit15. The “sound pressure data” area stores the sound pressure data output from the microphone12a. The “acceleration data” area stores the acceleration data output from the acceleration sensor12b. The terminal data generation unit19regularly generates the terminal data19a. That is, the terminal data generation unit19generates the terminal data19aat a predetermined generation time interval (hereinafter, this will be referred to as a data generation time interval). In this example embodiment, the data generation time interval is five seconds. Therefore, the “sound pressure data” area of one piece of terminal data19astores sound pressure data for five seconds, i.e., 100 pieces of sound pressure data. Likewise, the acceleration data for five seconds, i.e., 50 pieces of acceleration data, are stored in the “acceleration data” area of one piece of terminal data19a. Typically, the terminal time data stored in the above “terminal time data” area is the terminal time data that corresponds to the timing when the earliest (oldest) sound pressure data among 100 pieces of sound pressure data included in the corresponding terminal data19ahas been detected. 
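As a non-limiting illustration of the terminal data19arecord and its sampling arithmetic (a 5-second generation interval with 50 ms sound sampling and 100 ms acceleration sampling yields 100 and 50 samples per record, respectively), consider the following sketch; the field names are assumptions of the sketch.

```python
# Sketch of a terminal data19a-like record (field names are illustrative).
from dataclasses import dataclass
from typing import List, Tuple

GENERATION_INTERVAL_S = 5.0
SOUND_PERIOD_S = 0.050
ACCEL_PERIOD_S = 0.100

@dataclass
class TerminalData:
    terminal_id: int
    data_no: int
    terminal_time: float                              # time of the oldest sound sample
    sound_pressure: List[float]                       # 100 samples per record
    acceleration: List[Tuple[float, float, float]]    # 50 three-axis samples

print(int(GENERATION_INTERVAL_S / SOUND_PERIOD_S),    # 100
      int(GENERATION_INTERVAL_S / ACCEL_PERIOD_S))    # 50
```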
The generation interval counter17counts the time defined as the data generation time interval based on the signal output from the oscillation circuit11. Note that the generation interval counter17does not count the time defined as the data generation time interval based on the terminal time data of the terminal clock unit15. The terminal data No. counter18includes a terminal data No. and increments the terminal data No. every time the terminal data generation unit19generates the terminal data19a. The data transmission/reception unit20transmits the terminal data19agenerated by the terminal data generation unit19to the analysis apparatus4. In this example embodiment, the data transmission/reception unit20transmits the terminal data19ato the analysis apparatus4through short-range radio communication such as Bluetooth (Registered Trademark). Alternatively, the data transmission/reception unit20may transmit the terminal data19ato the analysis apparatus4through wired communication. Further, the data transmission/reception unit20may transmit the terminal data19ato the analysis apparatus4through a network such as the Internet. In this case, the analysis apparatus4may be typically constructed on a cloud system. (Analysis Apparatus4) FIG.4shows a functional block diagram of the analysis apparatus4. As shown inFIG.4, the analysis apparatus4includes a CPU (Central Processing Unit)4a, a readable/writable RAM (Random Access Memory)4b, and a read-only ROM (Read Only Memory)4c. Then, the CPU4aloads a control program stored in the ROM4cand executes the loaded control program, whereby the control program causes hardware such as the CPU4ato function as a data transmission/reception unit30, a terminal data storage unit31, an apparatus clock unit32, an apparatus time correction unit33, an apparatus time data transmission unit34, a terminal data correction unit35, and a communication analysis unit36. The data transmission/reception unit30receives the terminal data19afrom each of the wearable terminals3, and stores and accumulates the received terminal data19ain the terminal data storage unit31.FIG.5shows the plurality of pieces of terminal data19aaccumulated in the terminal data storage unit31. As shown inFIG.5, the terminal data19areceived from each of the wearable terminals3is accumulated in the terminal data storage unit31in the order of the reception. Since the terminal data19areceived from the plurality of wearable terminals3are accumulated in the terminal data storage unit31, the terminal data19aincluding various terminal IDs are accumulated in the terminal data storage unit31.FIG.5only shows the terminal data19athat corresponds to the wearable terminal3whose terminal ID is 71, and the other pieces of terminal data19athat are actually present are not shown inFIG.5. The apparatus clock unit32includes apparatus time data and updates apparatus time data based on a signal output from an oscillation circuit (not shown). Therefore, the apparatus clock unit32may also be referred to as an internal clock unit of the analysis apparatus4. The apparatus time correction unit33regularly corrects the apparatus time data of the apparatus clock unit32based on the time data externally acquired. Typically, the apparatus time correction unit33corrects the apparatus time data of the apparatus clock unit32based on the latest time data acquired through access to a Network Time Protocol (NTP) via the Internet. 
The apparatus time data transmission unit34transmits the apparatus time data to each of the wearable terminals3regularly, e.g., every minute. The terminal data correction unit35corrects, for each of the wearable terminals3, the terminal time data of the plurality of pieces of terminal data19a. Specifically, the terminal data correction unit35corrects, for each of the wearable terminals3, the terminal time data of the plurality of pieces of terminal data19ain such a way that intervals between the terminal time data of the plurality of pieces of terminal data19abecome even on the time axis. The details thereof will be described later. The communication analysis unit36analyzes communication based on a plurality of pieces of terminal data19athat are accumulated in the terminal data storage unit31and whose terminal time data is corrected by the terminal data correction unit35, stores the results of the analysis in the RAM4b, or outputs the results of the analysis to a display (not shown). (Operation of Each Wearable Terminal3) Referring next toFIG.6, an operation of each of the wearable terminals3will be described.FIG.6shows a control flow of each of the wearable terminals3. S100: First, the generation interval counter17initializes a count value for counting the time defined as the data generation time interval and starts counting the time defined as the data generation time interval. S110: Next, the terminal data generation unit19acquires the terminal time data of the terminal clock unit15. S120: Next, the terminal data generation unit19acquires the sensing data output from the sensor12. S130: Next, the terminal data generation unit19accumulates the acquired sensing data in the RAM3b. S140: Next, the terminal data generation unit19determines if the time defined as the data generation time interval has elapsed. When the terminal data generation unit19has determined that the time defined as the data generation time interval has elapsed, the terminal data generation unit19advances the process to S150. On the other hand, when the terminal data generation unit19has determined that the time defined as the data generation time interval has not elapsed, the terminal data generation unit19returns the process to S120. Accordingly, the RAM3baccumulates the sensing data for five seconds. S150: Next, the terminal data generation unit19acquires the terminal data No. from the terminal data No. counter18. S160: Next, the terminal data generation unit19acquires the terminal ID information from the terminal ID information storage unit10and acquires sensing data for five seconds from the RAM3b. Then, as shown inFIG.3, the terminal data generation unit19generates terminal data19aincluding the terminal ID information, the terminal data No., the terminal time data, and sensing data for five second including the sound pressure data and the acceleration data. S170: Referring once again toFIG.6, the data transmission/reception unit20transmits the terminal data19agenerated by the terminal data generation unit19to the analysis apparatus4. S180: Then the terminal data No. counter18increments the terminal data No. S190: Further, the terminal time correction unit16determines whether or not the time data has been received from the analysis apparatus4. When the terminal time correction unit16determines that the time data has been received from the analysis apparatus4, the terminal time correction unit16advances the process to S200. 
On the other hand, when the terminal time correction unit 16 determines that the time data has not been received from the analysis apparatus 4, the terminal time correction unit 16 returns the process to S100. S200: Then, the terminal time correction unit 16 corrects the terminal time data based on the time data received from the analysis apparatus 4 and returns the process to S100. (Operation of Analysis Apparatus 4) Referring next to FIG. 7, an operation of the analysis apparatus 4 will be described. FIG. 7 shows an operation flow of the analysis apparatus 4. S300: First, the data transmission/reception unit 30 starts receiving the terminal data 19a from each of the wearable terminals 3. The data transmission/reception unit 30 accumulates the received terminal data 19a in the terminal data storage unit 31. S310: Next, the data transmission/reception unit 30 determines whether or not the reception of the terminal data 19a from each of the wearable terminals 3 has been completed. When the data transmission/reception unit 30 has determined that the reception of the terminal data 19a from each of the wearable terminals 3 has been completed, the data transmission/reception unit 30 advances the process to S320. When the data transmission/reception unit 30 has determined that the reception of the terminal data 19a from each of the wearable terminals 3 has not been completed, the data transmission/reception unit 30 repeats the processing of S310. S320: Next, the terminal data correction unit 35 divides, for each terminal ID, the plurality of pieces of terminal data 19a accumulated in the terminal data storage unit 31 based on the terminal ID information included in each piece of terminal data 19a. FIG. 5 illustrates the terminal data 19a received from the wearable terminal 3 whose terminal ID is 71. S330-S360: The terminal data correction unit 35 separately executes the processing from S330 to S360 for each terminal ID. S330: First, the terminal data correction unit 35 determines whether or not the reception of the terminal data 19a by the data transmission/reception unit 30 has been temporarily interrupted. When, for example, the power supply of the wearable terminal 3 has been temporarily turned off, the reception of the terminal data 19a by the data transmission/reception unit 30 is temporarily interrupted. Reference is now made to FIG. 8, a drawing prepared for convenience of description, which shows the difference between the terminal time data of terminal data No. (i) and the terminal time data of terminal data No. (i-1), added in the column of the terminal data 19a of terminal data No. (i). As described above, since the terminal data 19a is created every five seconds, the difference between the terminal time data of one piece of terminal data 19a and the terminal time data of another piece of terminal data 19a, these two pieces of terminal data 19a being adjacent to each other in the order of detection, is about five seconds except for a few cases, as shown in FIG. 8. As shown in FIG. 8, the difference between the terminal time data of the terminal data No. 14795 and the terminal time data of the terminal data No. 14796 is 104.213 seconds, that is, less than two minutes. This suggests that the power supply of the wearable terminal 3 has been temporarily turned off after the terminal data 19a of the terminal data No. 14795 is generated but before the terminal data 19a of the terminal data No. 14796 is generated.
In this case, the terminal data correction unit 35 determines that the reception of the terminal data 19a by the data transmission/reception unit 30 has been temporarily interrupted between the terminal data No. 14795 and the terminal data No. 14796, and advances the process to S340. On the other hand, when the reception of the terminal data 19a by the data transmission/reception unit 30 has not been temporarily interrupted, the terminal data correction unit 35 advances the process to S350. Occurrence of a temporary failure in communication between each of the wearable terminals 3 and the analysis apparatus 4 may cause the data transmission/reception unit 30 of the analysis apparatus 4 to fail to receive several pieces of terminal data 19a, though only to a very limited extent. In this case, the difference between the terminal time data of one piece of terminal data 19a and the terminal time data of another piece of terminal data 19a, these two pieces of terminal data 19a being adjacent to each other in the order of detection, is empirically considered to be at most about five times the data generation time interval. Therefore, when the difference between the terminal time data of one piece of terminal data 19a and the terminal time data of another piece of terminal data 19a, these two pieces of terminal data 19a being adjacent to each other in the order of detection, is equal to or more than ten times the data generation time interval, that is, 50 seconds or more, it is possible for the terminal data correction unit 35 to determine that the reception of the terminal data 19a by the data transmission/reception unit 30 has been temporarily interrupted. The time period of 50 seconds, which is the criterion used here, is one specific example of the first value. S340: The terminal data correction unit 35 divides the plurality of pieces of terminal data 19a into j groups at the points of interruption. In the example shown in FIG. 8, the plurality of pieces of terminal data 19a are temporarily interrupted between the terminal data No. 14795 and the terminal data No. 14796 and there are no other interruptions. Therefore, the terminal data correction unit 35 divides the plurality of pieces of terminal data 19a into two groups at the point of interruption. That is, the terminal data 19a from the terminal data No. 14772 to the terminal data No. 14795 belong to the group that comes first in the order of the detection, and the terminal data 19a from the terminal data No. 14796 to the terminal data No. 14802 belong to the group that comes later in the order of the detection. FIG. 9 only shows some of the plurality of pieces of terminal data 19a that belong to the group that comes first in the order of the detection. S350 and S360: Referring once again to FIG. 7, the terminal data correction unit 35 executes the processing of S350 and S360 separately for each group. S350: The terminal data correction unit 35 gives detection order data that is increased or decreased by a predetermined increase/decrease value in the order of the detection to the plurality of pieces of terminal data 19a. In this example embodiment, the terminal data correction unit 35 gives detection order data that is increased by one in the order of the detection to the plurality of pieces of terminal data 19a. Therefore, as shown in FIG. 9, in this example embodiment, the detection order data is a natural number. Alternatively, however, the predetermined increase/decrease value may be 0.35 or −0.27.
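A minimal sketch of the interruption check of S330 and the grouping of S340 may help. It assumes the pieces of terminal data are held as objects with a terminal_time attribute (such as the TerminalData sketch above), sorted in the order of detection, and it uses the 50-second first value (ten times the data generation time interval) described above. The function and variable names are illustrative.

```python
GENERATION_INTERVAL_S = 5.0                 # data generation time interval
FIRST_VALUE_S = 10 * GENERATION_INTERVAL_S  # 50 seconds: the interruption criterion of S330

def split_at_interruptions(records):
    """S330/S340 sketch: split terminal data (sorted in the order of detection) into
    groups wherever the gap between adjacent terminal time data is at least FIRST_VALUE_S."""
    groups, current = [], []
    for rec in records:
        if current and rec.terminal_time - current[-1].terminal_time >= FIRST_VALUE_S:
            groups.append(current)          # reception was temporarily interrupted here
            current = []
        current.append(rec)
    if current:
        groups.append(current)
    return groups
```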
As shown in FIG. 9, the detection order data (1 to 14) that is increased by one in the order of the detection is given to the terminal data 19a from the terminal data No. 14772 to the terminal data No. 14785. Likewise, the detection order data (16 to 24) that is increased by one in the order of the detection is given to the terminal data 19a from the terminal data No. 14787 to the terminal data No. 14795. Further, the terminal data correction unit 35 gives detection order data that is increased or decreased by a predetermined increase/decrease value in the order of the detection to the plurality of pieces of terminal data 19a in view of missing terminal data 19a. Specifically, when the difference between the terminal time data of one piece of terminal data 19a and the terminal time data of another piece of terminal data 19a, these two pieces of terminal data 19a being adjacent to each other in the order of detection, is longer than the data generation time interval but shorter than 50 seconds, the terminal data correction unit 35 sets, as the detection order data to be given to the terminal data 19a that comes later in the order of the detection, a value obtained by adding, to the detection order data given to the terminal data 19a that comes first in the order of the detection, the difference divided by the data generation time interval and multiplied by the predetermined increase/decrease value. In this example embodiment, the above 50 seconds is one specific example of the second value. The ground for the above 50 seconds is as follows. That is, the number of times that the data transmission/reception unit 30 fails to receive the terminal data 19a in succession is empirically less than 10 times at most, and the above 50 seconds correspond to a value obtained by multiplying the data generation time interval by 10. A description will be given using the example shown in FIG. 9. As two pieces of terminal data 19a that are adjacent to each other in the order of detection, the difference between the terminal time data of the terminal data 19a of the terminal data No. 14785 and the terminal time data of the terminal data 19a of the terminal data No. 14787 is 10.000 seconds, which is longer than 5 seconds and shorter than 50 seconds. Therefore, the terminal data correction unit 35 determines that there has been terminal data 19a that has not been received between the terminal data 19a of the terminal data No. 14785 and the terminal data 19a of the terminal data No. 14787, that is, that there has been a reception loss (missing) of the terminal data 19a. Then, the terminal data correction unit 35 sets the detection order data to be given to the terminal data 19a of the terminal data No. 14787, which comes later in the order of the detection, to "16". This value is obtained by adding "2", i.e., the difference of 10.000 seconds divided by the data generation time interval of five seconds and multiplied by the predetermined increase/decrease value of "1", to "14", which is the detection order data given to the terminal data 19a that comes first in the order of the detection. Note that, in this example embodiment, when each of the wearable terminals 3 generates terminal data 19a, the terminal data No. incremented in the order of the detection is included in the terminal data 19a. Therefore, the terminal data No. itself may be used as the detection order data.
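The assignment of detection order data in S350, including the compensation for missing terminal data 19a, can be sketched as follows. Rounding the ratio of the difference to the data generation time interval is an assumption made here to absorb the small per-piece fluctuations visible in FIG. 9; the names are illustrative.

```python
GENERATION_INTERVAL_S = 5.0
SECOND_VALUE_S = 10 * GENERATION_INTERVAL_S  # 50 seconds: upper bound for a mere reception loss
INCREMENT = 1.0                              # predetermined increase/decrease value

def assign_detection_order(group):
    """S350 sketch: give detection order data that grows by INCREMENT per piece of
    terminal data, advancing further when a reception loss (a gap longer than the
    generation interval but shorter than SECOND_VALUE_S) is found between neighbors."""
    if not group:
        return []
    current = 1.0
    order = [current]
    for prev, rec in zip(group, group[1:]):
        diff = rec.terminal_time - prev.terminal_time
        steps = round(diff / GENERATION_INTERVAL_S)   # rounding absorbs small fluctuations
        if steps > 1 and diff < SECOND_VALUE_S:
            current += steps * INCREMENT              # e.g. a 10.000 s gap advances the order by 2
        else:
            current += INCREMENT
        order.append(current)
    return order
```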
S360: Next, the terminal data correction unit 35 obtains a regression line (linear regression equation) between the terminal time data and the detection order data and changes the terminal time data to a value obtained by substituting the detection order data into the linear regression equation. Reference is now made to FIG. 10, which shows a graph in which the plurality of pieces of terminal data 19a shown in FIG. 9 are plotted. The horizontal axis of the graph shown in FIG. 10 indicates the detection order data and the vertical axis indicates the terminal time data. FIG. 10 shows a regression line E calculated by the terminal data correction unit 35. In this example embodiment, the regression line E is indicated by a linear regression equation: y = 5.00767x + 143476622.15498. In some embodiments, the regression line E is obtained based on, for example, the least-squares method. Reference is now made to FIG. 11, which shows an enlarged view of a part of the graph shown in FIG. 10. As shown in FIG. 11, the terminal data correction unit 35 changes, for each piece of terminal data 19a, the terminal time data t0 to a value t1 obtained by substituting the corresponding detection order data into x of the linear regression equation of the regression line E. In short, by moving the plot of the terminal data 19a parallel to the vertical axis and superimposing it on the regression line E, the terminal time data of the terminal data 19a is corrected. FIG. 9 shows the terminal time data after correction of the plurality of pieces of terminal data 19a and a difference between the terminal time data after correction of one piece of terminal data and the terminal time data after correction of another piece of terminal data, these two pieces of terminal data being adjacent to each other in the order of the detection. As shown in FIGS. 9 and 11, according to the above correction, intervals between the terminal time data of the plurality of pieces of terminal data 19a become even on the time axis. The technical significance of the correction is as follows. That is, a communication delay inevitably occurs when the terminal time data is corrected by the terminal time correction unit 16, and the amount of the communication delay is not constant and inevitably fluctuates every time. In the example shown in FIG. 8, the correction of the terminal time data by the terminal time correction unit 16 is executed between the time when the terminal data 19a of the terminal data No. 14778 is generated and the time when the terminal data 19a of the terminal data No. 14779 is generated. It is estimated that the amount of the communication delay at this time is longer than the previous amount of delay by 0.17 seconds. Likewise, the correction of the terminal time data by the terminal time correction unit 16 is executed between the time when the terminal data 19a of the terminal data No. 14791 is generated and the time when the terminal data 19a of the terminal data No. 14792 is generated. It is estimated that the amount of the communication delay at this time is shorter than the previous amount of delay by 0.03 seconds. As the amount of delay fluctuates every time as described above, the terminal times of the respective plurality of wearable terminals 3 never coincide with each other. However, the aforementioned communication delay can be expressed by a normal distribution, and the average value of the amounts of numerous communication delays converges to the average value of the normal distribution.
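A least-squares version of the S360 correction might look like the following sketch, which fits terminal time = a × detection order + b and then replaces each terminal time with the corresponding value on the regression line E. This is one possible implementation consistent with the description above, not the only one; it reuses the record objects and order lists from the earlier sketches.

```python
def correct_terminal_times(group, order):
    """S360 sketch: fit terminal time = a * detection order + b by least squares and
    replace each terminal time with the corresponding value on the regression line E."""
    n = len(group)
    if n < 2:
        return
    mean_x = sum(order) / n
    mean_y = sum(rec.terminal_time for rec in group) / n
    sxx = sum((x - mean_x) ** 2 for x in order)
    sxy = sum((x - mean_x) * (rec.terminal_time - mean_y) for x, rec in zip(order, group))
    a = sxy / sxx
    b = mean_y - a * mean_x
    for x, rec in zip(order, group):
        rec.terminal_time = a * x + b   # move each point parallel to the time axis onto E
```

Applying split_at_interruptions, assign_detection_order, and correct_terminal_times in sequence, separately for each terminal ID, mirrors the flow of S330 through S360 described above.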
It is considered that the average value of the normal distribution becomes the same value across the plurality of wearable terminals 3. Therefore, by correcting the terminal time data of the plurality of pieces of terminal data 19a using the regression line E in such a way that the intervals between the terminal time data of the plurality of pieces of terminal data 19a become even on the time axis, the fluctuation in the amount of the communication delay disappears and the deviation of the terminal time among the plurality of wearable terminals 3 is eliminated. It is therefore possible to analyze the plurality of pieces of terminal data 19a received from the plurality of wearable terminals 3 on the same time axis. It should be noted that the terminal time data after the correction of all the wearable terminals 3 is delayed from the apparatus time data by the amount corresponding to the average value of the amounts of the communication delay (the average value of the normal distribution). S370: Reference is made once again to FIG. 7. The communication analysis unit 36 analyzes the communication in which the plurality of participants 2 participate based on the plurality of pieces of terminal data 19a that have been corrected, and outputs the results of the analysis to, for example, a display. While example embodiments of the present disclosure have been described above, the above example embodiments include the following features. As shown in FIG. 1, the communication analysis system 1 (the information processing system) includes the plurality of wearable terminals 3 (the sensor terminals) worn by the plurality of participants 2 who participate in one communication and the analysis apparatus 4 (the information processing apparatus) capable of communicating with the plurality of wearable terminals 3. As shown in FIG. 2, each of the wearable terminals 3 includes the sensor 12 that outputs sensing data regarding the participant 2 wearing the wearable terminal 3, the terminal clock unit 15, the terminal time correction unit 16 that corrects the terminal time of the terminal clock unit 15 based on the time data externally acquired, the oscillation circuit 11, the terminal data generation unit 19 that acquires the sensing data output from the sensor 12 and the terminal time data output from the terminal clock unit 15 at the predetermined data generation time interval (the generation time interval) counted based on the output signal of the oscillation circuit 11, and generates the terminal data 19a including the sensing data and the terminal time data, and the data transmission/reception unit 20 (the data transmission unit) that transmits the terminal data 19a to the analysis apparatus 4. As shown in FIG. 4, the analysis apparatus 4 includes the terminal data storage unit 31 that stores the plurality of pieces of terminal data 19a received from each of the wearable terminals 3, and the terminal data correction unit 35 that corrects, for each of the wearable terminals 3, the terminal time data of the plurality of pieces of terminal data 19a. The terminal data correction unit 35 corrects the terminal time data of the plurality of pieces of terminal data 19a in such a way that the intervals between the terminal time data of the plurality of pieces of terminal data 19a become even on the time axis.
According to the above configuration, the deviation of the terminal time between the plurality of wearable terminals3, which is due to the communication delay at the time of correction by the terminal time correction unit16, is eliminated, whereby it becomes possible to analyze the terminal data19areceived from the plurality of wearable terminals3on the same time axis. Further, as shown inFIGS.9to11, the terminal data correction unit35adds the detection order data that is increased or decreased by a predetermined increase/decrease value in the order of the detection to the plurality of pieces of terminal data19a, obtains a linear regression equation between the terminal time data and the corresponding detection order data, and changes the terminal time data to a value obtained by substituting the corresponding detection order data into a linear regression equation. Further, as shown inFIG.8, when the difference between the terminal time data of one piece of terminal data19aand the terminal time data of another piece of terminal data19a, these two pieces of terminal data19abeing adjacent to each other in the order of detection, is equal to or larger than a first value, the terminal data correction unit35divides the plurality of pieces of terminal data19ainto two groups at a border between the two pieces of terminal data19aand corrects, for each group, the terminal time data of the plurality of pieces of terminal data19a. That is, when the wearable terminal3has temporarily stopped transmission of the terminal data19a, the difference between the terminal time data of one piece of terminal data19aand the terminal time data of another piece of terminal data19a, these two pieces of terminal data19abeing those before and after the stop, becomes large, which ends up being noise for the above linear regression equation. In order to deal with this problem, according to the above configuration, the plurality of pieces of terminal data19aare divided into groups before and after the stop and the correction is executed for each group, whereby it becomes possible to eliminate the influence of the above noise. Further, when the difference between the terminal time data of one piece of terminal data19aand the terminal time data of another piece of terminal data19a, these two pieces of terminal data19abeing adjacent to each other in the order of detection, is longer than the data generation time interval but equal to or smaller than the second value, the terminal data correction unit35sets, as the detection order data to be added to the terminal data19athat comes later in the order of the detection, a value obtained by adding a value obtained by multiplying a value obtained by dividing the difference by the generation time interval by a predetermined increase/decrease value to the detection order data to be added to the terminal data19athat comes first in the order of the detection. That is, when one of the plurality of pieces of terminal data19ais missing, the difference between the terminal time data of one piece of terminal data19aand the terminal time data of another piece of terminal data19a, these two pieces of terminal data19abeing those before and after the missing terminal data19a, becomes larger than the difference between the terminal time data of one piece of terminal data19aand the terminal time data of another piece of terminal data19a, which are terminal data19aprior to the missing. 
When, for example, a piece of terminal data 19a is missing, the difference between the terminal time data of one piece of terminal data 19a and the terminal time data of another piece of terminal data 19a, these two pieces of terminal data 19a being those before and after the missing terminal data 19a, becomes twice as large as the generation time interval. Therefore, according to the above configuration, the regression coefficient of the linear regression equation does not change due to the above missing. If the plurality of pieces of terminal data 19a were divided into groups and correction processing were performed every time the reception of the terminal data 19a ends in failure, the difference in the terminal time data of the terminal data 19a after the correction would be less likely to converge to the data generation time interval, and the deviation of the terminal time between the plurality of wearable terminals 3, which is due to the communication delay at the time of correction by the terminal time correction unit 16, would not be completely eliminated. In this sense as well, by preventing the plurality of pieces of terminal data 19a from being divided into groups every time the reception of the terminal data 19a ends in failure, it becomes possible to eliminate the deviation of the terminal time between the plurality of wearable terminals 3 at a high level. In the aforementioned examples, the program can be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as floppy disks, magnetic tapes, and hard disk drives), magneto-optical storage media (e.g., magneto-optical disks), CD-ROM (compact disc read only memory), CD-R (compact disc recordable), CD-R/W (compact disc rewritable), and semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, and RAM (random access memory)). The program may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g., electric wires and optical fibers) or a wireless communication line. From the disclosure thus described, it will be obvious that the embodiments of the disclosure may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims.
40,652
11943575
DETAILED DESCRIPTION Innovative wireless systems are described by way of reference to specific examples of one or more loudspeaker systems configured to synchronously play audio from an audio media source. Overview As shown inFIG.1, a loudspeaker system can include a plurality of loudspeakers10a-z. Each loudspeaker10a-zin a given loudspeaker system can be operatively coupled to a corresponding amplifier channel20a-n, allowing each loudspeaker to play audio corresponding to the respective channel. The PILL XL® loudspeaker system commercially available from Beats by Dr. Dre is but one example of such a loudspeaker system. A so-called stereo input signal62can comprise a signal for a left channel72and a signal for a right channel82. A selected loudspeaker system200of the type disclosed herein can include one or more loudspeakers70a,bconfigured to reproduce audio from the left channel72of the amplifier94and one or more other loudspeakers80a,bconfigured to reproduce audio from the right channel82of the amplifier. When a stereo (i.e., a two-channel) audio signal62passes through the amplifier94and the loudspeaker system200is set to a stereo mode, the speakers70a,bcoupled to the left channel72of the amplifier94play back the left-channel portion of the stereo signal62, and the speakers80a,bcoupled to the right channel82of the amplifier94play back the right-channel portion of the stereo signal62. When the loudspeaker system200is set to a “mono” mode, the left- and the right-channel signals are reproduced in their entirety on each of the left- and the right-channels72,82of the amplifier, such that all loudspeakers70a,b,80a,bin the loudspeaker system200play substantially identical audio signals. Some disclosed audio systems300can operatively couple (e.g., through a wireless coupling, or “link”) a pair of loudspeaker systems200,200′ to each other. In a first configuration, each of the plurality of loudspeaker systems200,200′ can simultaneously reproduce an entirety of an audio signal. The audio signal can be a stereo signal62, and each loudspeaker system200,200′ can reproduce the stereo signal62in a stereo mode or in a mono mode, as just described. In context of a stereo (i.e., a two-channel) signal62playing through an audio system300configured according to the first configuration, each loudspeaker70a,70b,70a′,70b′ operatively coupled to a left channel72,72′ in each loudspeaker system200,200′ can reproduce a left-channel portion of the stereo signal62. Similarly, each loudspeaker80a,80b,80a′,80b′ operatively coupled to a right channel82,82′ in each loudspeaker system200,200′ can reproduce a right-channel portion of the stereo signal62. In a second configuration, each of the plurality of loudspeaker systems200,200′ can reproduce a corresponding portion of an audio signal synchronously with each of the other loudspeaker systems200,200′. In context of a stereo signal62playing through an audio system300configured according to the second configuration, each loudspeaker70a,70b,80a,80bin a first loudspeaker system200can reproduce the left-channel portion of the stereo signal62and each loudspeaker70a′,70b′,80a′,80b′ in a second loudspeaker system200′ can reproduce the right-channel portion of the stereo signal62synchronously with the first loudspeaker system200, regardless of whether each respective loudspeaker70a,70b,80a,80b,70a′,70b′,80a′,80b′ in either loudspeaker system200,200′ is operatively coupled to a left-channel72,72′ or a right-channel82,82′ of an amplifier94,94′. 
For example, one of the loudspeaker systems200,200′ can reproduce only the left-channel signal and the other of the loudspeaker systems200,200′ can reproduce only the right-channel signal. Playback of Multi-Channel Signals The foregoing discussion of two-channel audio signals playing through a pair of two-channel amplifiers is provided as but one of many audio systems, for conciseness. In general, an audio system300can include a plurality of loudspeaker systems, each having a corresponding one or more amplifiers, respectively having one or more channels coupled to a corresponding one or more loudspeakers. Each of the loudspeaker systems can be operatively coupled to each other in any of a selected plurality of configurations to reproduce a selected multi-channel media signal in any of a variety of ways (e.g., in mono, in stereo, in a 2.1 theater mode, in a 5.1 theater mode, in a 7.1 theater mode, in a 9.1 theater mode, or in a mode having a plurality of zones, with each zone being configured to reproduce the media signal in any of a variety of corresponding modes, e.g., mono, stereo, and so on). Although reproduction of a two-channel input signal is briefly described above, benefits can accrue from a plurality of loudspeakers configured to play audio from each of any plurality of channels (e.g., two, three or more channels). In a general sense, a loudspeaker system configured to play two or more discrete audio signals is sometimes referred to in the art, and herein, as a “multi-channel” loudspeaker system. Similarly, a signal including information for each of a plurality of channels is sometimes referred to in the art, and herein, as a “multi-channel signal.” In certain configurations, a multi-channel loudspeaker system (e.g., a system100shown inFIG.1, a system200shown inFIG.2) can reproduce an entirety of a multi-channel signal (e.g., signal62) through all channels simultaneously, such that each loudspeaker in the loudspeaker system reproduces a substantially identical signal. As described above, in context of a stereo signal62having a left channel component and a right channel component, operating a given multi-channel loudspeaker system200in a “mono” mode can play the left channel component and the right channel component simultaneously through both channels72,82of the loudspeaker system. In such an instance, the loudspeaker system200can emit a relatively higher level of sound power (e.g., since, in the case of a two-channel signal, both loudspeaker channels emit the same signal). However, one or more of the benefits of channel separation (e.g., perception of various sources of sound) can be lost when a plurality of loudspeaker channels substantially simultaneously reproduce the entirety of the multi-channel input signal. In other configurations, a multi-channel loudspeaker system can reproduce each respective channel of a multi-channel signal through a corresponding loudspeaker channel. For example, a multi-channel signal can include a signal component corresponding to each of a center channel, a left channel, and a right channel. Other multi-channel signals can include a signal component corresponding to a left, front channel, a right, front channel, a left, rear channel, a right, rear channel, and a center, front channel. 
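To make the two configurations concrete, the following sketch routes a two-channel (stereo) signal either in the first ("amplify") configuration, in which each paired loudspeaker system keeps its own left/right separation, or in the second configuration, in which one system reproduces only the left-channel portion and the other only the right-channel portion. The function and mode names are illustrative and not taken from the disclosure.

```python
def route_stereo(left_samples, right_samples, mode):
    """Return (system_a, system_b): for each paired loudspeaker system, a dict mapping
    its amplifier channel ('left'/'right') to the samples that channel should reproduce."""
    if mode == "amplify":   # first configuration: both systems reproduce the entire signal
        per_system = {"left": left_samples, "right": right_samples}
        return per_system, dict(per_system)
    if mode == "split":     # second configuration: one channel portion per system
        return ({"left": left_samples, "right": left_samples},
                {"left": right_samples, "right": right_samples})
    raise ValueError("unknown mode: " + mode)
```

A "mono" variant, in which both channel portions are reproduced in their entirety on every amplifier channel, could be added in the same way.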
In instances, a multi-channel loudspeaker system can provide one or more benefits arising from channel separation, though the overall sound power emitted by the loudspeaker system can be less than if all (e.g., all five) channels simultaneously emitted substantially identical signals. As shown by way of example inFIG.3, some disclosed wireless audio systems300can operatively couple a plurality of wireless loudspeaker systems200,200′ to each other and to a selected one or more media sources60. Each loudspeaker system200,200′, in turn, can have a plurality of loudspeakers arranged in a selected multi-channel configuration. Generalized Loudspeaker System Configurations FIG.4shows a generalized audio system400. The system400has a plurality of loudspeaker systems200a-200nand at least one media source60. (Other audio systems can include a plurality of media sources, or a given media source can emit a plurality of media signals, e.g., one media signal for each respective zone having a corresponding one or more loudspeaker systems.FIG.4shows a single zone.) In a first configuration mode, each wireless loudspeaker system in a selected plurality of wireless loudspeaker systems corresponding to a given zone can reproduce an entirety of a multi-channel audio signal corresponding to the zone. For example, each wireless loudspeaker system in the plurality of wireless loudspeaker systems can operate in a multi-channel mode in which each respective loudspeaker channel reproduces a corresponding channel of a multi-channel audio signal. Such a configuration is sometimes referred to as an “amplify” mode because the entirety of the multi-channel signal is reproduced by a plurality of loudspeaker systems, despite that each respective channel of each loudspeaker system might reproduce but one of a plurality of channels within the multi-channel signal. In a second configuration mode, each wireless loudspeaker system in a plurality of wireless loudspeaker systems can reproduce a respective one channel of a multi-channel audio signal. For example, a first wireless loudspeaker system can be configured to reproduce a left-channel signal in a stereo signal, and a second wireless loudspeaker system can be configured to reproduce a right-channel signal in the stereo signal. The first wireless loudspeaker system and the second wireless loudspeaker system can be configured to reproduce the left-channel signal and the right-channel signal, respectively, synchronously with each other. Referring now toFIG.5by way of example, some particular loudspeaker systems500have at least a first loudspeaker501and a second loudspeaker502. The first and the second loudspeakers501,502can be selectively operable in a single-channel mode or in a multi-channel mode. In the single-channel mode, the first and the second loudspeakers501,502are operatively coupled to each other such that each loudspeaker can simultaneously reproduce a substantially identical signal. In the multi-channel mode, the first and the second loudspeakers501,502are operatively coupled to each other such that the first loudspeaker501can reproduce a first-channel signal (e.g., a left-channel signal) and the second loudspeaker502can reproduce a second-channel signal (e.g., a right-channel signal). In a general sense, the first-channel signal and the second-channel signal can constitute respective portions of a multi-channel signal. For example, such a multi-channel signal can, in general, include a plurality of signals corresponding to a corresponding plurality of zones. 
Each respective signal, in turn, can include a respective plurality of signal portions representing a given channel. A loudspeaker system500can include a mode selector93configured to select one of a single-channel mode and a multi-channel mode. In context of a system including a plurality of zones, the mode selector93can be configured to select one of a single-zone mode and a multi-zone mode, as well as, within each zone, a single-channel mode and a multi-channel mode. As but one example described more fully below, the mode selector93can select between or among a plurality of channel modes in response to a detected proximity of another loudspeaker system500′. For example, a mode selector93can configure a given loudspeaker system500to operate in a multi-channel mode in response to a first detected proximity of another loudspeaker system500′. The mode selector93can configure the given loudspeaker system to operate in a single-channel mode in response to a second detected proximity of the other loudspeaker system500′ within a predetermined duration following the first detected proximity of the other loudspeaker system. Of course, some mode selector embodiments can configure the loudspeaker system500to operate in a single-channel mode in response to the first detected proximity of another loudspeaker system500′, and configure the loudspeaker system500to operate in the multi-channel mode in response to a second detected proximity within a predetermined duration after the first detected proximity. In some instances, each of the plurality of loudspeaker systems500,500′ (and/or others, not shown) can be substantially simultaneously configured by respective mode selectors93,93′ upon mutual detection of a proximity of each other. A loudspeaker system200,300,500can include a transceiver, such as, for example, a wireless transceiver92, configured to receive and/or to transmit a wireless signal containing media information. The media information can include a single- or a multi-channel audio signal62. Media information can also include any of a variety of forms of video signals, or composite video and audio signals. The transceiver can be configured to pair with a wireless media player60in response to a detected proximity of a wireless media player when the transceiver92is not already paired with a media player. In some embodiments, the mode selector93is also configured to select the multi-channel mode when the transceiver92is initially paired with a wireless media player60. In such an instance, the mode selector93can be configured to select the single-channel mode in response to a proximity of another loudspeaker system500′ being twice detected within a predetermined duration. In some embodiments, when paired with a wireless media player60, the transceiver92can also be configured to pair with another loudspeaker system500′ in response to a detected proximity of the loudspeaker system500′. When paired with each other, each loudspeaker system500,500′ can reproduce or otherwise process a media signal62from a media player60. As but one example, two paired loudspeaker systems500,500′ can each simultaneously reproduce a multi-channel signal62in a multi-channel mode (sometimes referred to as an “amplify” mode). As another example, two paired loudspeaker systems500,500′ can each simultaneously reproduce a respective one or more channels of a multi-channel signal. 
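The proximity-gesture behavior of the mode selector 93 described above can be sketched as follows: a first detected proximity of a peer selects the multi-channel mode, and a second detection within a predetermined duration switches to the single-channel mode. The class name, method name, and 10-second window are assumptions for illustration; the disclosure does not fix a particular duration.

```python
import time

PREDETERMINED_DURATION_S = 10.0   # illustrative; the disclosure does not fix a value

class ModeSelector:
    """Sketch of mode selector 93: default to multi-channel mode on the first detected
    proximity of a peer loudspeaker system, switch to single-channel mode if the peer
    is detected again within PREDETERMINED_DURATION_S."""
    def __init__(self):
        self.mode = "multi-channel"
        self._last_detection = None

    def on_peer_proximity(self, now=None):
        now = time.monotonic() if now is None else now
        if (self._last_detection is not None
                and now - self._last_detection <= PREDETERMINED_DURATION_S):
            self.mode = "single-channel"     # second detection within the window
        else:
            self.mode = "multi-channel"      # first detection (or window expired)
        self._last_detection = now
        return self.mode
```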
In context of a two-channel signal, one of the paired loudspeaker systems500,500′ can reproduce the left-channel signal and the other of the paired loudspeaker systems can reproduce the right-channel signal, thereby providing a measure of “stereo” playback, as described above. Configuration by User Gesture Some disclosed audio systems500,500′ can be configured according to one or more operating (or configuration) modes using, for example, simple user gestures. As but one example, a user can position a first loudspeaker system500in close proximity (i.e., less than several centimeters (cm), such as less than about 3 cm to about 4 cm, for example, between about 2 cm and about 3 cm apart) to a wireless media player60or to another, e.g., a second, loudspeaker system500′. The first loudspeaker system500can have a transceiver module91configured to detect a presence of a peer transceiver module associated with, for example, the wireless media player60and/or the other loudspeaker system500′. For example, the transceiver module91can be configured to detect a presence of a peer transceiver module (e.g., module91′ in the second loudspeaker system500′) when the transceiver modules are spaced apart by no more than about 4 cm. The respective transceiver modules91,91′ can transmit and receive wireless communication signals to and from each other. Some such communication signals510can contain configuration information associated with the first loudspeaker system500, the second loudspeaker system500′, and/or the media player60. Each loudspeaker system (and/or the media player60), can include a link activator95,95′ configured to establish a peer-to-peer wireless communication link510,510′ between the transceiver module91and the peer transceiver module91′. The peer-to-peer communication link510,510′ can be suitable for the transceiver modules91,91′ to mutually exchange wireless communication signals containing configuration information associated with the corresponding devices (e.g., the first loudspeaker system500, the media player60, and/or the second loudspeaker system500′). In some instances, the peer-to-peer communication link510,510′ can be a first peer-to-peer communication link, and the first loudspeaker system500, the media player60, and/or the second loudspeaker system500′, can accommodate a second peer-to-peer wireless link520. The configuration information exchanged over the first communication link510,510′ can be used to configure the second peer-to-peer wireless link520and associated transceivers (e.g., transceivers92,92′). The second peer-to-peer wireless link520,520′ can be used to carry (or exchange), for example, media information from a media player60to the first loudspeaker system500and/or from the first loudspeaker system to the second loudspeaker system500′. A configuration module96can select one of a single-channel mode and a multi-channel mode for the first loudspeaker system500. The selected configuration can be, but need not be, based in part on configuration information contained in a wireless communication signal510received from the peer transceiver module (e.g., module61). For example, the configuration module96can simply determine whether a peer transceiver61has paired with the transceiver91in the first loudspeaker system500and whether the peer transceiver61has been placed in close proximity to the first loudspeaker system500one or more times within a selected duration. 
From such proximity information, the configuration module96can select, for example, a single-channel or a multi-channel configuration for the first loudspeaker system. With such a configuration, a loudspeaker system500can be placed in close proximity to a peer device (e.g., a wireless media player60or another loudspeaker system500′). Upon being placed in close proximity to each other, the loudspeaker system500and the peer device60,500′ can link together wirelessly in a suitable manner as to play a media signal520through the loudspeaker system in a selected mode. As an example, a second loudspeaker system500′ can be placed in close proximity to the first loudspeaker system500, and the first and the second loudspeaker systems500,500′ can be wirelessly paired with each other (e.g., linked with each other). For example, placing the loudspeaker systems500,500′ in close proximity to each other can initiate pairing of the first loudspeaker system500and the second loudspeaker system500′. Once paired with each other, the first and the second loudspeaker systems can simultaneously play at least a portion of a media signal520,520′ (e.g., each can be in a single-channel or a multi-channel mode). As a default setting, each of the first and the second loudspeaker systems can be configured to operate in a multi-channel mode upon pairing with each other, and, in the event of being brought into close proximity to each other a second time within a predetermined duration, to operate in complementary single-channel modes (e.g., system500playing a left-channel signal and system500′ playing a right-channel signal). Although systems including powered transceivers placed into close proximity are described above, some contemplated embodiments described above include a device600(e.g., a media device and/or a loudspeaker system) having a powered transceiver601placed into close proximity to a device610having an unpowered communication device611, sometimes referred to in the art as a “tag”. As illustrated inFIG.6, a common example of a tag611is an RFID device. Such a tag611can store information (e.g., configuration information) and can transmit such information when a powered device, e.g., a first transceiver601, is in close proximity to the tag611. A tag is but one contemplated example of a wireless transceiver for a peer-to-peer wireless connection over close proximity. Disclosed systems for and approaches of automatically configuring wireless systems provide substantial simplification of pairing devices, yet provide substantially similar, if not identical, degrees of confidence in security and pairing robustness. Disclosed systems and approaches can be used in connection with contactless transactions, data exchange, and simplified setup of more complex communications systems. Wireless Protocols Existing wireless communication protocols between or among computing environments require substantial interactions from a user to configure them. In contrast, presently disclosed wireless communication protocols can be suitable for automatically configuring wireless systems, including wireless audio systems, as described above. Some disclosed embodiments of such wireless communication protocols require relatively little user interaction to achieve any of a plurality of wireless system configurations. 
For example, as noted above, one or more of a variety of wireless audio system configurations can be selected using user gestures (e.g., by bringing a pair of loudspeaker systems into close proximity to each other one or more times during a predetermined duration). Some disclosed wireless loudspeaker systems incorporate one or more communications transceivers configured to operate with such a wireless communication protocol. Near Field Communication (NFC) is a set of short-range wireless connectivity technologies that can transmit relatively small amounts of information with little initial setup time and power consumption. NFC enables relatively simple and relatively secure two-way (point-to-point) interactions between electronic devices when brought into close proximity with each other. Disclosed applications for NFC include contactless transactions, data exchange and simplified setup of more complex technologies such as WLAN. NFC communications are based on inductive coupling between two loop antennas and operate in the globally available and unlicensed ISM band of 13.56 MHz. NFC supports data rates of 106 kbit/s, 212 kbit/s and 424 kbit/s. NFC communications protocols and data exchange formats are generally based on existing RFID standards as outlined in ISO/IEC 18092:
NFC-A, based on ISO/IEC 14443A;
NFC-B, based on ISO/IEC 14443B; and
NFC-F, based on FeliCa JIS X6319-4.
This makes NFC devices compatible with existing passive 13.56 MHz RFID tags and contactless smart cards in line with the ISO 18000-3 air interface. NFC point-to-point communications typically include an initiator and a target, as shown in FIG. 7. For active communications between two powered NFC devices (e.g., transceivers 61, 91 in FIG. 5), the initiator and the target can alternately generate their own fields as indicated in FIG. 7. In passive communications mode, a passive target, such as a tag 611 (FIG. 6), draws its operating power from the RF field actively provided by the initiator, for example an NFC reader. In this mode an NFC target can take very simple form factors, such as a sticker, because no battery is required. NFC-enabled devices generally support any of three operating modes:
Reader/writer: Compliant with the ISO 14443 and FeliCa specifications, the NFC device is capable of reading a tag (an unpowered NFC chip) integrated, for example, in a smart poster, sticker or key fob.
Peer-to-peer: Based on the ISO/IEC 18092 specification, two self-powered NFC devices can exchange data such as virtual business cards or digital photos, or share WLAN link setup parameters.
Card emulation: Stored data can be read by an NFC reader, enabling contactless payments and ticketing within the existing infrastructure.
As but one specific example of such wireless protocols, an implementation of the NFC (near-field communications) standard can be used to configure one or more Bluetooth-enabled devices (e.g., transceivers 92 in FIG. 5, and a corresponding transceiver in the media device 60). Other wireless devices can be configured as well. For example, IEEE 802.11 devices (sometimes referred to as "Wi-Fi" devices) can be complementarily configured, including with passwords and security codes or phrases, to pair with each other and/or other network devices using user gestures as described herein. One particular example of a disclosed wireless system includes an NFC peer-to-peer (p2p) chip, a processor, memory, an out-of-band radio circuit (including but not limited to Bluetooth), and interrupt hardware.
A task scheduler, an interrupt service routine, an interprocess messaging system, and an NFC data encapsulation parser can be executed in the microprocessor. Alternatives to detecting proximity of another device include received signal strength indication (RSSI) in connection with a Bluetooth, a Bluetooth Low Energy, or a Wi-Fi transmitter. A loudspeaker system of the type disclosed herein can include a transceiver (e.g., transceivers 61 or 91 in FIG. 5) that acts as an initiator and/or a target. When acting as an initiator 710, the transceiver 700 can send a SENSF_REQ command to the handset or other peer device (e.g., another loudspeaker system 500′ (FIG. 5)). The data and payload format contained in the NFC Forum Digital Protocol Technical Specification (dated Nov. 17, 2010) (e.g., Section 6.4, p. 74; FIG. 23) can be followed. A typical interaction between an NFC-enabled loudspeaker system 500, 500′ and another NFC-enabled device (e.g., media player 60) will be described as but one possible example of disclosed systems. A typical NFC-enabled device, such as, for example, an Android phone or other media player 60, can poll through a plurality of protocols in a "round robin" cycle, as indicated in FIG. 8. For example, the device 61 can poll sequentially through the protocols: ISO 15693, Card Emulation, NFC Active, etc. Some disclosed wireless systems, e.g., some disclosed loudspeaker systems, include a commercially available NFC device configured to poll between NFC Initiator Mode (at 424 kbps) for a first duration (e.g., about 100 milliseconds (ms)) and NFC Target Mode (at 424 kbps) for a second duration (e.g., about 400 ms), as shown in FIG. 7. One example of such an NFC device is a TRF7970A NFC device commercially available from Texas Instruments. FIG. 7 shows an example of such polling. Referring now to FIG. 9, an example of peer-to-peer operation using the Simple NDEF Exchange Protocol (SNEP) will be described. The example SNEP operation 900 described herein includes the NFC-F protocol, NFC-DEP, SNEP, the NDEF Message Format, and a Logical Disconnection Process as but one example. The operational overview depicted in FIG. 9 is based on a commercially available TRF7970A device interacting with another NFC-enabled peer-to-peer device, such as an NFC-enabled Android operating system handset. The TRF7970A can be placed in active initiator mode 710 at 424 kbps (FIG. 7). The SENSx_REQ (first command) 910 can determine the protocol to be followed (e.g., NFC-F or NFC-A). For purposes of illustration, communication using the NFC-F standard will be discussed. However, other protocols and devices are contemplated, as will be understood by those of ordinary skill in the art after reviewing the entirety of this disclosure. For convenience, relevant NFC Forum Specifications are listed beside each command in FIG. 9. As used herein, the term "DP" refers to the NFC Forum Technical Specification Digital Protocol 1.0; the term "LLP" refers to the NFC Forum Technical Specification Logical Link Protocol; and the term "SNEP" refers to the NFC Forum Technical Specification SNEP 1.0. Once a connection (e.g., wireless link 510) between the wireless devices 61, 91 (FIG. 5) is established, data can flow in either direction. FIGS. 5 and 9 are simplified illustrations of the flow of information; SYMM PDUs can be, and for the most part are, exchanged multiple times in between the respective illustrated commands.
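The alternating poll described above (about 100 ms in NFC Initiator Mode and about 400 ms in NFC Target Mode, both at 424 kbit/s) can be sketched as a simple loop. The transceiver object and its poll_as_initiator/listen_as_target methods are hypothetical placeholders, not the actual TRF7970A driver interface.

```python
INITIATOR_WINDOW_S = 0.100   # about 100 ms in NFC Initiator Mode (424 kbit/s)
TARGET_WINDOW_S = 0.400      # about 400 ms in NFC Target Mode (424 kbit/s)

def poll_loop(transceiver, stop):
    """Alternate between initiator and target mode until a peer is found or stop() returns True.
    `transceiver` is a hypothetical driver with poll_as_initiator(timeout) and
    listen_as_target(timeout) methods, each returning a peer handle or None."""
    while not stop():
        peer = transceiver.poll_as_initiator(timeout=INITIATOR_WINDOW_S)
        if peer is not None:
            return "initiator", peer   # a target answered; a SENSF_REQ can follow
        peer = transceiver.listen_as_target(timeout=TARGET_WINDOW_S)
        if peer is not None:
            return "target", peer      # a peer initiator found us
    return None
```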
Memory can be allocated in the initiator and in the target as follows:

Flash:
main.c           200 bytes
mcu.c (timer)    300 bytes
spi.c            500 bytes
trf797x.c        1500 bytes
snep.c           1000 bytes
llcp.c           2000 bytes
nfc_dep.c        1000 bytes
nfc_f.c          500 bytes
nfc_p2p.c        1000 bytes
Estimated total Flash: 10 kB

RAM:
main.c           70 bytes
mcu.c (timer)    10 bytes
spi.c            1 byte
trf797x.c        144 bytes
snep.c           20 bytes
llcp.c           16 bytes
nfc_dep.c        18 bytes
nfc_f.c          12 bytes
nfc_p2p.c        12 bytes
stack            70 bytes
Estimated RAM: 373 bytes with stack and 303 bytes without stack

FIG.10shows a simplified schematic illustration of an exchange of data, or other information, between an initiator and a target when establishing an initial pairing between transceivers brought into close proximity to each other. The initiator can send a SENSF_REQ and the target responds with a SENSF_RES911. FIG.11shows an example of a SENSF_REQ. The SENSF_REQ can be transmitted, then the EOTX IRQ can be received and handled, the first-in, first-out (FIFO) buffer can be cleared, etc. (similar to other commands transmitted with the TRF7970A). The following table describes the word allocation of the SENSF_REQ910(shown inFIG.11).

Byte #   Description              Value (hex)
0        Length                   06
1        Command                  00 (DP_SENSF_REQ)
2:3      System Code (SC)         FF FF (DP, Section 6.6.1.1, default)
4        Request Code (RC)        00 (DP, no system code information requested)
5        Time Slot Number (TSN)   03 (DP, Table 42, 4 time slots)

As used in the table above:
1. The term “SC” refers to the System Code (SC), which contains information regarding the NFC Forum Device to be polled for (e.g., the Technology Subset) (see the Requirements 80 table in DP for more information);
2. The term “RC” refers to the Request Code (RC), which is used to retrieve additional information in the SENSF_RES Response; Table 41 (page 76 in DP) specifies the RC code(s); and
3. The term “TSN” refers to the Time Slot Number (TSN), which is used for collision resolution and to reduce the probability of collisions.

An anticollision scheme can be based on a definition of time slots in which NFC Forum Devices in Listen Mode are invited to respond with minimum identification data. The NFC Forum Device in Poll Mode can send a SENSF_REQ Command with a TSN value indicating the number of time slots available. Each NFC Forum Device in Listen Mode that is present within the range of the Operating Field can then randomly select a time slot in which it responds. The TSN byte set to 00h can force all NFC Forum Devices in Listen Mode to respond in the first time slot, and therefore, this TSN value can be used if collision resolution is not used. In response to the SENSF_REQ910command sent by the initiator, the target can respond with a SENSF_RES911. The SENSF_RES word can be allocated as follows:

Byte #   Description         Value (hex)
0        Length              12 (or 14; see note below on RD)
1        Command             01 (SENSF_RES)
2:9      NFCID2              01 FE 6F 5D 88 11 4A 0F (for example)
10:11    PAD0                C0 C1
12:14    PAD1                C2 C3 C4
15       MRTICHECK           C5
16       MRTIUPDATE          C6
17       PAD2                C7
18:19    Request Data (RD)   (only present when RC ≠ 00, sent in SENSF_REQ)

The EORX IRQ can be received, and the FIFO status register can be read for the SENSF_RES (response). In an example, the response can include 18 bytes: Register 0x1C = 0x12 = DEC 18. Then the FIFO can be reset, similar to other TRF7970A RX operations. Although NFCID2is shown in the table above as an example, each device/session can have a corresponding unique number returned here. The NFC Forum Device can set PAD0 to a different value if configured for the Type 3 Tag platform in a particular configuration. (The NFC specification says this value must otherwise be set to FF FF.)
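The byte layouts in the two tables above can be mirrored almost directly in code. The following Python sketch builds the six-byte SENSF_REQ and splits a SENSF_RES into its fields; transport framing, CRC, and timing are handled by the NFC front end and are omitted here, and the parsing offsets are simply those listed in the tables.

```python
# Sketch of the SENSF_REQ/SENSF_RES payload layouts from the tables above.

def build_sensf_req(system_code=0xFFFF, request_code=0x00, tsn=0x03) -> bytes:
    """Length, command 00h, 2-byte SC, RC, TSN (six bytes total)."""
    body = bytes([0x00,                       # command: SENSF_REQ
                  (system_code >> 8) & 0xFF,  # SC high byte
                  system_code & 0xFF,         # SC low byte
                  request_code,               # RC
                  tsn])                       # time slot number
    return bytes([len(body) + 1]) + body      # length byte counts itself

def parse_sensf_res(frame: bytes) -> dict:
    """Split a SENSF_RES into the fields listed in the table above."""
    if len(frame) < 18 or frame[1] != 0x01:
        raise ValueError("not a SENSF_RES frame")
    return {
        "length": frame[0],
        "nfcid2": frame[2:10],
        "pad0": frame[10:12],
        "pad1": frame[12:15],
        "mrti_check": frame[15],
        "mrti_update": frame[16],
        "pad2": frame[17],
        "request_data": frame[18:20] if frame[0] >= 0x14 else b"",
    }

if __name__ == "__main__":
    print(build_sensf_req().hex())            # -> 0600ffff0003
    sample_res = bytes([0x12, 0x01]) + bytes.fromhex("01FE6F5D88114A0F") + \
                 bytes.fromhex("C0C1C2C3C4C5C6C7")
    print(parse_sensf_res(sample_res))
```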
The PAD1 format can depend on the NFC-F Technology Subset for which the NFC Forum Device in Listen Mode is configured. NFC Forum Devices configured for the NFC-DEP Protocol do not generally use PAD1. Coding of MRTICHECK can depend on the NFC-F Technology Subset for which the NFC Forum Device in Listen Mode is configured. NFC Forum Devices configured for the NFC-DEP Protocol do not generally use MRTICHECK. The MRTIUPDATE format can depend on the NFC-F Technology Subset for which the NFC Forum Device in Listen Mode is configured. NFC Forum Devices configured for the NFC-DEP Protocol do not generally use MRTIUPDATE. The PAD2 format can depend on the NFC-F Technology Subset for which the NFC Forum Device in Listen Mode is configured. NFC Forum Devices configured for the NFC-DEP Protocol do not generally use PAD2. Request Data (RD) can be included in the SENSF_RES Response911if requested in the RC field of the SENSF_REQ Command910. The Request Data (RD) format can depend on the NFC-F Technology Subset for which the NFC Forum Device in Listen Mode is configured. Following the initialization and anti-collision procedure defined in [DIGITAL], the Initiator device can send the Attribute Request ATR_REQ command920(FIGS.9,12):

NFC-DEP portion
Byte #   Description        Value (hex)
0        Length             25 (37 bytes)
1:2      Command            D4 00 (ATR_REQ)
3:12     NFCID3I            NFCID3I = 01 FE 6F 5D 88 11 4A 0F 00 00
13       DIDI               00
14       BSI                00
15       BRI                00
16       PPI                32 (max payload 254 bytes)

LLCP portion
Byte #   Description        Value (hex)
17:19    LLCP Magic #       46 66 6D
20:22    TLV: Version #     01 01 11 (v1.1)
23:26    TLV: MIUX          02 02 07 80 (128 + MIU (1792) = 1920 bytes)
27:30    TLV: Services      03 02 00 03 (WKS LLC Link Management)
31:33    TLV: LTO           04 01 32 (500 ms timeout, Figure 22, LLP)
34:36    TLV: Option Param  07 01 03 (Class 3) (Table 7, LLP)
37:48    TLV: Private       Tap To Pair Data

The format of the ATR_REQ920is shown in FIG. 28 of the LLP Specification andFIG.12herein, and summarized in the table above. The Initiator can include the NFC Forum LLCP magic number921in the first three octets of the ATR_REQ General Bytes field922. All LLC parameters defined in Section 4.5, Table 6, for use in PAX PDUs that are to be exchanged can be included as TLVs beginning at the fourth octet923of the ATR_REQ General Bytes field922. The PAX PDU exchange described in the LLC link activation procedure (cf. Section 5.2) need not be used. The ATR_REQ General Bytes field need not contain any additional information. NFCID3Iis the NFC Forum Device identifier of the Initiator for the NFC-DEP Protocol. The Initiator Device Identification Number (DIDI)924can be used to identify different Targets (e.g., different loudspeaker systems500,500′) that are activated at one time. If multiple target activation is not used, the DIDI field can be set to zero. BSI925and BRI926indicate the bit rates in Active Communication mode supported by the Initiator in both transmission directions. The coding of BSI and BRI is specified in Table 88 and Table 89 of the Digital Protocol Specification. The PPI field927indicates the Length Reduction field (LRI) and the presence of optional parameters. The format of the PPI byte is specified in Table 90 of the Digital Protocol Specification. The NFC-DEP MAC component can use the three-octet sequence “46h 66h 6Dh” as the NFC Forum LLCP magic number. This magic number is encoded into the ATR_REQ920/ATR_RES930General Bytes fields, as described below. The use of the magic number by the Initiator and Target can indicate compatibility with the requirements of this specification.
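As a worked illustration of the ATR_REQ layout in the table above, the following Python sketch assembles the NFC-DEP header and a General Bytes field that begins with the LLCP magic number followed by TLVs. The TLV values are the example values from the table; the type code used for the private "tap to pair" TLV and its contents are assumptions, since they are application-defined.

```python
# Sketch of an ATR_REQ payload following the table above: an NFC-DEP header
# followed by General Bytes that start with the LLCP magic number and TLVs.

LLCP_MAGIC = bytes([0x46, 0x66, 0x6D])

def tlv(t: int, value: bytes) -> bytes:
    return bytes([t, len(value)]) + value

def build_atr_req(nfcid3i: bytes, tap_to_pair: bytes) -> bytes:
    header = (bytes([0xD4, 0x00])        # command bytes: ATR_REQ
              + nfcid3i                  # 10-byte initiator identifier
              + bytes([0x00,             # DIDI
                       0x00,             # BSI
                       0x00,             # BRI
                       0x32]))           # PPI (max payload 254 bytes)
    general_bytes = (LLCP_MAGIC
                     + tlv(0x01, bytes([0x11]))         # VERSION 1.1 -> 01 01 11
                     + tlv(0x02, bytes([0x07, 0x80]))   # MIUX        -> 02 02 07 80
                     + tlv(0x03, bytes([0x00, 0x03]))   # WKS         -> 03 02 00 03
                     + tlv(0x04, bytes([0x32]))         # LTO         -> 04 01 32
                     + tlv(0x07, bytes([0x03]))         # OPT, Class 3
                     + tlv(0x20, tap_to_pair))          # private TLV (type assumed)
    body = header + general_bytes
    return bytes([len(body) + 1]) + body                # leading length byte

if __name__ == "__main__":
    nfcid3i = bytes.fromhex("01FE6F5D88114A0F0000")
    print(build_atr_req(nfcid3i, b"PAIR-CFG").hex())
```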
The link activation phase can be started when a peer device capable of executing the LLCP peer-to-peer protocol enters communication range (e.g., is positioned in close proximity), and the local device is instructed to perform peer-to-peer communication. The link activation phase can be different for the Initiator and the Target device and is described separately for each role. The target can send a corresponding response (ATR_RES930,FIG.13) based on the NFC Digital Protocol and the LLCP documents (see NFC Digital Protocol Table 92, LLP Spec Section 6.2.3.2):

NFC-DEP portion
Byte #   Description        Value (hex)
0        Length             1F (31 bytes)
1:2      Command            D5 01 (ATR_RES, fixed values)
3:12     NFCID3T            NFCID3T = F3 95 62 DF C3 28 BD 9D 94 E0
13       DIDT               00
14       BST                00
15       BRT                00
16       TO                 0E
17       PPT                32 (max payload 254 bytes)

LLCP portion
Byte #   Description        Value (hex)
18:20    LLCP Magic #       46 66 6D
21:23    TLV: Version #     01 01 11 (v1.1)
24:27    TLV: Services      03 02 00 13 (WKS LLC Link Management)
28:30    TLV: LTO           04 01 96 (1.5 sec)
31:42    TLV: Private       Tap To Pair Data

Following the initialization and anti-collision procedure defined in [DIGITAL], the Target device can wait until the receipt of the Attribute Request ATR_REQ920command. Upon receipt of ATR_REQ920, the Target can verify that the first three octets921of the General Bytes field922are equal to the NFC Forum LLCP magic number defined in Section 6.2.2. If the octet sequence is equal to the NFC Forum LLCP magic number, the Target can respond by sending the Attribute Response ATR_RES930, as defined in [DIGITAL]. The format of the ATR_RES930can be as shown in FIG. 29 of the LLP Spec (page 43) andFIG.13herein. The Target can include the NFC Forum LLCP magic number in the first three octets931of the ATR_RES General Bytes field932. All LLC parameters defined in Section 4.5, Table 6, for use in PAX PDUs that are to be exchanged can be included as TLVs933beginning at the fourth octet of the ATR_RES General Bytes field932. The PAX PDU exchange described in the LLC link activation procedure (cf. Section 5.2) need not be used. Upon receipt of the Attribute Response ATR_RES930, the Initiator can verify that the first three octets931of the General Bytes field932are equal to the NFC Forum LLCP magic number defined in Section 6.2.2. If the octets are equal to the NFC Forum LLCP magic number, the Initiator can notify the local LLC component about the MAC link activation completion and can then enter normal operation described in Section 6.2.5. For example, each transceiver can exchange configuration information for a media communication link using a Bluetooth, a Wi-Fi, or other protocol. If the first three octets of the General Bytes field are not equal to the NFC Forum LLCP magic number, the link activation can fail. In this case, any further communication between the Initiator and the Target can be terminated and/or reinitiated. After sending ATR_RES930, the Target can notify the local LLC component about the MAC link activation completion and can then enter normal operation described in Section 6.2.5. For example, each transceiver can exchange configuration information for a media communication link using a Bluetooth, a Wi-Fi, or other protocol. If the magic number in the received ATR_REQ cannot be verified, the link activation can fail. In this case, any further communication between the Initiator and the Target can be terminated and/or reinitiated. FIG.14shows such information exchange. For example, the configuration information can include 12 bits for controlling volume or selecting an audio or other media source.
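The magic-number check that gates link activation can be sketched in a few lines. In this Python sketch, the frame offsets follow the ATR_REQ layout tabulated earlier, and the ATR_RES builder is passed in as a stub; it illustrates only the verification step, not a complete NFC-DEP stack.

```python
# Sketch of the target-side check described above: verify that the first
# three octets of the ATR_REQ General Bytes equal the LLCP magic number,
# then either answer with an ATR_RES or treat link activation as failed.
from typing import Callable, Optional

LLCP_MAGIC = bytes([0x46, 0x66, 0x6D])

def general_bytes_of_atr_req(frame: bytes) -> bytes:
    # length(1) + command(2) + NFCID3I(10) + DIDI/BSI/BRI/PPI(4) = 17 bytes
    return frame[17:]

def activate_link_as_target(atr_req: bytes,
                            build_atr_res: Callable[[], bytes]) -> Optional[bytes]:
    """Return an ATR_RES if the magic number checks out, else None (failure)."""
    gb = general_bytes_of_atr_req(atr_req)
    if gb[:3] != LLCP_MAGIC:
        return None              # link activation fails; communication may be reinitiated
    return build_atr_res()       # notify the local LLC and enter normal operation

if __name__ == "__main__":
    stub_atr_res = lambda: bytes([0x1F, 0xD5, 0x01])     # placeholder response builder
    good = bytes(17) + LLCP_MAGIC + b"\x01\x01\x11"
    bad = bytes(17) + b"\x00\x00\x00"
    print(activate_link_as_target(good, stub_atr_res) is not None)   # True
    print(activate_link_as_target(bad, stub_atr_res))                # None
```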
A bit can be used to indicate whether to select a single-channel mode or a multi-channel mode for a given loudspeaker system. Another bit can be used to configure the loudspeaker system as a master (e.g., to receive a media signal from a media source and to transmit a corresponding media signal to a paired loudspeaker system) or as a slave (e.g., to receive a media signal from another loudspeaker system). Another bit can indicate a status of the loudspeaker system. Yet another bit can indicate whether such pairing might be available. Computing Environments FIG.15illustrates a generalized example of a suitable computing environment1100in which described methods, embodiments, techniques, and technologies relating, for example, to control systems, may be implemented. The computing environment1100is not intended to suggest any limitation as to scope of use or functionality of the technology, as the technology may be implemented in diverse general-purpose or special-purpose computing environments. For example, the disclosed technology may be implemented with other computer system configurations, including hand held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The disclosed technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. With reference toFIG.15, the computing environment1100includes at least one central processing unit1110and memory1120. InFIG.8, this most basic configuration1130is included within a dashed line. The central processing unit1110executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power and as such, multiple processors can be running simultaneously. The memory1120may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory1120stores software1180that can, for example, implement one or more of the innovative technologies described herein. A computing environment may have additional features. For example, the computing environment1100includes storage1140, one or more input devices1150, one or more output devices1160, and one or more communication connections1170. An interconnection mechanism (not shown) such as a bus, a controller, or a network, interconnects the components of the computing environment1100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment1100, and coordinates activities of the components of the computing environment1100. The storage1140may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment1100. The storage1140stores instructions for the software1180, which can implement technologies described herein. 
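A configuration word of the kind described above can be unpacked with simple bit masks. In the Python sketch below, the 12 volume/source bits and the four single-bit flags follow the description in the text, but the specific bit positions are assumptions made for illustration; the disclosure does not fix them.

```python
# Sketch of unpacking an assumed configuration word: 12 bits for volume or
# source selection plus single bits for channel mode, master/slave role,
# status, and pairing availability. Bit positions are illustrative only.

def parse_config_word(word: int) -> dict:
    return {
        "volume_or_source":  word & 0x0FFF,            # 12 bits
        "multi_channel":     bool(word & (1 << 12)),   # single- vs multi-channel mode
        "is_master":         bool(word & (1 << 13)),   # master receives from the media source
        "status_ok":         bool(word & (1 << 14)),   # loudspeaker status flag
        "pairing_available": bool(word & (1 << 15)),   # whether pairing is offered
    }

if __name__ == "__main__":
    word = (1 << 15) | (1 << 13) | 0x0200   # pairing available, master, mid-level volume
    print(parse_config_word(word))
```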
The input device(s)1150may be a touch input device, such as a keyboard, keypad, mouse, pen, or trackball, a voice input device, a scanning device, a first wireless transceiver (e.g., an NFC-enabled device or tag), or another device, that provides input to the computing environment1100. For audio or other media, the input device(s)1150may be a sound card or similar device, or a second wireless transceiver, that accepts media input in analog or digital form, or a CD-ROM reader that provides media samples to the computing environment1100. The output device(s)1160may be a display, printer, loudspeaker, CD-writer, wireless transmitter (or transceiver) or another device that provides output from the computing environment1100. The communication connection(s)1170enable communication over a communication medium (e.g., a connecting network) to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, audio or other media information, or other data in a modulated data signal. The data signal can include information pertaining to a physical parameter observed by a sensor or pertaining to a command issued by a controller, e.g., to invoke a change in an operation of a component in a system. Tangible, non-transitory, computer-readable media are any available tangible and non-transitory media that can be accessed within a computing environment1100. By way of example, and not limitation, with the computing environment1100, computer-readable media include memory1120, storage1140, communication media (not shown), and combinations of any of the above. Other Exemplary Embodiments The examples described herein generally concern automatically configurable wireless systems, with specific, but not exclusive, examples of wireless systems being automatically configurable wireless audio systems. Other embodiments of automatically configurable wireless systems than those described above in detail are contemplated based on the principles disclosed herein, together with any attendant changes in configurations of the respective apparatus and/or circuits described herein. Incorporating the principles disclosed herein, it is possible to provide a wide variety of automatically configurable wireless systems. For example, disclosed systems (e.g., disclosed methods, apparatus, and computer readable media) can be used to automatically configure a keyless entry system, a wireless multi-media system, a wireless biological monitoring system, a wireless gaming system, a wireless control system, etc. Moreover, systems disclosed herein can be used in combination with systems including, inter alia, wired network systems. In context of other than automatically configurable wireless audio systems, media information (described above in connection with a wireless audio or a wireless video signal) can include other types of information, as well. For example, media information can include biological diagnostic information, observed or detected state variables for use in a control system, and other information that can be encoded and transmitted via a wireless signal. Directions and references (e.g., up, down, top, bottom, left, right, rearward, forward, etc.) may be used to facilitate discussion of the drawings but are not intended to be limiting. For example, certain terms may be used such as “up,” “down,”, “upper,” “lower,” “horizontal,” “vertical,” “left,” “right,” and the like. 
Such terms are used, where applicable, to provide some clarity of description when dealing with relative relationships, particularly with respect to the illustrated embodiments. Such terms are not, however, intended to imply absolute relationships, positions, and/or orientations. For example, with respect to an object, an “upper” surface can become a “lower” surface simply by turning the object over. Nevertheless, it is still the same surface and the object remains the same. As used herein, “and/or” means “and” or “or”, as well as “and” and “or.” Moreover, all patent and non-patent literature cited herein is hereby incorporated by reference in its entirety for all purposes. The principles described above in connection with any particular example can be combined with the principles described in connection with any one or more of the other examples. Accordingly, this detailed description shall not be construed in a limiting sense, and following a review of this disclosure, those of ordinary skill in the art will appreciate the wide variety of automatically configurable wireless systems that can be devised using the various concepts described herein. Moreover, those of ordinary skill in the art will appreciate that the exemplary embodiments disclosed herein can be adapted to various configurations without departing from the disclosed principles. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed innovations. Various modifications to those embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of this disclosure. Thus, the claimed inventions are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular, such as by use of the article “a” or “an,” is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. All structural and functional equivalents to the elements of the various embodiments described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the features described and claimed herein. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 USC 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or “step for”. Thus, in view of the many possible embodiments to which the disclosed principles can be applied, it should be recognized that the above-described embodiments are only examples and should not be taken as limiting in scope. We therefore reserve all rights to the subject matter disclosed herein, including the right to claim all that comes within the scope and spirit of the foregoing and following.
47,887
11943576
DETAILED DESCRIPTION Embodiments described herein generally take the form of a contextual audio system configured to adjust audio playback in response to positional data. The contextual audio system may include a wearable audio device and, optionally, a sensing device. In some embodiments, the sensing device and the wearable audio device are the same. Generally, the contextual audio system employs different types of data to determine a user's location and/or activity (both of which are examples of “context”) and adjust audio output of the wearable audio device portion of the system. “Positional data,” as used herein, generally refer to data about a user's (or device's) location, motion, speed, acceleration, weight distribution, balance, or other spatial location and/or orientation. GPS positioning, travel speed, facing, proximity to objects or places, posture, language selected on an electronic device (insofar as that language may provide suggestion or indication as to a user's native country), location relative to an object, and so on are all non-comprehensive examples of positional data. As one example embodiment, the contextual audio system may employ positional data sensed by a wearable electronic device or other electronic sensing device, optionally coupled with secondary data sensed or received by the wearable electronic device or sensing device, to control audio output to a user. The audio may be outputted by the wearable electronic device to the user's ears. Examples of controlling audio output include: adjusting audio volume; stopping or preventing audio from playing; providing feedback, directions, encouragement, advice, safety information, instructions, and the like; and so on. In some embodiments, the positional data may be GPS data received by either the wearable audio device or a sensing device, and in some embodiments the wearable audio device may be “stand alone” insofar as the sensing device may be omitted (or incorporated into the wearable audio device). As another example embodiment of a contextual audio system, headphones, earphones, earbuds, or the like (collectively referred to as a “wearable audio device”) may be in wired or wireless electronic communication with a second electronic device, such as a smart telephone, watch or other wearable electronic device (e.g., glasses, clothing, jewelry, or the like), tablet computing device, portable media player, computer, and so on. The second electronic device may incorporate, include, or embody one or more sensors configured to sense positional data. The second electronic device, and any electronic device incorporating, including, and/or embodying such a sensor or sensors, is referred to herein as a “sensing device.” It should be appreciated that the wearable audio device may be a single unit (as in the case of most headphones) or include multiple elements (as in the case of most wireless earbuds). Continuing the example, the sensing device may receive positional data, such as GPS data, indicating a position of a user holding or carrying the sensing device. Further, the wearable audio device may incorporate one or more sensors configured to determine whether the wearable audio device is on, adjacent, or inserted into an ear, or are otherwise worn in a position to provide audio to a user, all of which are examples of “wearable data.” The sensing device may receive the wearable data from the wearable audio device and use it in conjunction with the positional data to modify audio outputted by the wearable audio device. 
As one non-limiting example, the sensing device may determine that the wearable audio device engages both ears and that the user is at a side of, or on, a road. The sensing device may pause or prevent audio playback through the speaker adjacent, within, or otherwise associated with the user's left ear. In some embodiments, audio outputted by the wearable audio device to the user's right ear may be unaffected. This may permit a user to hear traffic while still listening to audio from the wearable audio device, for example. Audio to the left ear may be stopped, muted, or lowered as people typically walk with their left side toward the road. In some embodiments, the sensing device may receive speed (velocity) data or may interpolate a user's speed based on changes in position data over time. Audio may be paused, stopped, muted, or the like only when the user's speed is above or below a threshold, or between two thresholds. As yet another example, a user's speed may suggest he or she is traveling on a bicycle and audio to the user's ear may be paused, stopped, muted, or changed in volume (all of which are encompassed in the term “adjusted”) accordingly. When the user stops, unadjusted audio playback may resume. Further, changes in position data may also indicate a direction of motion. The direction of motion may be used with the position data to determine which audio output (e.g., left or right ear audio) should be adjusted, as described above. For example, if positional data indicates a user is at or moving along a side of, or on, a road, the sensing device or wearable audio device may adjust audio output as described above. However, the ear to which audio output is adjusted may be determined from the user's direction of motion. The direction of motion may indicate a user is moving along a shoulder of a road with his or her right side toward the road (presuming the user is walking forwards). Thus, audio output to the right ear may be adjusted. If the motion data suggests the user is walking with his or her left side towards the road, audio output to the left ear may be adjusted. In still other embodiments, the sensing device and/or wearable audio device may be configured to execute a program, operation, application, or the like associated with a particular activity. For example, a jogging application may track the user's distance traveled, route, and/or other information. In some embodiments the sensing device and/or wearable audio device may only adjust audio output to a user at certain points along the route, as tracked by an application program, operation, or the like; the term “application,” as used herein, encompasses all of the foregoing. As another option, the application may also track when and/or where audio is adjusted. As still another option, audio may be adjusted only if the application is active. As still another option, the type of adjustment to audio may vary with what application is active or otherwise being executed. As a specific example of the foregoing, a sensing device may execute a cycling workout application. Positional data gathered by sensors in the sensing device may indicate the user's location when the application is opened. Further, the positional data may indicate which side of a road (or other hazard) a user is on or near. These factors, optionally along with motion data, may be used to determine which of the user's ears faces the road. 
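The direction-of-motion logic described above can be sketched from two position fixes. In this Python example, the heading is estimated from consecutive GPS coordinates and the ear to adjust is chosen from the bearing toward the road; how the road's bearing is obtained (for example, from map data) is outside the sketch and is passed in as an assumed input.

```python
# Sketch: estimate heading from two GPS fixes, then pick which earbud's
# output to adjust given which side the road lies on relative to the user.
import math

def heading_deg(lat1, lon1, lat2, lon2) -> float:
    """Initial bearing from fix 1 to fix 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def ear_facing_road(user_heading_deg: float, road_bearing_deg: float) -> str:
    """Return 'right' if the road lies to the traveler's right, else 'left'."""
    rel = (road_bearing_deg - user_heading_deg + 360.0) % 360.0
    return "right" if rel < 180.0 else "left"   # relative bearing 0..180 is to the right

if __name__ == "__main__":
    h = heading_deg(37.7749, -122.4194, 37.7750, -122.4194)   # roughly due north
    print("heading:", round(h, 1))
    print("adjust ear:", ear_facing_road(h, road_bearing_deg=90.0))  # road east -> right
```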
The sensing device and/or wearable audio device may then adjust the volume of the user's ear facing the road while leaving volume to the other ear unadjusted. In some embodiments, audio may be outputted only if a sensor in the wearable audio device indicates an ear is unobstructed by the wearable audio device, e.g., the wearable audio device is not in or on the user's ear. Thus, rather than adjusting audio to one ear and playing unadjusted audio to the other ear, audio may not play at all unless the “correct” ear is uncovered or otherwise not in contact with the wearable audio device. The “correct” ear may be the ear closest to a road or other hazard, as determined by the sensing device or wearable audio device in the various manners described herein. Other embodiments may be used to determine or monitor a user's balance, position, compliance with an exercise program, location, posture, activity, or the like, and adjust audio accordingly. Adjusting audio may include any alterations to audio discussed above as well as providing audible coaching, feedback, encouragement, corrections, suggestions, or the like. FIG.1depicts a sample contextual audio system100, including a wearable audio device110and a sensing device120. The wearable audio device110may be any worn device that outputs audio to the ears of a user, such as headphones, earphones, earbuds, glasses, jewelry, and so on. The sensing device120may be any electronic device with one or more sensors capable of sensing positional data. Sample sensing devices may include electronic watches, smart telephones, tablet computing devices, portable computing devices, wearable electronic devices such as glasses, jewelry, clothing, and the like, and so on. In some embodiments, sensing devices are carried or worn by a user, as in the examples given. In other embodiments, sensing devices are removed from or remote from the user; such sensing devices may be stand-alone, incorporated into a vehicle such as a bicycle, car, motorcycle, or the like, positioned within a building or dwelling (such as doorbell cameras, room sensors, and so on), and the like. Generally, the wearable audio device110and the sensing device120are in wired or wireless communication with one another. Data and/or commands can pass from one device to another. For example, the wearable audio device110may transmit data to the sensing device120regarding whether the device is being worn. Likewise, commands to adjust audio output of the wearable audio device110may be transmitted from the sensing device120to the wearable audio device110. In some embodiments, the wearable audio device110and the sensing device120may be the same device, or may be contained within a single housing or enclosure. In other embodiments, the two are physically separate. FIG.2depicts components of a sample wearable audio device110. It should be appreciated that the components are illustrative and not exhaustive. Further, some embodiments may omit one or more of the depicted components or may combine multiple depicted components. The wearable audio device110may include an audio output structure200, an ear sensor210, a transmitter220, a receiver230, a battery240, and/or a processing unit250, as well as other elements common to electronic devices, such as a touch- or force-sensitive input structure, visual output structure (e.g., a light, display, or the like), an environmental audio sensor, and so on. Each depicted element will be discussed in turn. 
The audio output structure200may be a speaker or similar structure that outputs audio to a user's ear. If the wearable audio device110is a pair of headphones, there are two audio output structures200, one for each ear. If the wearable audio device110is a single earbud, then there is a single audio output structure200. In the latter case, each earbud may be considered a separate wearable audio device110and thus two wearable audio devices may be used by, or included in, certain embodiments. The audio output structure200may play audio at various levels; the audio output level may be controlled by the processor250, as one example. The ear sensor210may be any type of sensor configured to receive or generate data indicating whether the wearable audio device110is on, adjacent, and/or at least partially in a user's ear (generally, positioned to output audio to the user's ear). In some embodiments, the wearable audio device110may have a single ear sensor210configured to provide data regarding whether a single or particular audio output structure200is positioned to output audio to the user's ear. In other embodiments, the wearable audio device110may have multiple ear sensors210each configured to detect the position of a unique audio output structure200(for example, where the wearable audio device is a pair of headphones). Sample ear sensors include capacitive sensors, optical sensors, resistive sensors, thermal sensors, audio sensors, pressure sensors, and so on. The wearable audio device110may include a transmitter220and a receiver230. In some embodiments, the transmitter220and the receiver230may be combined into a transceiver. Generally, the transmitter220enables wireless or wired data transmission to the sensing device120while the receiver230enables wireless or wired data receipt from the sensing device120. The transmitter220and the receiver230(or transceiver) may facilitate communication with other electronic devices as well, whether wired or wirelessly. Examples of wireless communication include radio frequency, Bluetooth, infrared, and Bluetooth low energy communication, as well as any other suitable wireless communication protocol and/or frequency. The wearable audio device110may also include a battery240configured to store power. The battery240may provide power to any or all of the other components discussed herein with respect toFIG.2. The battery240may be charged from an external power source, such as a power outlet. The battery240may include, or be connected to, circuitry to regulate power drawn by the other components of the wearable audio device110. The wearable audio device110may also include a processor250. In some embodiments, the processor250may control operation of any or all of the other components of the wearable audio device110. The processor250may also receive data from the receiver230and transmit data through the transmitter220, for example, from and/or to the sensing device120. The processor250may thus coordinate operations of the wearable audio device110with the sensing device120or any other suitable electronic device. The processor250, although referred to in the singular, may include multiple processing cores, units, chips, or the like. For example, the processor250may include a main processor and an audio processor. FIG.3is a block diagram showing sample components of an example sensing device120.
As referred to with respect toFIG.2, the sensing device120may include a transmitter320in communication with the receiver230of the wearable audio device110, as well as a receiver330in communication with the transmitter220of the wearable audio device. In some embodiments, a transceiver may replace the separate transmitter320and receiver330. Generally, the transmitter320and the receiver330cooperate to transmit data and/or instructions to, and receive from, the wearable audio device110. The sensing device120may also include a position sensor300. The position sensor300may receive data indicating the sensing device's location, either in absolute terms (such as a GPS sensor) or relative terms (such as an optical sensor that may determine the device's location relative to a transmitter or object). Other types of sensors, such as magnetic sensors, ultrasonic sensors, various proximity sensors, and the like may be used as a position sensor300in various embodiments. Some wearable audio devices110may include multiple position sensors300. In some embodiments, one or more position sensors300may be incorporated into the wearable audio device110in addition to, or instead of, in the sensing device120. The sensing device120may include one or more motion sensors310in addition to the position sensor300. The motion sensor310may detect the wearable audio device's motion, or may detect an attribute from which motion may be determined, such as velocity or acceleration. Accelerometers, magnetometers, gyrometers, optical sensors (including cameras), and the like are all examples of motion sensors. The motion sensor310may be omitted in some embodiments. Certain embodiments omitting a motion sensor310may use data from the position sensor300to estimate the sensing device's motion, while others may entirely omit or not use motion data. As one example of estimating motion from the position sensor data, data corresponding to different locations may be received at different times from the position sensor300. Distance traveled can be estimated from the data. Given estimated distance traveled and the time between measured locations (e.g., the time taken to travel the distance), the sensing device's120velocity can be estimated. In some embodiments, one or more motion sensors310may be incorporated into the wearable audio device110in addition to, or instead of, in the sensing device120. The battery340may supply power to the other components of the sensing device120, in a manner similar to that discussed with respect to the battery240of the wearable audio device110. The battery340may be recharged from an external power source, as discussed above with respect to the battery240ofFIG.2. The sensing device120typically includes a processor350, which may be similar to, or perform functions similar to, those of the processor250discussed with respect toFIG.2. That is, the processor350may control operation of any or all of the other components of the sensing device120. The processor may also receive data from the receiver330and transmit data through the transmitter320, for example from and/or to the wearable audio device110. The processor350may thus coordinate operations of the sensing device120with any other suitable electronic device. The processor350, although referred to in the singular, may include multiple processing cores, units, chips, or the like. The storage360may be magnetic storage, flash storage, optical storage or any suitable computer-readable storage mechanism.
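The velocity estimate described above (distance between successive fixes divided by the elapsed time) can be sketched as follows. A spherical-earth haversine distance is assumed here purely for illustration; the embodiments do not prescribe a particular distance calculation.

```python
# Sketch: estimate speed from two timestamped position fixes.
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two latitude/longitude points, in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def estimated_speed_mps(fix1, fix2) -> float:
    """Each fix is (lat, lon, unix_time_s); returns metres per second."""
    (lat1, lon1, t1), (lat2, lon2, t2) = fix1, fix2
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("fixes must be time-ordered")
    return haversine_m(lat1, lon1, lat2, lon2) / dt

if __name__ == "__main__":
    f1 = (37.7749, -122.4194, 0.0)
    f2 = (37.7758, -122.4194, 20.0)      # about 100 m north, 20 s later
    print(round(estimated_speed_mps(f1, f2), 2), "m/s")
```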
The storage360may store one or more applications that are executed by the processor350of the sensing device120. These applications may enable functionality of the sensing device120, the wearable audio device110, or both. As one example, a fitness application may be stored in the storage360and executed by the processor350to track a user's fitness routine, provide instruction, and the like. FIGS.4A-4Billustrate one sample contextual audio system in operation in an example environment (here, a road400). In more detail,FIGS.4A-4Billustrate one sample embodiment in a sample environment where some combination of positional data, operating data of a sensing device440and/or wearable audio device420, user's travel speed, and/or user's distance traveled, may be used to adjust audio output of the wearable audio device420. “Operating data” may include data related to applications being executed by the sensing device440and/or wearable audio device420, location of the wearable audio device420with respect to the user's ears (e.g., whether the wearable audio device is in or on the user's ears), volume of the audio output, and so on. A specific example of the contextual audio system's operation will now be discussed. As shown inFIG.4A, a user410may be riding a bicycle430along a side of the road400. The user410may be wearing a wearable audio device420and a sensing device440. In this example, the sensing device440is an electronic watch and the wearable audio device420is a pair of earbuds. As discussed above, the wearable audio device420may be in electronic communication with the sensing device440. As shown, the user may occupy a first position450alongside the road400. The sensing device440(e.g., watch) may acquire positional data, such as GPS data, indicating the user's position450. Based on this positional data, the sensing device440and/or wearable audio device420may determine which side of the road400the user is on, and thus which ear, and which earbud of the wearable audio device420, faces the road400. The wearable audio device420may also include one or more sensors that indicate whether the wearable audio device is in, or covers, one of the user's ears, both of the user's ears, or neither of the user's ears. In some embodiments, the sensing device440and/or the wearable audio device420may execute an application associated with the user's activity, such as a cycling application, or may play audio associated with a particular activity, such as a cycling playlist. Such applications, audio, and the like may provide additional information regarding the user's action. Further, as the user410moves along the road from a first position450(as shown inFIG.4A) to a second position450′ (as shown inFIG.4B), the sensing device440and/or the wearable audio device420may utilize positional data to determine the user's velocity and/or distance traveled460. Positional data used to determine velocity and/or distance traveled may include GPS data, accelerometer data, magnetometer data, gyroscopic sensor data, and so on. As the user410cycles along the road400, the embodiment may employ any or all of the positional data, application being executed, audio being played, user's velocity, positioning of the wearable audio device (e.g., whether worn in or on one or both ears) to determine whether to adjust audio output from the wearable audio device420. 
For example, a processor of the sensing device440may determine that audio should not be played through the wearable audio device420while the user410is cycling along the road400(or is in any location where the user410should be alert, whether for his safety, the safety of others around him, or another reason). As yet another option, a processor of the sensing device440may determine that audio should not be played through the wearable audio device420while the user410is cycling along the road and so long as the wearable audio device is inserted into, covers, or is otherwise adjacent to the user's410ear facing the road400. Put another way, the embodiment may determine that audio should not be played by the wearable audio device unless the user's410ear that faces the road is unobstructed and cannot hear the audio, thereby increasing the likelihood that the user410will hear and be aware of traffic on the road. As yet another example, the embodiment may determine that audio output from the wearable audio device420should be adjusted when the user's410speed is above a threshold and the user410is in a location that suggests or requires audio adjustment, whether for the user's410safety, the safety of those around the user410, or another reason. The location of the user, his or her facing relative to the road, his or her motion or speed, whether a wearable audio device420is worn or not, whether vehicles or other people are on or near the road, and the like are all examples of different contexts that may be used by embodiments in determining whether (and how) to adjust audio output. In any of the foregoing examples, audio adjustment may take the form of lowering or muting a first audio output to the user's410ear facing the road while maintaining (e.g., not adjusting) a second audio output to the user's410other ear. Alternately, audio adjustment may take the form of adjusting the first and second audio output, either in the same manner or different manners. The first audio output may be paused while the second audio output has its volume lowered, as one example. As another example, the first audio output may be paused or lowered while a warning message plays through the second audio output, reminding the rider to pay attention to traffic on the road. These are two non-limiting examples and are not exhaustive. AlthoughFIGS.4A-4Billustrate the contextual audio system as providing feedback while a user rides a bicycle along a road, it should be understood that other embodiments take different forms. For example, the embodiment shown in and described with respect toFIGS.4A-4Bmay be configured to operate when a user is running along a road or other location as opposed to cycling. Similarly,FIGS.5A-5Billustrate a contextual audio system that provides feedback regarding a user's500posture as another example. Here, the contextual audio system includes a first and second earbud510a,510b(collectively forming a wearable audio device) and a sensing device520. In this example the sensing device520may be incorporated into the user's500clothing or may be a separate structure worn or carried by the user500. Further, it should be appreciated that the manner in which audio output is adjusted may depend on the location of the user, the wearable audio device, and/or the sensing device. In some locations, embodiments may pause or prevent audio output, while in others audio output may be reduced in volume or played back only through one audio output structure. 
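One way to picture the decision-making discussed above is as a small policy that maps a context to per-ear actions. The context fields, the speed threshold, and the mapping in this Python sketch are illustrative assumptions; the embodiments describe the kinds of contexts and adjustments but not a specific rule set.

```python
# Sketch of an assumed context-to-adjustment policy: mute the ear facing the
# road, lower the other, and leave both playing otherwise.
from dataclasses import dataclass

@dataclass
class Context:
    near_road: bool
    speed_mps: float
    road_side_ear: str        # "left" or "right"
    app_active: bool          # e.g., a cycling or running application

def choose_adjustment(ctx: Context) -> dict:
    """Return per-ear actions: 'play', 'lower', or 'mute'."""
    actions = {"left": "play", "right": "play"}
    if ctx.near_road and ctx.app_active and ctx.speed_mps > 1.5:   # assumed threshold
        actions[ctx.road_side_ear] = "mute"
        other = "left" if ctx.road_side_ear == "right" else "right"
        actions[other] = "lower"
    return actions

if __name__ == "__main__":
    print(choose_adjustment(Context(near_road=True, speed_mps=4.0,
                                    road_side_ear="right", app_active=True)))
```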
As one example, audio may be muted or suspended in locations where a user's attention is necessary, such as hazardous locations, at a job site, in an education facility, and so on. It should be appreciated that audio output may be muted or halted to one or both ears; audio output may be halted or muted to one ear when the user is walking on or along a road or a trail, but may be halted or muted in both ears in a job setting or classroom, by way of example. The relative danger or risk to the user (or to others from the user), as well as the location of such relative risk or danger, also may be a context in determining whether audio output is adjusted to one or both ears. Motion (including speed), applications executing on the wearable audio device or sensing device (or another associated device), user preferences, and emergency conditions (such as a flood, accident, dangerous weather, or the like) may also be contexts in adjusting audio output, as well as for which ear or ears audio output is adjusted. Although operation of embodiments has been discussed in the context of bicycling, it should be appreciated that embodiments may operate generally as described with respect to other vehicles, as well. For example, if positional data and/or motion data from the sensing device120determines that a user is in an automobile that has crossed a dividing line of a road or is otherwise incorrectly positioned or located, the embodiment may pause audio output through the wearable audio device110in order to bring the user's attention to the vehicle's location. Further, the sensing device120may adjust the audio output by playing an audible alert through the wearable audio device110rather than muting, pausing, or lowering the volume of the audio output. FIGS.5A-5Billustrate another example embodiment of a contextual audio system, in which a sensing device520and wearable audio devices510a,510bcooperate to determine when audio output is adjusted.FIG.5Aillustrates a user500leaning slightly to his right. The user500has one wearable audio device510ain his right ear and a second wearable audio device510bin his left ear. The user500also wears clothing530that incorporates a sensing device520. The sensing device520may be woven into the clothing, may be contained within the clothing, or the like. The sensing device520may include conductive fabric forming a sensor that is, in turn, connected to an electronic device such as a smart telephone, smart watch, or the like elsewhere on the user's500body. Thus, in the embodiment shown inFIGS.5A-5B, the sensing device520may be distributed across different parts or places of a user's500body or clothing530. In the embodiment ofFIG.5A, the sensing device520may be configured to track a location of a user's center of mass or the center of the user's torso. Likewise, each of the wearable audio devices510a,510bmay be configured to track or determine their position relative to one another and/or the sensing device520; they may include position sensors configured for this purpose. Accordingly, the sensing device520(or another suitable electronic device in communication with the sensing device and/or wearable electronic devices510a,510b) may determine whether the head is centered over the torso by comparing the relative position of the first wearable electronic device510a, with respect to the sensing device520, to that of the second wearable electronic device510b, again relative to the sensing device520.
Presuming the location of the sensing device520with respect to the center of the user's torso is known, the embodiment may employ the aforementioned relative positions to determine if the user500is leaning to one side. In the example ofFIG.5A, the left wearable audio device510bmay be slightly closer to the sensing device520than the right wearable audio device510a. Accordingly, the embodiment (and more specifically, one or both of the processors250,350, discussed with respect toFIGS.2and3) may determine that the user500is leaning to the right. In response to determining the user500is leaning to one side, the embodiment may adjust audio outputted through the audio output structure200of one or both of the wearable audio devices510a,510b. The adjusted audio may prompt the user500to straighten his stance and may provide cues as to which way the user leans, resulting in the user standing straight as shown inFIG.5B. For example, audio may be muted, paused, raised, or lowered on one side or the other to provide audible feedback to the user regarding his posture. Similarly, audio output may take the form of an instruction (“stop leaning to the right”), encouragement (“you can improve your posture by changing your stance!”), or other audio cue outputted through one or both of the wearable audio devices510a,510b. Accordingly, one context used by the contextual audio system when determining how (or whether) to adjust audio output of a wearable audio device510a,510bis a position (e.g., stance) of the user. FIG.6shows a sample workout mat600that is one example of a sensing device. Note that, with respect toFIGS.6-7B, the terms “mat”600and “sensing device” are used interchangeably. The mat600includes drive lines610and sense lines620that, taken together, form a set of capacitive force-sensing nodes. These nodes are examples of position sensors300as discussed above with respect toFIG.3. Here, however, the position sensors300detect a location of a person standing on the sensing device600rather than a location of the sensing device itself. Some embodiments may use resistive sensing nodes, optical sensing nodes, or the like instead of, or in addition to, the capacitive sensing structure discussed with respect toFIGS.6-7B. The mat600further includes a battery630and circuitry640configured to control operations of the mat and any associated wearable audio devices, as well as to facilitate communication between the mat and the wearable audio device(s). The circuitry640may be any or all of the processor350, storage360, transmitter320, and/or receiver330discussed above with respect toFIG.3. FIGS.7A-7Billustrate a user700standing on the sensing device600. The user is wearing a pair of wearable audio devices710a,710b, one in each ear. The wearable audio devices710a,710bmay be in communication with the sensing device (e.g., mat)600via the mat's circuitry640. As shown inFIG.7A, the user may be in a yoga pose but her positioning may be slightly off or otherwise suboptimal for the pose. Insofar as the user's700foot rests on multiple force sensors of the mat600, the user's weight distribution can be detected. This, in turn, can permit the sensing device600to determine or otherwise estimate whether the user700is leaning to one side while standing on the mat600(as shown inFIG.7A), or standing straight on the mat (as shown inFIG.7B).
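The lean check described above (comparing how far each earbud sits from a torso-mounted sensing device) can be sketched as follows. The three-dimensional coordinates and the tolerance value are assumptions for illustration; the embodiments only require that relative positions be compared.

```python
# Sketch: infer lean direction from the relative distances of the left and
# right earbuds to a torso-mounted sensing device.
import math

def distance(a, b) -> float:
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def lean_direction(left_bud, right_bud, torso, tolerance_m=0.02) -> str:
    """Return 'left', 'right', or 'upright' from earbud-to-torso distances."""
    d_left = distance(left_bud, torso)
    d_right = distance(right_bud, torso)
    if d_left + tolerance_m < d_right:
        return "right"          # left ear closer to the torso sensor -> leaning right
    if d_right + tolerance_m < d_left:
        return "left"
    return "upright"

if __name__ == "__main__":
    torso = (0.0, 0.0, 0.0)
    left_bud = (-0.07, 0.0, 0.38)    # slightly closer to the torso sensor
    right_bud = (0.07, 0.0, 0.43)
    print(lean_direction(left_bud, right_bud, torso))   # -> right
```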
Further, the sensing device600may transmit a command to adjust audio output of the wearable audio device(s)510a,510bin response to determining that the user's weight is improperly distributed (e.g., the user is leaning to one side). Thus, the user's balance and stance are other contexts that may be used by the embodiment to determine whether, and how, to adjust audio output. In some embodiments the mat600may transmit force or touch data to the wearable audio devices, which may determine the balance, weight distribution, and/or posture of the user700, and/or may adjust audio output accordingly. Although the mat600is discussed as incorporating a set of force sensors formed by capacitive drive and sense lines610,620, it should be appreciated that discrete force sensors may be employed instead. Likewise, touch sensors may be used instead of force sensors and the area and/or shape of a user's touch on the sensing device600may be analyzed to determine weight distribution or posture. FIG.8is a flowchart illustrating one sample method800for a contextual audio system using a variety of contexts or factors to adjust audio output to a user. It should be appreciated that many of the operations discussed with respect to this figure are optional and may be omitted in some embodiments. Likewise, additional operations may be performed in other embodiments. The method800begins in operation810, in which an application is started, initiated, executed, or the like on a suitable electronic device. The electronic device may be a wearable electronic device110, a sensing device120, or another electronic device in communication with either or both of the wearable electronic device and sensing device. The application may be an exercise application, a driving application, an application associated with a vehicle, or the like. It should be noted that this operation is optional and may be omitted or ignored in some embodiments. In operation820, the embodiment detects a location or otherwise receives positional data. Examples of positional data include: a location of a user, or a device associated with a user, relative to a landmark, object, or the like; an absolute location of a user, or a device associated with a user (such as GPS data or other methods of determining latitude and longitude); a position of a user on an object; a facing of a user or a device associated with a user; a balance of a user; a tilt or angle of a user's body, whether absolute or relative to an object such as a sensing device; and so on. The positional data may be determined by a sensing device120. In some embodiments, the sensing device120may be the wearable audio device110. Positional data may be supplied by a position sensor300. In operation830, the embodiment determines a user's motion. The user's motion may be determined from motion sensor310data or may be determined based on successive sets of positional data from the position sensor300. Velocity and/or acceleration may likewise be determined in operation830; the terms “velocity” and “speed” are used interchangeably herein. Operation830is optional and may be omitted in some embodiments. In operation840, the embodiment determines if the user's location (or other position) is one where the user should be alert or otherwise prompted, whether for the user's safety, the safety of others, to improve the user's performance, or the like. If not, the method800ends in end state895. If so, the method800proceeds to operation850.
In operation850the embodiment adjusts audio output from the wearable audio device110. As discussed elsewhere herein, audio adjustment may take the form of stopping, pausing, muting, lowering, or raising an audio output as well as outputting specific feedback, messages, prompts, or the like. Audio output may be adjusted to one or more wearable audio devices110, again as discussed herein. As one example, audio may be adjusted to one of a pair of earbuds in certain contexts. In operation860, the embodiment determines if the audio being outputted is over. If so, the method800terminates in end state895. Otherwise, the method proceeds to operation870. Operation860is optional and may be omitted in some embodiments. In operation870, the embodiment determines whether a user's location or other position changes. If not, the method800terminates in end state895. Otherwise the method800proceeds to operation880. Operation870is optional and may be omitted in some embodiments. In operation880, the embodiment determines if the application initiated in operation810has ended. If so, then adjusting the audio output of the wearable audio device110is no longer necessary and the method800ends at end state895. Otherwise the method800proceeds to operation890. Operation880is optional and may be omitted in some embodiments. In operation890, the embodiment determines whether a user's (or a device's) rate of motion is below a threshold. If the velocity is below the threshold, then the method800terminates in end state895. If not, then the method800returns to operation820. It should be appreciated that some embodiments may determine whether velocity exceeds a threshold, in which case the “yes” and “no” branches of the operation890may be reversed. In some embodiments, acceleration of a user or device may be analyzed against a threshold rather than velocity. Generally, operations860-890may be performed in any order and the order shown is but one example. Further any or all of these operations may be omitted or skipped by embodiments and any combination of these operations may be executed in various embodiments. Operations in which the embodiment “determines” an outcome, such as operations840and860-890, may be performed by a processor250,350of the wearable audio device110or sensing device120, or the two in concert. Likewise, various operations may be performed by the components of either or both of the wearable audio device110and sensing device120, as appropriate. In some embodiments one or more operations of the method800may be performed by another electronic device in communication with either or both of the wearable audio device and sensing device. The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art, after reading this description, that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art, after reading this description, that many modifications and variations are possible in view of the above teachings.
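A compact way to summarize the flow of method800is the loop below. The sensor reads, the "needs alert" test, and the speed threshold are placeholder callables supplied by the caller; the operation numbers in the comments refer to the flowchart discussion above, the optional operations appear in one possible order, and operation870 is omitted from this sketch.

```python
# Sketch of the control flow of method 800 as described above.

def run_contextual_audio(get_position, get_speed, location_needs_alert,
                         adjust_audio, audio_finished, app_running,
                         speed_threshold_mps=1.0):
    while True:
        position = get_position()                 # operation 820: receive positional data
        speed = get_speed()                       # operation 830 (optional)
        if not location_needs_alert(position):    # operation 840
            return
        adjust_audio(position, speed)             # operation 850
        if audio_finished():                      # operation 860 (optional)
            return
        # operation 870 (position-change check) is omitted in this sketch
        if not app_running():                     # operation 880 (optional)
            return
        if speed < speed_threshold_mps:           # operation 890
            return

if __name__ == "__main__":
    samples = iter([(37.77, -122.41), (37.78, -122.41)])
    run_contextual_audio(
        get_position=lambda: next(samples, (0.0, 0.0)),
        get_speed=lambda: 3.0,
        location_needs_alert=lambda p: p[0] > 37.0,
        adjust_audio=lambda p, s: print("adjusting audio near", p),
        audio_finished=lambda: False,
        app_running=lambda: True,
        speed_threshold_mps=1.0,
    )
```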
37,928
11943577
DETAILED DESCRIPTION In order to illustrate the technical solutions related to the embodiments of the present disclosure, brief introduction of the drawings referred to in the description of the embodiments is provided below. Obviously, drawings described below are only some examples or embodiments of the present disclosure. Those skilled in the art, without further creative efforts, may apply the present disclosure to other similar scenarios according to these drawings. It should be understood that the exemplary embodiments are provided merely for better comprehension and application of the present disclosure by those skilled in the art, and not intended to limit the scope of the present disclosure. Unless obviously obtained from the context or the context illustrates otherwise, the same numeral in the drawings refers to the same structure or operation. As used in the disclosure and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise. In general, the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including,” merely prompt to include steps and elements that have been clearly identified, and these steps and elements do not constitute an exclusive listing. The methods or devices may also include other steps or elements. The term “based on” is “based at least in part on.” The term “one embodiment” means “at least one embodiment”, and the term “another embodiment” means “at least one additional embodiment”. Related definitions of other terms will be given in the description below. Hereinafter, “player”, “speaker device”, “speaking device” or “speaker” will be used in describing the sound conduction related techniques in the present invention. This description is only a form of speaker application. For those skilled in the art, “speaker device”, “speaker”, or “earphone” can also be replaced by other similar words, such as “player”, “hearing aid”, or the like. In fact, the various implementations in the present disclosure may be easily applied to other non-speaker-type hearing devices. For example, for those skilled in the art, after understanding the basic principle of the speaker device, various modifications and changes to the implementation of the speaker device may be performed on the specific methods and details of the speaker device without departing from this principle. In particular, the environment sound picking and processing function may be added to the speaker device, so that the speaker device has the function of the hearing aid. For example, in the case of using a bone conduction speaker device, a sound transmitter such as a microphone may pick up an ambient sound close to the user/wearer. The sound may be further processed using a certain algorithm, and the processed sound (or a generated electrical signal) may be transmitted to the user/wearer. That is, the speaker device may be modified and have the function of picking up ambient sound. The ambient sound may be processed and transmitted to the user/wearer through the speaker device, thereby implementing the function of a hearing aid. 
The algorithm mentioned above may include a noise cancellation algorithm, an automatic gain control algorithm, an acoustic feedback suppression algorithm, a wide dynamic range compression algorithm, an active environment recognition algorithm, an active noise reduction algorithm, a directional processing algorithm, a tinnitus processing algorithm, a multi-channel wide dynamic range compression algorithm, an active howling suppression algorithm, a volume control algorithm, or the like, or any combination thereof. FIG.1is a flowchart illustrating an exemplary process for generating auditory sense through a speaker device according to some embodiments of the present disclosure. The speaker device may transfer sound to an auditory system through bone conduction or air conduction, thereby generating auditory sense. As shown inFIG.1, the process for generating the auditory sense through the speaker device may include the following operations. In101, the speaker device may acquire or generate a signal (also referred to as a “sound signal”) containing sound information. In some embodiments, the sound information may refer to a video file or an audio file with a specific data format, or data or files that may be converted to a sound through specific approaches. In some embodiments, the signal containing the sound information may be obtained from a storage unit of the speaker device itself. In some embodiments, the signal containing the sound information may be obtained from an information generation system, a storage system, or a transmission system other than the speaker device. The signal containing the sound information may include an electrical signal, and/or other forms of signals other than the electrical signal, such as an optical signal, a magnetic signal, and a mechanical signal, or the like. In principle, as long as the signal includes information that may be used to generate sounds by a speaker device, the signal may be processed as the sound signal. In some embodiments, the sound signal may come from a signal source, or a plurality of signal sources. The plurality of signal sources may be independent of or dependent on each other. In some embodiments, manners of generating or transmitting the sound signal may be wired or wireless and may be real-time or time-delayed. For example, the speaker device may receive an electrical signal containing sound information via a wired or wireless connection or may obtain data directly from a storage medium and generate a sound signal. Taking bone conduction technology as an example, components with sound collection function(s) may be added to a bone conduction speaker device. The bone conduction speaker device may pick up sound from the ambient environment and convert mechanical vibration of the sound into an electrical signal. Further, the electrical signal may be processed through an amplifier to meet specific requirements. The wired connection may be realized by using, including but not limited to a metal cable, an optical cable, or a hybrid cable of metal and optical, such as a coaxial cable, a communication cable, a flexible cable, a spiral cable, a non-metal sheathed cable, a metal sheathed cable, a multi-core cable, a twisted pair cable, a ribbon cable, a shielded cable, a telecommunication cable, a double-stranded cable, a parallel twin-core wire, a twisted-pair wire. The wired connection may also be realized by using other types of transmission carriers, such as transmission carriers for electrical or optical signal. 
The storage device or storage unit mentioned herein may include a storage device or storage unit on a direct attached storage, a network attached storage, a storage area network, and/or other storage systems. The storage device may include but is not limited to common types of storage devices such as a solid-state storage device (a solid-state drive, a solid-state hybrid hard drive, etc.), a mechanical hard drive, a USB flash drive, a memory stick, a storage card (e.g., CF, SD, etc.), and other drives (e.g., CD, DVD, HD DVD, Blu-ray, etc.), a random access memory (RAM), a read-only memory (ROM), etc. The RAM may include but is not limited to a decimal counter, a selection tube, a delay line memory, a Williams tube, a dynamic random access memory (DRAM), a static random access memory (SRAM), a thyristor random access memory (T-RAM), a zero capacitive random access memory (Z-RAM), etc. The ROM may include but is not limited to a magnetic bubble memory, a magnetic button line memory, a thin film memory, a magnetic plating line memory, a magnetic core memory, a drum memory, an optical disk driver, a hard disk, a magnetic tape, an early non-volatile memory (NVRAM), a phase change memory, a magneto-resistive random access memory, a ferroelectric random access memory, a non-volatile SRAM, a flash memory, an electronically erasable rewritable read-only memory, an erasable programmable read-only memory, a programmable read-only memory, a shielded heap read memory, a floating connection gate random access memory, a nano random access memory, a racetrack memory, a variable resistance memory, a programmable metallization unit, etc. The storage device/storage unit mentioned above is only used for illustration purposes. The storage medium used in the storage device/unit is not limited. In102, the speaker device may convert the signal containing the sound information into a vibration to generate the sound. The speaker device may use a specific transducer to convert the signal into a mechanical vibration, and the generation of the mechanical vibration may accompany with energy conversion. The energy conversion process may include coexistence and conversion of multiple types of energy. For example, the electrical signal may be directly converted into the mechanical vibration by the transducers, and generate the sound. As another example, the sound information may be included in an optical signal, which may be converted into mechanical vibrations by a specific transducer. Other types of energy that may be coexisting and converted when the transducer works may include thermal energy, magnetic field energy, etc. In some embodiments, an energy conversion type of the transducer may include but is not limited to, a moving coil type, an electrostatic type, a piezoelectric type, a moving iron type, a pneumatic type, an electromagnetic type, or the like, or any combination thereof. A frequency response range and sound quality of the speaker device may be affected by the energy conversion type and a property of each physical component of the transducer. For example, in a transducer with the moving coil type, a wound cylindrical coil may be connected to a vibration plate, the coil driven by a signal current may drive the vibration plate to vibrate in a magnetic field and generate the sound. 
Factors, such as material expansion and contraction, fold deformation, a size, a shape, and a fixation manner of the vibration plate, a magnetic density of a permanent magnet, etc., may have a relatively great effect on the sound quality of the speaker device. The term “sound quality” used herein may indicate the quality of the sound, which may refer to an audio fidelity after the sound is processed, transmitted, or the like. For an audio device, the sound quality may include audio intensity and magnitude, an audio frequency, an audio overtone, a harmonic component, or the like, or any combination thereof. When the sound quality is evaluated, a measuring manner and an evaluation criterion for objectively evaluating the sound quality may be used; other manners that combine different elements of the sound and subjective feelings for evaluating various properties of the sound quality may also be used. In103, the sound is transmitted by a transmission system. In some embodiments, a transmission system refers to a substance that can deliver a vibration signal containing sound information, such as the skull, the bony labyrinth, the inner ear lymph, or the spiral organ of a human and/or an animal with the auditory system. As another example, the transmission system also refers to a medium (e.g., air and liquid) that may transmit a sound. To illustrate the process of transmitting sound information by the transmission system, a bone conduction speaker device may be taken as an example. The bone conduction speaker device may directly transmit a sound wave (e.g., a vibration signal) converted from an electrical signal to an auditory center through bones. In addition, the sound wave may be transmitted to the auditory center through air conduction. More descriptions regarding the air conduction may be found elsewhere in the present disclosure. In104, the sound information may be transmitted to a sensing terminal. Specifically, the sound information may be transmitted to the sensing terminal through the transmission system. In some embodiments, the speaker device may pick up or generate a signal containing the sound information and convert the sound information into a sound vibration by the transducer. The speaker device may transmit the sound to the sensing terminal through the transmission system, and a user may hear the sound. Generally, a subject of the sensing terminal, the auditory system, the sensory organ, etc., described above may be a human or an animal with the auditory system. It should be noted that the following description of the speaker device used by a human does not constitute a restriction on the application scene of the speaker device, and similar descriptions may also be applied to other animals. It should be noted that the above descriptions of the implementing process of the speaker device are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications in forms and details of the specific methods and operations of implementing the speaker device may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
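As a compact illustration of the flow just described (operations 101 through 104), the sketch below strings the four stages together as plain functions. It is a hypothetical rendering for readability only, assuming a simple list of samples stands in for the signal; the function names are not part of the disclosure, and a real transducer performs a physical energy conversion rather than the arithmetic shown here.

    def acquire_signal(source):
        # Operation 101: obtain a signal containing sound information, e.g. from a
        # storage unit, a wired/wireless link, or a microphone-like pickup.
        return source()

    def transduce(signal, gain=1.0):
        # Operation 102: convert the signal into mechanical vibration. Here the
        # "vibration" is just the scaled samples; an actual transducer (moving coil,
        # piezoelectric, etc.) performs an energy conversion.
        return [gain * s for s in signal]

    def transmit(vibration, medium="bone"):
        # Operation 103: deliver the vibration through a transmission system such as
        # bone (bone conduction) or air (air conduction).
        return {"medium": medium, "vibration": vibration}

    def to_sensing_terminal(transmitted):
        # Operation 104: the sound information reaches the sensing terminal
        # (the listener's auditory system).
        return transmitted["vibration"]

    # Example use with a stub signal source.
    samples = acquire_signal(lambda: [0.0, 0.2, 0.4, 0.2, 0.0])
    perceived = to_sensing_terminal(transmit(transduce(samples, gain=0.8)))

Variations of this chain, such as the signal correcting or enhancing operations discussed next, would simply insert additional stages between the calls shown here.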
For example, a signal correcting or enhancing operation may be added between acquiring the signal containing sound information in operation101and generating the sound in operation102, which may enhance or correct the signal acquired in101according to a specific algorithm or parameter. As another example, a vibration enhancing or correcting operation may be additionally added between generating the sound in operation102and transmitting the sound in operation103. The speaker device described according to some embodiments of the present disclosure may include, but is not limited to, an earphone, an MP3 player, or a hearing aid. In the following specific embodiments of the present disclosure, an MP3 player is taken as an example to describe the speaker device in detail.FIG.2is a schematic diagram illustrating an exploded structure of an exemplary MP3 player according to some embodiments of the present disclosure.FIG.3is a schematic diagram illustrating a part of a structure of an exemplary ear hook of an MP3 player according to some embodiments of the present disclosure.FIG.4is a schematic diagram illustrating a partially enlarged view of part A inFIG.3according to some embodiments of the present disclosure. As shown inFIG.2, in some embodiments, an MP3 player may include an ear hook10, a core housing20, a circuit housing30, a rear hook40, an earphone core50, a control circuit60, and a battery70. The core housing20and the circuit housing30may be disposed at two ends of the ear hook10respectively, and the rear hook40may be further disposed at an end of the circuit housing30away from the ear hook10. A count of the core housings20may be two. The two core housings20may be configured to accommodate two earphone cores50, respectively. A count of the circuit housings30may be two. The two circuit housings30may be configured to accommodate the control circuit60and the battery70, respectively. Two ends of the rear hook40may be connected to the corresponding circuit housings30respectively. The ear hook10refers to a structure surrounding and supporting a user's ear when the user wears a bone conductive MP3 player, and then suspending and fixing the core housing20and the earphone core50at a predetermined position of the user's ear. Referring toFIGS.2-4, in some embodiments, the ear hook10may include an elastic metal wire11, a wire12, a fixed sleeve13, a first plug end14, and a second plug end15. The first plug end14and the second plug end15may be disposed at both ends of the elastic metal wire11. In some embodiments, the ear hook10may further include a protective sleeve16and a housing sheath17integrally formed with the protective sleeve16. The elastic metal wire11may be mainly used to keep the ear hook10in a shape that matches the user's ear. The elastic metal wire11may have a certain elasticity, so as to generate a certain elastic deformation according to the user's ear shape and head shape to adapt to users with different ear shapes and head shapes. In some embodiments, the elastic metal wire11may be made of a memory alloy, which has good deformation recovery ability. Thus, even if the ear hook10is deformed by an external force, it may still be restored to its original shape when the external force is removed, and continue to be used by users, thereby extending the life of the MP3 player. In other embodiments, the elastic metal wire11may also be made of a non-memory alloy. 
The wire12may be used for electrical connection with the earphone core50, the control circuit60, the battery70, etc., for power supply and data transmission for the operation of the earphone core50. The fixed sleeve13may be used to fix the wire12on the elastic metal wire11. In this embodiment, there are at least two fixed sleeves13. The at least two fixed sleeves13may be spaced apart along the elastic metal wire11and the wire12, and disposed on the outer periphery of the wire12and the elastic metal wire11by wrapping to fix the wire12on the elastic metal wire11. In some embodiments, the first plug end14and the second plug end15may be made of hard materials, such as plastic. In some embodiments, the first plug end14and the second plug end15may be formed respectively on both ends of the elastic metal wire11by injection molding. In some embodiments, the first plug end14and the second plug end15may be formed by injection molding separately. Connection holes for connecting with the ends of the elastic metal wire11are respectively reserved during the injection molding of the first plug end14and the second plug end15. After the injection molding is completed, the first plug end14and the second plug end15may be fitted onto the corresponding ends of the elastic metal wire11respectively through the connection holes or fixed by bonding. It should be noted that, in this embodiment, the first plug end14and the second plug end15may not be directly formed by injection molding on the periphery of the wire12, which avoids interference with the wire12during injection molding. Specifically, when the first plug end14and the second plug end15are injection molded, the wire12located at both ends of the elastic metal wire11may be fixed to be far away from the position of the first plug end14and the second plug end15. Further, a first wiring channel141and a second wiring channel151may be disposed respectively on the first plug end14and the second plug end15to extend the wire12along the first wiring channel141and the second wiring channel151after the injection molding. Specifically, the wire12may be threaded into the first wiring channel141and the second wiring channel151after the first wiring channel141and the second wiring channel151are formed. In some embodiments, the first plug end14and the second plug end15may be directly injection molded on the periphery of the wire12according to actual conditions, which is not specifically limited herein. In some embodiments, the first wiring channel141may include a first wiring groove1411and a first wiring hole1412connecting with the first wiring groove1411. The first wiring groove1411may be connected with the sidewall of the first plug end14. One end of the first wiring hole1412may be connected with one end of the first wiring groove1411and another end of the first wiring hole1412may be connected with the outer end surface of the first plug end14. The wire12at the first plug end14may extend along the first wiring groove1411and the first wiring hole1412and be exposed on the outer end surface of the first plug end14to further connect with other structures. In some embodiments, the second wiring channel151may include a second wiring groove1511and a second wiring hole1512connecting with the second wiring groove1511.
The second wiring groove1511may be connected with the sidewall of the second plug end15, one end of the second wiring hole1512may be connected with one end of the second wiring groove1511, and another end of the second wiring hole1512may be connected with the outer end surface of the second plug end15. The wire12at the second plug end15may extend along the second wiring groove1511and the second wiring hole1512and be exposed on the outer end surface of the second plug end15to further connect to other structures. In some embodiments, the outer end surface of the first plug end14refers to the surface of the end of the first plug end14away from the second plug end15. The outer end surface of the second plug end15refers to the surface of the end of the second plug end15away from the first plug end14. In some embodiments, the protective sleeve16may be injection molded around periphery of the elastic metal wire11, the wire12, the fixed sleeve13, the first plug end14, and the second plug end15. Thus, the protective sleeve16may be fixedly connected with the elastic metal wire11, the wire12, the fixed sleeve13, the first plug end14, and the second plug end15respectively. There is no need to form the protective sleeve16separately by injection molding and then further wrap protective sleeve16around the periphery of the elastic metal wire11, the first plug end14, and the second plug end15. It may simplify the manufacturing and assembly processes and make the fixation of the protective sleeve16more reliable and stable. In some embodiments, when the protective sleeve16is formed, a housing sheath17disposed on the side close to the second plug end15may be integrally formed with the protective sleeve16. In some embodiments, the housing sheath17may be integrally formed with the protective sleeve16to form a whole structure. The circuit housing30may be connected to one end of the ear hook10by being fixedly connected to the second plug end15. The housing sheath17may be further wrapped around the periphery of the circuit housing30in a sleeved manner. Specifically, when manufacturing the ear hook10of the MP3 player, the following operation may be implemented. In operation S101, the fixed sleeve13may be used to fix the wire12on the elastic metal wire11. An injection position is reserved at both ends of the elastic metal wire11. Specifically, the elastic metal wire11and the wire12may be placed side by side in a preset way, and then the fixed sleeve13is further sleeved around the wire12and the elastic metal wire11, so as to fix the wire12on the elastic metal wire11. Since the two ends of the elastic metal wire11still need the injection molded first plug end14and the second plug end15, the two ends of the elastic metal wire11may not be completely wrapped by the fixed sleeve13. A corresponding injection position needs to be reserved for injection molding of the first plug end14and the second plug end15. In operation S102, the first plug end14and the second plug end15may be injection molded at the injection positions of the two ends of the elastic metal wire11, respectively. The first wiring channel141and the second wiring channel151are disposed on the first plug end14and the second plug end15, respectively. In operation S103, the wire12may be disposed to extend along the first wiring channel141and the second wiring channel151. 
Specifically, after the forming of the first plug end14and the second plug end15is completed, the two ends of the wire12may be further threaded into the first wiring channel141and the second wiring channel151manually or by a machine. The part of the wire12located between the first wiring channel141and the second wiring channel151may be fixed on the elastic metal wire11by the fixed sleeve13. In operation S104, the protective sleeve16may be formed by injection molding on the periphery of the elastic metal wire11, the wire12, the fixed sleeve13, the first plug end14, and the second plug end15. In some embodiments, when operation S104is performed, the housing sheath17may be integrally formed with the protective sleeve16on the periphery of the second plug end15by injection molding. In some embodiments, it should be noted that the wire12may not be disposed when the fixed sleeve13is installed. The wire12may be further disposed after the first plug end14and the second plug end15are injection molded. The specific operations are as follows. In operation S201, the fixed sleeve13may be sleeved on the elastic metal wire11. The injection molding positions may be reserved at both ends of the elastic metal wire11. In operation S202, the first plug end14and the second plug end15may be injection molded at the injection positions of the two ends of the elastic metal wire11, respectively. The first wiring channel141and the second wiring channel151may be disposed on the first plug end14and the second plug end15, respectively. In operation S203, the wire12may be threaded inside the fixed sleeve13, so as to use the fixed sleeve13to fix the wire12on the elastic metal wire11. Further, the wire12may be disposed to extend along the first wiring channel141and the second wiring channel151. It should be noted that, in this way, interference of the wire12may be avoided during injection molding of the first plug end14and the second plug end15, so as to facilitate the smooth progress of molding. It should be noted that the structure, function, and formation of the elastic metal wire11, the wire12, the fixed sleeve13, the first plug end14, the second plug end15, and the protective sleeve16involved in the embodiment set forth above are the same as those in the foregoing embodiment, and for related details, please refer to the foregoing embodiment, which are not repeated herein. In some embodiments, the core housing20may be used to accommodate the earphone core50and may be plugged and fixed with the first plug end14. A count of the earphone core50and the core housing20may both be two, corresponding to the left ear and the right ear of the user, respectively. In some embodiments, the core housing20and the first plug end14may be connected by plugging, clamping, etc., so as to fix the core housing20and the ear hook10together. That is, in this embodiment, the ear hook10and the core housing20may be formed separately first, and then be assembled together, instead of directly forming the two together. In this way, the ear hook10and the core housing20may be molded separately with corresponding molds instead of using the same larger-sized mold to form the two integrally, which may reduce the size of the mold and the difficulty of mold process. 
In addition, since the ear hook10and the core housing20are processed by different molds, when the shape or structure of the ear hook10or the core housing20needs to be adjusted in the manufacturing process, it is sufficient to adjust the mold corresponding to the structure instead of adjusting the mold of another structure, so as to reduce the cost of production. In other embodiments, the ear hook10and the core housing20may be integrally formed according to the situation. In some embodiments, the core housing20may be disposed with a first socket22connecting with the outer end surface21of the core housing20. The outer end surface21of the core housing20refers to the end surface of the core housing20facing the ear hook10. The first socket22may include an accommodating space for the first plug end14of the ear hook10to be inserted into the core housing20, so as to further realize the plug and fixation between the first plug end14and the core housing20. FIG.5is a schematic diagram illustrating a partial sectional view of an exemplary MP3 player according to some embodiments of the present disclosure.FIG.6is a schematic diagram illustrating a partially enlarged view of part B inFIG.5. Referring toFIG.2,FIG.5, andFIG.6, in some embodiments, the first plug end14may include an inserting portion142and two elastic hooks143. Specifically, the inserting portion142may be at least partially inserted into the first socket22and abut against an outer side surface231of a stopping block23. A shape of an outer sidewall of the inserting portion142may match that of an inner sidewall of the first socket22, so that the outer sidewall of the inserting portion142may abut against the inner sidewall of the first socket22when the inserting portion142is at least partially inserted into the first socket22. The outer side surface231of the stopping block23refers to a side of the stopping block23facing the ear hook10. The inserting portion142may further include an end surface1421facing the core housing20. The end surface1421may match the outer side surface231of the stopping block23, so that the end surface1421of the inserting portion142may abut against the outer side surface231of the stopping block23when the inserting portion142is at least partially inserted into the first socket22. In some embodiments, the two elastic hooks143may be disposed on a side of the insertion unit facing an inside of the core housing. For example, the two elastic hooks143may be disposed side by side and spaced apart symmetrically on the side of the inserting portion142facing an inside of the core housing20along a direction of insertion. Each elastic hook143may include a beam portion1431and a hook portion1432. The beam portion1431may be connected to a side of the inserting portion142facing the core housing20. The hook portion1432may be disposed on the beam portion1431away from the inserting portion142and extend perpendicular to the inserted direction. Further, each hook portion1432may include a side parallel to the inserted direction and a transitional slope14321away from the end surface1421of the inserting portion142. In some embodiments, after the core housing20and the first plug end14are plugged and fixed, at least a portion of the inserting portion142may be inserted into the first socket22. The other portion (i.e., the exposed portion) of the inserting portion142outside of the first socket may have a stepped structure, so as to form an annular table1422disposed apart from the outer end surface21of the core housing20. 
The exposed portion of the inserting portion142refers to the portion of the inserting portion142exposed outside the core housing20and close to the outer end surface21of the core housing20. In some embodiments, the annular table1422may be disposed opposite to the outer end surface21of the core housing20. A space between the annular table1422and the outer end surface21refers to a space along the direction of insertion and a space perpendicular to the direction of insertion. In some embodiments, the protective sleeve16may extend to the side of the annular table1422facing the outer end surface21of the core housing20. When the first socket22of the core housing20and the first plug end14are in a plugged-in connection, the protective sleeve16may be at least partially filled in the space between the annular table1422and the outer end surface21of the core housing20, and elastically abut against the core housing20. Thus, it is difficult for external liquid to enter the inside of the core housing20from a junction between the first plug end14and the core housing20, thereby realizing the sealing between the first plug end14and the first socket22, protecting the earphone core50, etc., inside the core housing20, and improving the waterproof effect of the MP3 player. Specifically, in some embodiments, the protective sleeve16may form an annular abutting surface161on the side of the annular table1422facing the outer end surface21of the core housing20. The annular abutting surface161may be an end surface of the protective sleeve16facing the core housing20. In some embodiments, the protective sleeve16may further include an annular convex table162located inside the annular abutting surface161and protruding from the annular abutting surface161. Specifically, the annular convex table162may be formed on the inside of the annular abutting surface161facing the first plug end14, and may protrude toward the core housing20relative to the annular abutting surface161.
Further, the annular convex table162may be directly formed on the periphery of the annular table1422and cover the annular table1422. In some embodiments, the core housing20may include a connecting slope24configured to connect the outer end surface21of the core housing20and the inner side wall of the first socket22. The connecting slope24may be a transitional surface between the outer end surface21of the core housing20and the inner side wall of the first socket22. The connecting slope24may not be on a same plane as the outer end surface21of the core housing20and the inner side wall of the first socket22. In some embodiments, the connecting slope24may be a flat surface, a curved surface, or other shapes according to actual requirements, which is not limited herein. In some embodiments, when the first plug end14is fixedly plugged in the core housing20, the annular abutting surface161and the annular convex table162may elastically abut against the outer end surface of the core housing20and the connecting slope24, respectively. It should be noted that since the outer end surface21of the core housing20and the connecting slope24are not on the same plane, the elastic abutment between the protective sleeve16and the core housing20may not be on the same plane. Thus, it is difficult for external liquid to enter the core housing20from the junction of the protective sleeve16and the core housing20, and further reach the earphone core50, thereby improving the waterproof effect of the MP3 player, protecting the inner structure of the MP3 player, and extending the service life of the MP3 player. In some embodiments, the inserting portion142may include an annular groove1423on the side of the annular table1422facing the outer end surface21of the core housing20, and the annular groove1423may be adjacent to the annular table1422. The annular convex table162may be formed in the annular groove1423. In some embodiments, an end of the wire12of the ear hook10disposed outside the core housing20may pass through the second wiring channel151to connect to the circuits outside the core housing20, such as the control circuit60, the battery70, etc., included in the circuit housing30. Another end of the wire12may be exposed to the outer end surface of the first plug end14along the first wiring channel141, and further enter the core housing20through the first socket22along with the inserting portion142. FIG.7is a schematic diagram illustrating a part of a structure of an exemplary core housing according to some embodiments of the present disclosure.FIG.8is a schematic diagram illustrating a partially enlarged view of part D inFIG.7according to some embodiments of the present disclosure.FIG.9is a schematic diagram illustrating a partial cross-section view of an exemplary core housing according to some embodiments of the present disclosure. Referring toFIG.2,FIG.7,FIG.8, andFIG.9, in some embodiments, the core housing20may include a main housing25and a partition assembly26. In some embodiments, the partition assembly26may be located inside the main housing25and may be connected to the main housing25, thereby separating an inner space27of the main housing25into a first accommodating space271and a second accommodating space272near a plug hole22. In some embodiments, the main housing25may include a peripheral sidewall251and a bottom wall252connected to one end surface of the peripheral sidewall251. The peripheral sidewall251and the bottom wall252may form the inner space27of the main housing25.
In some embodiments, the partition assembly26may be located on one side of the main housing25near the plug hole22and may include a side partition261and a bottom partition262. The side partition261may be disposed along a direction perpendicular to the bottom wall252, and two ends of the side partition261may be connected to the peripheral sidewall251, thereby separating the inner space27of the main housing25. The bottom partition262may be parallel or nearly parallel to, and spaced apart from, the bottom wall252. The bottom partition262may be connected to the peripheral sidewall251and the side partition261respectively, thereby dividing the inner space27formed by the main housing25into two spaces, which are the first accommodating space271surrounded by the side partition261, the bottom partition262, the peripheral sidewall251far away from the plug hole22, and the bottom wall252, and the second accommodating space272surrounded by the bottom partition262, the side partition261, and the peripheral sidewall251adjacent to the plug hole22. The second accommodating space272may be smaller than the first accommodating space271. In some embodiments, the partition assembly26may also divide the inner space27of the main housing25through other arrangements, which are not specifically limited here. In some embodiments, the earphone core50may include a functional component51that may be disposed in the first accommodating space271and used for vibrating and generating sound. In some embodiments, the MP3 player may further include a wire80connected to the functional component51. An end of the wire80may extend from the first accommodating space271to the second accommodating space272. In some embodiments, the side partition261may be disposed with a wiring groove2611at the top edge away from the bottom wall252. The wiring groove2611may connect the first accommodating space271and the second accommodating space272. Further, an end of the wire80away from the functional component51may extend into the second accommodating space272through the wiring groove2611. After the end of the wire12away from the circuit housing30enters the core housing20with the inserting portion142, it may further extend into the second accommodating space272and be electrically connected to the wire80in the second accommodating space272, so that a wire path connecting the first accommodating space271to an external circuit through the second accommodating space272may be formed. Thus, the functional component51may be electrically connected to the external circuit disposed outside the core housing20through the wire path. In some embodiments, the bottom partition262may also be disposed with a wiring hole2621, which connects the first socket22with the second accommodating space272, so that the wire12entering the core housing20from the first socket22may extend to the second accommodating space272through the wiring hole2621. The wire12and the wire80may be coiled and disposed in the second accommodating space272after being connected in the second accommodating space272. Specifically, the wire12and the wire80may be connected together by welding. Further, the functional component51may be electrically connected to the external circuit, so as to provide power for the normal operation of the functional component51through the external circuit or transmit data to the earphone core50. It should be noted that when assembling the bone conductive MP3 player, the wire is often longer than the actual requirement to facilitate assembly.
However, if the extra wires of the earphone core50are not placed reasonably, they may easily vibrate and make abnormal noises when the functional component51is working, thereby reducing the sound quality of the bone conductive MP3 player and affecting the user's experience of listening. In this embodiment, the second accommodating space272may be separated from the inner space27formed by the main housing25of the core housing20and used for accommodating extra wires12and wires80, so as to avoid or reduce the influence of the extra wires on the sound generated by the bone conductive MP3 player due to vibration, thereby improving the sound quality. In some embodiments, the partition assembly26may further include an inner partition263. The inner partition263may further divide the second accommodating space272into two sub-accommodating spaces2721. Specifically, the inner partition263may be disposed perpendicular to the bottom wall252of the main housing25and connected to the side partition261and the peripheral sidewall251respectively, and further extend to the wiring hole2621, so as to divide the wiring hole2621into two, while dividing the second accommodating space272into two sub-accommodating spaces2721. Each of the two wiring holes2621may be connected with a corresponding sub-accommodating space2721respectively. In this embodiment, there are two wires12and two wires80. The two wires12may extend into respective sub-accommodating spaces2721along the corresponding wiring holes2621respectively. The two wires80may enter the second accommodating space272through the wiring groove2611together, separate after entering the second accommodating space272, be welded with the corresponding wires12in the corresponding sub-accommodating spaces2721respectively, and further be coiled and disposed in the corresponding sub-accommodating space2721. In some embodiments, the second accommodating space272may be further filled with sealant. In this way, the wire12and the wire80included in the second accommodating space272may be further fixed, which may reduce the adverse effect on the sound quality caused by the vibration of the wire, improve the sound quality of the bone conductive MP3 player, and protect the welding point between the wire12and the wire80. In addition, the purpose of waterproof and dustproof may also be achieved by sealing the second accommodating space272. Referring toFIG.2andFIG.3, in some embodiments, the circuit housing30and the second plug end15may be plugged and fixed, so that the circuit housing30may be fixed to the end of the ear hook10away from the core housing20. When worn by the user, the circuit housing30including the battery70and the circuit housing30including the control circuit60may correspond to the left and right sides of the user, respectively. The way of plugging and connecting the circuit housing30containing the battery70and the circuit housing30containing the control circuit60to the corresponding second plug ends15may differ. Specifically, the circuit housing30may be connected to the second plug end15through plug and connection, snap connection, or the like. In other words, in this embodiment, the ear hook10and the circuit housing30may be formed separately, and then be assembled after the forming is completed, instead of directly forming the two together. In this way, the ear hook10and the circuit housing30may be molded separately with respective corresponding molds, instead of using the same larger-sized mold to form the two integrally, which may reduce the size of the mold and the difficulty of the molding process.
In addition, since the ear hook10and the circuit housing30are processed by different molds, when the shape or structure of the ear hook10or the circuit housing30needs to be adjusted in the manufacturing process, it is sufficient to adjust the mold corresponding to the structure. There is no need to adjust the mold corresponding to another structure, so as to reduce the cost of production. In some embodiments, the circuit housing30may be disposed with a second socket31. A shape of the inner surface of the second socket31may match that of at least part of the outer end surface of the second plug end15, so that the second plug end15may be at least partially inserted into the second socket31. Further, two slots152perpendicular to the inserted direction of the second plug end15with respect to the second socket31may be disposed on opposite sides of the second plug end15, respectively. Specifically, the two slots152may be symmetric and spaced apart on opposite sides of the second plug end15, and both be connected to the sidewall of the second plug end15in the vertical direction along the inserted direction. Referring toFIG.2, the circuit housing30may be flat. For example, the cross-section of the circuit housing30at the second socket31may be elliptical or other shapes that may be flattened. In this embodiment, two opposite sidewalls of the circuit housing30with a larger area may be main sidewalls33and the two opposite sidewalls with a smaller area connecting the two main sidewalls33may be auxiliary sidewalls34. It should be noted that the above descriptions of the MP3 player are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications in forms and details of the specific methods and operations of implementing the MP3 player may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, a count of the fixed sleeves13is not limited to the at least two described in the embodiments set forth above. The count of the fixed sleeves13may also be one, which may be specifically determined according to actual requirements. As another example, the shape of the cross-section of the circuit housing30at the second socket31may not be limited to be elliptical. The shape of the cross-section may also be other shapes, such as a triangle, a quadrilateral, a pentagon, and other polygons. Such variations and modifications do not depart from the scope of the present disclosure. The speaker device described according to some embodiments of the present disclosure may include, but is not limited to, an earphone, an MP3 player, a hearing aid, etc. As shown inFIG.2, in some embodiments, the position of the core housing20on the speaker device may not be fixed, and the core housing20may fit different parts of the user's cheek (e.g., in front of the ear, behind the ear, etc.), so that the user can feel different sound qualities and adjust the sound quality according to his/her preferences. It is also convenient for users with different head sizes. For example, the speaker device shown inFIG.2may be fixed on the human ear via the ear hook10, and the core housing20may be located in front of the ear. In some embodiments, the ear hook10may be elastic and deformable, and the ear hook10may be bent to change the fitting positions of the core housing20on the human body. 
In some embodiments, a connecting end of the ear hook10and the core housing20may be set according to a position that the user is accustomed to. For example, if the user is used to placing the core housing20behind the ear, the connecting end of the ear hook10may be disposed behind the ear while maintaining the fixing function of the ear hook10. More descriptions about the snap connection between the ear hook10and the core housing20may be found elsewhere in the present disclosure. It should be noted that the connection between the ear hook10and the core housing20may not be limited to the above-mentioned snap connection. For example, the ear hook10and the core housing20may also be connected via a hinge connection (e.g., a hinge assembly). More descriptions about the hinge connection may be found elsewhere in the present disclosure. In some embodiments, the core housing20may fit any area on a user's head, for example, a top of the head, the forehead, the cheek, a sideburn, an auricle, a back of an auricle, etc. In some embodiments, the way of fitting the bone conduction earphone to the head may include a surface fitting or a point fitting. A fitting surface may be provided with a gradient structure. A gradient structure refers to an area of the contact surface where the height of the contact surface changes. The gradient structure may include a convex/concave structure or a step-like structure on an outside of the contact surface (i.e., the side that is attached to the user), or a convex/concave structure or a step-like structure on an inside of the contact surface (i.e., the side facing away from the user). It should be noted that the above descriptions of the core housing are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications in forms and details of the specific methods and operations of implementing the fitting may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, a shape of the ear hook may not be limited to that shown inFIG.2. The shape of the ear hook may be adjusted according to the fitting position of the core housing and the head of the human body. Such variations and modifications do not depart from the scope of the present disclosure. FIG.10is a schematic structural diagram illustrating a structure of an exemplary hinge component according to some embodiments of the present disclosure.FIG.11is a schematic diagram illustrating an exploded structure of an exemplary hinge component according to some embodiments of the present disclosure. As shown inFIG.10andFIG.11, the hinge component may include a hinge2530, which is a structure used to connect two solid bodies and allow relative rotation between them. In some embodiments, the connection between the ear hook10and the core housing20may also be performed by means of the hinge joint. In some embodiments, the ear hook10and the core housing20may also be connected through a hinge, and a fitting position between the core housing20and a human skin may be adjusted by a hinge component. Referring toFIG.2,FIG.10andFIG.11, the hinge component may be disposed at an end of the ear hook10away from the circuit housing30. The hinge component may connect with the core housing20to the end of the ear hook10far from the circuit housing30through the hinge2530. 
In some embodiments, the hinge component may include a rod-like component2540and a fixing component2550. In some embodiments, the hinge2530may include a hinge base2531and a hinge arm2532. The hinge arm2532may be rotatably connected to the hinge base2531through a rotation shaft2533. The hinge base2531and the hinge arm2532may be respectively connected to two components that need to be rotationally connected. The two components may be rotationally connected together through the rotation shaft2533of the hinge2530. In some embodiments, the hinge base2531of the hinge2530may be connected to the rod-like component2540. In some embodiments, the rod-like component2540may be a partial structure or an overall structure of one of the two members rotationally connected through the hinge2530. In some embodiments, the rod-like component2540may be a connection structure in which one of the two members requiring rotational connection is connected to the hinge2530. When the hinge component is used in an MP3 player, the rod-like component2540may be at least a part of the ear hook10of the MP3 player. For example, the rod-like component2540may be all of the ear hook10. As another example, the rod-like component2540may be part of the end of the ear hook10away from the circuit housing30. In some embodiments, the hinge2530may be set at the end of the ear hook10away from the circuit housing30through this part of the ear hook10. In some embodiments, the rod-like component2540may be disposed along the length direction with a hinge cavity2541communicating with the end surface of the rod-like component2540. A sidewall of the rod-like component2540may be disposed with a first insertion hole2542communicating with the hinge cavity2541. The end of the hinge base2531away from the hinge arm2532may be inserted into the hinge cavity2541from the end surface of the rod-like component2540, and may be fixed in the hinge cavity2541by the fixing component2550inserted in the first insertion hole2542. In some embodiments, the hinge cavity2541may communicate with the end face of the end of the ear hook10away from the circuit housing30. The hinge base2531may be inserted into the hinge cavity2541. The hinge2530may be connected to the ear hook10. In some embodiments, the first insertion hole2542may be formed by the rod-like component2540during the molding process, or may be formed on the sidewall of the rod-like component2540by a means such as drilling after the molding. In some embodiments, the shape of the first insertion hole2542may be circular. In some embodiments, the shape of the first insertion hole2542may be other shapes (e.g., a square, a triangle, etc.). The shape of the fixing component2550may match the shape of the first insertion hole2542. The fixing component2550may be inserted into the first insertion hole2542from the outside of the rod-like component2540. The fixing component2550may fix the hinge base2531in the hinge cavity2541by abutting against the sidewall of the hinge base2531. In some embodiments, the fixing component2550may fix the hinge base2531in the hinge cavity2541by penetrating and inserting into the outer wall of the hinge base2531. In some embodiments, a matching thread may be disposed on the inner wall of the first insertion hole2542and the outer wall of the fixing component2550. The fixing component2550may be connected to the first insertion hole2542by screwing to further fix the hinge base2531in the hinge cavity2541. In some embodiments, the first insertion hole2542and the fixing component2550may be connected by an interference fit.
In some embodiments, the hinge arm2532may be connected with other components. After connecting with the hinge arm2532, such a component may be able to rotate, around the rotation shaft2533, relative to the hinge base2531mounted in the hinge cavity2541of the rod-like component2540or relative to other components connected with the rod-like component2540. For example, when the hinge component is used in the MP3 player, the core housing20may be connected to the end of the hinge arm2532away from the hinge base2531. The core housing20of the earphone core50may be connected to the end of the ear hook10away from the circuit housing30through the hinge2530. In some embodiments, the rod-like component2540may be disposed with the hinge cavity2541connected to an end surface of the rod-like component2540. The hinge base2531of the hinge2530may be accommodated in the hinge cavity2541, and the fixing component2550may further penetrate the sidewall of the rod-like component2540through the first insertion hole2542, thereby fixing the hinge base2531accommodated in the hinge cavity2541. The hinge2530may be detached from the rod-like component2540to facilitate replacement of the hinge2530or the rod-like component2540. In some embodiments, the hinge2530and the core housing20of the MP3 player may be detachable relative to the ear hook10, thereby facilitating replacement when the core housing20of the earphone core50or the ear hook10is damaged. In some embodiments, the hinge base2531may be disposed with a second insertion hole25311corresponding to the first insertion hole2542. The fixing component2550may be further inserted into the second insertion hole25311. In some embodiments, the shape of the second insertion hole25311may match the shape of the fixing component2550. The fixing component2550may be inserted into the second insertion hole25311to fix the hinge base2531after passing through the first insertion hole2542. The shaking of the hinge base2531in the hinge cavity2541may be reduced, and the hinge2530may be fixed more firmly. In some embodiments, the inner wall of the second insertion hole25311may be disposed with threads matching those on the outer wall of the fixing component2550. The fixing component2550and the hinge base2531may be screwed together. In some embodiments, the inner wall of the second insertion hole25311and the outer sidewall at the corresponding contact positions of the fixing component2550may be smooth surfaces. The fixing component2550and the second insertion hole25311may be in an interference fit. In some embodiments, the second insertion hole25311may be disposed through both sides of the hinge base2531. The fixing component2550may further penetrate the entire hinge base2531. The hinge base2531may be firmly fixed in the hinge cavity2541. In some embodiments, the cross-sectional shape of the hinge base2531may match the cross-sectional shape of the hinge cavity2541in a cross section perpendicular to the length direction of the rod-like component2540. A seal may be formed between the hinge base2531and the rod-like component2540after insertion. In some embodiments, the cross-sectional shape of the hinge base2531and the cross-sectional shape of the hinge cavity2541may be any shapes, as long as the hinge base2531may be inserted into the hinge cavity2541from the end of the rod-like component2540away from the hinge arm2532.
In some embodiments, the first insertion hole2542may be disposed on the sidewall of the hinge cavity2541, penetrate the sidewall of the hinge cavity2541and communicate with the hinge cavity2541. In some embodiments, the cross-sectional shape of the hinge base2531and the cross-sectional shape of the hinge cavity2541may be both rectangular. The first insertion hole2542may be perpendicular to one side of the rectangle. In some embodiments, the corners of the outer wall of the hinge base2531or the corners of the inner wall of the hinge cavity2541may be rounded. The contact between the hinge base2531and the hinge cavity2541may be smooth. The hinge base2531may be smoothly inserted into the hinge cavity2541. In some embodiments, the hinge component may include a connection line provided outside the hinge2530. In some embodiments, the connection line may be a connection line having an electrical connection function and/or a mechanical connection function. The hinge component may be configured to connect the end of core housing20and the ear hook10away from the circuit housing30. The control circuit or the like related to the core housing20may be disposed in the ear hook10or the circuit housing30. The connecting wire2560may electrically connect a core housing20with a control circuit in the ear hook10or the circuit housing30. In some embodiments, the connecting wire2560may be located at one side of the hinge base2531and the hinge arm2532. The hinge2530may be disposed in the same accommodation space. In some embodiments, the hinge base2531may include a first end surface. The hinge arm2532may have a second end surface opposite to the first end surface. It is easily understood that there is a certain gap between the first end surface and the second end surface, so that the hinge base2531and the hinge arm2532may be relatively rotated around the rotation shaft2533. In some embodiments, during the relative rotation of the hinge arm2532and the hinge base2531, the relative position between the first end surface and the second end surface changes accordingly, so that the gap between the two becomes larger or smaller. In some embodiments, the gap between the first end surface and the second end surface may be always larger than or less than the diameter of the connecting wire2560. The connecting wire2560located outside the hinge2530may not be caught in the gap between the first end surface and the second end surface during the relative rotation of the hinge base2531and the hinge arm2532, thereby reducing the damage of the connecting wire2560by the hinge. In some embodiments, the ratio of the gap between the first end surface and the second end surface to the diameter of the connection line during the relative rotation of the hinge arm2532and the hinge base2531may always be greater than 1.5 (e.g., greater than 1.5, 1.7, 1.9, 2.0, etc.) or less than 0.8 (e.g., less than 0.8, 0.6, 0.4, 0.2, etc.). FIG.12is a schematic diagram illustrating a structure of an exemplary hinge component according to some embodiments of the present disclosure.FIG.13is a schematic diagram illustrating a partial cross-sectional view of an exemplary hinge component according to some embodiments of the present disclosure. As shown inFIG.12andFIG.13, in some embodiments, the hinge component may further include a protective sleeve700. The protective sleeve700may be sleeved on the periphery of the hinge2530and may be bent along with the hinge2530. 
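Before turning to the protective sleeve700shown inFIG.12andFIG.13, the clearance criterion described above for the gap between the first end surface and the second end surface may be illustrated with simple arithmetic. The following Python sketch is illustrative only; the gap values, the 0.6 mm wire diameter, and the helper name wire_pinch_possible are assumed examples, not dimensions of the connecting wire2560or the hinge2530.

def wire_pinch_possible(gap_mm: float, wire_diameter_mm: float,
                        upper_ratio: float = 1.5, lower_ratio: float = 0.8) -> bool:
    """Return True if the gap-to-wire-diameter ratio falls inside the risky band,
    i.e., it is neither kept above upper_ratio nor kept below lower_ratio."""
    ratio = gap_mm / wire_diameter_mm
    return lower_ratio <= ratio <= upper_ratio

# Example gaps (in mm) that the hinge might sweep through during rotation,
# checked against an assumed 0.6 mm connecting wire.
for gap in (0.3, 0.7, 1.0):
    print(f"gap={gap} mm -> pinch risk: {wire_pinch_possible(gap, 0.6)}")

In this sketch only the middle example falls inside the risky band, which matches the intent of keeping the ratio either always greater than 1.5 or always less than 0.8 so that the connecting wire2560cannot be caught in the gap.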
In some embodiments, the protective sleeve700may include a plurality of annular ridge portions71spaced apart along the length direction of the protective sleeve700and an annular connection part72provided between the annular ridge portions71. The annular connection part72may be used to connect two adjacent annular ridge portions71. In some embodiments, the tube wall thickness of the annular ridge portion71may be greater than the tube wall thickness of the annular connection part72. The length direction of the protective sleeve700may be consistent with the length direction of the hinge2530. The protective sleeve700may be specifically disposed along the length direction of the hinge base2531and the hinge arm2532. The protective sleeve700may include soft material, such as soft silicone, rubber, or the like, or any combination thereof. In some embodiments, the annular ridge portion71may be formed by protruding outwardly from the outer sidewall of the protective sleeve700. The shape of the inner sidewall of the protective sleeve700corresponding to the annular ridge portion71may be not limited herein. For example, the surface of the inner wall may be smooth. As another example, a recess on the inner wall may be disposed at a position corresponding to the annular ridge portion71. The annular connection part72may be configured to connect adjacent annular ridge portions71, specifically connected to the edge region of the annular ridge portion71near the inside of the protective sleeve700. A side of the outer wall of the protective sleeve700may be disposed in a recess with respect to the annular ridge portion71. When the hinge base2531and the hinge arm2532of the hinge2530are relatively rotated around the rotation shaft2533, the angle between the hinge base2531and the hinge arm2532may change. The protective sleeve700may be bent. In some embodiments, when the protective sleeve700is bent with the hinge2530, the annular ridge portion71and the annular connection part72located in the outer region of the bent shape formed by the protective sleeve700may be in a stretched state. The annular ridge portion71and the annular connection part72located in the inner region of the bent shape may be in a squeezed state. The tube wall thicknesses of the annular ridge portion71and the annular connection part72may refer to the thickness between the inner and outer walls of the protective sleeve700corresponding to the annular ridge portion71and the annular connection part72, respectively. In some embodiments, the tube wall thickness of the annular ridge portion71may be greater than the tube wall thickness of the annular connection part72. The annular ridge portion71may be harder than the annular connection part72. Therefore, when the protective sleeve700is in a bent state, the protective sleeve700on the outer side of the bent shape may be in a stretched state. The annular ridge portion71may provide a certain strength support for the protective sleeve700. When the protective sleeve700that is on the inner side and in the bent state is squeezed, the annular ridge portion71may withstand a certain pressing force, thereby protecting the protective sleeve700and improving the stability of the protective sleeve700. The service life of the protective sleeve700may be extended. In some embodiments, the shape of the protective sleeve700may be consistent with the state of the hinge2530. In some embodiments, two sides of the protective sleeve700along the length direction and rotated around the rotation axis may be stretched or squeezed.
In some embodiments, the hinge base2531and the hinge arm2532of the hinge2530may only rotate around the rotation shaft2533within a range of less than or equal to 180°. The protective sleeve700may only be bent toward one side, then one side of the two sides of the protective sleeve700in the length direction may be squeezed. The other side may be stretched. At this time, according to the different forces on both sides of the protective sleeve700, the two sides of the protective sleeve700under different forces may have different structures. In some embodiments, the width of the annular ridge portion71along the length direction of the protective sleeve700when the protective sleeve700is in a bent state toward the outside of the bent shape formed by the protective sleeve700may be greater than the width in the longitudinal direction of the protective sleeve700toward the inside of the bent shape. Increasing the width of the annular ridge71in the length direction of the protective sleeve700may further increase the strength of the protective sleeve. In some embodiments, the angle of the initial angle between the hinge base2531and the hinge arm2532may be less than 180°. If the annular ridges71of the protective sleeve700are evenly arranged, the protective sleeve700will be squeezed in the original state. In some embodiments, the width of the annular ridge71corresponding to the outer region side of the bent shape in the bent state is larger, thereby enlarging the length of the side protective sleeve700. The strength of the protective sleeve700may be improved. The extent of the stretching side may be reduced when the protective sleeve700is bent. At the same time, the width of the annular ridge portion71along the longitudinal direction of the protective sleeve700may be smaller when the protective sleeve700is in a bent state toward the inner region side of the bent shape, which can increase the space of the extruded annular connection part72in the length direction of the protective sleeve700and alleviate the extrusion of the extrusion side. In some embodiments, the width of the annular ridge portion71may gradually decrease from the side of the outer region toward the bent shape to the side of the inner region toward the bent shape. When the protective sleeve700is in the bent state, the width toward the outer region side of the bent shape formed by the protective sleeve700may be greater than the width toward the inner region side of the bent shape. The annular ridge portion71may be disposed around the periphery of the protective sleeve700. In the length direction of the protective sleeve700, one side corresponds to the stretched side, and the other side corresponds to the squeezed side. In some embodiments, the width of the annular ridge portion71may gradually decrease from the side of the outer region facing the bent shape to the side of the inner region facing the bent shape, thereby making the width more uniform. The stability of the protective sleeve700may be improved. In some embodiments, when the protective sleeve700is in a bent state, the annular ridge portion71may be disposed with a groove711on an inner circumferential surface of the protective sleeve700inside the protective sleeve700on the outer region side of the bent shape formed by the protective sleeve700. The groove711may be disposed along a length direction perpendicular to the protective sleeve700. The corresponding annular ridge portion71may be appropriately extended when the protective sleeve700is stretched in the length direction. 
When the protective sleeve700is in a bent state, the protective sleeve700on the outer side of the bent shape formed by the protective sleeve700may be in a stretched state. A groove711may be disposed on the inner ring surface inside the protective sleeve700corresponding to the corresponding annular ridge portion71, so that when the side protective sleeve is stretched, the annular ridge portion71corresponding to the groove711may be appropriately extended to bear a partial stretch, thus reducing the tensile force experienced by the side protective sleeve, thereby protecting the protective sleeve700. It should be noted that when the protective sleeve700is in a bent state, the annular ridge portion71on the side facing the inner region of the bent shape may not be disposed with a groove711on the inner sidewall of the corresponding protective sleeve700. In some embodiments, the width of the groove711along the length of the protective sleeve700gradually decreases from the side of the outer region facing the bent shape to the side of the inner region facing the bent shape, so that no groove711is disposed on the inner sidewall of the protective sleeve700corresponding to the annular ridge portion71facing the inner region side of the bent shape. In some embodiments, when the hinge component is applied to a speaker device of the present disclosure, the protective sleeve700may be connected to the ear hook10and the core housing20which are respectively disposed on both sides in the longitudinal direction of the protective sleeve700. In some embodiments, the protective sleeve700may also be other structures in the speaker device. For example, the protective cover of some components may be integrally formed, so that the speaker device may be more closed and integrated. It should be noted that the hinge component in the present disclosure embodiment may not only be used in the MP3 player of the speaker device, but may also be used in other apparatuses, such as glasses, the headphone, and the hearing aid. In some embodiments, the hinge component may also include the rod-like component2540, the fixing component2550, the connecting wire2560, the protective sleeve700, etc., or other components related to the hinge2530. The hinge component may realize the corresponding functions of the other components. It should be noted that the above description regarding the MP3 player is merely an example, and should not be considered as a uniquely possible implementation. Obviously, for those skilled in the art, after understanding the basic principles of the MP3 player, the specific ways and steps of the implementation of the MP3 player may be modified or changed without departing from the principle. For example, a count of annular ridge portion71and the annular connection part72may be not limited to the figure, and may be determined according to the actual use. Further, for example, the count of annular ridge portion71and the annular connection part72may be set according to a length of the protective sleeve700, a width of the annular ridge portion71and the annular connection part72along the length of the protective sleeve700. Such modifications are within the scope of the present disclosure. FIG.14is a block diagram illustrating an exemplary speaker device according to some embodiments of the present disclosure. In some embodiments, the speaker device1400may at least include an earphone core1402, an auxiliary function module1404, and a flexible circuit board1406. 
In some embodiments, the earphone core1402may be configured to receive an audio electrical signal and convert the audio electrical signal into a sound signal. The flexible circuit board1406may be configured to provide electrical connections between different modules/components. For example, the flexible circuit board1406may provide electrical connection between the earphone core1402and the external control circuit and/or auxiliary function module1404. In some embodiments, the earphone core1402may at least include a magnetic circuit assembly, a vibration assembly, and a bracket configured for accommodating the magnetic circuit assembly and the vibration assembly. The magnetic circuit assembly may be configured to provide a magnetic field, and the vibration component may be configured to convert a received audio electrical signal into a mechanical vibration signal and generate sound. In some embodiments, the vibration component may include at least a coil and an internal lead. In some embodiments, the earphone core1402may further include an external wire, which can transmit audio current to the coil in the vibration component. One end of the external wire may be connected to the internal lead of the earphone core, and the other end of the external wire may be connected to the flexible circuit board1406of the speaker device. In some embodiments, the bracket may include a wiring groove, and the external wire and/or the internal lead may be partially disposed in the wiring groove. More descriptions may be found elsewhere in the present disclosure. In some embodiments, the auxiliary function module1404may be used to receive auxiliary signal(s) and perform auxiliary function(s). The auxiliary function module1404may be a module different from the earphone core and may be used for receiving the auxiliary signal(s) and performing the auxiliary function(s). In the present disclosure, the conversion of the audio signal into the sound signal may be considered as a main function of the speaker device1400, and other functions different from the main function may be considered as the auxiliary function(s) of the speaker device1400. For example, the auxiliary function(s) of the speaker device1400may include receiving a user sound and/or an ambient sound through a microphone, controlling a broadcasting process of the sound signal through a button, or the like, and a corresponding auxiliary function module may include a microphone, a button switch, etc., which may be set according to actual needs. The auxiliary signal(s) may be electric signal(s) related to the auxiliary function(s), optical signal(s) related to the auxiliary function(s), acoustic signal(s) related to the auxiliary function(s), vibration signal(s) related to the auxiliary function(s), or the like, or any combination thereof. The speaker device1400may further include a core housing1408for accommodating the earphone core1402, the auxiliary function module1404, and the flexible circuit board1406. When the speaker device1400is an MP3 player as described according to some embodiments of the present disclosure, an inner wall of the core housing1408may be directly or indirectly connected to the vibration component in the earphone core. When the user wears the MP3 player, an outer wall of the core housing1408may be in contact with the user and transmit the mechanical vibration of the vibration component to an auditory nerve through a bone, so that the human body may hear the sound.
In some embodiments, the speaker device may include the earphone core1402, the auxiliary function module1404, the flexible circuit board1406, and the core housing1408. In some embodiments, the flexible circuit board1406may be a flexible printed circuit board (FPC) accommodated in the inner space of the core housing1408. The flexible circuit board1406may have high flexibility and be adapted to the inner space of the core housing1408. Specifically, in some embodiments, the flexible circuit board1406may include a first board and a second board. The flexible circuit board1406may be bent at the first board and the second board so as to adapt to a position of the flexible circuit board in the core housing1408, or the like. More details may refer to descriptions in other parts of the present disclosure. In some embodiments, the speaker device1400may transmit the sound through a bone conduction approach. An outer surface of the core housing1408may have a fitting surface. The fitting surface may be an outer surface of the speaker device1400in contact with the human body when the user wears the speaker device1400. The speaker device1400may compress the fitting surface against a preset area (e.g., a front end of a tragus, a position of a skull, or a back surface of an auricle), thereby effectively transmitting the vibration signal(s) to the auditory nerve of the user through the bone and improving the sound quality of the speaker device1400. In some embodiments, the fitting surface may be abutted on the back surface of the auricle. The mechanical vibration signal(s) may be transmitted from the earphone core to the core housing and transmitted to the back of the auricle through the fitting surface of the core housing. The vibration signal(s) may then be transmitted to the auditory nerve by the bone near the back of the auricle. In this case, the bone near the back of the auricle may be closer to the auditory nerve, which may have a better conduction effect and improve the efficiency of transmitting the sound to the auditory nerve by the speaker device1400. In some embodiments, the speaker device1400may further include a fixing mechanism1410. In some embodiments, the fixing mechanism1410may be a part or the entire of the ear hook10shown inFIG.2. The fixing mechanism1410may be externally connected to the core housing1408and used to support and maintain the position of the core housing1408. In some embodiments, a battery assembly and a control circuit may be disposed in the fixing mechanism1410. The battery assembly may provide electric energy to any electronic component in the speaker device1400. The control circuit may control any function component in the speaker device1400. The function component may include, but be not limited to, the earphone core, the auxiliary function module, or the like. The control circuit may be connected to the battery and other functional components through the flexible circuit board or the wire. FIG.15is a schematic diagram illustrating a structure of an exemplary flexible circuit board located inside a core housing according to some embodiments of the present disclosure. In some embodiments, the flexible circuit board may be disposed with a plurality of pads. 
Different signal wires (e.g., audio signal wires, auxiliary signal wires) may be electrically connected to different pads through different flexible leads to avoid numerous and complicated internal wires issues, which may occur when both audio signal wires and auxiliary signal wires need to be connected to the earphone core or the auxiliary function module. As shown inFIG.15andFIG.16, a flexible circuit board44may at least include a plurality of first pads45and a plurality of second pads (not shown in the figures). In some embodiments, the flexible circuit board44inFIG.15may correspond to the flexible circuit board1406inFIG.14. At least one of the first pads45may be electrically connected to auxiliary function module(s). The at least one of the first pads45may be electrically connected to at least one of the second pads through a first flexible lead47on the flexible circuit board44. The at least one of the second pads may be electrically connected to an earphone core (not shown in the figures) through external wire(s) (not shown in the figures). At least another one of the first pads45may be electrically connected to auxiliary signal wire(s). The at least another one of the first pads45and the auxiliary function module(s) may be electrically connected through a second flexible lead49on the flexible circuit board44. In the embodiment, the at least one of the first pads45may be electrically connected to the auxiliary function module(s). The at least one of the second pads may be electrically connected to the earphone core through the external wire(s). The one of the at least one of the first pads45may be electrically connected to one of the at least one of the second pads through the first flexible lead47, so that the external audio signal wire(s) and the auxiliary signal wire(s) may be electrically connected to the earphone core and the auxiliary function modules at the same time through the flexible circuit board, which may simplify a layout of the wiring. In some embodiments, the audio signal wire(s) may be wire(s) electrically connected to the earphone core and transmitting audio signal(s) to the earphone core. The auxiliary signal wire(s) may be wire(s) electrically connected to the auxiliary function modules and performing signal transmission with the auxiliary function modules. In some embodiments, referring toFIG.15, specifically, the flexible circuit board44may be disposed with the plurality of pads45and two pads (not shown in the figure). The two pads and the plurality of pads45may be located on the same side of the flexible circuit board44and spaced apart. The two pads may be connected to two corresponding pads45of the plurality of pads45through the flexible lead(s)47on the flexible circuit board44. Further, a core housing41may also accommodate two external wires. One end of each of the external wires may be welded to the corresponding pad, and the other end may be connected to the earphone core, so that the earphone core may be connected to the pads through the external wires. The auxiliary function modules may be mounted on the flexible circuit board44and connected to other pads of the plurality of pads45through the flexible lead(s)49on the flexible circuit board44. In some embodiments, wires may be disposed in the fixing mechanism1410of the speaker device1400. The wires may at least include the audio signal wire(s) and the auxiliary signal wire(s). In some embodiments, there may be multiple wires in the fixing mechanism1410. 
The wires may include at least two audio signal wires and at least two auxiliary signal wires. For example, the fixing mechanism1410may be the ear hook10as shown inFIG.15. The ear hook10may be connected to the core housing41, and the wires may be disposed in the ear hook10. One end of the plurality of the wires in the ear hook10may be welded to the flexible circuit board44or a control circuit board disposed in the core housing41, and the other end of the plurality of the wire may enter the core housing41and be welded to the pad45on the flexible circuit board44. In some embodiments, one end of each of the two audio signal wires of the plurality of wires in the ear hook10, which may be located in the core housing41, may be welded to the two pads45by two flexible leads47, and the other end may be directly or indirectly connected to the control circuit board. The two pads45may be further connected to the earphone core through the welding of the flexible lead(s)49and the two pads and the welding of the two external wires and the pads, thereby transmitting the audio signal(s) to the earphone core. One end of each of at least two auxiliary signal wires in the core housing41may be welded to the pad45by the flexible lead(s)49, and the other end may be directly or indirectly connected to the control circuit board so as to transmit the auxiliary signal(s) received and transformed by the auxiliary function module(s) to the control circuit (not shown in the figure). In the approach described above, the flexible circuit board44may be disposed in the core housing41, and the corresponding pads may be further disposed on the flexible circuit board44. Therefore, the wires (not shown in the figure) may enter the core housing41and be welded to the corresponding pads, and further connected to the corresponding auxiliary function module(s) through the flexible leads47and the flexible leads49on the pads, thereby avoiding a plurality of wires directly connected to the auxiliary function module(s) to make the wiring in the core housing41complicated. Therefore, the arrangement of the wirings may be optimized, and the space occupied by the core housing41may be saved. In addition, when a plurality of the wires in the ear hook10are directly connected to the auxiliary function module(s), a middle portion of the wires in the ear hook10may be suspended in the core housing41to easily cause vibration, thereby resulting in abnormal sounds to affect the sound quality of the earphone core. According to the approach, the wires in the ear hook10may be welded to the flexible circuit board44and further connected to the corresponding auxiliary function module(s), which may reduce a situation that the wires are suspended from affecting the quality of the earphone core, thereby improving the sound quality of the earphone core to a certain extent. In some embodiments, the flexible circuit board (also referred to as the flexible circuit board44) may be further divided. The flexible circuit board may be divided into at least two regions. One auxiliary function module may be disposed on one of the at least two regions, so that at least two auxiliary function modules may be disposed on the flexible circuit board. Wiring between the audio signal wire(s) and the auxiliary signal wire(s) and the at least two auxiliary function modules may be implemented through the flexible circuit board. In some embodiments, the flexible circuit board may at least include a main circuit board and a first branch circuit board. 
The first branch circuit board may be connected to the main circuit board and extend away from the main circuit board along one end of the main circuit board. The auxiliary function module(s) may include at least a first auxiliary function module and a second auxiliary function module. The first auxiliary function module may be disposed on the main circuit board, and the second auxiliary function module may be disposed on the first branch circuit board. The plurality of first pads may be disposed on the main circuit board, and the second pads may be disposed on the first branch circuit board. In some embodiments, the first auxiliary function module may be a button switch. The button switch may be disposed on the main circuit board, and the first pads may be disposed corresponding to the button switch. The second auxiliary function module may be a microphone. The microphone may be disposed on the first branch circuit board, and the second pads corresponding to the microphone may be disposed on the first branch circuit board. The first pads corresponding to the button switch on the main circuit board may be connected to the second pads corresponding to the microphone on the first branch circuit board through the second flexible lead(s). The button switch may be electrically connected to the microphone, so that the button switch may control or operate the microphone. In some embodiments, the flexible circuit board may further include a second branch circuit board. The second branch circuit board may be connected to the main circuit board. The second branch circuit board may extend away from the main circuit board along the other end of the main circuit board and be spaced from the first branch circuit board. The auxiliary function module(s) may further include a third auxiliary function module. The third auxiliary function module may be disposed on the second branch circuit board. The plurality of first pads may be disposed on the main circuit board. At least one of the second pads may be disposed on the first branch circuit board, and the other second pads may be disposed on the second branch circuit. In some embodiments, the third auxiliary function module may be a second microphone. The second branch circuit board may extend perpendicular to the main circuit board. The second microphone may be mounted on the end of the second branch circuit board away from the main circuit board. The plurality of pads may be disposed at the end of the main circuit board away from the second branch circuit board. Specifically, as shown inFIG.15andFIG.16, the second auxiliary function module may be the first microphone432a. The third auxiliary function module may be the second microphone432b. As used herein, the first microphone432aand the second microphone432bmay both be MEMS (micro-electromechanical system) microphones, which may have a small working current, relatively stable performance, and high voice quality. The two microphones432may be disposed at different positions of the flexible circuit board44according to actual needs. In some embodiments, the flexible circuit board44may include a main circuit board441(or referred to the main circuit board), and a branch circuit board442(or referred to the first branch circuit board) and a branch circuit board443(or referred to the second branch circuit board) connected to the main circuit board441. The branch circuit board442may extend in the same direction as the main circuit board441. 
The first microphone432amay be mounted on one end of the branch circuit board442away from the main circuit board441. The branch circuit board443may extend perpendicular to the main circuit board441. The second microphone432bmay be mounted on one end of the branch circuit board443away from the main circuit board441. A plurality of pads45may be disposed on the end of the main circuit board441away from the branch circuit board442and the branch circuit board443. In one embodiment, the core housing41may include a peripheral side wall411and a bottom end wall412connected to one end surface of the peripheral side wall411, so as to form an accommodation space with an open end. As used herein, an earphone core may be disposed in the accommodation space through the open end. The first microphone432amay be fixed on the bottom end wall412. The second microphone432bmay be fixed on the peripheral side wall411. In the embodiment, the branch circuit board442and/or the branch circuit board443may be appropriately bent to suit a position of a sound inlet corresponding to the microphone432on the core housing41. Specifically, the flexible circuit board44may be disposed in the core housing41in a manner that the main circuit board441is parallel to the bottom end wall412. Therefore, the first microphone432amay correspond to the bottom end wall412without bending the main circuit board441. Since the second microphone432bmay be fixed on the peripheral side wall411of the core housing41, it may be necessary to bend the branch circuit board443. Specifically, the branch circuit board443may be bent at one end away from the main circuit board441so that a board surface of the branch circuit board443may be perpendicular to a board surface of the main circuit board441and the branch circuit board442. Further, the second microphone432bmay be fixed at the peripheral side wall411of the core housing41in a direction facing away from the main circuit board441and the branch circuit board442. In one embodiment, the first pads45, the second pads, the first microphone432a, and the second microphone432bmay be disposed on the same side of the flexible circuit board44. The second pads may be disposed adjacent to the second microphone432b. In some embodiments, the second pads may be specifically disposed at one end of the branch circuit board443away from the main circuit board441, facing the same direction as the second microphone432band disposed at intervals. Therefore, the second pads may be perpendicular to the direction of the first pads45as the branch circuit board443is bent. It should be noted that the branch circuit board443may not be perpendicular to the board surface of the main circuit board441after being bent, which may be determined according to the arrangement between the peripheral side wall411and the bottom end wall412. Further, another side of the flexible circuit board44may be disposed with a rigid support plate4afor supporting the first pads45and a microphone rigid support plate4b. The microphone rigid support plate4bmay include a rigid support plate4b1for supporting the first microphone432aand a rigid support plate4b2for supporting the second pads and the second microphone432btogether. In some embodiments, the rigid support plate4a, the rigid support plate4b1, and the rigid support plate4b2may be mainly used to support the corresponding pads and the microphone432, and thus may need to have certain strengths. The materials of the three may be the same or different.
The specific material may be polyimide film (PI film), or other materials that may provide the strengths, such as polycarbonate, polyvinyl chloride, etc. In addition, the thicknesses of the three rigid support plates may be set according to the strengths of the rigid support plates, and actual strengths required by the first pads45, the second pads, the first microphone432a, and the second microphone432b, and be not specifically limited herein. In some embodiments, the rigid support plate4a, the rigid support plate4b1, and the rigid support plate4b2may be three different regions of an entire rigid support plate, or three independent bodies spaced apart from each other, and be not specifically limited herein. In one embodiment, the first microphone432aand the second microphone432bmay correspond to two microphone components (not shown in the figure), respectively. In one embodiment, the structures of the two microphone components may be the same. A sound inlet413may be disposed on the core housing41. Further, the bone conduction speaker device may be further disposed with an annular blocking wall414integrally formed on the inner surface of the core housing41and disposed at the periphery of the sound inlet413, thereby defining an accommodation space (not shown in the figure) connected to the sound inlet413. In one embodiment, the flexible circuit board44may be disposed between a rigid support plate (e.g., the rigid support plate4a, the rigid support plate4b1, and the rigid support plate4b2) and the microphone432. A sound input444may be disposed on the flexible circuit board44at a position corresponding to a sound input4b3of the microphone rigid support plate4b. Further, the flexible circuit board44may further extend away from the microphone432, so as to be connected to other functional components or wires to implement corresponding functions. Correspondingly, the microphone rigid support plate4bmay also extend out a distance with the flexible circuit board in a direction away from the microphone432. Correspondingly, the annular blocking wall414may be disposed with a gap matching the shape of the flexible circuit board44to allow the flexible circuit board44to extend out of the accommodation space. In addition, the gap may be further filled with a sealant to further improve the sealing. FIG.17is a schematic diagram illustrating a sectional view of a partial structure of an exemplary core housing according to some embodiments of the present disclosure. In some embodiments, as shown inFIG.17, the flexible circuit board44may include a main circuit board445and a branch circuit board446. The branch circuit board446may extend along an extending direction perpendicular to the main circuit board445. The plurality of first pads45may be disposed at the end of the main circuit board445away from the branch circuit board446. A button switch may be mounted on the main circuit board445. The second pads46may be disposed at the end of the branch circuit board446away from the main circuit board445. The first auxiliary function module may be a button switch431. The second auxiliary function module may be a microphone432. In the embodiment, a board surface of the flexible circuit board44and the bottom end wall412may be disposed in parallel and at intervals, so that the button switch may be disposed towards the bottom end wall412of the core housing41.
As described above, an earphone core (also referred to as the earphone core1402) may include a magnetic circuit component, a vibration component, an external wire, and a bracket. In some embodiments, the vibration component may include a coil and an internal lead. The external wire may transmit an audio current to the coil in the vibration component. One end of the external wire may be connected to the internal lead of the earphone core, and the other end may be connected to the flexible circuit board of a speaker. The bracket may have a wiring groove. At least a portion of the external wire and/or the internal lead may be disposed in the wiring groove. In some embodiments, the internal lead and the external wire may be welded to each other. A welding position may be located in the wiring groove. FIG.18is a schematic diagram illustrating a partial section view of an exemplary core housing according to some embodiments of the present disclosure.FIG.19is a schematic diagram illustrating a partially enlarged view of part F inFIG.18. Specifically, referring toFIG.18andFIG.19, an earphone core may include a bracket421, a coil422, and an external wire48. The bracket421may be used to support and protect the entire structure of the earphone core. In the embodiment, the bracket421may be disposed with a wiring groove4211used to accommodate a circuit of the earphone core. The coil422may be disposed on the bracket421and have at least one internal lead423. One end of the internal lead(s)423may be connected to a main circuit in the coil422to lead out the main circuit and transmit an audio current to the coil422through the internal lead423. One end of the external wire48may be connected to the internal lead(s)423. Further, the other end of the external wire48may be connected to a control circuit (not shown in the figure) to transmit the audio current through the control circuit to the coil422through the internal lead423. Specifically, during an assembly stage, the external wire48and the internal lead(s)423may need to be connected together by means of welding, or the like. Due to structural and other factors, after the welding is completed, a length of the wire may not be exactly the same as a length of a channel, and there may be an excess length part of the wire. And if the excess length part of the wire is not disposed reasonably, it may vibrate with the vibration of the coil422, thereby making an abnormal sound and affecting the sound quality of the earphone core. Further, at least one of the external wire48and the internal lead423may be wound and disposed in the wiring groove4211. In an application scenario, the welding position between the internal lead423and the external wire48may be disposed in the wiring groove4211, so that a portion of the external wire48and the internal lead423located near the welding position may be wound in the wiring groove4211. In addition, in order to maintain stability, the wiring groove4211may be further filled with a sealant to further fix the wiring in the wiring groove4211. In the manner described above, the wiring groove4211may be disposed on the bracket421, so that at least one of the external wire48and the internal lead423may be wound into the wiring groove4211to accommodate the excess length part of the wire, thereby reducing the vibration generated inside the channel, and reducing the influence of the abnormal sound caused by the vibration on the sound quality of the earphone core. 
In one embodiment, the bracket421may include an annular main body4212, a support flange4213, and an outer blocking wall4214. In some embodiments, the annular main body4212, the support flange4213, and the outer blocking wall4214may be integrally formed. In some embodiments, the annular main body4212may be disposed inside the entire bracket421and used to support the coil422. Specifically, a cross-section of the annular main body4212in a direction perpendicular to the radial direction of a ring of the annular main body4212may be consistent with the coil422. The coil422may be disposed at an end of the annular main body4212facing the core housing. The inner side wall and the outer side wall of the annular main body4212may be flush with the inner side wall and the outer side wall of the coil422, respectively, so that the inner side wall of the coil422and the inner side wall of the annular main body4212may be coplanar, and the outer side wall of the coil422and the outer side wall of the annular main body4212may be coplanar. Further, the support flange4213may protrude on the outer side wall of the annular main body4212and extend along the outside of the annular main body4212. Specifically, the support flange4213may extend outward in a direction perpendicular to the outer side wall of the annular main body4212. As used herein, the support flange4213may be disposed at a position between two ends of the annular main body4212. In the embodiment, the support flange4213may protrude around the outer side wall of the annular main body4212to form an annular support flange4213. In other embodiments, the support flange4213may also be formed by protruding at a portion of the outer side wall of the annular main body4212according to needs. The outer blocking wall4214may be connected to the support flange4213and spaced apart from the annular main body4212along the side of the annular main body4212. As used herein, the outer blocking wall4214may be sleeved on the periphery of the annular main body4212and/or the coil422at intervals. Specifically, the outer blocking wall4214may be partially sleeved around the periphery of the annular main body4212and the coil422according to actual needs, or partially sleeved around the periphery of the annular main body4212. It should be noted that, in the embodiment, a portion of the outer blocking wall4214close to the wiring groove4211may be sleeved on a portion of the periphery of the annular main body4212. Specifically, the outer blocking wall4214may be disposed on a side of the support flange4213away from the core housing. In some embodiments, the outer side wall of the annular main body4212, the side wall of the support flange4213away from the core housing, and the inner side wall of the outer blocking wall4214may together define the wiring groove4211. In one embodiment, a wiring channel424may be disposed on the annular main body4212and the support flange4213. The internal lead(s)423may extend inside the wiring groove4211via the wiring channel424. In some embodiments, the wiring channel424may include a sub-wiring channel4241on the annular main body4212and a sub-wiring channel4242on the support flange4213. The sub-wiring channel4241may be disposed through the inner side wall and the outer side wall of the annular main body4212. A wiring port42411communicating with one end of the sub-wiring channel4241may be disposed on a side of the annular main body4212near the coil422. 
A wiring port42412communicating with the other end of the sub-wiring channel4241may be disposed on a side of the core housing near the support flange4213facing the core housing. The sub-wiring channel4242may penetrate the support flange4213in a direction towards the outside of the core housing. The wiring port42421communicating with the end of the sub-wiring channel4242may be disposed on a side of the support flange4213facing the core housing. The wiring port42422communicating with the other end of the sub-wiring channel4242may be disposed on a side away from the core housing. In some embodiments, the wiring port42412and the wiring port42421may communicate through a space between the support flange4213and the annular main body4212. Further, the internal lead(s)423may enter the wiring port42411, extend along the sub-wiring channel4241, exit from the wiring port42412to enter a region between the annular main body4212and the support flange4213, further enter the sub-wiring channel4242from the wiring port42421, and extend into the wiring groove4211after passing through the wiring port42422. In one embodiment, the top of the outer blocking wall4214may be disposed with a slot42141. The external wire48may extend inside the wiring groove4211through the slot42141. In some embodiments, one end of the external wire48may be disposed on the flexible circuit board44. The flexible circuit board44may be specifically disposed on an inner side of the earphone core facing the core housing. In the embodiment, the support flange4213may be further extended to a side of the outer blocking wall4214away from the annular main body4212to form an outer edge. Further, the outer edge may surround and abut on the inner side wall of the core housing. Specifically, the outer edge of the support flange4213may be disposed with a slot42131, so that the external wire48on the inner side of the earphone core facing the core housing may be extended to the outer side of the support flange4213facing the core housing through the slot42131, and then to the slot42141, and enter the wiring groove4211through the slot42141. Further, the inner side wall of the core housing may be disposed with a guide groove416. One end of the guide groove416may be located on one side of the flexible circuit board44and the other end may communicate with the slot42131and extend in a direction towards the outside of the core housing, so that the external wire48extends from the flexible circuit board to a second wiring groove by passing through the guide groove416. In one embodiment, the bracket421may further include two side blocking walls4215spaced along the circumferential direction of the annular main body4212and connected to the annular main body4212, the support flange4213, and the outer blocking wall4214, thereby defining the wiring groove4211between the two side blocking walls4215. Specifically, the two side blocking walls4215may be oppositely disposed on the support flange4213and protrude towards the outer side of the core housing along the support flange4213. In some embodiments, a side of the two side blocking walls4215facing the annular main body4212may be connected to the outer side wall of the annular main body4212. A side away from the annular main body4212may terminate at the outer side wall of the outer blocking wall4214. The wiring port42422and the slot42141may be defined between the two side blocking walls4215. 
Therefore, the internal lead(s)423exiting from the wiring port42422and the external wire48entering through the slot42141may extend into the wiring groove4211defined by the two side blocking walls4215. It should be noted that the above descriptions of the speaker device are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications on the specific manners of the speaker device may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the branch circuit board may further include third pads and a third flexible circuit board. Such variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the speaker device described above may transmit the sound to the user through air conduction. When air conduction is used to transmit the sound, the speaker device may include one or more sound sources. The sound source may be located at a specific position of the user's head, for example, the top of the head, the forehead, the cheek, a temple, an auricle, the back of an auricle, etc., without blocking or covering an ear canal.FIG.20is a schematic diagram illustrating sound transmission through air conduction according to some embodiments of the present disclosure. As shown inFIG.20, a sound source2210and a sound source2220may generate sound waves with opposite phases (“+” and “−” in the figure may indicate the opposite phases). For brevity, the sound sources used herein may refer to sound outlets of a speaker device that outputs sounds. For example, the sound source2210and the sound source2220may be two sound outlets respectively located at a specific position (e.g., the core housing20or the circuit housing30) of the speaker device. In some embodiments, the sound source2210and the sound source2220may be generated by a same vibration device2201. The vibration device2201may include a diaphragm (not shown inFIG.20). When the diaphragm is driven to vibrate by an electric signal, a front side of the diaphragm may drive air to vibrate. The sound source2210may be formed at a sound output hole through a sound guiding channel2212. A back side of the diaphragm may drive air to vibrate, and the sound source2220may be formed at another sound output hole through a sound guiding channel2222. The sound guiding channel refers to a sound transmission route from the diaphragm to the corresponding outlet. In some embodiments, the sound guiding channel may be a route surrounded by a specific structure (e.g., the core housing20or the circuit housing30) on the speaker device. It should be noted that in some alternative embodiments, the sound source2210and the sound source2220may be generated by different vibrating diaphragms of different vibration devices, respectively. Among the sounds generated by the sound source2210and the sound source2220, one portion of the sounds may be transmitted to the ear of a user to form a sound heard by the user. Another portion of the sound may be transmitted to the environment to form a leaked sound. Considering that the sound source2210and the sound source2220are relatively close to the ears of the user, for convenience of description, the sound transmitted to the ear of the user may be referred to as a near-field sound.
The leaked sound transmitted to the environment may be referred to as a far-field sound. In some embodiments, the near-field/far-field sounds with different frequencies generated by the speaker device may be related to a distance between the sound source2210and the sound source2220. Generally, the near-field sound generated by the speaker device may increase along with an increment of the distance between the two sound sources, and the far-field sound (i.e., the leaked sound) may increase along with an increment of a frequency. For sounds with different frequencies, the distance between the sound source2210and the sound source2220may be designed, respectively, so that a low-frequency near-field sound (e.g., a sound with a frequency less than 800 Hz) generated by the speaker device may be relatively great, and a far-field sound with the relatively high frequency (e.g., a sound with a frequency greater than 2000 Hz) may be relatively small. In order to implement the above purpose, the speaker device may include two or more sets of dual sound sources. Each set of the dual sound sources may include two sound sources similar to the sound source2210and the sound source2220, and generate sounds with a specific frequency, respectively. Specifically, a first set of the dual sound sources may be used to generate a sound with a relatively low frequency. A second set of the dual sound sources may be used to generate a sound with a relatively high frequency. To increase a volume of the near-field sound with the relatively low frequency, the distance between two sound sources in the first set of the dual sound sources may be set with a relatively large value. Since the low-frequency near-field sound may have a relatively long wavelength, the relatively great distance between the two sound sources may not cause a relatively great phase difference in the far-field, thereby reducing sound leakage in the far-field. In some embodiments, to reduce the far-field sound with the relatively high frequency, the distance between the two sound sources in the second set of the dual sound sources may be set with a relatively small value. Since the far-field sound with the relatively high frequency may have a relatively short wavelength, the relatively small distance between the two sound sources may avoid the generation of a relatively large phase difference in the far-field, thereby reducing the sound leakage. The distance between the two sound sources of the second set of the dual sound sources may be less than the distance between the two sound sources of the first set of the dual sound sources.
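The relation among source spacing, frequency, and far-field leakage described above may be illustrated with a standard two-point-source model of two anti-phase monopoles. The following Python sketch is a minimal illustration only: the frequencies, spacings, listening distance, speed of sound, and the helper name far_field_leakage are assumed example values introduced for explanation, not parameters of the speaker device disclosed herein.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C (assumed)

def far_field_leakage(freq_hz, d_m, r_m=1.0):
    # Relative pressure magnitude of two anti-phase point sources separated by
    # d_m, observed on the axis of the pair at a distance r_m from its center.
    k = 2 * np.pi * freq_hz / SPEED_OF_SOUND
    r1 = r_m - d_m / 2  # distance to the nearer source
    r2 = r_m + d_m / 2  # distance to the farther source
    return abs(np.exp(-1j * k * r1) / r1 - np.exp(-1j * k * r2) / r2)

# Assumed example: a low-frequency pair with a larger spacing versus a
# high-frequency pair with a smaller spacing.
print(far_field_leakage(500.0, 0.02))    # low frequency, 20 mm spacing
print(far_field_leakage(3000.0, 0.005))  # high frequency, 5 mm spacing
print(far_field_leakage(3000.0, 0.02))   # high frequency, 20 mm spacing

Under these assumed numbers the residual pressure of the third case is the largest, which is consistent with the statement above that the spacing of the high-frequency pair should be kept smaller than that of the low-frequency pair.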
The beneficial effects of the embodiments of the present disclosure may include but are not limited to the following:
(1) The protective sleeve at the ear hook elastically abutting against the core housing improves the waterproof performance of the speaker device; (2) the ear hook and the core housing of the speaker device are molded using different molds, thereby reducing the processing difficulty of the mold and the molding difficulty in the production of the ear hook and the housing; (3) the core housing and the ear hook of the speaker device may be connected through a hinge component, and the fitting position of the core housing of the earphone core and the human skin may be adjusted; (4) at least one of the external wire and the internal lead may be wound into the wiring groove to accommodate the excess length part of the wire, which may reduce the vibration generated inside the channel, and reduce the influence of the abnormal sound caused by the vibration on the sound quality of the earphone core, thereby improving the sound quality of the speaker device. It should be noted that different embodiments may have different beneficial effects. In different embodiments, the possible beneficial effects may be any one or a combination of the beneficial effects described above, or any other beneficial effects. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in an implementation combining software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave.
Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. NET, Python, or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution, e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in smaller than all features of a single foregoing disclosed embodiment.